Chapter 11. Installation and Booting
Chapter 11. Installation and Booting A new network-scripts option: IFDOWN_ON_SHUTDOWN This update adds the IFDOWN_ON_SHUTDOWN option for network-scripts . Setting this option to yes , true , or leaving it empty has no effect. If you set this option to no or false , ifdown calls are not issued when stopping or restarting the network service. This can be useful in situations where NFS (or other network file system) mounts are in a stale state, because the network was shut down before the mount was cleanly unmounted. (BZ# 1583677 ) Improved content of error messages in network-scripts The network-scripts now display more verbose error messages when the installation of bonding drivers fails. (BZ#1542514) Booting from an iSCSI device that is not configured using iBFT is now supported This update provides a new installer boot option inst.nonibftiscsiboot that supports the installation of the boot loader on an iSCSI device that has not been configured in the iSCSI Boot Firmware Table (iBFT). This update helps when the iBFT is not used for booting the installed system from an iSCSI device, for example, when iPXE boot from SAN is used instead. The new installer boot option allows you to install the boot loader on an iSCSI device that is not automatically added as part of the iBFT configuration but is manually added using the iscsi Kickstart command or the installer GUI. (BZ# 1562301 ) Installing and booting from NVDIMM devices is now supported Prior to this update, Nonvolatile Dual Inline Memory Module (NVDIMM) devices in any mode were ignored by the installer. With this update, kernel improvements to support NVDIMM devices provide improved system performance capabilities and enhanced file system access for write-intensive applications like database or analytic workloads, as well as reduced CPU overhead. This update introduces support for: The use of NVDIMM devices for installation using the nvdimm Kickstart command and the GUI, making it possible to install and boot from NVDIMM devices in sector mode and reconfigure NVDIMM devices into sector mode during installation. The extension of Kickstart scripts for Anaconda with commands for handling NVDIMM devices. The ability of the grub2 , efibootmgr , and efivar system components to handle and boot from NVDIMM devices. (BZ# 1612965 , BZ#1280500, BZ#1590319, BZ#1558942) The --noghost option has been added to the rpm -V command This update adds the --noghost option to the rpm -V command. If used with this option, rpm -V verifies only the non-ghost files that were altered, which helps diagnose system problems. (BZ# 1395818 )
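For illustration, a minimal sketch of how these options might be used; the /etc/sysconfig/network location for the global option and the openssh-server package name are assumptions, not taken from the release note:
# /etc/sysconfig/network (assumed location) -- keep interfaces up when the network service stops
IFDOWN_ON_SHUTDOWN=no
# Installer boot option for placing the boot loader on an iSCSI device not listed in the iBFT
inst.nonibftiscsiboot
# Verify only altered non-ghost files of an installed package
rpm -V --noghost openssh-server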
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.6_release_notes/new_features_installation_and_booting
Appendix C. Using AMQ Broker with the examples
Appendix C. Using AMQ Broker with the examples The AMQ C++ examples require a running message broker with a queue named examples . Use the procedures below to install and start the broker and define the queue. C.1. Installing the broker Follow the instructions in Getting Started with AMQ Broker to install the broker and create a broker instance . Enable anonymous access. The following procedures refer to the location of the broker instance as <broker-instance-dir> . C.2. Starting the broker Procedure Use the artemis run command to start the broker. USD <broker-instance-dir> /bin/artemis run Check the console output for any critical errors logged during startup. The broker logs Server is now live when it is ready. USD example-broker/bin/artemis run __ __ ____ ____ _ /\ | \/ |/ __ \ | _ \ | | / \ | \ / | | | | | |_) |_ __ ___ | | _____ _ __ / /\ \ | |\/| | | | | | _ <| '__/ _ \| |/ / _ \ '__| / ____ \| | | | |__| | | |_) | | | (_) | < __/ | /_/ \_\_| |_|\___\_\ |____/|_| \___/|_|\_\___|_| Red Hat AMQ <version> 2020-06-03 12:12:11,807 INFO [org.apache.activemq.artemis.integration.bootstrap] AMQ101000: Starting ActiveMQ Artemis Server ... 2020-06-03 12:12:12,336 INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live ... C.3. Creating a queue In a new terminal, use the artemis queue command to create a queue named examples . USD <broker-instance-dir> /bin/artemis queue create --name examples --address examples --auto-create-address --anycast You are prompted to answer a series of yes or no questions. Answer N for no to all of them. Once the queue is created, the broker is ready for use with the example programs. C.4. Stopping the broker When you are done running the examples, use the artemis stop command to stop the broker. USD <broker-instance-dir> /bin/artemis stop Revised on 2021-05-07 10:16:13 UTC
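For convenience, a condensed sketch of the full broker workflow described above; the example-broker instance name, admin credentials, and the <install-dir> placeholder are illustrative:
# Create a broker instance with anonymous access enabled (see Getting Started with AMQ Broker)
<install-dir>/bin/artemis create example-broker --user admin --password admin --allow-anonymous
# Start the broker and wait for "Server is now live" in the console output
example-broker/bin/artemis run
# In a second terminal, create the queue used by the examples
example-broker/bin/artemis queue create --name examples --address examples --auto-create-address --anycast
# Stop the broker when you are done with the examples
example-broker/bin/artemis stop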
[ "<broker-instance-dir> /bin/artemis run", "example-broker/bin/artemis run __ __ ____ ____ _ /\\ | \\/ |/ __ \\ | _ \\ | | / \\ | \\ / | | | | | |_) |_ __ ___ | | _____ _ __ / /\\ \\ | |\\/| | | | | | _ <| '__/ _ \\| |/ / _ \\ '__| / ____ \\| | | | |__| | | |_) | | | (_) | < __/ | /_/ \\_\\_| |_|\\___\\_\\ |____/|_| \\___/|_|\\_\\___|_| Red Hat AMQ <version> 2020-06-03 12:12:11,807 INFO [org.apache.activemq.artemis.integration.bootstrap] AMQ101000: Starting ActiveMQ Artemis Server 2020-06-03 12:12:12,336 INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live", "<broker-instance-dir> /bin/artemis queue create --name examples --address examples --auto-create-address --anycast", "<broker-instance-dir> /bin/artemis stop" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q1/html/using_the_amq_cpp_client/using_the_broker_with_the_examples
5.53. docbook-utils
5.53. docbook-utils 5.53.1. RHBA-2012:1321 - docbook-utils bug fix update Updated docbook-utils packages that fix one bug are now available for Red Hat Enterprise Linux 6. The docbook-utils packages provide a set of utility scripts to convert and analyze SGML documents in general, and DocBook files in particular. The scripts are used to convert from DocBook or other SGML formats into file formats like HTML, man, info, RTF and many more. Bug Fixes BZ# 639866 Prior to this update, the Perl script used for generating manpages contained a misprint in the header. As a consequence, the header syntax of all manual pages that docbook-utils built was wrong. This update corrects the script. Now the manual page headers have the right syntax. All users of docbook-utils are advised to upgrade to these updated packages, which fix this bug.
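As a usage sketch, the conversion scripts shipped in docbook-utils are invoked directly on an SGML source file; mydoc.sgml is a hypothetical file name:
# Convert a DocBook SGML document containing refentry markup to a man page
docbook2man mydoc.sgml
# Convert the same document to HTML or RTF
docbook2html mydoc.sgml
docbook2rtf mydoc.sgml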
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/docbook-utils
Chapter 6. Known issues
Chapter 6. Known issues This section lists the known issues for AMQ Streams 2.1. 6.1. AMQ Streams Cluster Operator on IPv6 clusters The AMQ Streams Cluster Operator does not start on Internet Protocol version 6 (IPv6) clusters. Workaround There are two workarounds for this issue. Workaround one: Set the KUBERNETES_MASTER environment variable Display the address of the Kubernetes master node of your OpenShift Container Platform cluster: oc cluster-info Kubernetes master is running at <master_address> # ... Copy the address of the master node. List all Operator subscriptions: oc get subs -n <operator_namespace> Edit the Subscription resource for AMQ Streams: oc edit sub amq-streams -n <operator_namespace> In spec.config.env , add the KUBERNETES_MASTER environment variable, set to the address of the Kubernetes master node. For example: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: amq-streams namespace: OPERATOR-NAMESPACE spec: channel: amq-streams-1.8.x installPlanApproval: Automatic name: amq-streams source: mirror-amq-streams sourceNamespace: openshift-marketplace config: env: - name: KUBERNETES_MASTER value: MASTER-ADDRESS Save and exit the editor. Check that the Subscription was updated: oc get sub amq-streams -n <operator_namespace> Check that the Cluster Operator Deployment was updated to use the new environment variable: oc get deployment <cluster_operator_deployment_name> Workaround two: Disable hostname verification List all Operator subscriptions: oc get subs -n OPERATOR-NAMESPACE Edit the Subscription resource for AMQ Streams: oc edit sub amq-streams -n <operator_namespace> In spec.config.env , add the KUBERNETES_DISABLE_HOSTNAME_VERIFICATION environment variable, set to true . For example: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: amq-streams namespace: OPERATOR-NAMESPACE spec: channel: amq-streams-1.8.x installPlanApproval: Automatic name: amq-streams source: mirror-amq-streams sourceNamespace: openshift-marketplace config: env: - name: KUBERNETES_DISABLE_HOSTNAME_VERIFICATION value: "true" Save and exit the editor. Check that the Subscription was updated: oc get sub amq-streams -n <operator_namespace> Check that the Cluster Operator Deployment was updated to use the new environment variable: oc get deployment <cluster_operator_deployment_name> 6.2. Cruise Control CPU utilization estimation Cruise Control for AMQ Streams has a known issue that relates to the calculation of CPU utilization estimation. CPU utilization is calculated as a percentage of the defined capacity of a broker pod. The issue occurs when running Kafka brokers across nodes with varying CPU cores. For example, node1 might have 2 CPU cores and node2 might have 4 CPU cores. In this situation, Cruise Control can underestimate and overestimate CPU load of brokers The issue can prevent cluster rebalances when the pod is under heavy load. Workaround There are two workarounds for this issue. Workaround one: Equal CPU requests and limits You can set CPU requests equal to CPU limits in Kafka.spec.kafka.resources . That way, all CPU resources are reserved upfront and are always available. This configuration allows Cruise Control to properly evaluate the CPU utilization when preparing the rebalance proposals based on CPU goals. Workaround two: Exclude CPU goals You can exclude CPU goals from the hard and default goals specified in the Cruise Control configuration. 
Example Cruise Control configuration without CPU goals apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # ... zookeeper: # ... entityOperator: topicOperator: {} userOperator: {} cruiseControl: brokerCapacity: inboundNetwork: 10000KB/s outboundNetwork: 10000KB/s config: hard.goals: > com.linkedin.kafka.cruisecontrol.analyzer.goals.RackAwareGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.MinTopicLeadersPerBrokerGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.ReplicaCapacityGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.DiskCapacityGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.NetworkInboundCapacityGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.NetworkOutboundCapacityGoal default.goals: > com.linkedin.kafka.cruisecontrol.analyzer.goals.RackAwareGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.MinTopicLeadersPerBrokerGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.ReplicaCapacityGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.DiskCapacityGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.NetworkInboundCapacityGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.NetworkOutboundCapacityGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.ReplicaDistributionGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.PotentialNwOutGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.DiskUsageDistributionGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.NetworkInboundUsageDistributionGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.NetworkOutboundUsageDistributionGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.TopicReplicaDistributionGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.LeaderReplicaDistributionGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.LeaderBytesInDistributionGoal For more information, see Insufficient CPU capacity .
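For workaround one above, a minimal sketch of equal CPU requests and limits in Kafka.spec.kafka.resources ; the CPU and memory values are illustrative:
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # Requests equal to limits reserve CPU upfront, so Cruise Control evaluates
    # utilization against a fixed, guaranteed capacity
    resources:
      requests:
        cpu: "2"
        memory: 8Gi
      limits:
        cpu: "2"
        memory: 8Gi
    # ...
  zookeeper:
    # ...
  entityOperator:
    topicOperator: {}
    userOperator: {}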
[ "cluster-info Kubernetes master is running at <master_address>", "get subs -n <operator_namespace>", "edit sub amq-streams -n <operator_namespace>", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: amq-streams namespace: OPERATOR-NAMESPACE spec: channel: amq-streams-1.8.x installPlanApproval: Automatic name: amq-streams source: mirror-amq-streams sourceNamespace: openshift-marketplace config: env: - name: KUBERNETES_MASTER value: MASTER-ADDRESS", "get sub amq-streams -n <operator_namespace>", "get deployment <cluster_operator_deployment_name>", "get subs -n OPERATOR-NAMESPACE", "edit sub amq-streams -n <operator_namespace>", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: amq-streams namespace: OPERATOR-NAMESPACE spec: channel: amq-streams-1.8.x installPlanApproval: Automatic name: amq-streams source: mirror-amq-streams sourceNamespace: openshift-marketplace config: env: - name: KUBERNETES_DISABLE_HOSTNAME_VERIFICATION value: \"true\"", "get sub amq-streams -n <operator_namespace>", "get deployment <cluster_operator_deployment_name>", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # zookeeper: # entityOperator: topicOperator: {} userOperator: {} cruiseControl: brokerCapacity: inboundNetwork: 10000KB/s outboundNetwork: 10000KB/s config: hard.goals: > com.linkedin.kafka.cruisecontrol.analyzer.goals.RackAwareGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.MinTopicLeadersPerBrokerGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.ReplicaCapacityGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.DiskCapacityGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.NetworkInboundCapacityGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.NetworkOutboundCapacityGoal default.goals: > com.linkedin.kafka.cruisecontrol.analyzer.goals.RackAwareGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.MinTopicLeadersPerBrokerGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.ReplicaCapacityGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.DiskCapacityGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.NetworkInboundCapacityGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.NetworkOutboundCapacityGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.ReplicaDistributionGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.PotentialNwOutGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.DiskUsageDistributionGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.NetworkInboundUsageDistributionGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.NetworkOutboundUsageDistributionGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.TopicReplicaDistributionGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.LeaderReplicaDistributionGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.LeaderBytesInDistributionGoal" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.1/html/release_notes_for_amq_streams_2.1_on_openshift/known-issues-str
Chapter 15. Security
Chapter 15. Security New packages: tang , clevis , jose , luksmeta Network Bound Disk Encryption (NBDE) allows the user to encrypt root volumes of the hard drives on physical and virtual machines without requiring to manually enter password when systems are rebooted. Tang is a server for binding data to network presence. It includes a daemon which provides cryptographic operations for binding to a remote service. The tang package provides the server side of the NBDE project. Clevis is a pluggable framework for automated decryption. It can be used to provide automated decryption of data or even automated unlocking of LUKS volumes. The clevis package provides the client side of the NBDE project. Jose is a C-language implementation of the Javascript Object Signing and Encryption standards. The jose package is a dependency of the clevis and tang packages. LUKSMeta is a simple library for storing metadata in the LUKSv1 header. The luksmeta package is a dependency of the clevis and tang packages. Note that the tang-nagios and clevis-udisk2 subpackages are available only as a Technology Preview. (BZ# 1300697 , BZ#1300696, BZ#1399228, BZ#1399229) New package: usbguard The USBGuard software framework provides system protection against intrusive USB devices by implementing basic whitelisting and blacklisting capabilities based on device attributes. To enforce a user-defined policy, USBGuard uses the Linux kernel USB device authorization feature. The USBGuard framework provides the following components: The daemon component with an inter-process communication (IPC) interface for dynamic interaction and policy enforcement The command-line interface to interact with a running USBGuard instance The rule language for writing USB device authorization policies The C++ API for interacting with the daemon component implemented in a shared library (BZ#1395615) openssh rebased to version 7.4 The openssh package has been updated to upstream version 7.4, which provides a number of enhancements, new features, and bug fixes, including: Added support for the resumption of interrupted uploads in SFTP . Added the extended log format for the authentication failure messages. Added a new fingerprint type that uses the SHA-256 algorithm. Added support for using PKCS#11 devices with external PIN entry devices. Removed support for the SSH-1 protocol from the OpenSSH server. Removed support for the legacy v00 cert format. Added the PubkeyAcceptedKeyTypes and HostKeyAlgorithms configuration options for the ssh utility and the sshd daemon to allow disabling key types selectively. Added the AddKeysToAgent option for the OpenSSH client. Added the ProxyJump ssh option and the corresponding -J command-line flag. Added support for key exchange methods for the Diffie-Hellman 2K, 4K, and 8K groups. Added the Include directive for the ssh_config file. Removed support for the UseLogin option. Removed support for the pre-authentication compression in the server. The seccomp filter is now used for the pre-authentication process. (BZ#1341754) audit rebased to version 2.7.6 The audit packages have been updated to upstream version 2.7.6, which provides a number of enhancements, new features, and bug fixes, including: The auditd service now automatically adjusts logging directory permissions when it starts up. This helps keep directory permissions correct after performing a package upgrade. The ausearch utility has a new --format output option. The --format text option presents an event as an English sentence describing what is happening. 
The --format csv option normalizes logs into a subject, object, action, results, and how it occurred in addition to some metadata fields which is output in the Comma Separated Value (CSV) format. This is suitable for pushing event information into a database, spreadsheet, or other analytic programs to view, chart, or analyze audit events. The auditctl utility can now reset the lost event counter in the kernel through the --reset-lost command-line option. This makes checking for lost events easier since you can reset the value to zero daily. ausearch and aureport now have a boot option for the --start command-line option to find events since the system booted. ausearch and aureport provide a new --escape command-line option to better control what kind of escaping is done to audit fields. It currently supports raw , tty , shell , and shell_quote escaping. auditctl no longer allows rules with the entry filter. This filter has not been supported since Red Hat Enterprise Linux 5. Prior to this release, on Red Hat Enterprise Linux 6 and 7, auditctl moved any entry rule to the exit filter and displayed a warning that the entry filter is deprecated. (BZ# 1381601 ) opensc rebased to version 0.16.0 The OpenSC set of libraries and utilities provides support for working with smart cards. OpenSC focuses on cards that support cryptographic operations and enables their use for authentication, mail encryption, or digital signatures. Notable enhancements in Red Hat Enterprise Linux 7.4 include: OpenSC adds support for Common Access Card (CAC) cards. OpenSC implements the PKCS#11 API and now provides also the CoolKey applet functionality. The opensc packages replace the coolkey packages. Note that the coolkey packages will remain supported for the lifetime of Red Hat Enterprise Linux 7, but new hardware enablement will be provided through the opensc packages. (BZ# 1081088 , BZ# 1373164 ) openssl rebased to version 1.0.2k The openssl package has been updated to upstream version 1.0.2k, which provides a number of enhancements, new features, and bug fixes, including: Added support for the Datagram Transport Layer Security TLS (DTLS) protocol version 1.2. Added support for the automatic elliptic curve selection for the ECDHE key exchange in TLS. Added support for the Application-Layer Protocol Negotiation (ALPN). Added Cryptographic Message Syntax (CMS) support for the following schemes: RSA-PSS, RSA-OAEP, ECDH, and X9.42 DH. Note that this version is compatible with the API and ABI in the OpenSSL library version in releases of Red Hat Enterprise Linux 7. (BZ# 1276310 ) openssl-ibmca rebased to version 1.3.0 The openssl-ibmca package has been updated to upstream version 1.3.0, which provides a number of bug fixes and enhancements over the version. Notable changes include: Added support for SHA-512. Cryptographic methods are dynamically loaded when the ibmca engine starts. This enables ibmca to direct cryptographic methods if they are supported in hardware through the libica library. Fixed a bug in block-size handling with stream cipher modes. (BZ#1274385) OpenSCAP 1.2 is NIST-certified OpenSCAP 1.2, the Security Content Automation Protocol (SCAP) scanner, has been certified by the National Institute of Standards and Technology (NIST) as a U. S. government-evaluated configuration and vulnerability scanner for Red Hat Enterprise Linux 6 and 7. 
OpenSCAP analyzes and evaluates security automation content correctly and it provides the functionality and documentation required by NIST to run in sensitive, security-conscious environments. Additionally, OpenSCAP is the first NIST-certified configuration scanner for evaluating Linux containers. Use cases include evaluating the configuration of Red Hat Enterprise Linux 7 hosts for PCI and DoD Security Technical Implementation Guide (STIG) compliance, as well as performing known vulnerability scans using Red Hat Common Vulnerabilities and Exposures (CVE) data. (BZ#1363826) libreswan rebased to version 3.20 The libreswan packages have been upgraded to upstream version 3.20, which provides a number of bug fixes and enhancements over the version. Notable enhancements include: Added support for Opportunistic IPsec (Mesh Encryption), which enables IPsec deployments that cover a large number of hosts using a single simple configuration on all hosts. FIPS further tightened. Added support for routed-based VPN using Virtual Tunnel Interface (VTI). Improved support for non-root configurations. Improved Online Certificate Status Protocol (OCSP) and Certificate Revocation Lists (CRL) support. Added new whack command options: --fipsstatus , --fetchcrls , --globalstatus , and --shuntstatus . Added support for the NAT Opportunistic Encryption (OE) Client Address Translation: leftcat=yes . Added support for the Traffic Flow Confidentiality mechanism: tfc= . Updated cipher preferences as per RFC 4307bis and RFC 7321bis. Added support for Extended Sequence Numbers (ESN): esn=yes . Added support for disabling and increasing the replay window: replay-window= . (BZ# 1399883 ) Audit now supports filtering based on session ID With this update, the Linux Audit system supports user rules to filter audit messages based on the sessionid value. (BZ#1382504) libseccomp now supports IBM Power architectures With this update, the libseccomp library supports the IBM Power, 64-bit IBM Power, and 64-bit little-endian IBM Power architectures, which enables the GNOME rebase. (BZ# 1425007 ) AUDIT_KERN_MODULE now records module loading The AUDIT_KERN_MODULE auxiliary record has been added to AUDIT_SYSCALL records for the init_module() , finit_module() , and delete_module() functions. This information is stored in the audit_context structure. (BZ#1382500) OpenSSH now uses SHA-2 for public key signatures Previously, OpenSSH used the SHA-1 hash algorithm for public key signatures using RSA and DSA keys. SHA-1 is no longer considered secure, and new SSH protocol extension allows to use SHA-2. With this update, SHA-2 is the default algorithm for public key signatures. SHA-1 is available only for backward compatibility purposes. (BZ#1322911) firewalld now supports additional IP sets With this update of the firewalld service daemon, support for the following ipset types has been added: hash:ip,port hash:ip,port,ip hash:ip,port,net hash:ip,mark hash:net,net hash:net,port hash:net,port,net hash:net,iface The following ipset types that provide a combination of sources and destinations at the same time are not supported as sources in firewalld . 
IP sets using these types are created by firewalld , but their usage is limited to direct rules: hash:ip,port,ip hash:ip,port,net hash:net,net hash:net,port,net The ipset packages have been rebased to upstream version 6.29, and the following ipset types are now additionally supported: hash:mac hash:net,port,net hash:net,net hash:ip,mark (BZ# 1419058 ) firewalld now supports actions on ICMP types in rich rules With this update, the firewalld service daemon allows using Internet Control Message Protocol (ICMP) types in rich rules with the accept, log and mark actions. (BZ# 1409544 ) firewalld now supports disabled automatic helper assignment This update of the firewalld service daemon introduces support for the disabled automatic helper assignment feature. firewalld helpers can be now used without adding additional rules also if automatic helper assignment is turned off. (BZ#1006225) nss and nss-util now use SHA-256 by default With this update, the default configuration of the NSS library has been changed to use a stronger hash algorithm when creating digital signatures. With RSA, EC, and 2048-bit (or longer) DSA keys, the SHA-256 algorithm is now used. Note that also the NSS utilities, such as certutil , crlutil , and cmsutil , now use SHA-256 in their default configurations. (BZ# 1309781 ) Audit filter exclude rules now contain additional fields The exclude filter has been enhanced, and it now contains not only the msgtype field, but also the pid , uid , gid , auid , sessionID , and SELinux types. (BZ#1382508) PROCTITLE now provides the full command in Audit events This update introduces the PROCTITLE record addition to Audit events. PROCTITLE provides the full command being executed. The PROCTITLE value is encoded so it is not able to circumvent the Audit event parser. Note that the PROCTITLE value is still not trusted since it is manipulable by the user-space date. (BZ#1299527) nss-softokn rebased to version 3.28.3 The nss-softokn packages have been upgraded to upstream version 3.28.3, which provides a number of bug fixes and enhancements over the version: Added support for the ChaCha20-Poly1305 (RFC 7539) algorithm used by TLS (RFC 7905), the Internet Key Exchange Protocol (IKE), and IPsec (RFC 7634). For key exchange purposes, added support for the Curve25519/X25519 curve. Added support for the Extended Master Secret (RFC 7627) extension. (BZ# 1369055 ) libica rebased to version 3.0.2 The libica package has been upgraded to upstream version 3.0.2, which provides a number of fixes over the version. Notable additions include support for Federal Information Processing Standards (FIPS) mode support for generating pseudorandom numbers, including enhanced support for Deterministic Random Bit Generator compliant with the updated security specification NIST SP 800-90A. (BZ#1391558) opencryptoki rebased to version 3.6.2 The opencryptoki packages have been upgraded to upstream version 3.6.2, which provides a number of bug fixes and enhancements over the version: Added support for OpenSSL 1.1 Replaced deprecated OpenSSL interfaces. Replaced deprecated libica interfaces. Improved performance for IBM Crypto Accelerator (ICA). Added support for the rc=8, reasoncode=2028 error message in the icsf token. (BZ#1391559) AUDIT_NETFILTER_PKT events are now normalized The AUDIT_NETFILTER_PKT audit events are now simplified and message fields are now displayed in a consistent manner. 
(BZ#1382494) p11tool now supports writing objects by specifying a stored ID With this update, the p11tool GnuTLS PKCS#11 tool supports the new --id option to write objects by specifying a stored ID. This allows the written object to be addressable by more applications than p11tool . (BZ# 1399232 ) new package: nss-pem This update introduces the nss-pem package, which previously was part of the nss packages, as a separate package. The nss-pem package provides the PEM file reader for Network Security Services (NSS) implemented as a PKCS#11 module. (BZ#1316546) pmrfc3164 replaces pmrfc3164sd in rsyslog With the update of the rsyslog packages, the pmrfc3164sd module, which is used for parsing logs in the BSD syslog protocol format (RFC 3164), has been replaced by the official pmrfc3164 module. The official module does not fully cover the pmrfc3164sd functionality, so pmrfc3164sd is still available in rsyslog . However, it is recommended to use the new pmrfc3164 module wherever possible. The pmrfc3164sd module is no longer supported. (BZ#1431616) libreswan now supports right=%opportunisticgroup With this update, the %opportunisticgroup value for the right option in the conn part of the Libreswan configuration is supported. This allows opportunistic IPsec with X.509 authentication, which significantly reduces the administrative overhead in large environments. (BZ#1324458) ca-certificates now meet Mozilla Firefox 52.2 ESR requirements The Network Security Services (NSS) code and Certificate Authority (CA) list have been updated to meet the recommendations as published with the latest Mozilla Firefox Extended Support Release (ESR). The updated CA list improves compatibility with the certificates that are used in the Internet Public Key Infrastructure (PKI). To avoid certificate validation refusals, Red Hat recommends installing the updated CA list on June 12, 2017. (BZ#1444413) nss now meets Mozilla Firefox 52.2 ESR requirements for certificates The Certificate Authority (CA) list has been updated to meet the recommendations as published with the latest Mozilla Firefox Extended Support Release (ESR). The updated CA list improves compatibility with the certificates that are used in the Internet Public Key Infrastructure (PKI). To avoid certificate validation refusals, Red Hat recommends installing the updated CA list on June 12, 2017. (BZ#1444414) scap-security-guide rebased to version 0.1.33 The scap-security-guide packages have been upgraded to upstream version 0.1.33, which provides a number of bug fixes and enhancements over the previous version. In particular, this new version enhances existing compliance profiles and expands the scope of coverage to include two new configuration baselines: Extended support for PCI-DSS v3 Control Baseline Extended support for United States Government Commercial Cloud Services (C2S). Extended support for Red Hat Corporate Profile for Certified Cloud Providers. Added support for the Defense Information Systems Agency (DISA) Security Technical Implementation Guide (STIG) for Red Hat Enterprise Linux 7 profile, aligning to the DISA STIG for Red Hat Enterprise Linux V1R1 profile. Added support for the Unclassified Information in Non-federal Information Systems and Organizations (NIST 800-171) profile, which configures Red Hat Enterprise Linux 7 to the NIST Special Publication 800-53 controls identified for securing Controlled Unclassified Information (CUI). Added support for the United States Government Configuration Baseline (USGCB/STIG) profile, developed in partnership with the U.
S. National Institute of Standards and Technology (NIST), U. S. Department of Defense, the National Security Agency, and Red Hat. The USGCB/STIG profile implements configuration requirements from the following documents: Committee on National Security Systems Instruction No. 1253 (CNSSI 1253) NIST Controlled Unclassified Information (NIST 800-171) NIST 800-53 control selections for moderate impact systems (NIST 800-53) U. S. Government Configuration Baseline (USGCB) NIAP Protection Profile for General Purpose Operating Systems v4.0 (OSPP v4.0) DISA Operating System Security Requirements Guide (OS SRG) Note that several previously-contained profiles have been removed or merged. (BZ# 1410914 )
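As an illustrative sketch of the NBDE workflow introduced at the start of this chapter, the Tang server provides the network-bound key and Clevis encrypts against it; the tang.example.com URL and the file names are placeholders:
# On the Tang server
yum install tang
systemctl enable tangd.socket
systemctl start tangd.socket
# On the client, bind a secret to the Tang server's network presence
yum install clevis
clevis encrypt tang '{"url":"http://tang.example.com"}' < secret.txt > secret.jwe
# Decryption succeeds only while the Tang server is reachable
clevis decrypt < secret.jwe > secret.out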
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.4_release_notes/new_features_security
Chapter 82. Vaults in IdM
Chapter 82. Vaults in IdM This chapter describes vaults in Identity Management (IdM). It introduces the following topics: The concept of the vault . The different roles associated with a vault . The different types of vaults available in IdM based on the level of security and access control . The different types of vaults available in IdM based on ownership . The concept of vault containers . The basic commands for managing vaults in IdM . Installing the key recovery authority (KRA), which is a prerequisite for using vaults in IdM . 82.1. Vaults and their benefits A vault is a useful feature for those Identity Management (IdM) users who want to keep all their sensitive data stored securely but conveniently in one place. There are various types of vaults and you should choose which vault to use based on your requirements. A vault is a secure location in (IdM) for storing, retrieving, sharing, and recovering a secret. A secret is security-sensitive data, usually authentication credentials, that only a limited group of people or entities can access. For example, secrets include: Passwords PINs Private SSH keys A vault is comparable to a password manager. Just like a password manager, a vault typically requires a user to generate and remember one primary password to unlock and access any information stored in the vault. However, a user can also decide to have a standard vault. A standard vault does not require the user to enter any password to access the secrets stored in the vault. Note The purpose of vaults in IdM is to store authentication credentials that allow you to authenticate to external, non-IdM-related services. Other important characteristics of the IdM vaults are: Vaults are only accessible to the vault owner and those IdM users that the vault owner selects to be the vault members. In addition, the IdM administrator has access to the vault. If a user does not have sufficient privileges to create a vault, an IdM administrator can create the vault and set the user as its owner. Users and services can access the secrets stored in a vault from any machine enrolled in the IdM domain. One vault can only contain one secret, for example, one file. However, the file itself can contain multiple secrets such as passwords, keytabs or certificates. Note Vault is only available from the IdM command line (CLI), not from the IdM Web UI. 82.2. Vault owners, members, and administrators Identity Management (IdM) distinguishes the following vault user types: Vault owner A vault owner is a user or service with basic management privileges on the vault. For example, a vault owner can modify the properties of the vault or add new vault members. Each vault must have at least one owner. A vault can also have multiple owners. Vault member A vault member is a user or service that can access a vault created by another user or service. Vault administrator Vault administrators have unrestricted access to all vaults and are allowed to perform all vault operations. Note Symmetric and asymmetric vaults are protected with a password or key and apply special access control rules (see Vault types ). The administrator must meet these rules to: Access secrets in symmetric and asymmetric vaults. Change or reset the vault password or key. A vault administrator is any user with the Vault Administrators privilege. In the context of the role-based access control (RBAC) in IdM, a privilege is a group of permissions that you can apply to a role. 
Vault User The vault user represents the user in whose container the vault is located. The Vault user information is displayed in the output of specific commands, such as ipa vault-show : For details on vault containers and user vaults, see Vault containers . Additional resources See Standard, symmetric and asymmetric vaults for details on vault types. 82.3. Standard, symmetric, and asymmetric vaults Based on the level of security and access control, IdM classifies vaults into the following types: Standard vaults Vault owners and vault members can archive and retrieve the secrets without having to use a password or key. Symmetric vaults Secrets in the vault are protected with a symmetric key. Vault owners and members can archive and retrieve the secrets, but they must provide the vault password. Asymmetric vaults Secrets in the vault are protected with an asymmetric key. Users archive the secret using a public key and retrieve it using a private key. Vault members can only archive secrets, while vault owners can do both, archive and retrieve secrets. 82.4. User, service, and shared vaults Based on ownership, IdM classifies vaults into several types. The table below contains information about each type, its owner and use. Table 82.1. IdM vaults based on ownership Type Description Owner Note User vault A private vault for a user A single user Any user can own one or more user vaults if allowed by IdM administrator Service vault A private vault for a service A single service Any service can own one or more user vaults if allowed by IdM administrator Shared vault A vault shared by multiple users and services The vault administrator who created the vault Users and services can own one or more user vaults if allowed by IdM administrator. The vault administrators other than the one that created the vault also have full access to the vault. 82.5. Vault containers A vault container is a collection of vaults. The table below lists the default vault containers that Identity Management (IdM) provides. Table 82.2. Default vault containers in IdM Type Description Purpose User container A private container for a user Stores user vaults for a particular user Service container A private container for a service Stores service vaults for a particular service Shared container A container for multiple users and services Stores vaults that can be shared by multiple users or services IdM creates user and service containers for each user or service automatically when the first private vault for the user or service is created. After the user or service is deleted, IdM removes the container and its contents. 82.6. Basic IdM vault commands You can use the basic commands outlined below to manage Identity Management (IdM) vaults. The table below contains a list of ipa vault-* commands with the explanation of their purpose. Note Before running any ipa vault-* command, install the Key Recovery Authority (KRA) certificate system component on one or more of the servers in your IdM domain. For details, see Installing the Key Recovery Authority in IdM . Table 82.3. Basic IdM vault commands with explanations Command Purpose ipa help vault Displays conceptual information about IdM vaults and sample vault commands. ipa vault-add --help , ipa vault-find --help Adding the --help option to a specific ipa vault-* command displays the options and detailed help available for that command. ipa vault-show user_vault --user idm_user When accessing a vault as a vault member, you must specify the vault owner. 
If you do not specify the vault owner, IdM informs you that it did not find the vault: ipa vault-show shared_vault --shared When accessing a shared vault, you must specify that the vault you want to access is a shared vault. Otherwise, IdM informs you it did not find the vault: 82.7. Installing the Key Recovery Authority in IdM Follow this procedure to enable vaults in Identity Management (IdM) by installing the Key Recovery Authority (KRA) Certificate System (CS) component on a specific IdM server. Prerequisites You are logged in as root on the IdM server. An IdM certificate authority is installed on the IdM server. You have the Directory Manager credentials. Procedure Install the KRA: Important You can install the first KRA of an IdM cluster on a hidden replica. However, installing additional KRAs requires temporarily activating the hidden replica before you install the KRA clone on a non-hidden replica. Then you can hide the originally hidden replica again. Note To make the vault service highly available and resilient, install the KRA on two IdM servers or more. Maintaining multiple KRA servers prevents data loss. Additional resources Demoting or promoting hidden replicas The hidden replica mode
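As a usage sketch, a typical standard vault lifecycle with the basic commands listed above; the vault and file names are illustrative:
# Create a standard vault, archive a secret into it, and retrieve it later
ipa vault-add my_vault --type standard
ipa vault-archive my_vault --in secret.txt
ipa vault-retrieve my_vault --out secret-out.txt
# A vault member accessing another user's vault must name the vault owner
ipa vault-retrieve user_vault --user idm_user --out secret-out.txt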
[ "ipa vault-show my_vault Vault name: my_vault Type: standard Owner users: user Vault user: user", "[admin@server ~]USD ipa vault-show user_vault ipa: ERROR: user_vault: vault not found", "[admin@server ~]USD ipa vault-show shared_vault ipa: ERROR: shared_vault: vault not found", "ipa-kra-install" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_and_managing_identity_management/vaults-in-idm_configuring-and-managing-idm
Chapter 11. LocalSubjectAccessReview [authorization.k8s.io/v1]
Chapter 11. LocalSubjectAccessReview [authorization.k8s.io/v1] Description LocalSubjectAccessReview checks whether or not a user or group can perform an action in a given namespace. Having a namespace scoped resource makes it much easier to grant namespace scoped policy that includes permissions checking. Type object Required spec 11.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard list metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object SubjectAccessReviewSpec is a description of the access request. Exactly one of ResourceAuthorizationAttributes and NonResourceAuthorizationAttributes must be set status object SubjectAccessReviewStatus 11.1.1. .spec Description SubjectAccessReviewSpec is a description of the access request. Exactly one of ResourceAuthorizationAttributes and NonResourceAuthorizationAttributes must be set Type object Property Type Description extra object Extra corresponds to the user.Info.GetExtra() method from the authenticator. Since that is input to the authorizer it needs a reflection here. extra{} array (string) groups array (string) Groups is the groups you're testing for. nonResourceAttributes object NonResourceAttributes includes the authorization attributes available for non-resource requests to the Authorizer interface resourceAttributes object ResourceAttributes includes the authorization attributes available for resource requests to the Authorizer interface uid string UID information about the requesting user. user string User is the user you're testing for. If you specify "User" but not "Groups", then is it interpreted as "What if User were not a member of any groups 11.1.2. .spec.extra Description Extra corresponds to the user.Info.GetExtra() method from the authenticator. Since that is input to the authorizer it needs a reflection here. Type object 11.1.3. .spec.nonResourceAttributes Description NonResourceAttributes includes the authorization attributes available for non-resource requests to the Authorizer interface Type object Property Type Description path string Path is the URL path of the request verb string Verb is the standard HTTP verb 11.1.4. .spec.resourceAttributes Description ResourceAttributes includes the authorization attributes available for resource requests to the Authorizer interface Type object Property Type Description group string Group is the API Group of the Resource. "*" means all. name string Name is the name of the resource being requested for a "get" or deleted for a "delete". "" (empty) means all. namespace string Namespace is the namespace of the action being requested. 
Currently, there is no distinction between no namespace and all namespaces "" (empty) is defaulted for LocalSubjectAccessReviews "" (empty) is empty for cluster-scoped resources "" (empty) means "all" for namespace scoped resources from a SubjectAccessReview or SelfSubjectAccessReview resource string Resource is one of the existing resource types. "*" means all. subresource string Subresource is one of the existing resource types. "" means none. verb string Verb is a kubernetes resource API verb, like: get, list, watch, create, update, delete, proxy. "*" means all. version string Version is the API Version of the Resource. "*" means all. 11.1.5. .status Description SubjectAccessReviewStatus Type object Required allowed Property Type Description allowed boolean Allowed is required. True if the action would be allowed, false otherwise. denied boolean Denied is optional. True if the action would be denied, otherwise false. If both allowed is false and denied is false, then the authorizer has no opinion on whether to authorize the action. Denied may not be true if Allowed is true. evaluationError string EvaluationError is an indication that some error occurred during the authorization check. It is entirely possible to get an error and be able to continue determine authorization status in spite of it. For instance, RBAC can be missing a role, but enough roles are still present and bound to reason about the request. reason string Reason is optional. It indicates why a request was allowed or denied. 11.2. API endpoints The following API endpoints are available: /apis/authorization.k8s.io/v1/namespaces/{namespace}/localsubjectaccessreviews POST : create a LocalSubjectAccessReview 11.2.1. /apis/authorization.k8s.io/v1/namespaces/{namespace}/localsubjectaccessreviews Table 11.1. Global query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. HTTP method POST Description create a LocalSubjectAccessReview Table 11.2. Body parameters Parameter Type Description body LocalSubjectAccessReview schema Table 11.3. HTTP responses HTTP code Reponse body 200 - OK LocalSubjectAccessReview schema 201 - Created LocalSubjectAccessReview schema 202 - Accepted LocalSubjectAccessReview schema 401 - Unauthorized Empty
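As an illustrative sketch, a LocalSubjectAccessReview can be created from a manifest such as the following; the namespace, user, and resource values are hypothetical:
apiVersion: authorization.k8s.io/v1
kind: LocalSubjectAccessReview
metadata:
  namespace: my-project   # namespace the check is scoped to
spec:
  user: jane              # user whose access is being tested
  resourceAttributes:
    namespace: my-project
    verb: get
    group: ""
    resource: pods
Creating the resource, for example with oc create -f lsar.yaml -o yaml (where lsar.yaml is a hypothetical file holding the manifest above), returns the object with status.allowed populated by the authorizer.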
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/authorization_apis/localsubjectaccessreview-authorization-k8s-io-v1
8.3. Backing up Keys on Hardware Security Modules
8.3. Backing up Keys on Hardware Security Modules It is not possible to export keys and certificates stored on an HSM to a .p12 file. If such an instance must be backed up, contact the manufacturer of your HSM for support.
null
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/planning_installation_and_deployment_guide/backing_up_keys_on_hardware_security_modules_bis
Chapter 5. Integrating by using generic webhooks
Chapter 5. Integrating by using generic webhooks With Red Hat Advanced Cluster Security for Kubernetes, you can send alert notifications as JSON messages to any webhook receiver. When a violation occurs, Red Hat Advanced Cluster Security for Kubernetes makes an HTTP POST request on the configured URL. The POST request body includes JSON-formatted information about the alert. The webhook POST request's JSON data includes a v1.Alert object and any custom fields that you configure, as shown in the following example: { "alert": { "id": "<id>", "time": "<timestamp>", "policy": { "name": "<name>", ... }, ... }, "<custom_field_1>": "<custom_value_1>" } You can create multiple webhooks. For example, you can create one webhook for receiving all audit logs and another webhook for alert notifications. To forward alerts from Red Hat Advanced Cluster Security for Kubernetes to any webhook receiver: Set up a webhook URL to receive alerts. Use the webhook URL to set up notifications in Red Hat Advanced Cluster Security for Kubernetes. Identify the policies you want to send notifications for, and update the notification settings for those policies. 5.1. Configuring integrations by using webhooks Create a new integration in Red Hat Advanced Cluster Security for Kubernetes by using the webhook URL. Procedure In the RHACS portal, go to Platform Configuration Integrations . Scroll down to the Notifier Integrations section and select Generic Webhook . Click New integration . Enter a name for Integration name . Enter the webhook URL in the Endpoint field. If your webhook receiver uses an untrusted certificate, enter a CA certificate in the CA certificate field. Otherwise, leave it blank. Note The server certificate used by the webhook receiver must be valid for the endpoint DNS name. You can click Skip TLS verification to ignore this validation. Red Hat does not suggest turning off TLS verification. Without TLS verification, data could be intercepted by an unintended recipient. Optional: Click Enable audit logging to receive alerts about all the changes made in Red Hat Advanced Cluster Security for Kubernetes. Note Red Hat suggests using separate webhooks for alerts and audit logs to handle these messages differently. To authenticate with the webhook receiver, enter details for one of the following: Username and Password for basic HTTP authentication Custom Header , for example: Authorization: Bearer <access_token> Use Extra fields to include additional key-value pairs in the JSON object that Red Hat Advanced Cluster Security for Kubernetes sends. For example, if your webhook receiver accepts objects from multiple sources, you can add "source": "rhacs" as an extra field and filter on this value to identify all alerts from Red Hat Advanced Cluster Security for Kubernetes. Select Test to send a test message to verify that the integration with your generic webhook is working. Select Save to create the configuration. 5.2. Configuring policy notifications Enable alert notifications for system policies. Procedure In the RHACS portal, go to Platform Configuration Policy Management . Select one or more policies for which you want to send alerts. Under Bulk actions , select Enable notification . In the Enable notification window, select the webhook notifier. Note If you have not configured any other integrations, the system displays a message that no notifiers are configured. Click Enable . Note Red Hat Advanced Cluster Security for Kubernetes sends notifications on an opt-in basis. 
To receive notifications, you must first assign a notifier to the policy. Notifications are only sent once for a given alert. If you have assigned a notifier to a policy, you will not receive a notification unless a violation generates a new alert. Red Hat Advanced Cluster Security for Kubernetes creates a new alert for the following scenarios: A policy violation occurs for the first time in a deployment. A runtime-phase policy violation occurs in a deployment after you resolved the runtime alert for a policy in that deployment.
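To exercise a receiver endpoint by hand, a request shaped like the RHACS webhook message can be sent with curl; the URL and bearer token are placeholders, and the source extra field is the example value mentioned above:
# Send a test payload that mimics the alert JSON plus a custom extra field
curl -X POST https://webhook.example.com/rhacs \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer <access_token>' \
  -d '{"alert":{"id":"<id>","time":"<timestamp>","policy":{"name":"<name>"}},"source":"rhacs"}'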
[ "{ \"alert\": { \"id\": \"<id>\", \"time\": \"<timestamp>\", \"policy\": { \"name\": \"<name>\", }, }, \"<custom_field_1>\": \"<custom_value_1>\" }" ]
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.6/html/integrating/integrate-using-generic-webhooks
Chapter 11. Data privacy for Projects
Chapter 11. Data privacy for Projects OpenStack is designed to support multi-tenancy between projects with different data requirements. A cloud operator will need to consider their applicable data privacy concerns and regulations. This chapter addresses aspects of data residency and disposal for OpenStack deployments. 11.1. Data privacy concerns This section describes some concerns that can arise for data privacy in OpenStack deployments. 11.1.1. Data residency The privacy and isolation of data has consistently been cited as the primary barrier to cloud adoption over the past few years. Concerns over who owns data in the cloud and whether the cloud operator can be ultimately trusted as a custodian of this data have been significant issues in the past. Certain OpenStack services have access to data and metadata belonging to projects or reference project information. For example, project data stored in an OpenStack cloud might include the following items: Object Storage objects. Compute instance ephemeral filesystem storage. Compute instance memory. Block Storage volume data. Public keys for Compute access. Virtual machine images in the Image service. Instance snapshots. Data passed to Compute's configuration-drive extension. Metadata stored by an OpenStack cloud includes the following items (this list is non-exhaustive): Organization name. User's "Real Name". Number or size of running instances, buckets, objects, volumes, and other quota-related items. Number of hours running instances or storing data. IP addresses of users. Internally generated private keys for compute image bundling. 11.1.2. Data disposal Good practices suggest that the operator must sanitize cloud system media (digital and non-digital) prior to disposal, prior to release out of organization control, or prior to release for reuse. Sanitization methods should implement an appropriate level of strength and integrity given the specific security domain and sensitivity of the information. Note The NIST Special Publication 800-53 Revision 4 takes a particular view on this topic: Cloud operators should consider the following when developing general data disposal and sanitization guidelines (as per the NIST recommended security controls): Track, document and verify media sanitization and disposal actions. Test sanitation equipment and procedures to verify proper performance. Sanitize portable, removable storage devices prior to connecting such devices to the cloud infrastructure. Destroy cloud system media that cannot be sanitized. As a result, an OpenStack deployment will need to address the following practices (among others): Secure data erasure Instance memory scrubbing Block Storage volume data Compute instance ephemeral storage Bare metal server sanitization 11.1.3. Data not securely erased Within OpenStack some data might be deleted, but not securely erased in the context of the NIST standards outlined above. This is generally applicable to most or all of the above-defined metadata and information stored in the database. This might be remediated with database and/or system configuration for auto vacuuming and periodic free-space wiping. 11.1.4. Instance memory scrubbing Specific to various hypervisors is the treatment of instance memory. This behavior is not defined in Compute, although it is generally expected of hypervisors that they will make a best effort to scrub memory either upon deletion of an instance, upon creation of an instance, or both. 11.1.5. 
Encrypting cinder volume data Use of the OpenStack volume encryption feature is highly encouraged. This is discussed below in the Data Encryption section under Volume Encryption. When this feature is used, destruction of data is accomplished by securely deleting the encryption key. The end user can select this feature while creating a volume, but note that an admin must perform a one-time set up of the volume encryption feature first. If the OpenStack volume encryption feature is not used, then other approaches generally would be more difficult to enable. If a back-end plug-in is being used, there might be independent ways of doing encryption or non-standard overwrite solutions. Plug-ins to OpenStack Block Storage will store data in a variety of ways. Many plug-ins are specific to a vendor or technology, whereas others are more DIY solutions around filesystems (such as LVM or ZFS). Methods for securely destroying data will vary between plug-ins, vendors, and filesystems. Some back ends (such as ZFS) will support copy-on-write to prevent data exposure. In these cases, reads from unwritten blocks will always return zero. Other back ends (such as LVM) might not natively support this, so the cinder plug-in takes the responsibility to override previously written blocks before handing them to users. It is important to review what assurances your chosen volume back-end provides and to see what remediation might be available for those assurances not provided. 11.1.6. Image service delay delete features Image Service has a delayed delete feature, which will pend the deletion of an image for a defined time period. Consider disabling this feature if this behavior is a security concern; you can do this by editing glance-api.conf file and setting the delayed_delete option to False . 11.1.7. Compute soft delete features Compute has a soft-delete feature, which enables an instance that is deleted to be in a soft-delete state for a defined time period. The instance can be restored during this time period. To disable the soft-delete feature, edit the /var/lib/config-data/puppet-generated/nova_libvirt/etc/nova/nova.conf file and leave the reclaim_instance_interval option empty. 11.1.8. Ephemeral storage for Compute instances Note that the OpenStack Ephemeral disk encryption feature provides a means of improving ephemeral storage privacy and isolation, during both active use as well as when the data is to be destroyed. As in the case of encrypted block storage, one can simply delete the encryption key to effectively destroy the data. Alternate measures to provide data privacy, in the creation and destruction of ephemeral storage, will be somewhat dependent on the chosen hypervisor and the Compute plug-in. The libvirt driver for compute might maintain ephemeral storage directly on a filesystem, or in LVM. Filesystem storage generally will not overwrite data when it is removed, although there is a guarantee that dirty extents are not provisioned to users. When using LVM backed ephemeral storage, which is block-based, it is necessary that the Compute software securely erases blocks to prevent information disclosure. There have previously been information disclosure vulnerabilities related to improperly erased ephemeral block storage devices. Filesystem storage is a more secure solution for ephemeral block storage devices than LVM as dirty extents cannot be provisioned to users. However, it is important to be mindful that user data is not destroyed, so it is suggested to encrypt the backing filesystem. 11.2. 
Security hardening for bare metal provisioning For your bare metal provisioning infrastructure, you should consider security hardening the baseboard management controllers (BMC) in general, and IPMI in particular. For example, you might isolate these systems within a provisioning network, configure non-default and strong passwords, and disable unwanted management functions. For more information, you can refer to the vendor's guidance on security hardening these components. Note If possible, consider evaluating Redfish-based BMCs over legacy ones. 11.2.1. Hardware identification When deploying a server, there might not always be a reliable way to distinguish it from an attacker's server. This capability might be dependent on the hardware/BMC to some extent, but generally it seems that there is no verifiable means of identification built into servers. 11.3. Data encryption The option exists for implementers to encrypt project data wherever it is stored on disk or transported over a network, such as the OpenStack volume encryption feature described below. This is above and beyond the general recommendation that users encrypt their own data before sending it to their provider. The importance of encrypting data on behalf of projects is largely related to the risk assumed by a provider that an attacker could access project data. There might be requirements here in government, as well as requirements per-policy, in private contract, or even in case law in regard to private contracts for public cloud providers. Consider getting a risk assessment and legal advice before choosing project encryption policies. Per-instance or per-object encryption is preferable over, in descending order, per-project, per-host, and per-cloud aggregations. This recommendation is inverse to the complexity and difficulty of implementation. Presently, in some projects it is difficult or impossible to implement encryption as loosely granular as even per-project. Implementers should give serious consideration to encrypting project data. Often, data encryption relates positively to the ability to reliably destroy project and per-instance data, simply by throwing away the keys. It should be noted that in doing so, it becomes of great importance to destroy those keys in a reliable and secure manner. Opportunities to encrypt data for users are present: Object Storage objects Network data 11.3.1. Volume encryption A volume encryption feature in OpenStack supports privacy on a per-project basis. The following features are supported: Creation and usage of encrypted volume types, initiated through the dashboard or a command line interface Enable encryption and select parameters such as encryption algorithm and key size Volume data contained within iSCSI packets is encrypted Supports encrypted backups if the original volume is encrypted Dashboard indication of volume encryption status. Includes indication that a volume is encrypted, and includes the encryption parameters such as algorithm and key size Interface with the Key management service 11.3.2. Object Storage objects Object Storage (swift) supports the optional encryption of object data at rest on storage nodes. The encryption of object data is intended to mitigate the risk of a user's data being read if an unauthorized party were to gain physical access to a disk. Encryption of data at rest is implemented by middleware that may be included in the proxy server WSGI pipeline. The feature is internal to a swift cluster and not exposed through the API.
Clients are unaware that data is encrypted by this feature internally to the swift service; internally encrypted data should never be returned to clients through the swift API. The following data are encrypted while at rest in swift: Object content, for example, the content of an object PUT request's body. The entity tag ( ETag ) of objects that have non-zero content. All custom user object metadata values. For example, metadata sent using X-Object-Meta- prefixed headers with PUT or POST requests. Any data or metadata not included in the list above is not encrypted, including: Account, container, and object names Account and container custom user metadata values All custom user metadata names Object Content-Type values Object size System metadata 11.3.3. Block Storage performance and back ends When enabling the operating system, OpenStack Volume Encryption performance can be enhanced by using the hardware acceleration features currently available in both Intel and AMD processors. Both the OpenStack Volume Encryption feature and the OpenStack Ephemeral Disk Encryption feature use dm-crypt to secure volume data. dm-crypt is a transparent disk encryption capability in Linux kernel versions 2.6 and later. When using hardware acceleration, the performance impact of both of the encryption features is minimized. 11.3.4. Network data Project data for Compute nodes could be encrypted over IPsec or other tunnels. This practice is not common or standard in OpenStack, but is an option available to motivated and interested implementers. Likewise, encrypted data remains encrypted as it is transferred over the network. 11.4. Key management To address the often mentioned concern of project data privacy, there is significant interest within the OpenStack community to make data encryption more ubiquitous. It is relatively easy for an end-user to encrypt their data prior to saving it to the cloud, and this is a viable path for project objects such as media files, database archives among others. In some instances, client-side encryption is used to encrypt data held by the virtualization technologies which requires client interaction, such as presenting keys, to decrypt data for future use. Barbican can help projects more seamlessly encrypt the data and have it accessible without burdening the user with key management. Providing encryption and key management services as part of OpenStack eases data-at-rest security adoption and can help address customer concerns about privacy or misuse of data. The volume encryption and ephemeral disk encryption features rely on a key management service (for example, barbican) for the creation and security-hardened storage of keys.
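As a minimal illustration of the encryption and key management options discussed above, the following sketch shows how a deployment might point the Block Storage service (and Compute, for ephemeral disks) at barbican and define an encrypted volume type. The [key_manager] option names, the CLI flags, the cipher, and the LUKS type name are assumptions that vary between OpenStack releases; verify them against your version before use.

    # Point cinder (and nova) at barbican for key management.
    # Option names are release-dependent; shown here as an assumption:
    #   [key_manager]
    #   backend = barbican

    # One-time admin setup of an encrypted volume type (flag names may differ by release):
    openstack volume type create \
      --encryption-provider luks \
      --encryption-cipher aes-xts-plain64 \
      --encryption-key-size 256 \
      --encryption-control-location front-end \
      LUKS

    # End users can then request encrypted volumes by selecting that type:
    openstack volume create --size 10 --type LUKS encrypted-volume

Because the volume contents are only readable through the key held in the key manager, securely deleting that key effectively destroys the data, which is the disposal property described earlier in this chapter.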
[ "The sanitization process removes information from the media such that the information cannot be retrieved or reconstructed. Sanitization techniques, including clearing, purging, cryptographic erase, and destruction, prevent the disclosure of information to unauthorized individuals when such media is reused or released for disposal." ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/security_and_hardening_guide/data_privacy_for_projects
Chapter 1. Image Builder description
Chapter 1. Image Builder description 1.1. Introduction to Image Builder You can use Image Builder to create customized system images of Red Hat Enterprise Linux, including system images prepared for deployment on cloud platforms. Image Builder automatically handles details of setup for each output type and is thus easier to use and faster to work with than manual methods of image creation. You can access Image Builder functionality through a command-line interface in the composer-cli tool, or a graphical user interface in the RHEL 7 web console. See managing the web console for more information. Image Builder runs as a system service lorax-composer . You can interact with this service through two interfaces: CLI tool composer-cli for running commands in the terminal. This method is recommended. GUI plugin for the RHEL 7 web console.
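The following is a hedged sketch of a typical composer-cli workflow against the lorax-composer service described above; the blueprint file example.toml, the blueprint name example-image, and the qcow2 output type are illustrative placeholders, and the compose UUID comes from the output of the status command.

    # List existing blueprints and push a new or edited one
    composer-cli blueprints list
    composer-cli blueprints push example.toml

    # See which output types this system supports, then start a compose
    composer-cli compose types
    composer-cli compose start example-image qcow2

    # Track progress and download the finished image by its UUID
    composer-cli compose status
    composer-cli compose image <UUID>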
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/image_builder_guide/chap-documentation-image_builder-test_chapter
Chapter 18. Monitoring the replication topology using the web console
Chapter 18. Monitoring the replication topology using the web console To monitor the state of the directory data replication between suppliers, consumers, and hubs, you can use the replication topology report that provides information on the replication progress, replica IDs, number of changes, and other parameters. To generate the report faster and make it more readable, you can configure your own credentials and aliases. 18.1. Displaying a replication topology report using the web console To view overall information about the replication status for each agreement in your replication topology, you can display the replication topology report. Prerequisites The host is a member of a replication topology. You initialized the consumers. You are logged in to the web console. Procedure Navigate to Monitoring Replication . The Replication Monitoring page opens. Click Generate Report . Enter the passwords for login to remote instances and click Confirm Credentials Input . Directory Server uses bind DN values from existing replication agreements. The replication topology report will be generated on the Report Result tab. Note To generate another replication topology report, go to the Prepare Report tab. Additional resources Setting credentials for replication monitoring using the web console Configuring replication naming aliases using the web console Displaying a replication topology report using the command line 18.2. Setting credentials for replication monitoring using the web console To generate the replication topology report faster and more easily, you can set your own bind DNs, and optionally passwords, for each server in the topology for authentication. In this case, you do not need to confirm replication credentials each time you want to generate a replication topology report. By default, Directory Server takes these credentials from existing replication agreements. Prerequisites The host is a member of a replication topology. You initialized the consumer. You are logged in to the web console. Procedure Navigate to Monitoring Replication . The Replication Monitoring page opens. Click Add Credentials . Enter the replication login credentials you want to use for authentication to remote instances: Hostname . A remote instance hostname you want the server to authenticate to. Port . A remote instance port. Bind DN . Bind DN used for authentication to the remote instance. Password . A password used for authentication. Interactive Input . If checked, Directory Server will ask for a password every time you generate a replication topology report. Click Save . Verification Generate the replication topology report to see if the report asks for the credentials. For more information, see Displaying a replication topology report using the web console . 18.3. Configuring replication naming aliases using the web console To make the report more readable, you can set your own aliases that will be displayed in the report output. By default, the replication monitoring report contains the hostnames of servers. Prerequisites The host is a member of a replication topology. You initialized the consumers. You are logged in to the web console. Procedure Navigate to Monitoring Replication . The Replication Monitoring page opens. Click Add Alias . Enter alias details: Alias . An alias that will be displayed in the replication topology report. Hostname . An instance hostname. Port . An instance port. Click Save . Verification Generate the replication topology report to see if the report uses new aliases.
For more information, see Displaying a replication topology report using the web console .
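As a rough command-line counterpart to the web console procedures above (the "Displaying a replication topology report using the command line" reference), the following sketch uses the dsconf tool; the hostnames, bind DN, and the ~/.dsrc section and key names are assumptions that should be verified against your Directory Server version.

    # Generate a replication topology report for the topology this supplier belongs to
    dsconf -D "cn=Directory Manager" ldap://supplier1.example.com replication monitor

    # Optional ~/.dsrc entries so the report can reuse credentials and display aliases
    # instead of prompting and printing raw hostnames (format assumed, verify before use):
    [repl-monitor-connections]
    connection1 = supplier2.example.com:636:cn=Directory Manager:*

    [repl-monitor-aliases]
    M1 = supplier1.example.com:636
    M2 = supplier2.example.com:636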
null
https://docs.redhat.com/en/documentation/red_hat_directory_server/12/html/configuring_and_managing_replication/assembly_monitoring-the-replication-topology-using-the-web-console_configuring-and-managing-replication
Chapter 1. Networking
Chapter 1. Networking Learn about network requirements for both the hub cluster and the managed cluster. Hub cluster network configuration Managed cluster network configuration Advanced network configuration Submariner multicluster networking and service discovery 1.1. Hub cluster network configuration Important: The trusted CA bundle is available in the Red Hat Advanced Cluster Management namespace, but that enhancement requires changes to your network. The trusted CA bundle ConfigMap uses the default name of trusted-ca-bundle . You can change this name by providing it to the operator in an environment variable named TRUSTED_CA_BUNDLE . See Configuring the cluster-wide proxy in the Networking section of Red Hat OpenShift Container Platform for more information. You can refer to the configuration for your hub cluster network. 1.1.1. Hub cluster network configuration table See the hub cluster network requirements in the following table: Direction Protocol Connection Port (if specified) Source address Destination address Outbound to the managed cluster HTTPS Retrieval of logs dynamically from Search console for the pods of the managed cluster, uses the klusterlet-addon-workmgr service that is running on the managed cluster 443 None IP address to access managed cluster route Outbound to the managed cluster HTTPS Kubernetes API server of the managed cluster that is provisioned during installation to install the klusterlet 6443 None IP of Kubernetes managed cluster API server Outbound to the channel source HTTPS The channel source, including GitHub, Object Store, and Helm repository, which is only required when you are using Application lifecycle, OpenShift GitOps, or Argo CD to connect 443 None IP of the channel source Inbound from the managed cluster HTTPS Managed cluster to push metrics and alerts that are gathered only for managed clusters that are running on a supported OpenShift Container Platform version 443 None IP address to hub cluster access route Inbound from the managed cluster HTTPS Kubernetes API Server of hub cluster that is watched for changes from the managed cluster 6443 None IP address of hub cluster Kubernetes API Server Outbound to the ObjectStore HTTPS Sends Observability metric data for long term storage when the Cluster Backup Operator is running 443 None IP address of ObjectStore Outbound to the image repository HTTPS Access images for OpenShift Container Platform and Red Hat Advanced Cluster Management 443 None IP address of image repository 1.2. Managed cluster network configuration You can refer to the configuration for your managed cluster network. 1.2.1. 
Managed cluster network configuration table See the managed cluster network requirements in the following table: Direction Protocol Connection Port (if specified) Source address Destination address Inbound from the hub cluster HTTPS Sending of logs dynamically from Search console for the pods of the managed cluster, uses the klusterlet-addon-workmgr service that is running on the managed cluster 443 None IP address to access managed cluster route Inbound from the hub cluster HTTPS Kubernetes API server of the managed cluster that is provisioned during installation to install the klusterlet 6443 None IP of Kubernetes managed cluster API server Outbound to the image repository HTTPS Access images for OpenShift Container Platform and Red Hat Advanced Cluster Management 443 None IP address of image repository Outbound to the hub cluster HTTPS Managed cluster to push metrics and alerts that are gathered only for managed clusters that are running on a supported OpenShift Container Platform version 443 None IP address to hub cluster access route Outbound to the hub cluster HTTPS Watches the Kubernetes API server of the hub cluster for changes 6443 None IP address of hub cluster Kubernetes API Server Outbound to the channel source HTTPS The channel source, including GitHub, Object Store, and Helm repository, which is only required when you are using Application lifecycle, OpenShift GitOps, or Argo CD to connect 443 None IP of the channel source 1.3. Advanced network configuration Additional networking requirements for infrastructure operator table Submariner networking requirements table Additional networking requirements for Hive table Application deployment network requirements table Namespace connection network requirements table 1.3.1. Additional networking requirements for infrastructure operator table When you are installing bare metal managed clusters with the Infrastructure Operator, see Network configuration in the multicluster engine for Kubernetes operator documentation for additional networking requirements. 1.3.2. Submariner networking requirements table Clusters that are using Submariner require three open ports. The following table shows which ports you might use: Direction Protocol Connection Port (if specified) Outbound and inbound UDP Each of the managed clusters 4800 Outbound and inbound UDP Each of the managed clusters 4500, 500, and any other ports that are used for IPSec traffic on the gateway nodes Inbound TCP Each of the managed clusters 8080 1.3.3. Additional networking requirements for Hive table When you are installing bare metal managed clusters with the Hive Operator, which includes using central infrastructure management, you must configure a layer 2 or layer 3 port connection between the hub cluster and the libvirt provisioning host. This connection to the provisioning host is required during the creation of a base metal cluster with Hive. See the following table for more information: Direction Protocol Connection Port (if specified) Hub cluster outbound and inbound to the libvirt provisioning host IP Connects the hub cluster, where the Hive operator is installed, to the libvirt provisioning host that serves as a bootstrap when creating the bare metal cluster Note: These requirements only apply when installing, and are not required when upgrading clusters that were installed with Infrastructure Operator. 1.3.4. Application deployment network requirements table In general, the application deployment communication is one way from a managed cluster to the hub cluster. 
The connection uses kubeconfig , which is configured by the agent on the managed cluster. The application deployment on the managed cluster needs to access the following namespaces on the hub cluster: The namespace of the channel resource The namespace of the managed cluster 1.3.5. Namespace connection network requirements table Application lifecycle connections: The namespace open-cluster-management needs to access the console API on port 4000. The namespace open-cluster-management needs to expose the Application UI on port 3001. Application lifecycle backend components (pods): On the hub cluster, all of the application lifecycle pods are installed in the open-cluster-management namespace, including the following pods: multicluster-operators-hub-subscription multicluster-operators-standalone-subscription multicluster-operators-channel multicluster-operators-application multicluster-integrations As a result of these pods being in the open-cluster-management namespace: The namespace open-cluster-management needs to access the Kube API on port 6443. On the managed cluster, only the klusterlet-addon-appmgr application lifecycle pod is installed in the open-cluster-management-agent-addon namespace: The namespace open-cluster-management-agent-addon needs to access the Kube API on port 6443. Governance and risk: On the hub cluster, the following access is required: The namespace open-cluster-management needs to access the Kube API on port 6443. The namespace open-cluster-management needs to access the OpenShift DNS on port 5353. On the managed cluster, the following access is required: The namespace open-cluster-management-addon needs to access the Kube API on port 6443. 1.4. Submariner multicluster networking and service discovery Submariner is an open source tool that you can use with Red Hat Advanced Cluster Management for Kubernetes to provide direct networking and service discovery between two or more managed clusters in your environment, either on-premises or in the cloud. Submariner is compatible with Multi-Cluster Services API ( Kubernetes Enhancements Proposal #1645 ). For more information about Submariner, see the Submariner site . Submariner is not supported with all of the infrastructure providers that Red Hat Advanced Cluster Management can manage. For full support information, see the Red Hat Advanced Cluster Management support matrix for details about the supported levels of infrastructure providers, including which providers support automated console deployments or require manual deployment . See the following topics to learn more about how to use Submariner: Deploying Submariner on disconnected clusters Configuring Submariner Installing the subctl command utility Deploying Submariner by using the console Deploying Submariner manually Customizing Submariner deployments Managing service discovery for Submariner Uninstalling Submariner 1.4.1. Deploying Submariner on disconnected clusters Deploying Submariner on disconnected clusters can help with security concerns by reducing the risk of external attacks on clusters. To deploy Submariner with Red Hat Advanced Cluster Management for Kubernetes on disconnected clusters, you must first complete the steps outlined in Install in disconnected network environments . 1.4.1.1. Configuring Submariner on disconnected clusters After completing the steps in Install in disconnected network environments , configure Submariner during the installation to support deployment on disconnected clusters. 
Complete the following steps: Mirror the Submariner Operator bundle image in the local registry before you deploy Submariner on disconnected clusters. Choose the Submariner Operator version that is compatible with your Red Hat Advanced Cluster Management version. For instance, use 0.19.0 for Red Hat Advanced Cluster Management version 2.12. Customize catalogSource names. By default, submariner-addon searches for a catalogSource with the name redhat-operators . When you use a catalogSource with a different name, you must update the value of the SubmarinerConfig.Spec.subscriptionConfig.Source parameter in the SubmarinerConfig associated with your managed cluster with the custom name of the catalogSource . Enable airGappedDeployment in SubmarinerConfig .When installing submariner-addon on a managed cluster from the Red Hat Advanced Cluster Management for Kubernetes console, you can select the Disconnected cluster option so that Submariner does not make API queries to external servers. Note: If you are installing Submariner by using the APIs, you must set the airGappedDeployment parameter to true in the SubmarinerConfig associated with your managed cluster. 1.4.2. Configuring Submariner Red Hat Advanced Cluster Management for Kubernetes provides Submariner as an add-on for your hub cluster. To learn how to configure Submariner, read the following topics: Prerequisites Submariner ports table Globalnet 1.4.2.1. Prerequisites Ensure that you have the following prerequisites before using Submariner: A credential to access the hub cluster with cluster-admin permissions. IP connectivity must be configured between the gateway nodes. When connecting two clusters, at least one of the clusters must be accessible to the gateway node by using its public or private IP address designated to the gateway node. See Submariner NAT Traversal for more information. If you are using OVN Kubernetes, clusters must be on a supported version of OpenShift Container Platform. If your OpenShift Container Platform clusters use OpenShift SDN CNI, the firewall configuration across all nodes in each of the managed clusters must allow 4800/UDP in both directions. The firewall configuration must allow 4500/UDP and 4490/UDP on the gateway nodes for establishing tunnels between the managed clusters. For OpenShift Container Platform ARM deployments, you must use the c6g.large instanceType or any other available instance type. If the gateway nodes are directly reachable over their private IPs without any NAT in between, make sure that the firewall configuration allows the ESP protocol on the gateway nodes. Notes: This update is configured automatically when your clusters are deployed in an Amazon Web Services, Google Cloud Platform, Microsoft Azure, or Red Hat OpenStack environment. You must manually configure it for clusters on other environments and for the firewalls that protect private clouds. If you do not want to allow the ESP protocol in the firewall, you can force Submariner to encapsulate IPSec traffic in UDP by editing SubmarinerConfig and adding forceUDPEncaps: true to the spec section. The managedcluster name must follow the DNS label standard as defined in RFC 1123 and meet the following requirements: Contain 63 characters or fewer Contain only lowercase alphanumeric characters or '-' Start with an alphanumeric character End with an alphanumeric character 1.4.2.2. 
Submariner ports table View the following table to see the Submariner ports that you must enable: Name Default value Customizable Optional or required IPsec NATT 4500/UDP Yes Required VXLAN 4800/UDP No Required NAT discovery port 4490/UDP No Required 1.4.2.3. Configuring Submariner for an existing VPC If you installed your cluster into an existing Amazon Virtual Private Cloud (VPC) on Amazon Web Services (AWS), you must complete the following steps to configure Submariner: Open and edit the SubmarinerConfig file by running the following command. Replace values where needed: oc edit submarinerconfig -n <managed-cluster-ns> submariner Add the following annotations in the metadata field. Replace values where needed: Note: All the IDs you need to add are alphanumeric. annotations: submariner.io/control-plane-sg-id: <control-plane-group-id> 1 submariner.io/subnet-id-list: <subnet-id-list> 2 submariner.io/vpc-id: <custom-vpc-id> 3 submariner.io/worker-sg-id: <worker-security-group-id> 4 1 Replace with your control plane security group ID. Typically, you can find the ID from the control plane security group that has a name similar to <infra-id>-master-sg . 2 Replace with a comma-separated list of public subnet IDs in your custom VPC. 3 Replace with your custom VPC ID. 4 Replace with your worker security group ID. Typically, you can find the ID from the worker security group that has a name similar to <infra-id>-worker-sg . 1.4.2.4. Globalnet Globalnet is a Submariner add-on feature that allows you to connect clusters with overlapping Classless Inter-Domain Routings (CIDRs), without changing the CIDRs on existing clusters. Globalnet is a cluster set-wide configuration that you can select when you add the first managed cluster to a cluster set. If you enable Globalnet, every managed cluster receives a global CIDR from the virtual Global Private Network, which is used to facilitate inter-cluster communication. Important: You must enable Globalnet when clusters in a cluster set might have overlapping CIDRs. The ClusterAdmin can enable Globalnet in the console by selecting the option Enable Globalnet when enabling the Submariner add-on for clusters in the cluster set. If you want to disable Globalnet after enabling it, you must first remove all managed clusters from your cluster set. 1.4.2.4.1. Enabling Globalnet by creating the submariner-broker object When using the Red Hat Advanced Cluster Management APIs, the ClusterAdmin can enable Globalnet by creating a submariner-broker object in the <ManagedClusterSet>-broker namespace. The ClusterAdmin role has the required permissions to create the submariner-broker object in the broker namespace. The ManagedClusterSetAdmin role, which is sometimes created to act as a proxy administrator for the cluster set, does not have the required permissions. To provide the required permissions, the ClusterAdmin must associate the role permissions for the access-to-brokers-submariner-crd to the ManagedClusterSetAdmin user. Complete the following steps to enable Globalnet by creating the submariner-broker object: Retrieve the <broker-namespace> by running the following command: Create a submariner-broker object that specifies the Globalnet configuration by creating a YAML file named submariner-broker . Add content that resembles the following lines to the YAML file: apiVersion: submariner.io/v1alpha1 kind: Broker metadata: name: submariner-broker 1 namespace: broker-namespace 2 spec: globalnetEnabled: true-or-false 3 1 The name must be submariner-broker .
2 Replace broker-namespace with the name of your broker namespace. 3 Replace true-or-false with true to enable Globalnet. Apply the file by running the following command: 1.4.2.4.2. Configuring the number of global IPs You can assign a configurable number of global IPs by changing the value of the numberOfIPs field in the ClusterGlobalEgressIP resource. The default value is 8. See the following example: apiVersion: submariner.io/v1 kind: ClusterGlobalEgressIP metadata: name: cluster-egress.submariner.io spec: numberOfIPs: 8 1.4.2.4.3. Additional resources See the Submariner documentation to learn more about Submariner See Submariner NAT Traversal for more information about IP connectivity between gateway nodes. See the Submariner prerequisites documentation for more detailed information about the prerequisites. See Globalnet controller in the Submariner documentation for more information about Globalnet. 1.4.3. Installing the subctl command utility The subctl utility is published on the Red Hat Developers page. To install the subctl utility locally, complete the following steps: Go to the subctl publication directory . Click the folder that matches the version of Submariner that you are using. Click the tar.xz archive for the platform that you are using to download a compressed version of the subctl binary. If your platform is not listed, go to the Red Hat Ecosystem Catalog subctl page and extract subctl from the relevant image. For example, you can extract the macos-arm64 binary from the arm64 subctl image. Decompress the subctl utility by entering the following command. Replace <name> with the name of the archive that you downloaded: tar -C /tmp/ -xf <name>.tar.xz Install the subctl utility by entering the following command. Replace <name> with the name of the archive that you downloaded. Replace <version> with the subctl version that you downloaded: install -m744 /tmp/<version>/<name> /USDHOME/.local/bin/subctl Notes: Make sure that the subctl and Submariner versions match. For disconnected environments only, make sure to mirror the submariner-nettest image. 1.4.3.1. Using the subctl commands After adding the utility to your path, view the following table for a brief description of the available commands: export service Creates a ServiceExport resource for the specified service, which enables other clusters in the Submariner deployment to discover the corresponding service. unexport service Removes the ServiceExport resource for the specified service, which prevents other clusters in the Submariner deployment from discovering the corresponding service. show Provides information about Submariner resources. verify Verifies connectivity, service discovery, and other Submariner features when Submariner is configured across a pair of clusters. benchmark Benchmarks throughput and latency across a pair of clusters that are enabled with Submariner or within a single cluster. diagnose Runs checks to identify issues that prevent the Submariner deployment from working correctly. gather Collects information from the clusters to help troubleshoot a Submariner deployment. version Displays the version details of the subctl binary tool. Note : The Red Hat build of subctl only includes the commands that are relevant to Red Hat Advanced Cluster Management for Kubernetes. For more information about the subctl utility and its commands, see subctl in the Submariner documentation . 1.4.4. 
Deploying Submariner by using the console Before you deploy Submariner with Red Hat Advanced Cluster Management for Kubernetes, you must prepare the clusters on the hosting environment. You can use the SubmarinerConfig API or the Red Hat Advanced Cluster Management for Kubernetes console to automatically prepare Red Hat OpenShift Container Platform clusters on the following providers: Amazon Web Services Bare Metal with hosted control planes (Technology Preview) Google Cloud Platform IBM Power Systems Virtual Server Red Hat OpenShift on IBM Cloud (Technology Preview) Red Hat OpenStack Platform Red Hat OpenShift Service on AWS with hosted control planes (Technology Preview) Red Hat OpenShift Virtualization (Technology Preview) Red Hat OpenShift Virtualization with hosted control planes (Technology Preview) Microsoft Azure VMware vSphere Notes: Only non-NSX deployments are supported on VMware vSphere. You must install the Calico API server on your cluster if you are using Red Hat OpenShift on IBM Cloud. Alternatively, you can manually create the the IP pools required for cross-cluster communication by following the CALICO CNI topic in the Submariner upstream documentation. To deploy Submariner on other providers, follow the instructions in Deploying Submariner manually . Complete the following steps to deploy Submariner with the Red Hat Advanced Cluster Management for Kubernetes console: Required access: Cluster administrator From the console, select Infrastructure > Clusters . On the Clusters page, select the Cluster sets tab. The clusters that you want enable with Submariner must be in the same cluster set. If the clusters on which you want to deploy Submariner are already in the same cluster set, skip to step 5. If the clusters on which you want to deploy Submariner are not in the same cluster set, create a cluster set for them by completing the following steps: Select Create cluster set . Name the cluster set, and select Create . Select Manage resource assignments to assign clusters to the cluster set. Select the managed clusters that you want to connect with Submariner to add them to the cluster set. Select Review to view and confirm the clusters that you selected. Select Save to save the cluster set, and view the resulting cluster set page. On the cluster set page, select the Submariner add-ons tab. Select Install Submariner add-ons . Select the clusters on which you want to deploy Submariner. See the fields in the following table and enter the required information in the Install Submariner add-ons editor: Field Notes AWS Access Key ID Only visible when you import an AWS cluster. AWS Secret Access Key Only visible when you import an AWS cluster. Base domain resource group name Only visible when you import an Azure cluster. Client ID Only visible when you import an Azure cluster. Client secret Only visible when you import an Azure cluster. Subscription ID Only visible when you import an Azure cluster. Tenant ID Only visible when you import an Azure cluster. Google Cloud Platform service account JSON key Only visible when you import a Google Cloud Platform cluster. Instance type The instance type of the gateway node that is created on the managed cluster. IPsec NAT-T port The default value for the IPsec NAT traversal port is port 4500 . If your managed cluster environment is VMware vSphere, ensure that this port is opened on your firewall. Gateway count The number of gateway nodes to be deployed on the managed cluster. 
For AWS, GCP, Azure, and OpenStack clusters, dedicated Gateway nodes are deployed. For VMware clusters, existing worker nodes are tagged as gateway nodes. The default value is 1 . If the value is greater than 1, the Submariner gateway High Availability (HA) is automatically enabled. Cable driver The Submariner gateway cable engine component that maintains the cross-cluster tunnels. The default value is Libreswan IPsec . Disconnected cluster If enabled, tells Submariner to not access any external servers for public IP resolution. Globalnet CIDR Only visible when the Globalnet configuration is selected on the cluster set. The Globalnet CIDR to be used for the managed cluster. If left blank, a CIDR is allocated from the cluster set pool. Select at the end of the editor to move to the editor for the cluster, and complete the editor for each of the remaining clusters that you selected. Verify your configuration for each managed cluster. Click Install to deploy Submariner on the selected managed clusters. It might take several minutes for the installation and configuration to complete. You can check the Submariner status in the list on the Submariner add-ons tab: Connection status indicates how many Submariner connections are established on the managed cluster. Agent status indicates whether Submariner is successfully deployed on the managed cluster. The console might report a status of Degraded until it is installed and configured. Gateway nodes labeled indicates the number of gateway nodes on the managed cluster. Submariner is now deployed on the selected clusters. 1.4.5. Deploying Submariner manually Before you deploy Submariner with Red Hat Advanced Cluster Management for Kubernetes, you must prepare the clusters on the hosting environment for the connection. See Deploying Submariner by using the console to learn how to automatically deploy Submariner on supported clusters by using the console. If your cluster is hosted on a provider that does not support automatic Submariner deployment, see the following sections to prepare the infrastructure manually. Each provider has unique steps for preparation, so make sure to select the correct provider. 1.4.5.1. Preparing bare metal for Submariner To prepare bare metal clusters for deploying Submariner, complete the following steps: Ensure that the firewall allows inbound/outbound traffic for external clients on the 4500/UDP and 4490/UDP ports for the Gateway nodes. Also, if the cluster is deployed with OpenShiftSDN CNI, allow inbound/outbound UDP/4800 traffic within the local cluster nodes. Customize and apply YAML content that is similar to the following example: apiVersion: submarineraddon.open-cluster-management.io/v1alpha1 kind: SubmarinerConfig metadata: name: submariner namespace: <managed-cluster-namespace> spec: gatewayConfig: gateways: 1 Replace managed-cluster-namespace with the name of your managed cluster. The name of the SubmarinerConfig must be submariner , as shown in the example. This configuration labels one of the worker nodes as the Submariner gateway on your bare metal cluster. By default, Submariner uses IP security (IPsec) to establish the secure tunnels between the clusters on the gateway nodes. You can either use the default IPsec NATT port, or you can specify a different port that you configured. When you run this procedure without specifying an IPsec NATT port, 4500/UDP is used for the connections.
Identify the Gateway node configured by Submariner and enable firewall configurations to allow the IPsec NATT (UDP/4500) and NatDiscovery (UDP/4490) ports for external traffic. See Customizing Submariner deployments for information about the customization options. 1.4.5.2. Preparing Microsoft Azure Red Hat OpenShift for Submariner by using the command line interface The Microsoft Azure Red Hat OpenShift service combines various tools and resources that you can use to simplify the process of building container-based applications. To prepare Azure Red Hat OpenShift clusters for deploying Submariner by using the command line interface, complete the following steps: Install the Azure CLI . From the Azure CLI, run the following command to install the extension: Replace path-to-extension with the path to where you downloaded the .whl extension file. Run the following command to verify that the CLI extension is being used: If the extension is being used, the output might resemble the following example: From the Azure CLI, register the preview feature by running the following command: Retrieve the administrator kubeconfig by running the following command: Note: The az aro command saves the kubeconfig to the local directory and uses the name kubeconfig . To use it, set the environment variable KUBECONFIG to match the path of the file. See the following example: Import your Azure Red Hat OpenShift cluster. See Cluster import introduction to learn more about how to import a cluster. 1.4.5.2.1. Preparing Microsoft Azure Red Hat OpenShift for Submariner by using the API To prepare Azure Red Hat OpenShift clusters for deploying Submariner by using the API, customize and apply YAML content that is similar to the following example: apiVersion: submarineraddon.open-cluster-management.io/v1alpha1 kind: SubmarinerConfig metadata: name: submariner namespace: <managed-cluster-namespace> spec: loadBalancerEnable: true Replace managed-cluster-namespace with the name of your managed cluster. The name of the SubmarinerConfig must be submariner , as shown in the example. This configuration labels one of the worker nodes as the Submariner gateway on your Azure Red Hat OpenShift cluster. By default, Submariner uses IP security (IPsec) to establish the secure tunnels between the clusters on the gateway nodes. You can either use the default IPsec NATT port, or you can specify a different port that you configured. When you run this procedure without specifying an IPsec NATT port, port 4500/UDP is used for the connections. See Customizing Submariner deployments for information about the customization options. 1.4.5.3. Preparing Red Hat OpenShift Service on AWS for Submariner by using the command line interface Red Hat OpenShift Service on AWS provides a stable and flexible platform for application development and modernization. To prepare OpenShift Service on AWS clusters for deploying Submariner, complete the following steps: Log in to OpenShift Service on AWS by running the following commands: Create a kubeconfig for your OpenShift Service on AWS cluster by running the following command: Import your OpenShift Service on AWS cluster. See Cluster import introduction to learn more about how to import a cluster. 1.4.5.3.1. 
Preparing Red Hat OpenShift Service on AWS for Submariner by using the API To prepare OpenShift Service on AWS clusters for deploying Submariner by using the API, customize and apply YAML content that is similar to the following example: apiVersion: submarineraddon.open-cluster-management.io/v1alpha1 kind: SubmarinerConfig metadata: name: submariner namespace: <managed-cluster-namespace> spec: loadBalancerEnable: true Replace managed-cluster-namespace with the name of your managed cluster. The name of the SubmarinerConfig must be submariner , as shown in the example. By default, Submariner uses IP security (IPsec) to establish the secure tunnels between the clusters on the gateway nodes. You can either use the default IPsec NATT port, or you can specify a different port that you configured. When you run this procedure without specifying an IPsec NATT port, port 4500/UDP is used for the connections. See Customizing Submariner deployments for information about the customization options. 1.4.5.4. Deploy Submariner with the ManagedClusterAddOn API After manually preparing your selected hosting environment, you can deploy Submariner with the ManagedClusterAddOn API by completing the following steps: Create a ManagedClusterSet resource on the hub cluster by using the instructions provided in the Creating a ManagedClusterSet documentation. Make sure your entry for the ManagedClusterSet resembles the following content: apiVersion: cluster.open-cluster-management.io/v1beta2 kind: ManagedClusterSet metadata: name: <managed-cluster-set-name> Replace managed-cluster-set-name with a name for the ManagedClusterSet that you are creating. Important: The maximum character length of a Kubernetes namespace is 63 characters. The maximum character length you can use for the <managed-cluster-set-name> is 56 characters. If the character length of <managed-cluster-set-name> exceeds 56 characters, the <managed-cluster-set-name> is cut off from the head. After the ManagedClusterSet is created, the submariner-addon creates a namespace called <managed-cluster-set-name>-broker and deploys the Submariner broker to it. Create the Broker configuration on the hub cluster in the <managed-cluster-set-name>-broker namespace by customizing and applying YAML content that is similar to the following example: apiVersion: submariner.io/v1alpha1 kind: Broker metadata: name: submariner-broker namespace: <managed-cluster-set-name>-broker labels: cluster.open-cluster-management.io/backup: submariner spec: globalnetEnabled: <true-or-false> Replace managed-cluster-set-name with the name of the managed cluster. Set the value of globalnetEnabled to true if you want to enable Submariner Globalnet in the ManagedClusterSet . Add one managed cluster to the ManagedClusterSet by running the following command: Replace <managed-cluster-name> with the name of the managed cluster that you want to add to the ManagedClusterSet . Replace <managed-cluster-set-name> with the name of the ManagedClusterSet to which you want to add the managed cluster. Customize and apply YAML content that is similar to the following example: apiVersion: submarineraddon.open-cluster-management.io/v1alpha1 kind: SubmarinerConfig metadata: name: submariner namespace: <managed-cluster-namespace> spec:{} Replace managed-cluster-namespace with the namespace of your managed cluster. Note: The name of the SubmarinerConfig must be submariner , as shown in the example. 
Deploy Submariner on the managed cluster by customizing and applying YAML content that is similar to the following example: apiVersion: addon.open-cluster-management.io/v1alpha1 kind: ManagedClusterAddOn metadata: name: submariner namespace: <managed-cluster-name> spec: installNamespace: submariner-operator Replace managed-cluster-name with the name of the managed cluster that you want to use with Submariner. The installNamespace field in the spec of the ManagedClusterAddOn is the namespace on the managed cluster where it installs Submariner. Currently, Submariner must be installed in the submariner-operator namespace. After the ManagedClusterAddOn is created, the submariner-addon deploys Submariner to the submariner-operator namespace on the managed cluster. You can view the deployment status of Submariner from the status of this ManagedClusterAddOn . Note: The name of ManagedClusterAddOn must be submariner . Repeat steps three, four, and five for all of the managed clusters that you want to enable Submariner on. After Submariner is deployed on the managed cluster, you can verify the Submariner deployment status by checking the status of submariner ManagedClusterAddOn by running the following command: Replace managed-cluster-name with the name of the managed cluster. In the status of the Submariner ManagedClusterAddOn , three conditions indicate the deployment status of Submariner: SubmarinerGatewayNodesLabeled condition indicates whether there are labeled Submariner gateway nodes on the managed cluster. SubmarinerAgentDegraded condition indicates whether the Submariner is successfully deployed on the managed cluster. SubmarinerConnectionDegraded condition indicates how many connections are established on the managed cluster with Submariner. 1.4.6. Customizing Submariner deployments You can customize some of the settings of your Submariner deployments, including your Network Address Translation-Traversal (NATT) port, number of gateway nodes, and instance type of your gateway nodes. These customizations are consistent across all of the providers. 1.4.6.1. NATT port If you want to customize your NATT port, customize and apply the following YAML content for your provider environment: apiVersion: submarineraddon.open-cluster-management.io/v1alpha1 kind: SubmarinerConfig metadata: name: submariner namespace: <managed-cluster-namespace> spec: credentialsSecret: name: <managed-cluster-name>-<provider>-creds IPSecNATTPort: <NATTPort> Replace managed-cluster-namespace with the namespace of your managed cluster. Replace managed-cluster-name with the name of your managed cluster AWS: Replace provider with aws . The value of <managed-cluster-name>-aws-creds is your AWS credential secret name, which you can find in the cluster namespace of your hub cluster. GCP: Replace provider with gcp . The value of <managed-cluster-name>-gcp-creds is your Google Cloud Platform credential secret name, which you can find in the cluster namespace of your hub cluster. OpenStack: Replace provider with osp . The value of <managed-cluster-name>-osp-creds is your Red Hat OpenStack Platform credential secret name, which you can find in the cluster namespace of your hub cluster. Azure: Replace provider with azure . The value of <managed-cluster-name>-azure-creds is your Microsoft Azure credential secret name, which you can find in the cluster namespace of your hub cluster. Replace managed-cluster-namespace with the namespace of your managed cluster. Replace managed-cluster-name with the name of your managed cluster. 
The value of managed-cluster-name-gcp-creds is your Google Cloud Platform credential secret name, which you can find in the cluster namespace of your hub cluster. Replace NATTPort with the NATT port that you want to use. Note: The name of the SubmarinerConfig must be submariner , as shown in the example. 1.4.6.2. Number of gateway nodes If you want to customize the number of your gateway nodes, customize and apply YAML content that is similar to the following example: apiVersion: submarineraddon.open-cluster-management.io/v1alpha1 kind: SubmarinerConfig metadata: name: submariner namespace: <managed-cluster-namespace> spec: credentialsSecret: name: <managed-cluster-name>-<provider>-creds gatewayConfig: gateways: <gateways> Replace managed-cluster-namespace with the namespace of your managed cluster. Replace managed-cluster-name with the name of your managed cluster. AWS: Replace provider with aws . The value of <managed-cluster-name>-aws-creds is your AWS credential secret name, which you can find in the cluster namespace of your hub cluster. GCP: Replace provider with gcp . The value of <managed-cluster-name>-gcp-creds is your Google Cloud Platform credential secret name, which you can find in the cluster namespace of your hub cluster. OpenStack: Replace provider with osp . The value of <managed-cluster-name>-osp-creds is your Red Hat OpenStack Platform credential secret name, which you can find in the cluster namespace of your hub cluster. Azure: Replace provider with azure . The value of <managed-cluster-name>-azure-creds is your Microsoft Azure credential secret name, which you can find in the cluster namespace of your hub cluster. Replace gateways with the number of gateways that you want to use. If the value is greater than 1, the Submariner gateway automatically enables high availability. Note: The name of the SubmarinerConfig must be submariner , as shown in the example. 1.4.6.3. Instance types of gateway nodes If you want to customize the instance type of your gateway node, customize and apply YAML content that is similar to the following example: apiVersion: submarineraddon.open-cluster-management.io/v1alpha1 kind: SubmarinerConfig metadata: name: submariner namespace: <managed-cluster-namespace> spec: credentialsSecret: name: <managed-cluster-name>-<provider>-creds gatewayConfig: instanceType: <instance-type> Replace managed-cluster-namespace with the namespace of your managed cluster. Replace managed-cluster-name with the name of your managed cluster. AWS: Replace provider with aws . The value of <managed-cluster-name>-aws-creds is your AWS credential secret name, which you can find in the cluster namespace of your hub cluster. GCP: Replace provider with gcp . The value of <managed-cluster-name>-gcp-creds is your Google Cloud Platform credential secret name, which you can find in the cluster namespace of your hub cluster. OpenStack: Replace provider with osp . The value of <managed-cluster-name>-osp-creds is your Red Hat OpenStack Platform credential secret name, which you can find in the cluster namespace of your hub cluster. Azure: Replace provider with azure . The value of <managed-cluster-name>-azure-creds is your Microsoft Azure credential secret name, which you can find in the cluster namespace of your hub cluster. Replace instance-type with the AWS instance type that you want to use. Note: The name of the SubmarinerConfig must be submariner , as shown in the example. 1.4.6.4. Cable driver The Submariner Gateway Engine component creates secure tunnels to other clusters. 
The cable driver component maintains the tunnels by using a pluggable architecture in the Gateway Engine component. You can use the Libreswan or VXLAN implementations for the cableDriver configuration of the cable engine component. See the following example: apiVersion: submarineraddon.open-cluster-management.io/v1alpha1 kind: SubmarinerConfig metadata: name: submariner namespace: <managed-cluster-namespace> spec: cableDriver: vxlan credentialsSecret: name: <managed-cluster-name>-<provider>-creds Best practice: Do not use the VXLAN cable driver on public networks. The VXLAN cable driver is unencrypted. Only use VXLAN to avoid unnecessary double encryption on private networks. For example, some on-premise environments might handle the tunnel's encryption with a dedicated line-level hardware device. 1.4.6.5. Using a customized Submariner subscription The Submariner add-on automatically configures a subscription for Submariner; this ensures that the version of Submariner appropriate for the installed version of Red Hat Advanced Cluster Management is installed and kept up-to-date. If you want to change this behavior, or if you want to manually control Submariner upgrades, you can customize the Submariner subscription. When you use a customized Submariner subscription, you must complete the following fields: Source: The catalog source to use for the Submariner subscription. For example, redhat-operators . source Namespace: The namespace of the catalog source. For example, openshift-marketplace . Channel: The channel to follow for the subscription. For example, for Red Hat Advanced Cluster Management 2.12, stable-0.19 . Starting CSV (Optional): The initial ClusterServiceVersion . Install Plan Approval: The decision to manually or automatically approve install plans. Note: If you want to manually approve the install plan, you must use a customized Submariner subscription. 1.4.7. Managing service discovery for Submariner After Submariner is deployed into the same environment as your managed clusters, the routes are configured for secure IP routing between the pod and services across the clusters in the managed cluster set. 1.4.7.1. Enabling service discovery for Submariner To make a service from a cluster visible and discoverable to other clusters in the managed cluster set, you must create a ServiceExport object. After a service is exported with a ServiceExport object, you can access the service by the following format: <service>.<namespace>.svc.clusterset.local . If multiple clusters export a service with the same name, and from the same namespace, they are recognized by other clusters as a single logical service. This example uses the nginx service in the default namespace, but you can discover any Kubernetes ClusterIP service or headless service: Apply an instance of the nginx service on a managed cluster that is in the ManagedClusterSet by entering the following commands: Export the service by creating a ServiceExport entry by entering a command with the subctl tool that is similar to the following command: Replace service-namespace with the name of the namespace where the service is located. In this example, it is default . Replace service-name with the name of the service that you are exporting. In this example, it is nginx . See export in the Submariner documentation for more information about other available flags. Run the following command from a different managed cluster to confirm that it can access the nginx service: The nginx service discovery is now configured for Submariner. 
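To make the nginx service discovery example above concrete, the following is a hedged sketch of exporting and consuming a service across clusters; the container image, namespace, service name, and port are illustrative placeholders, and the exact subctl flags should be checked against the subctl version you installed.

    # On the cluster that provides the service: create and expose a demo nginx deployment
    oc -n default create deployment nginx --image=nginxinc/nginx-unprivileged:stable-alpine
    oc -n default expose deployment nginx --port=8080

    # Export it so the other clusters in the managed cluster set can discover it
    subctl export service --namespace default nginx

    # From a pod on another managed cluster, reach the exported service by its clusterset name
    curl nginx.default.svc.clusterset.local:8080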
1.4.7.2. Disabling service discovery for Submariner To disable a service from being exported to other clusters, enter a command similar to the following example for nginx : Replace service-namespace with the name of the namespace where the service is located. Replace service-name with the name of the service that you are exporting. See unexport in the Submariner documentation for more information about other available flags. The service is no longer available for discovery by clusters. 1.4.8. Uninstalling Submariner You can uninstall the Submariner components from your clusters using the Red Hat Advanced Cluster Management for Kubernetes console or the command-line. For Submariner versions earlier than 0.12, additional steps are needed to completely remove all data plane components. The Submariner uninstall is idempotent, so you can repeat steps without any issues. 1.4.8.1. Uninstalling Submariner by using the console To uninstall Submariner from a cluster by using the console, complete the following steps: From the console navigation, select Infrastructure > Clusters , and select the Cluster sets tab. Select the cluster set that contains the clusters from which you want to remove the Submariner components. Select the Submariner Add-ons tab to view the clusters in the cluster set that have Submariner deployed. In the Actions menu for the cluster that you want to uninstall Submariner, select Uninstall Add-on . In the Actions menu for the cluster that you want to uninstall Submariner, select Delete cluster sets . Repeat those steps for other clusters from which you are removing Submariner. Tip: You can remove the Submariner add-on from multiple clusters in the same cluster set by selecting multiple clusters and clicking Actions . Select Uninstall Submariner add-ons . If the version of Submariner that you are removing is earlier than version 0.12, continue with Uninstalling Submariner manually . If the Submariner version is 0.12 or later, Submariner is removed. Important: Verify that all of the cloud resources are removed from the cloud provider to avoid additional charges by your cloud provider. See Verifying Submariner resource removal for more information. 1.4.8.2. Uninstalling Submariner by using the CLI To uninstall Submariner by using the command line, complete the following steps: Remove the Submariner deployment for the cluster by running the following command: Replace managed-cluster-namespace with the namespace of your managed cluster. Remove the cloud resources of the cluster by running the following command: Replace managed-cluster-namespace with the namespace of your managed cluster. Delete the cluster set to remove the broker details by running the following command: Replace managedclusterset with the name of your managed cluster set. If the version of Submariner that you are removing is earlier than version 0.12, continue with Uninstalling Submariner manually . If the Submariner version is 0.12 or later, Submariner is removed. Important: Verify that all of the cloud resources are removed from the cloud provider to avoid additional charges by your cloud provider. See Verifying Submariner resource removal for more information. 1.4.8.3. Uninstalling Submariner manually When uninstalling versions of Submariner that are earlier than version 0.12, complete steps 5-8 in the Manual Uninstall section in the Submariner documentation. After completing those steps, your Submariner components are removed from the cluster. 
Important: Verify that all of the cloud resources are removed from the cloud provider to avoid additional charges by your cloud provider. See Verifying Submariner resource removal for more information. 1.4.8.4. Verifying Submariner resource removal After uninstalling Submariner, verify that all of the Submariner resources are removed from your clusters. If they remain on your clusters, some resources continue to accrue charges from infrastructure providers. Ensure that you have no additional Submariner resources on your cluster by completing the following steps: Run the following command to list any Submariner resources that remain on the cluster: Replace CLUSTER_NAME with the name of your cluster. Remove any resources on the list by entering the following command: Replace RESOURCE_NAME with the name of the Submariner resource that you want to remove. Repeat steps 1-2 for each of the clusters until your search does not identify any resources. The Submariner resources are removed from your cluster.
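To recap the command-line uninstall flow from this section in one place, the following is a minimal sketch. It assumes the commands in this document are oc invocations (the listings omit the leading binary name) and uses placeholder names for the managed cluster namespace and the managed cluster set.
MANAGED_CLUSTER_NS=managed-cluster-1   # placeholder: namespace of your managed cluster
MANAGED_CLUSTER_SET=my-cluster-set     # placeholder: name of your managed cluster set
# Remove the Submariner deployment from the managed cluster.
oc -n "$MANAGED_CLUSTER_NS" delete managedclusteraddon submariner
# Remove the cloud resources that were created for the cluster.
oc -n "$MANAGED_CLUSTER_NS" delete submarinerconfig submariner
# Delete the cluster set to remove the broker details.
oc delete managedclusterset "$MANAGED_CLUSTER_SET"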
[ "edit submarinerconfig -n <managed-cluster-ns> submariner", "annotations: submariner.io/control-plane-sg-id: <control-plane-group-id> 1 submariner.io/subnet-id-list: <subnet-id-list> 2 submariner.io/vpc-id: <custom-vpc-id> 3 submariner.io/worker-sg-id: <worker-security-group-id> 4", "get ManagedClusterSet <cluster-set-name> -o jsonpath=\"{.metadata.annotations['cluster\\.open-cluster-management\\.io/submariner-broker-ns']}\"", "apiVersion: submariner.io/v1alpha1 kind: Broker metadata: name: submariner-broker 1 namespace: broker-namespace 2 spec: globalnetEnabled: true-or-false 3", "apply -f submariner-broker.yaml", "apiVersion: submariner.io/v1 kind: ClusterGlobalEgressIP metadata: name: cluster-egress.submariner.io spec: numberOfIPs: 8", "tar -C /tmp/ -xf <name>.tar.xz", "install -m744 /tmp/<version>/<name> /USDHOME/.local/bin/subctl", "apiVersion: submarineraddon.open-cluster-management.io/v1alpha1 kind: SubmarinerConfig metadata: name: submariner namespace: <managed-cluster-namespace> spec: gatewayConfig: gateways: 1", "az extension add --upgrade -s <path-to-extension>", "az extension list", "\"experimental\": false, \"extensionType\": \"whl\", \"name\": \"aro\", \"path\": \"<path-to-extension>\", \"preview\": true, \"version\": \"1.0.x\"", "az feature registration create --namespace Microsoft.RedHatOpenShift --name AdminKubeconfig", "az aro get-admin-kubeconfig -g <resource group> -n <cluster resource name>", "export KUBECONFIG=<path-to-kubeconfig> get nodes", "apiVersion: submarineraddon.open-cluster-management.io/v1alpha1 kind: SubmarinerConfig metadata: name: submariner namespace: <managed-cluster-namespace> spec: loadBalancerEnable: true", "rosa login login <rosa-cluster-url>:6443 --username cluster-admin --password <password>", "config view --flatten=true > rosa_kube/kubeconfig", "apiVersion: submarineraddon.open-cluster-management.io/v1alpha1 kind: SubmarinerConfig metadata: name: submariner namespace: <managed-cluster-namespace> spec: loadBalancerEnable: true", "apiVersion: cluster.open-cluster-management.io/v1beta2 kind: ManagedClusterSet metadata: name: <managed-cluster-set-name>", "apiVersion: submariner.io/v1alpha1 kind: Broker metadata: name: submariner-broker namespace: <managed-cluster-set-name>-broker labels: cluster.open-cluster-management.io/backup: submariner spec: globalnetEnabled: <true-or-false>", "label managedclusters <managed-cluster-name> \"cluster.open-cluster-management.io/clusterset=<managed-cluster-set-name>\" --overwrite", "apiVersion: submarineraddon.open-cluster-management.io/v1alpha1 kind: SubmarinerConfig metadata: name: submariner namespace: <managed-cluster-namespace> spec:{}", "apiVersion: addon.open-cluster-management.io/v1alpha1 kind: ManagedClusterAddOn metadata: name: submariner namespace: <managed-cluster-name> spec: installNamespace: submariner-operator", "-n <managed-cluster-name> get managedclusteraddons submariner -oyaml", "apiVersion: submarineraddon.open-cluster-management.io/v1alpha1 kind: SubmarinerConfig metadata: name: submariner namespace: <managed-cluster-namespace> spec: credentialsSecret: name: <managed-cluster-name>-<provider>-creds IPSecNATTPort: <NATTPort>", "apiVersion: submarineraddon.open-cluster-management.io/v1alpha1 kind: SubmarinerConfig metadata: name: submariner namespace: <managed-cluster-namespace> spec: credentialsSecret: name: <managed-cluster-name>-<provider>-creds gatewayConfig: gateways: <gateways>", "apiVersion: submarineraddon.open-cluster-management.io/v1alpha1 kind: SubmarinerConfig metadata: name: 
submariner namespace: <managed-cluster-namespace> spec: credentialsSecret: name: <managed-cluster-name>-<provider>-creds gatewayConfig: instanceType: <instance-type>", "apiVersion: submarineraddon.open-cluster-management.io/v1alpha1 kind: SubmarinerConfig metadata: name: submariner namespace: <managed-cluster-namespace> spec: cableDriver: vxlan credentialsSecret: name: <managed-cluster-name>-<provider>-creds", "-n default create deployment nginx --image=nginxinc/nginx-unprivileged:stable-alpine -n default expose deployment nginx --port=8080", "subctl export service --namespace <service-namespace> <service-name>", "-n default run --generator=run-pod/v1 tmp-shell --rm -i --tty --image quay.io/submariner/nettest -- /bin/bash curl nginx.default.svc.clusterset.local:8080", "subctl unexport service --namespace <service-namespace> <service-name>", "-n <managed-cluster-namespace> delete managedclusteraddon submariner", "-n <managed-cluster-namespace> delete submarinerconfig submariner", "delete managedclusterset <managedclusterset>", "get cluster <CLUSTER_NAME> grep submariner", "delete resource <RESOURCE_NAME> cluster <CLUSTER_NAME>" ]
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_management_for_kubernetes/2.12/html/networking/networking
Chapter 34. Authenticating a RHEL client to the network by using the 802.1X standard with a certificate stored on the file system
Chapter 34. Authenticating a RHEL client to the network by using the 802.1X standard with a certificate stored on the file system Administrators frequently use port-based Network Access Control (NAC) based on the IEEE 802.1X standard to protect a network from unauthorized LAN and Wi-Fi clients. If the network uses the Extensible Authentication Protocol Transport Layer Security (EAP-TLS) mechanism, you require a certificate to authenticate the client to this network. 34.1. Configuring 802.1X network authentication on an existing Ethernet connection by using nmcli You can use the nmcli utility to configure an Ethernet connection with 802.1X network authentication on the command line. Prerequisites The network supports 802.1X network authentication. The Ethernet connection profile exists in NetworkManager and has a valid IP configuration. The following files required for TLS authentication exist on the client: The client key is stored in the /etc/pki/tls/private/client.key file, and the file is owned and only readable by the root user. The client certificate is stored in the /etc/pki/tls/certs/client.crt file. The Certificate Authority (CA) certificate is stored in the /etc/pki/tls/certs/ca.crt file. The wpa_supplicant package is installed. Procedure Set the Extensible Authentication Protocol (EAP) to tls and the paths to the client certificate and key file: Note that you must set the 802-1x.eap , 802-1x.client-cert , and 802-1x.private-key parameters in a single command. Set the path to the CA certificate: Set the identity of the user used in the certificate: Optional: Store the password in the configuration: Important By default, NetworkManager stores the password in clear text in the connection profile on the disk, but the file is readable only by the root user. However, clear text passwords in a configuration file can be a security risk. To increase security, set the 802-1x.password-flags parameter to agent-owned . With this setting, on servers with the GNOME desktop environment or the nm-applet running, NetworkManager retrieves the password from these services after you unlock the keyring. In other cases, NetworkManager prompts for the password. Activate the connection profile: Verification Access resources on the network that require network authentication. Additional resources Configuring an Ethernet connection 34.2. Configuring a static Ethernet connection with 802.1X network authentication by using nmstatectl Use the nmstatectl utility to configure an Ethernet connection with 802.1X network authentication through the Nmstate API. The Nmstate API ensures that, after setting the configuration, the result matches the configuration file. If anything fails, nmstatectl automatically rolls back the changes to avoid leaving the system in an incorrect state. Note The nmstate library only supports the TLS Extensible Authentication Protocol (EAP) method. Prerequisites The network supports 802.1X network authentication. The managed node uses NetworkManager. The following files required for TLS authentication exist on the client: The client key is stored in the /etc/pki/tls/private/client.key file, and the file is owned and only readable by the root user. The client certificate is stored in the /etc/pki/tls/certs/client.crt file. The Certificate Authority (CA) certificate is stored in the /etc/pki/tls/certs/ca.crt file.
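Before configuring either profile, it can save time to confirm that the three files listed in the prerequisites exist with the expected ownership and permissions. This is a minimal shell sketch using standard tools; the paths are the ones given above.
# Confirm that the client key, client certificate, and CA certificate exist.
ls -l /etc/pki/tls/private/client.key /etc/pki/tls/certs/client.crt /etc/pki/tls/certs/ca.crt
# The private key must be owned by root and must not be readable by other users.
stat -c '%U %a %n' /etc/pki/tls/private/client.key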
Procedure Create a YAML file, for example ~/create-ethernet-profile.yml , with the following content: --- interfaces: - name: enp1s0 type: ethernet state: up ipv4: enabled: true address: - ip: 192.0.2.1 prefix-length: 24 dhcp: false ipv6: enabled: true address: - ip: 2001:db8:1::1 prefix-length: 64 autoconf: false dhcp: false 802.1x: ca-cert: /etc/pki/tls/certs/ca.crt client-cert: /etc/pki/tls/certs/client.crt eap-methods: - tls identity: client.example.org private-key: /etc/pki/tls/private/client.key private-key-password: password routes: config: - destination: 0.0.0.0/0 next-hop-address: 192.0.2.254 next-hop-interface: enp1s0 - destination: ::/0 next-hop-address: 2001:db8:1::fffe next-hop-interface: enp1s0 dns-resolver: config: search: - example.com server: - 192.0.2.200 - 2001:db8:1::ffbb These settings define an Ethernet connection profile for the enp1s0 device with the following settings: A static IPv4 address - 192.0.2.1 with a /24 subnet mask A static IPv6 address - 2001:db8:1::1 with a /64 subnet mask An IPv4 default gateway - 192.0.2.254 An IPv6 default gateway - 2001:db8:1::fffe An IPv4 DNS server - 192.0.2.200 An IPv6 DNS server - 2001:db8:1::ffbb A DNS search domain - example.com 802.1X network authentication using the TLS EAP protocol Apply the settings to the system: Verification Access resources on the network that require network authentication. 34.3. Configuring a static Ethernet connection with 802.1X network authentication by using the network RHEL system role Network Access Control (NAC) protects a network from unauthorized clients. You can specify the details that are required for the authentication in NetworkManager connection profiles to enable clients to access the network. By using Ansible and the network RHEL system role, you can automate this process and remotely configure connection profiles on the hosts defined in a playbook. You can use an Ansible playbook to copy a private key, a certificate, and the CA certificate to the client, and then use the network RHEL system role to configure a connection profile with 802.1X network authentication. Prerequisites You have prepared the control node and the managed nodes. You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. The network supports 802.1X network authentication. The managed nodes use NetworkManager. The following files required for the TLS authentication exist on the control node: The client key is stored in the /srv/data/client.key file. The client certificate is stored in the /srv/data/client.crt file. The Certificate Authority (CA) certificate is stored in the /srv/data/ca.crt file. Procedure Store your sensitive variables in an encrypted file: Create the vault: After the ansible-vault create command opens an editor, enter the sensitive data in the <key> : <value> format: pwd: <password> Save the changes, and close the editor. Ansible encrypts the data in the vault.
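As a compact illustration of the vault step, assuming you work in the playbook directory and keep the file name vault.yml and the variable name pwd used by this procedure:
# Create the encrypted vault; you are prompted for a new vault password and an editor opens.
ansible-vault create vault.yml
# Inside the editor, store the 802.1X private-key password as a variable:
#   pwd: <password>
# You can later review or change the encrypted contents:
ansible-vault view vault.yml
ansible-vault edit vault.yml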
Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Configure an Ethernet connection with 802.1X authentication hosts: managed-node-01.example.com vars_files: - vault.yml tasks: - name: Copy client key for 802.1X authentication ansible.builtin.copy: src: "/srv/data/client.key" dest: "/etc/pki/tls/private/client.key" mode: 0600 - name: Copy client certificate for 802.1X authentication ansible.builtin.copy: src: "/srv/data/client.crt" dest: "/etc/pki/tls/certs/client.crt" - name: Copy CA certificate for 802.1X authentication ansible.builtin.copy: src: "/srv/data/ca.crt" dest: "/etc/pki/ca-trust/source/anchors/ca.crt" - name: Ethernet connection profile with static IP address settings and 802.1X ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: - name: enp1s0 type: ethernet autoconnect: yes ip: address: - 192.0.2.1/24 - 2001:db8:1::1/64 gateway4: 192.0.2.254 gateway6: 2001:db8:1::fffe dns: - 192.0.2.200 - 2001:db8:1::ffbb dns_search: - example.com ieee802_1x: identity: <user_name> eap: tls private_key: "/etc/pki/tls/private/client.key" private_key_password: "{{ pwd }}" client_cert: "/etc/pki/tls/certs/client.crt" ca_cert: "/etc/pki/ca-trust/source/anchors/ca.crt" domain_suffix_match: example.com state: up The settings specified in the example playbook include the following: ieee802_1x This variable contains the 802.1X-related settings. eap: tls Configures the profile to use the certificate-based TLS authentication method for the Extensible Authentication Protocol (EAP). For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.network/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Verification Access resources on the network that require network authentication. Additional resources /usr/share/ansible/roles/rhel-system-roles.network/README.md file /usr/share/doc/rhel-system-roles/network/ directory Ansible vault 34.4. Configuring a wifi connection with 802.1X network authentication by using the network RHEL system role Network Access Control (NAC) protects a network from unauthorized clients. You can specify the details that are required for the authentication in NetworkManager connection profiles to enable clients to access the network. By using Ansible and the network RHEL system role, you can automate this process and remotely configure connection profiles on the hosts defined in a playbook. You can use an Ansible playbook to copy a private key, a certificate, and the CA certificate to the client, and then use the network RHEL system role to configure a connection profile with 802.1X network authentication. Prerequisites You have prepared the control node and the managed nodes You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. The network supports 802.1X network authentication. You installed the wpa_supplicant package on the managed node. DHCP is available in the network of the managed node. The following files required for TLS authentication exist on the control node: The client key is stored in the /srv/data/client.key file. The client certificate is stored in the /srv/data/client.crt file. The CA certificate is stored in the /srv/data/ca.crt file. 
Procedure Store your sensitive variables in an encrypted file: Create the vault: After the ansible-vault create command opens an editor, enter the sensitive data in the <key> : <value> format: pwd: <password> Save the changes, and close the editor. Ansible encrypts the data in the vault. Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Configure a wifi connection with 802.1X authentication hosts: managed-node-01.example.com tasks: - name: Copy client key for 802.1X authentication ansible.builtin.copy: src: "/srv/data/client.key" dest: "/etc/pki/tls/private/client.key" mode: 0400 - name: Copy client certificate for 802.1X authentication ansible.builtin.copy: src: "/srv/data/client.crt" dest: "/etc/pki/tls/certs/client.crt" - name: Copy CA certificate for 802.1X authentication ansible.builtin.copy: src: "/srv/data/ca.crt" dest: "/etc/pki/ca-trust/source/anchors/ca.crt" - name: Wifi connection profile with dynamic IP address settings and 802.1X ansible.builtin.import_role: name: rhel-system-roles.network vars: network_connections: - name: Wifi connection profile with dynamic IP address settings and 802.1X interface_name: wlp1s0 state: up type: wireless autoconnect: yes ip: dhcp4: true auto6: true wireless: ssid: "Example-wifi" key_mgmt: "wpa-eap" ieee802_1x: identity: <user_name> eap: tls private_key: "/etc/pki/tls/client.key" private_key_password: "{{ pwd }}" private_key_password_flags: none client_cert: "/etc/pki/tls/client.pem" ca_cert: "/etc/pki/tls/cacert.pem" domain_suffix_match: "example.com" The settings specified in the example playbook include the following: ieee802_1x This variable contains the 802.1X-related settings. eap: tls Configures the profile to use the certificate-based TLS authentication method for the Extensible Authentication Protocol (EAP). For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.network/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. Run the playbook: Additional resources /usr/share/ansible/roles/rhel-system-roles.network/README.md file /usr/share/doc/rhel-system-roles/network/ directory
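As a final, hedged illustration for this chapter, the validation and run steps shown above can be followed by a quick check on the managed node to confirm that NetworkManager activated the profile; the interface name wlp1s0 is the one used in the example playbook.
# Check the playbook syntax, then run it; both commands prompt for the vault password.
ansible-playbook --ask-vault-pass --syntax-check ~/playbook.yml
ansible-playbook --ask-vault-pass ~/playbook.yml
# On the managed node, confirm that the profile is active on wlp1s0.
nmcli device status
nmcli connection show --active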
[ "nmcli connection modify enp1s0 802-1x.eap tls 802-1x.client-cert /etc/pki/tls/certs/client.crt 802-1x.private-key /etc/pki/tls/certs/certs/client.key", "nmcli connection modify enp1s0 802-1x.ca-cert /etc/pki/tls/certs/ca.crt", "nmcli connection modify enp1s0 802-1x.identity [email protected]", "nmcli connection modify enp1s0 802-1x.private-key-password password", "nmcli connection up enp1s0", "--- interfaces: - name: enp1s0 type: ethernet state: up ipv4: enabled: true address: - ip: 192.0.2.1 prefix-length: 24 dhcp: false ipv6: enabled: true address: - ip: 2001:db8:1::1 prefix-length: 64 autoconf: false dhcp: false 802.1x: ca-cert: /etc/pki/tls/certs/ca.crt client-cert: /etc/pki/tls/certs/client.crt eap-methods: - tls identity: client.example.org private-key: /etc/pki/tls/private/client.key private-key-password: password routes: config: - destination: 0.0.0.0/0 next-hop-address: 192.0.2.254 next-hop-interface: enp1s0 - destination: ::/0 next-hop-address: 2001:db8:1::fffe next-hop-interface: enp1s0 dns-resolver: config: search: - example.com server: - 192.0.2.200 - 2001:db8:1::ffbb", "nmstatectl apply ~/create-ethernet-profile.yml", "ansible-vault create vault.yml New Vault password: <vault_password> Confirm New Vault password: <vault_password>", "pwd: <password>", "--- - name: Configure an Ethernet connection with 802.1X authentication hosts: managed-node-01.example.com vars_files: - vault.yml tasks: - name: Copy client key for 802.1X authentication ansible.builtin.copy: src: \"/srv/data/client.key\" dest: \"/etc/pki/tls/private/client.key\" mode: 0600 - name: Copy client certificate for 802.1X authentication ansible.builtin.copy: src: \"/srv/data/client.crt\" dest: \"/etc/pki/tls/certs/client.crt\" - name: Copy CA certificate for 802.1X authentication ansible.builtin.copy: src: \"/srv/data/ca.crt\" dest: \"/etc/pki/ca-trust/source/anchors/ca.crt\" - name: Ethernet connection profile with static IP address settings and 802.1X ansible.builtin.include_role: name: rhel-system-roles.network vars: network_connections: - name: enp1s0 type: ethernet autoconnect: yes ip: address: - 192.0.2.1/24 - 2001:db8:1::1/64 gateway4: 192.0.2.254 gateway6: 2001:db8:1::fffe dns: - 192.0.2.200 - 2001:db8:1::ffbb dns_search: - example.com ieee802_1x: identity: <user_name> eap: tls private_key: \"/etc/pki/tls/private/client.key\" private_key_password: \"{{ pwd }}\" client_cert: \"/etc/pki/tls/certs/client.crt\" ca_cert: \"/etc/pki/ca-trust/source/anchors/ca.crt\" domain_suffix_match: example.com state: up", "ansible-playbook --ask-vault-pass --syntax-check ~/playbook.yml", "ansible-playbook --ask-vault-pass ~/playbook.yml", "ansible-vault create vault.yml New Vault password: <vault_password> Confirm New Vault password: <vault_password>", "pwd: <password>", "--- - name: Configure a wifi connection with 802.1X authentication hosts: managed-node-01.example.com tasks: - name: Copy client key for 802.1X authentication ansible.builtin.copy: src: \"/srv/data/client.key\" dest: \"/etc/pki/tls/private/client.key\" mode: 0400 - name: Copy client certificate for 802.1X authentication ansible.builtin.copy: src: \"/srv/data/client.crt\" dest: \"/etc/pki/tls/certs/client.crt\" - name: Copy CA certificate for 802.1X authentication ansible.builtin.copy: src: \"/srv/data/ca.crt\" dest: \"/etc/pki/ca-trust/source/anchors/ca.crt\" - name: Wifi connection profile with dynamic IP address settings and 802.1X ansible.builtin.import_role: name: rhel-system-roles.network vars: network_connections: - name: Wifi connection profile with 
dynamic IP address settings and 802.1X interface_name: wlp1s0 state: up type: wireless autoconnect: yes ip: dhcp4: true auto6: true wireless: ssid: \"Example-wifi\" key_mgmt: \"wpa-eap\" ieee802_1x: identity: <user_name> eap: tls private_key: \"/etc/pki/tls/client.key\" private_key_password: \"{{ pwd }}\" private_key_password_flags: none client_cert: \"/etc/pki/tls/client.pem\" ca_cert: \"/etc/pki/tls/cacert.pem\" domain_suffix_match: \"example.com\"", "ansible-playbook --ask-vault-pass --syntax-check ~/playbook.yml", "ansible-playbook --ask-vault-pass ~/playbook.yml" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_and_managing_networking/authenticating-a-rhel-client-to-the-network-using-the-802-1x-standard-with-a-certificate-stored-on-the-file-system_configuring-and-managing-networking
Chapter 3. Getting started
Chapter 3. Getting started This chapter guides you through the steps to set up your environment and run a simple messaging program. 3.1. Prerequisites To build the example, Maven must be configured to use the Red Hat repository or a local repository . You must install the examples . You must have a message broker listening for connections on localhost . It must have anonymous access enabled. For more information, see Starting the broker . You must have a queue named queue . For more information, see Creating a queue . 3.2. Running Hello World The Hello World example calls createConnection() for each character of the string "Hello World", transferring one at a time. Because AMQ JMS Pool is in use, each call reuses the same underlying JMS Connection object. Procedure Use Maven to build the examples by running the following command in the <source-dir> /pooled-jms-examples directory. USD mvn clean package dependency:copy-dependencies -DincludeScope=runtime -DskipTests The addition of dependency:copy-dependencies results in the dependencies being copied into the target/dependency directory. Use the java command to run the example. On Linux or UNIX: USD java -cp "target/classes:target/dependency/*" org.messaginghub.jms.example.HelloWorld On Windows: > java -cp "target\classes;target\dependency\*" org.messaginghub.jms.example.HelloWorld Running it on Linux results in the following output: USD java -cp "target/classes/:target/dependency/*" org.messaginghub.jms.example.HelloWorld 2018-05-17 11:04:23,393 [main ] - INFO JmsPoolConnectionFactory - Provided ConnectionFactory is JMS 2.0+ capable. 2018-05-17 11:04:23,715 [localhost:5672]] - INFO SaslMechanismFinder - Best match for SASL auth was: SASL-ANONYMOUS 2018-05-17 11:04:23,739 [localhost:5672]] - INFO JmsConnection - Connection ID:104dfd29-d18d-4bf5-aab9-a53660f58633:1 connected to remote Broker: amqp://localhost:5672 Hello World The source code for the example is in the <source-dir> /pooled-jms-examples/src/main/java directory. The JNDI and logging configuration is in the <source-dir> /pooled-jms-examples/src/main/resources directory.
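If the example fails to connect, the most common cause is that no broker is listening on the AMQP port shown in the output above. A quick, hedged check before running the example:
# Confirm that something is listening on port 5672 on the local machine.
ss -ltn | grep -w 5672 || echo "Nothing is listening on port 5672"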
[ "mvn clean package dependency:copy-dependencies -DincludeScope=runtime -DskipTests", "java -cp \"target/classes:target/dependency/*\" org.messaginghub.jms.example.HelloWorld", "> java -cp \"target\\classes;target\\dependency\\*\" org.messaginghub.jms.example.HelloWorld", "java -cp \"target/classes/:target/dependency/*\" org.messaginghub.jms.example.HelloWorld 2018-05-17 11:04:23,393 [main ] - INFO JmsPoolConnectionFactory - Provided ConnectionFactory is JMS 2.0+ capable. 2018-05-17 11:04:23,715 [localhost:5672]] - INFO SaslMechanismFinder - Best match for SASL auth was: SASL-ANONYMOUS 2018-05-17 11:04:23,739 [localhost:5672]] - INFO JmsConnection - Connection ID:104dfd29-d18d-4bf5-aab9-a53660f58633:1 connected to remote Broker: amqp://localhost:5672 Hello World" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/using_the_amq_jms_pool_library/getting_started
Chapter 4. Sizing requirements for Red Hat Developer Hub
Chapter 4. Sizing requirements for Red Hat Developer Hub Scaling the Red Hat Developer Hub requires significant resource allocation. The following table lists the sizing requirements for installing and running Red Hat Developer Hub, including Developer Hub application, database components, and Operator.
Table 4.1. Recommended sizing for running Red Hat Developer Hub
Components                    | Red Hat Developer Hub application | Red Hat Developer Hub database | Red Hat Developer Hub Operator
Central Processing Unit (CPU) | 4 vCPU                            | 2 vCPU                         | 1 vCPU
Memory                        | 16 GB                             | 8 GB                           | 1500 Mi
Storage size                  | 2 GB                              | 20 GB                          | 50 Mi
Replicas                      | 2 or more                         | 3 or more                      | 1 or more
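For planning purposes, the application and database figures above can be expressed as ordinary Kubernetes resource requests. The fragment below is an illustration only, written to a local file for review: it uses generic container resource fields, not the Red Hat Developer Hub Operator's custom resource schema.
# Hypothetical reference file that restates the sizing table as Kubernetes resource requests.
cat <<'EOF' > rhdh-sizing-reference.yaml
# Developer Hub application
resources:
  requests:
    cpu: "4"
    memory: 16Gi
---
# Developer Hub database
resources:
  requests:
    cpu: "2"
    memory: 8Gi
EOF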
null
https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.3/html/about_red_hat_developer_hub/rhdh-sizing_about-rhdh
Chapter 1. Installing the roxctl CLI
Chapter 1. Installing the roxctl CLI roxctl is a command-line interface (CLI) for running commands on Red Hat Advanced Cluster Security for Kubernetes (RHACS). You can install the roxctl CLI by downloading the binary or you can run the roxctl CLI from a container image. 1.1. Installing the roxctl CLI by downloading the binary You can install the roxctl CLI to interact with RHACS from a command-line interface. You can install roxctl on Linux, Windows, or macOS. 1.1.1. Installing the roxctl CLI on Linux You can install the roxctl CLI binary on Linux by using the following procedure. Note roxctl CLI for Linux is available for amd64 , arm64 , ppc64le , and s390x architectures. Procedure Determine the roxctl architecture for the target operating system: USD arch="USD(uname -m | sed "s/x86_64//")"; arch="USD{arch:+-USDarch}" Download the roxctl CLI: USD curl -L -f -o roxctl "https://mirror.openshift.com/pub/rhacs/assets/4.6.3/bin/Linux/roxctlUSD{arch}" Make the roxctl binary executable: USD chmod +x roxctl Place the roxctl binary in a directory that is on your PATH : To check your PATH , execute the following command: USD echo USDPATH Verification Verify the roxctl version you have installed: USD roxctl version 1.1.2. Installing the roxctl CLI on macOS You can install the roxctl CLI binary on macOS by using the following procedure. Note roxctl CLI for macOS is available for amd64 and arm64 architectures. Procedure Determine the roxctl architecture for the target operating system: USD arch="USD(uname -m | sed "s/x86_64//")"; arch="USD{arch:+-USDarch}" Download the roxctl CLI: USD curl -L -f -o roxctl "https://mirror.openshift.com/pub/rhacs/assets/4.6.3/bin/Darwin/roxctlUSD{arch}" Remove all extended attributes from the binary: USD xattr -c roxctl Make the roxctl binary executable: USD chmod +x roxctl Place the roxctl binary in a directory that is on your PATH : To check your PATH , execute the following command: USD echo USDPATH Verification Verify the roxctl version you have installed: USD roxctl version 1.1.3. Installing the roxctl CLI on Windows You can install the roxctl CLI binary on Windows by using the following procedure. Note roxctl CLI for Windows is available for the amd64 architecture. Procedure Download the roxctl CLI: USD curl -f -O https://mirror.openshift.com/pub/rhacs/assets/4.6.3/bin/Windows/roxctl.exe Verification Verify the roxctl version you have installed: USD roxctl version 1.2. Running the roxctl CLI from a container The roxctl client is the default entry point in the RHACS roxctl image. To run the roxctl client in a container image: Prerequisites You must first generate an authentication token from the RHACS portal. Procedure Log in to the registry.redhat.io registry. USD docker login registry.redhat.io Pull the latest container image for the roxctl CLI. USD docker pull registry.redhat.io/advanced-cluster-security/rhacs-roxctl-rhel8:4.6.3 After you install the CLI, you can run it by using the following command: USD docker run -e ROX_API_TOKEN=USDROX_API_TOKEN \ -it registry.redhat.io/advanced-cluster-security/rhacs-roxctl-rhel8:4.6.3 \ -e USDROX_CENTRAL_ADDRESS <command> Note In Red Hat Advanced Cluster Security Cloud Service (RHACS Cloud Service), when using roxctl commands that require the Central address, use the Central instance address as displayed in the Instance Details section of the Red Hat Hybrid Cloud Console. For example, use acs-ABCD12345.acs.rhcloud.com instead of acs-data-ABCD12345.acs.rhcloud.com . Verification Verify the roxctl version you have installed. 
USD docker run -it registry.redhat.io/advanced-cluster-security/rhacs-roxctl-rhel8:4.6.3 version
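If you only run roxctl from the container image, a small shell wrapper keeps the invocation readable. This sketch reuses the docker flags, image tag, and environment variables shown above; the Central address value is a placeholder that you replace with your own endpoint.
# Assumes ROX_API_TOKEN is already exported in your shell.
export ROX_CENTRAL_ADDRESS="acs-ABCD12345.acs.rhcloud.com"   # placeholder
roxctl() {
  docker run -e ROX_API_TOKEN="$ROX_API_TOKEN" \
    -it registry.redhat.io/advanced-cluster-security/rhacs-roxctl-rhel8:4.6.3 \
    -e "$ROX_CENTRAL_ADDRESS" "$@"
}
# Example: print the CLI version through the wrapper.
roxctl version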
[ "arch=\"USD(uname -m | sed \"s/x86_64//\")\"; arch=\"USD{arch:+-USDarch}\"", "curl -L -f -o roxctl \"https://mirror.openshift.com/pub/rhacs/assets/4.6.3/bin/Linux/roxctlUSD{arch}\"", "chmod +x roxctl", "echo USDPATH", "roxctl version", "arch=\"USD(uname -m | sed \"s/x86_64//\")\"; arch=\"USD{arch:+-USDarch}\"", "curl -L -f -o roxctl \"https://mirror.openshift.com/pub/rhacs/assets/4.6.3/bin/Darwin/roxctlUSD{arch}\"", "xattr -c roxctl", "chmod +x roxctl", "echo USDPATH", "roxctl version", "curl -f -O https://mirror.openshift.com/pub/rhacs/assets/4.6.3/bin/Windows/roxctl.exe", "roxctl version", "docker login registry.redhat.io", "docker pull registry.redhat.io/advanced-cluster-security/rhacs-roxctl-rhel8:4.6.3", "docker run -e ROX_API_TOKEN=USDROX_API_TOKEN -it registry.redhat.io/advanced-cluster-security/rhacs-roxctl-rhel8:4.6.3 -e USDROX_CENTRAL_ADDRESS <command>", "docker run -it registry.redhat.io/advanced-cluster-security/rhacs-roxctl-rhel8:4.6.3 version" ]
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.6/html/roxctl_cli/installing-the-roxctl-cli-1
4.7. Displaying Cluster Status
4.7. Displaying Cluster Status The following command displays the current status of the cluster and the cluster resources. You can display a subset of information about the current status of the cluster with the following commands. The following command displays the status of the cluster, but not the cluster resources. The following command displays the status of the cluster resources.
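These commands also work well in a quick scripted health check. A minimal sketch, assuming the pcs package is installed and the cluster is running:
# Refresh the full cluster status every 15 seconds until interrupted.
watch -n 15 pcs status
# Or report only resources that are not started.
pcs status resources | grep -iE 'stopped|failed' || echo "No stopped or failed resources reported"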
[ "pcs status", "pcs cluster status", "pcs status resources" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/high_availability_add-on_reference/s1-clusterstat-haar
Preface
Preface
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/using_streams_for_apache_kafka_on_rhel_in_kraft_mode/preface
Chapter 44. PolicyCategoryService
Chapter 44. PolicyCategoryService 44.1. GetPolicyCategories GET /v1/policycategories GetPolicyCategories returns the list of policy categories 44.1.1. Description 44.1.2. Parameters 44.1.2.1. Query Parameters Name Description Required Default Pattern query - null pagination.limit - null pagination.offset - null pagination.sortOption.field - null pagination.sortOption.reversed - null pagination.sortOption.aggregateBy.aggrFunc - UNSET pagination.sortOption.aggregateBy.distinct - null 44.1.3. Return Type V1GetPolicyCategoriesResponse 44.1.4. Content Type application/json 44.1.5. Responses Table 44.1. HTTP Response Codes Code Message Datatype 200 A successful response. V1GetPolicyCategoriesResponse 0 An unexpected error response. RuntimeError 44.1.6. Samples 44.1.7. Common object reference 44.1.7.1. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 44.1.7.1.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 44.1.7.2. 
RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 44.1.7.3. V1GetPolicyCategoriesResponse Field Name Required Nullable Type Description Format categories List of V1PolicyCategory 44.1.7.4. V1PolicyCategory Field Name Required Nullable Type Description Format id String name String isDefault Boolean 44.2. DeletePolicyCategory DELETE /v1/policycategories/{id} DeletePolicyCategory removes the given policy category. 44.2.1. Description 44.2.2. Parameters 44.2.2.1. Path Parameters Name Description Required Default Pattern id X null 44.2.3. Return Type Object 44.2.4. Content Type application/json 44.2.5. Responses Table 44.2. HTTP Response Codes Code Message Datatype 200 A successful response. Object 0 An unexpected error response. RuntimeError 44.2.6. Samples 44.2.7. Common object reference 44.2.7.1. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 44.2.7.1.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. 
value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 44.2.7.2. RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 44.3. GetPolicyCategory GET /v1/policycategories/{id} GetPolicyCategory returns the requested policy category by ID. 44.3.1. Description 44.3.2. Parameters 44.3.2.1. Path Parameters Name Description Required Default Pattern id X null 44.3.3. Return Type V1PolicyCategory 44.3.4. Content Type application/json 44.3.5. Responses Table 44.3. HTTP Response Codes Code Message Datatype 200 A successful response. V1PolicyCategory 0 An unexpected error response. RuntimeError 44.3.6. Samples 44.3.7. Common object reference 44.3.7.1. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 44.3.7.1.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 44.3.7.2. 
RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 44.3.7.3. V1PolicyCategory Field Name Required Nullable Type Description Format id String name String isDefault Boolean 44.4. PostPolicyCategory POST /v1/policycategories PostPolicyCategory creates a new policy category 44.4.1. Description 44.4.2. Parameters 44.4.2.1. Body Parameter Name Description Required Default Pattern body V1PolicyCategory X 44.4.3. Return Type V1PolicyCategory 44.4.4. Content Type application/json 44.4.5. Responses Table 44.4. HTTP Response Codes Code Message Datatype 200 A successful response. V1PolicyCategory 0 An unexpected error response. RuntimeError 44.4.6. Samples 44.4.7. Common object reference 44.4.7.1. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 44.4.7.1.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 44.4.7.2. 
RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 44.4.7.3. V1PolicyCategory Field Name Required Nullable Type Description Format id String name String isDefault Boolean 44.5. RenamePolicyCategory PUT /v1/policycategories RenamePolicyCategory renames the given policy category. 44.5.1. Description 44.5.2. Parameters 44.5.2.1. Body Parameter Name Description Required Default Pattern body V1RenamePolicyCategoryRequest X 44.5.3. Return Type V1PolicyCategory 44.5.4. Content Type application/json 44.5.5. Responses Table 44.5. HTTP Response Codes Code Message Datatype 200 A successful response. V1PolicyCategory 0 An unexpected error response. RuntimeError 44.5.6. Samples 44.5.7. Common object reference 44.5.7.1. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 44.5.7.1.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format typeUrl String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics. value byte[] Must be a valid serialized protocol buffer of the above specified type. byte 44.5.7.2. 
RuntimeError Field Name Required Nullable Type Description Format error String code Integer int32 message String details List of ProtobufAny 44.5.7.3. V1PolicyCategory Field Name Required Nullable Type Description Format id String name String isDefault Boolean 44.5.7.4. V1RenamePolicyCategoryRequest Field Name Required Nullable Type Description Format id String newCategoryName String
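The endpoints in this chapter can be exercised with curl. The sketch below assumes a valid API token in ROX_API_TOKEN and the Central endpoint in ROX_CENTRAL_ADDRESS, and that the token is sent as a bearer token; the category name and ID are placeholders, and the request bodies follow the V1PolicyCategory and V1RenamePolicyCategoryRequest fields described above.
# List policy categories.
curl -sk -H "Authorization: Bearer $ROX_API_TOKEN" "https://$ROX_CENTRAL_ADDRESS/v1/policycategories"
# Create a custom category; the id is assigned by the server.
curl -sk -H "Authorization: Bearer $ROX_API_TOKEN" -X POST -d '{"name": "My custom category"}' "https://$ROX_CENTRAL_ADDRESS/v1/policycategories"
# Rename a category by ID, then delete it.
curl -sk -H "Authorization: Bearer $ROX_API_TOKEN" -X PUT -d '{"id": "<category-id>", "newCategoryName": "Renamed category"}' "https://$ROX_CENTRAL_ADDRESS/v1/policycategories"
curl -sk -H "Authorization: Bearer $ROX_API_TOKEN" -X DELETE "https://$ROX_CENTRAL_ADDRESS/v1/policycategories/<category-id>"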
[ "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) 
any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }", "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }" ]
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.5/html/api_reference/policycategoryservice
Chapter 178. JIRA Component (deprecated)
Chapter 178. JIRA Component (deprecated) Available as of Camel version 2.15 The JIRA component interacts with the JIRA API by encapsulating Atlassian's REST Java Client for JIRA . It currently provides polling for new issues and new comments. It is also able to create new issues. Rather than webhooks, this endpoint relies on simple polling. Reasons include: Concern for reliability/stability The types of payloads we're polling aren't typically large (plus, paging is available in the API) The need to support apps running somewhere not publicly accessible where a webhook would fail Note that the JIRA API is fairly expansive. Therefore, this component could be easily expanded to provide additional interactions. Maven users will need to add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-jira</artifactId> <version>USD{camel-version}</version> </dependency> 178.1. URI format jira://endpoint[?options] 178.2. JIRA Options The JIRA component has no options. The JIRA endpoint is configured using URI syntax: with the following path and query parameters: 178.2.1. Path Parameters (1 parameters): Name Description Default Type type Required Operation to perform such as create a new issue or a new comment JIRAType 178.2.2. Query Parameters (9 parameters): Name Description Default Type password (common) Password for login String serverUrl (common) Required URL to the JIRA server String username (common) Username for login String bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which means any exceptions that occur while the consumer is trying to pick up incoming messages, or the like, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, which will be logged at WARN or ERROR level and ignored. false boolean delay (consumer) Delay in seconds when querying JIRA using the consumer. 6000 int jql (consumer) JQL is the query language from JIRA which allows you to retrieve the data you want. For example jql=project=MyProject where MyProject is the project key in Jira. String exceptionHandler (consumer) To let the consumer use a custom ExceptionHandler. Note that if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, which will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer) Sets the exchange pattern when the consumer creates an exchange. ExchangePattern synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean 178.3. Spring Boot Auto-Configuration The component supports 2 options, which are listed below. Name Description Default Type camel.component.jira.enabled Enable jira component true Boolean camel.component.jira.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean 178.4. JQL: The JQL URI option is used by both consumer endpoints. Theoretically, items like "project key", etc. could be URI options themselves. However, by requiring the use of JQL, the consumers become much more flexible and powerful.
At the bare minimum, the consumers will require the following: jira://[endpoint]?[required options]&jql=project=[project key] One important thing to note is that the newIssue consumer will automatically append "ORDER BY key desc" to your JQL. This is in order to optimize startup processing, rather than having to index every single issue in the project. Another note is that, similarly, the newComment consumer will have to index every single issue and comment in the project. Therefore, for large projects, it's vital to optimize the JQL expression as much as possible. For example, the JIRA Toolkit Plugin includes a "Number of comments" custom field - use '"Number of comments" > 0' in your query. Also try to minimize based on state (status=Open), increase the polling delay, etc. Example: jira://[endpoint]?[required options]&jql=RAW(project=[project key] AND status in (Open, \"Coding In Progress\") AND \"Number of comments\">0)"
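For reference, a minimal Camel route that polls for new issues with this component might look like the following sketch. This is an illustration rather than code from the original documentation: the serverUrl, credentials, and project key are placeholder values, and the sketch assumes camel-core and camel-jira 2.x are on the classpath.

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;

public class JiraNewIssueRoute {
    public static void main(String[] args) throws Exception {
        DefaultCamelContext context = new DefaultCamelContext();
        context.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                // Poll the newIssue consumer; serverUrl, username, password and the
                // project key (MYPROJECT) are placeholder values for this sketch.
                from("jira://newIssue?serverUrl=https://jira.example.com"
                        + "&username=someuser&password=somepass"
                        + "&jql=project=MYPROJECT&delay=6000")
                    // Each polled issue arrives as the exchange body; log it.
                    .to("log:new-jira-issues?showBody=true");
            }
        });
        context.start();
        Thread.sleep(60_000); // let the consumer poll for a while, then shut down
        context.stop();
    }
}

Because the newIssue consumer appends "ORDER BY key desc" automatically, the JQL in the route only needs to narrow the result set, for example by project key or status.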
[ "<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-jira</artifactId> <version>USD{camel-version}</version> </dependency>", "jira://endpoint[?options]", "jira:type", "jira://[endpoint]?[required options]&jql=project=[project key]", "jira://[endpoint]?[required options]&jql=RAW(project=[project key] AND status in (Open, \\\"Coding In Progress\\\") AND \\\"Number of comments\\\">0)\"" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/jira-component
17.3. Delegating Host or Service Management in the Web UI
17.3. Delegating Host or Service Management in the Web UI Each host and service entry in the IdM web UI has a configuration tab that indicates what hosts have been delegated management control over that host or service. Open the Identity tab, and select the Hosts or Services subtab. Click the name of the host or service that you are going to grant delegated management to . Click the Hosts subtab on the far right of the host or service entry. This is the tab which lists hosts that can manage the selected host or service. Figure 17.2. Host Subtab Click the Add link at the top of the list. Click the check box by the names of the hosts to which to delegate management for the host or service. Click the right arrow button, > , to move the hosts to the selection box. Figure 17.3. Host/Service Delegation Management Click the Add button to close the selection box and to save the delegation settings.
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/delegating-management-ui
Chapter 7. Configuring instance scheduling and placement
Chapter 7. Configuring instance scheduling and placement The Compute scheduler service determines on which Compute node or host aggregate to place an instance. When the Compute service (nova) receives a request to launch or move an instance, it uses the specifications provided in the request, the flavor, and the image to find a suitable host. For example, a flavor can specify the traits an instance requires a host to have, such as the type of storage disk, or the Intel CPU instruction set extension. The Compute scheduler service uses the configuration of the following components, in the following order, to determine on which Compute node to launch or move an instance: Placement service prefilters : The Compute scheduler service uses the Placement service to filter the set of candidate Compute nodes based on specific attributes. For example, the Placement service automatically excludes disabled Compute nodes. Filters : Used by the Compute scheduler service to determine the initial set of Compute nodes on which to launch an instance. Weights : The Compute scheduler service prioritizes the filtered Compute nodes using a weighting system. The highest weight has the highest priority. In the following diagram, host 1 and 3 are eligible after filtering. Host 1 has the highest weight and therefore has the highest priority for scheduling. 7.1. Prefiltering using the Placement service The Compute service (nova) interacts with the Placement service when it creates and manages instances. The Placement service tracks the inventory and usage of resource providers, such as a Compute node, a shared storage pool, or an IP allocation pool, and their available quantitative resources, such as the available vCPUs. Any service that needs to manage the selection and consumption of resources can use the Placement service. The Placement service also tracks the mapping of available qualitative resources to resource providers, such as the type of storage disk trait a resource provider has. The Placement service applies prefilters to the set of candidate Compute nodes based on Placement service resource provider inventories and traits. You can create prefilters based on the following criteria: Supported image types Traits Projects or tenants Availability zone 7.1.1. Filtering by requested image type support You can exclude Compute nodes that do not support the disk format of the image used to launch an instance. This is useful when your environment uses Red Hat Ceph Storage as an ephemeral backend, which does not support QCOW2 images. Enabling this feature ensures that the scheduler does not send requests to launch instances using a QCOW2 image to Compute nodes backed by Red Hat Ceph Storage. Procedure Open your Compute environment file. To exclude Compute nodes that do not support the disk format of the image used to launch an instance, set the NovaSchedulerQueryImageType parameter to True in the Compute environment file. Save the updates to your Compute environment file. Add your Compute environment file to the stack with your other environment files and deploy the overcloud: 7.1.2. Filtering by resource provider traits Each resource provider has a set of traits. Traits are the qualitative aspects of a resource provider, for example, the type of storage disk, or the Intel CPU instruction set extension. The Compute node reports its capabilities to the Placement service as traits. An instance can specify which of these traits it requires, or which traits the resource provider must not have. 
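As an illustration of how reported traits are exposed, the following sketch queries the Placement API directly for the traits of a single resource provider. It is not part of the documented procedures, which use the openstack CLI; the Placement endpoint URL, the token, and the resource provider UUID are placeholders that you would obtain from your own deployment.

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ListProviderTraits {
    public static void main(String[] args) throws Exception {
        String placementUrl = "https://placement.example.com:8778";   // placeholder endpoint
        String providerUuid = "5213b75d-9260-42a6-b236-f39b0fd10561"; // placeholder resource provider UUID
        String token = System.getenv("OS_TOKEN");                     // Keystone token obtained separately

        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(placementUrl + "/resource_providers/" + providerUuid + "/traits"))
                .header("X-Auth-Token", token)
                // Listing traits requires at least Placement microversion 1.6
                .header("OpenStack-API-Version", "placement 1.6")
                .GET()
                .build();

        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());

        // The response body is a JSON document with a "traits" array, for example:
        // {"resource_provider_generation": 1, "traits": ["HW_CPU_X86_AVX512BW", "CUSTOM_P_STATE_ENABLED"]}
        System.out.println(response.body());
    }
}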
The Compute scheduler can use these traits to identify a suitable Compute node or host aggregate to host an instance. To enable your cloud users to create instances on hosts that have particular traits, you can define a flavor that requires or forbids a particular trait, and you can create an image that requires or forbids a particular trait. For a list of the available traits, see the os-traits library . You can also create custom traits, as required. Additional resources Section 7.5.1, "Declaring custom traits and resource classes in a YAML file" 7.1.2.1. Creating an image that requires or forbids a resource provider trait You can create an instance image that your cloud users can use to launch instances on hosts that have particular traits. Procedure Create a new image: Identify the trait you require a host or host aggregate to have. You can select an existing trait, or create a new trait: To use an existing trait, list the existing traits to retrieve the trait name: To create a new trait, enter the following command: Custom traits must begin with the prefix CUSTOM_ and contain only the letters A through Z, the numbers 0 through 9 and the underscore "_" character. Collect the existing resource provider traits of each host: Check the existing resource provider traits for the traits you require a host or host aggregate to have: If the traits you require are not already added to the resource provider, then add the existing traits and your required traits to the resource providers for each host: Replace <TRAIT_NAME> with the name of the trait that you want to add to the resource provider. You can use the --trait option more than once to add additional traits, as required. Note This command performs a full replacement of the traits for the resource provider. Therefore, you must retrieve the list of existing resource provider traits on the host and set them again to prevent them from being removed. To schedule instances on a host or host aggregate that has a required trait, add the trait to the image extra specs. For example, to schedule instances on a host or host aggregate that supports AVX-512, add the following trait to the image extra specs: To filter out hosts or host aggregates that have a forbidden trait, add the trait to the image extra specs. For example, to prevent instances from being scheduled on a host or host aggregate that supports multi-attach volumes, add the following trait to the image extra specs: 7.1.2.2. Creating a flavor that requires or forbids a resource provider trait You can create flavors that your cloud users can use to launch instances on hosts that have particular traits. Procedure Create a flavor: Identify the trait you require a host or host aggregate to have. You can select an existing trait, or create a new trait: To use an existing trait, list the existing traits to retrieve the trait name: To create a new trait, enter the following command: Custom traits must begin with the prefix CUSTOM_ and contain only the letters A through Z, the numbers 0 through 9 and the underscore "_" character. Collect the existing resource provider traits of each host: Check the existing resource provider traits for the traits you require a host or host aggregate to have: If the traits you require are not already added to the resource provider, then add the existing traits and your required traits to the resource providers for each host: Replace <TRAIT_NAME> with the name of the trait that you want to add to the resource provider. 
You can use the --trait option more than once to add additional traits, as required. Note This command performs a full replacement of the traits for the resource provider. Therefore, you must retrieve the list of existing resource provider traits on the host and set them again to prevent them from being removed. To schedule instances on a host or host aggregate that has a required trait, add the trait to the flavor extra specs. For example, to schedule instances on a host or host aggregate that supports AVX-512, add the following trait to the flavor extra specs: To filter out hosts or host aggregates that have a forbidden trait, add the trait to the flavor extra specs. For example, to prevent instances from being scheduled on a host or host aggregate that supports multi-attach volumes, add the following trait to the flavor extra specs: 7.1.3. Filtering by isolating host aggregates You can restrict scheduling on a host aggregate to only those instances whose flavor and image traits match the metadata of the host aggregate. The combination of flavor and image metadata must require all the host aggregate traits to be eligible for scheduling on Compute nodes in that host aggregate. Procedure Open your Compute environment file. To isolate host aggregates to host only instances whose flavor and image traits match the aggregate metadata, set the NovaSchedulerEnableIsolatedAggregateFiltering parameter to True in the Compute environment file. Save the updates to your Compute environment file. Add your Compute environment file to the stack with your other environment files and deploy the overcloud: Identify the traits you want to isolate the host aggregate for. You can select an existing trait, or create a new trait: To use an existing trait, list the existing traits to retrieve the trait name: To create a new trait, enter the following command: Custom traits must begin with the prefix CUSTOM_ and contain only the letters A through Z, the numbers 0 through 9 and the underscore "_" character. Collect the existing resource provider traits of each Compute node: Check the existing resource provider traits for the traits you want to isolate the host aggregate for: If the traits you require are not already added to the resource provider, then add the existing traits and your required traits to the resource providers for each Compute node in the host aggregate: Replace <TRAIT_NAME> with the name of the trait that you want to add to the resource provider. You can use the --trait option more than once to add additional traits, as required. Note This command performs a full replacement of the traits for the resource provider. Therefore, you must retrieve the list of existing resource provider traits on the host and set them again to prevent them from being removed. Repeat steps 6 - 8 for each Compute node in the host aggregate. Add the metadata property for the trait to the host aggregate: Add the trait to a flavor or an image: 7.1.4. Filtering by availability zone using the Placement service You can use the Placement service to honor availability zone requests. To use the Placement service to filter by availability zone, placement aggregates must exist that match the membership and UUID of the availability zone host aggregates. Procedure Open your Compute environment file. To use the Placement service to filter by availability zone, set the NovaSchedulerQueryPlacementForAvailabilityZone parameter to True in the Compute environment file. 
Remove the AvailabilityZoneFilter filter from the NovaSchedulerEnabledFilters parameter. Save the updates to your Compute environment file. Add your Compute environment file to the stack with your other environment files and deploy the overcloud: Additional resources For more information on creating a host aggregate to use as an availability zone, see Creating an availability zone . 7.2. Configuring filters and weights for the Compute scheduler service You configure the filters and weights for the Compute scheduler service to determine the initial set of Compute nodes on which to launch an instance. Procedure Open your Compute environment file. Add the filters you want the scheduler to use to the NovaSchedulerEnabledFilters parameter, for example: Optional: By default, the Compute scheduler weighs host Compute nodes by all resource types and spreads instances evenly across all hosts. You can configure the multiplier to apply to each weigher. For example, to specify that the available RAM of a Compute node has a higher weight than the other default weighers, and that the Compute scheduler prefers Compute nodes with more available RAM over those nodes with less available RAM, use the following configuration: Tip You can also set multipliers to a negative value. In the above example, to prefer Compute nodes with less available RAM over those nodes with more available RAM, set ram_weight_multiplier to -2.0 . For more information on the available attributes and their multipliers, see Compute scheduler weights . Save the updates to your Compute environment file. Add your Compute environment file to the stack with your other environment files and deploy the overcloud: Additional resources Compute scheduler filters Compute scheduler weights 7.3. Compute scheduler filters You configure the NovaSchedulerEnabledFilters parameter in your Compute environment file to specify the filters the Compute scheduler must apply when selecting an appropriate Compute node to host an instance. The default configuration applies the following filters: AvailabilityZoneFilter : The Compute node must be in the requested availability zone. ComputeFilter : The Compute node can service the request. ComputeCapabilitiesFilter : The Compute node satisfies the flavor extra specs. ImagePropertiesFilter : The Compute node satisfies the requested image properties. ServerGroupAntiAffinityFilter : The Compute node is not already hosting an instance in a specified group. ServerGroupAffinityFilter : The Compute node is already hosting instances in a specified group. You can add and remove filters. The following table describes all the available filters. Table 7.1. Compute scheduler filters Filter Description AggregateImagePropertiesIsolation Use this filter to match the image metadata of an instance with host aggregate metadata. If any of the host aggregate metadata matches the metadata of the image, then the Compute nodes that belong to that host aggregate are candidates for launching instances from that image. The scheduler only recognises valid image metadata properties. For details on valid image metadata properties, see Image configuration parameters . AggregateInstanceExtraSpecsFilter Use this filter to match namespaced properties defined in the flavor extra specs of an instance with host aggregate metadata. You must scope your flavor extra_specs keys by prefixing them with the aggregate_instance_extra_specs: namespace. 
If any of the host aggregate metadata matches the metadata of the flavor extra spec, then the Compute nodes that belong to that host aggregate are candidates for launching instances with that flavor. AggregateIoOpsFilter Use this filter to filter hosts by I/O operations with a per-aggregate filter_scheduler/max_io_ops_per_host value. If the per-aggregate value is not found, the value falls back to the global setting. If the host is in more than one aggregate and more than one value is found, the scheduler uses the minimum value. AggregateMultiTenancyIsolation Use this filter to limit the availability of Compute nodes in project-isolated host aggregates to a specified set of projects. Only projects specified using the filter_tenant_id metadata key can launch instances on Compute nodes in the host aggregate. For more information, see Creating a project-isolated host aggregate . Note The project can still place instances on other hosts. To restrict this, use the NovaSchedulerPlacementAggregateRequiredForTenants parameter. AggregateNumInstancesFilter Use this filter to limit the number of instances each Compute node in an aggregate can host. You can configure the maximum number of instances per-aggregate by using the filter_scheduler/max_instances_per_host parameter. If the per-aggregate value is not found, the value falls back to the global setting. If the Compute node is in more than one aggregate, the scheduler uses the lowest max_instances_per_host value. AggregateTypeAffinityFilter Use this filter to pass hosts if no flavor metadata key is set, or the flavor aggregate metadata value contains the name of the requested flavor. The value of the flavor metadata entry is a string that may contain either a single flavor name or a comma-separated list of flavor names, such as m1.nano or m1.nano,m1.small . AllHostsFilter Use this filter to consider all available Compute nodes for instance scheduling. Note Using this filter does not disable other filters. AvailabilityZoneFilter Use this filter to launch instances on a Compute node in the availability zone specified by the instance. ComputeCapabilitiesFilter Use this filter to match namespaced properties defined in the flavor extra specs of an instance against the Compute node capabilities. You must prefix the flavor extra specs with the capabilities: namespace. A more efficient alternative to using the ComputeCapabilitiesFilter filter is to use CPU traits in your flavors, which are reported to the Placement service. Traits provide consistent naming for CPU features. For more information, see Filtering by using resource provider traits . ComputeFilter Use this filter to pass all Compute nodes that are operational and enabled. This filter should always be present. DifferentHostFilter Use this filter to enable scheduling of an instance on a different Compute node from a set of specific instances. To specify these instances when launching an instance, use the --hint argument with different_host as the key and the instance UUID as the value: ImagePropertiesFilter Use this filter to filter Compute nodes based on the following properties defined on the instance image: hw_architecture - Corresponds to the architecture of the host, for example, x86, ARM, and Power. img_hv_type - Corresponds to the hypervisor type, for example, KVM, QEMU, Xen, and LXC. img_hv_requested_version - Corresponds to the hypervisor version the Compute service reports. hw_vm_mode - Corresponds to the virtual machine mode, for example hvm, xen, uml, or exe.
Compute nodes that can support the specified image properties contained in the instance are passed to the scheduler. For more information on image properties, see Image configuration parameters . IsolatedHostsFilter Use this filter to only schedule instances with isolated images on isolated Compute nodes. You can also prevent non-isolated images from being used to build instances on isolated Compute nodes by configuring filter_scheduler/restrict_isolated_hosts_to_isolated_images . To specify the isolated set of images and hosts use the filter_scheduler/isolated_hosts and filter_scheduler/isolated_images configuration options, for example: IoOpsFilter Use this filter to filter out hosts that have concurrent I/O operations that exceed the configured filter_scheduler/max_io_ops_per_host , which specifies the maximum number of I/O intensive instances allowed to run on the host. MetricsFilter Use this filter to limit scheduling to Compute nodes that report the metrics configured by using metrics/weight_setting . To use this filter, add the following configuration to your Compute environment file: By default, the Compute scheduler service updates the metrics every 60 seconds. NUMATopologyFilter Use this filter to schedule instances with a NUMA topology on NUMA-capable Compute nodes. Use flavor extra_specs and image properties to specify the NUMA topology for an instance. The filter tries to match the instance NUMA topology to the Compute node topology, taking into consideration the over-subscription limits for each host NUMA cell. NumInstancesFilter Use this filter to filter out Compute nodes that have more instances running than specified by the max_instances_per_host option. PciPassthroughFilter Use this filter to schedule instances on Compute nodes that have the devices that the instance requests by using the flavor extra_specs . Use this filter if you want to reserve nodes with PCI devices, which are typically expensive and limited, for instances that request them. SameHostFilter Use this filter to enable scheduling of an instance on the same Compute node as a set of specific instances. To specify these instances when launching an instance, use the --hint argument with same_host as the key and the instance UUID as the value: ServerGroupAffinityFilter Use this filter to schedule instances in an affinity server group on the same Compute node. To create the server group, enter the following command: To launch an instance in this group, use the --hint argument with group as the key and the group UUID as the value: ServerGroupAntiAffinityFilter Use this filter to schedule instances that belong to an anti-affinity server group on different Compute nodes. To create the server group, enter the following command: To launch an instance in this group, use the --hint argument with group as the key and the group UUID as the value: SimpleCIDRAffinityFilter Use this filter to schedule instances on Compute nodes that have a specific IP subnet range. To specify the required range, use the --hint argument to pass the keys build_near_host_ip and cidr when launching an instance: 7.4. Compute scheduler weights Each Compute node has a weight that the scheduler can use to prioritize instance scheduling. After the Compute scheduler applies the filters, it selects the Compute node with the largest weight from the remaining candidate Compute nodes. The Compute scheduler determines the weight of each Compute node by performing the following tasks: The scheduler normalizes each weight to a value between 0.0 and 1.0. 
The scheduler multiplies the normalized weight by the weigher multiplier. The Compute scheduler calculates the weight normalization for each resource type by using the lower and upper values for the resource availability across the candidate Compute nodes: Nodes with the lowest availability of a resource (minval) are assigned '0'. Nodes with the highest availability of a resource (maxval) are assigned '1'. Nodes with resource availability within the minval - maxval range are assigned a normalized weight calculated by using the following formula: If all the Compute nodes have the same availability for a resource then they are all normalized to 0. For example, the scheduler calculates the normalized weights for available vCPUs across 10 Compute nodes, each with a different number of available vCPUs, as follows: Compute node 1 2 3 4 5 6 7 8 9 10 No of vCPUs 5 5 10 10 15 20 20 15 10 5 Normalized weight 0 0 0.33 0.33 0.67 1 1 0.67 0.33 0 The Compute scheduler uses the following formula to calculate the weight of a Compute node: The following table describes the available configuration options for weights. To customize a weight class and multiplier, use the following syntax to configure the option on the Controller: Note Weights can be set on host aggregates using the aggregate metadata key with the same name as the options detailed in the following table. If set on the host aggregate, the host aggregate value takes precedence. Table 7.2. Compute scheduler weights Configuration option Type Description scheduler_weight_classes String Use this parameter to configure which of the following attributes to use for calculating the weight of each Compute node: nova.scheduler.weights.ram.RAMWeigher - Weighs the available RAM on the Compute node. nova.scheduler.weights.cpu.CPUWeigher - Weighs the available CPUs on the Compute node. nova.scheduler.weights.disk.DiskWeigher - Weighs the available disks on the Compute node. nova.scheduler.weights.metrics.MetricsWeigher - Weighs the metrics of the Compute node. nova.scheduler.weights.affinity.ServerGroupSoftAffinityWeigher - Weighs the proximity of the Compute node to other nodes in the given instance group. nova.scheduler.weights.affinity.ServerGroupSoftAntiAffinityWeigher - Weighs the proximity of the Compute node to other nodes in the given instance group. nova.scheduler.weights.compute.BuildFailureWeigher - Weighs Compute nodes by the number of recent failed boot attempts. nova.scheduler.weights.io_ops.IoOpsWeigher - Weighs Compute nodes by their workload. nova.scheduler.weights.pci.PCIWeigher - Weighs Compute nodes by their PCI availability. nova.scheduler.weights.cross_cell.CrossCellWeigher - Weighs Compute nodes based on which cell they are in, giving preference to Compute nodes in the source cell when moving an instance. nova.scheduler.weights.all_weighers - (Default) Uses all the above weighers. ram_weight_multiplier Floating point Use this parameter to specify the multiplier to use to weigh hosts based on the available RAM. Set to a positive value to prefer hosts with more available RAM, which spreads instances across many hosts. Set to a negative value to prefer hosts with less available RAM, which fills up (stacks) hosts as much as possible before scheduling to a less-used host. The absolute value, whether positive or negative, controls how strong the RAM weigher is relative to other weighers. Default: 1.0 - The scheduler spreads instances across all hosts evenly. 
disk_weight_multiplier Floating point Use this parameter to specify the multiplier to use to weigh hosts based on the available disk space. Set to a positive value to prefer hosts with more available disk space, which spreads instances across many hosts. Set to a negative value to prefer hosts with less available disk space, which fills up (stacks) hosts as much as possible before scheduling to a less-used host. The absolute value, whether positive or negative, controls how strong the disk weigher is relative to other weighers. Default: 1.0 - The scheduler spreads instances across all hosts evenly. cpu_weight_multiplier Floating point Use this parameter to specify the multiplier to use to weigh hosts based on the available vCPUs. Set to a positive value to prefer hosts with more available vCPUs, which spreads instances across many hosts. Set to a negative value to prefer hosts with less available vCPUs, which fills up (stacks) hosts as much as possible before scheduling to a less-used host. The absolute value, whether positive or negative, controls how strong the vCPU weigher is relative to other weighers. Default: 1.0 - The scheduler spreads instances across all hosts evenly. io_ops_weight_multiplier Floating point Use this parameter to specify the multiplier to use to weigh hosts based on the host workload. Set to a negative value to prefer hosts with lighter workloads, which distributes the workload across more hosts. Set to a positive value to prefer hosts with heavier workloads, which schedules instances onto hosts that are already busy. The absolute value, whether positive or negative, controls how strong the I/O operations weigher is relative to other weighers. Default: -1.0 - The scheduler distributes the workload across more hosts. build_failure_weight_multiplier Floating point Use this parameter to specify the multiplier to use to weigh hosts based on recent build failures. Set to a positive value to increase the significance of build failures recently reported by the host. Hosts with recent build failures are then less likely to be chosen. Set to 0 to disable weighing compute hosts by the number of recent failures. Default: 1000000.0 cross_cell_move_weight_multiplier Floating point Use this parameter to specify the multiplier to use to weigh hosts during a cross-cell move. This option determines how much weight is placed on a host which is within the same source cell when moving an instance. By default, the scheduler prefers hosts within the same source cell when migrating an instance. Set to a positive value to prefer hosts within the same cell the instance is currently running. Set to a negative value to prefer hosts located in a different cell from that where the instance is currently running. Default: 1000000.0 pci_weight_multiplier Positive floating point Use this parameter to specify the multiplier to use to weigh hosts based on the number of PCI devices on the host and the number of PCI devices requested by an instance. If an instance requests PCI devices, then the more PCI devices a Compute node has the higher the weight allocated to the Compute node. For example, if there are three hosts available, one with a single PCI device, one with multiple PCI devices and one without any PCI devices, then the Compute scheduler prioritizes these hosts based on the demands of the instance. 
The scheduler should prefer the first host if the instance requests one PCI device, the second host if the instance requires multiple PCI devices and the third host if the instance does not request a PCI device. Configure this option to prevent non-PCI instances from occupying resources on hosts with PCI devices. Default: 1.0 host_subset_size Integer Use this parameter to specify the size of the subset of filtered hosts from which to select the host. You must set this option to at least 1. A value of 1 selects the first host returned by the weighing functions. The scheduler ignores any value less than 1 and uses 1 instead. Set to a value greater than 1 to prevent multiple scheduler processes handling similar requests selecting the same host, creating a potential race condition. By selecting a host randomly from the N hosts that best fit the request, the chance of a conflict is reduced. However, the higher you set this value, the less optimal the chosen host may be for a given request. Default: 1 soft_affinity_weight_multiplier Positive floating point Use this parameter to specify the multiplier to use to weigh hosts for group soft-affinity. Note You need to specify the microversion when creating a group with this policy: Default: 1.0 soft_anti_affinity_weight_multiplier Positive floating point Use this parameter to specify the multiplier to use to weigh hosts for group soft-anti-affinity. Note You need to specify the microversion when creating a group with this policy: Default: 1.0 7.5. Declaring custom traits and resource classes As an administrator, you can declare which custom physical features and consumable resources are available on the Red Hat OpenStack Platform (RHOSP) overcloud nodes by using one of the following methods: By defining a custom inventory of resources in a YAML file, provider.yaml By defining a custom inventory of resources that apply to a particular node role. You can declare the availability of physical host features by defining custom traits, such as CUSTOM_DIESEL_BACKUP_POWER , CUSTOM_FIPS_COMPLIANT , and CUSTOM_HPC_OPTIMIZED . You can also declare the availability of consumable resources by defining resource classes, such as CUSTOM_DISK_IOPS , and CUSTOM_POWER_WATTS . 7.5.1. Declaring custom traits and resource classes in a YAML file As an administrator, you can declare which custom physical features and consumable resources are available on the Red Hat OpenStack Platform (RHOSP) overcloud nodes by defining a custom inventory of resources in a YAML file, provider.yaml . You can declare the availability of physical host features by defining custom traits, such as CUSTOM_DIESEL_BACKUP_POWER , CUSTOM_FIPS_COMPLIANT , and CUSTOM_HPC_OPTIMIZED . You can also declare the availability of consumable resources by defining resource classes, such as CUSTOM_DISK_IOPS , and CUSTOM_POWER_WATTS . Note You can use flavor metadata to request custom resources and custom traits. For more information, see Instance bare-metal resource class and Instance resource traits . Procedure Create a file in /home/stack/templates/ named provider.yaml . To configure the resource provider, add the following configuration to your provider.yaml file: Replace <node_uuid> with the UUID for the node, for example, '5213b75d-9260-42a6-b236-f39b0fd10561' . Alternatively, you can use the name property to identify the resource provider: name: 'EXAMPLE_RESOURCE_PROVIDER' . 
To configure the available custom resource classes for the resource provider, add the following configuration to your provider.yaml file: Replace CUSTOM_EXAMPLE_RESOURCE_CLASS with the name of the resource class. Custom resource classes must begin with the prefix CUSTOM_ and contain only the letters A through Z, the numbers 0 through 9 and the underscore "_" character. Replace <total_available> with the number of available CUSTOM_EXAMPLE_RESOURCE_CLASS for this resource provider. Replace <reserved> with the number of reserved CUSTOM_EXAMPLE_RESOURCE_CLASS for this resource provider. Replace <min_unit> with the minimum units of resources a single instance can consume. Replace <max_unit> with the maximum units of resources a single instance can consume. Replace <step_size> with the number of increments of consumption. Replace <allocation_ratio> with the value to set the allocation ratio for the resource. Set to 1.0 to prevent overallocation. Set to a value greater than 1.0 to increase the availability of resource to more than the physical hardware. To configure the available traits for the resource provider, add the following configuration to your provider.yaml file: Replace CUSTOM_EXAMPLE_TRAIT with the name of the trait. Custom traits must begin with the prefix CUSTOM_ and contain only the letters A through Z, the numbers 0 through 9 and the underscore "_" character. Example provider.yaml file The following example declares one custom resource class and one custom trait for a resource provider. 1 This hypervisor has 22 units of last level cache (LLC). 2 Two of the units of LLC are reserved for the host. 3 4 The min_unit and max_unit values define how many units of resources a single VM can consume. 5 The step size defines the increments of consumption. 6 The allocation ratio configures the overallocation of resources. Save and close the provider.yaml file. Add the provider.yaml file to the stack with your other environment files and deploy the overcloud: 7.5.2. Declaring custom traits and resource classes for a role To declare custom traits and resource classes for a role, you must configure the CustomProviderInventories parameter in the role file. Procedure Generate a new roles data file named roles_data_custom_traits.yaml that includes the Controller and Compute roles, along with any other roles that you need for the overcloud: Use the following example configuration to configure the available custom resource classes and traits for the resource provider: The following example declares custom resource classes and custom traits for the ComputeGpu role: Replace CUSTOM_EXAMPLE_RESOURCE_CLASS with the name of the resource class. Custom resource classes must begin with the prefix CUSTOM_ and contain only the letters A through Z, the numbers 0 through 9 and the underscore "_" character. <1> total is the number of available CUSTOM_EXAMPLE_RESOURCE_CLASS for this resource provider. <2> reserved is the number of reserved CUSTOM_EXAMPLE_RESOURCE_CLASS for this resource provider. <3> min_unit is the minimum units of resources a single instance can consume. <4> max_unit is the maximum units of resources a single instance can consume. <5> step_size is the number of increments of consumption. <6> allocation_ratio configures the overallocation of resources. If you set allocation_ratio to 1.0, then no overallocation is allowed. But if allocation_ration is greater than 1.0, then the total available resource is more than the physically existing one. Save and close the role file. 
Add the role file to the stack with your other environment files and deploy the overcloud: 7.6. Creating and managing host aggregates As a cloud administrator, you can partition a Compute deployment into logical groups for performance or administrative purposes. Red Hat OpenStack Platform (RHOSP) provides the following mechanisms for partitioning logical groups: Host aggregate A host aggregate is a grouping of Compute nodes into a logical unit based on attributes such as the hardware or performance characteristics. You can assign a Compute node to one or more host aggregates. You can map flavors and images to host aggregates by setting metadata on the host aggregate, and then matching flavor extra specs or image metadata properties to the host aggregate metadata. The Compute scheduler can use this metadata to schedule instances when the required filters are enabled. Metadata that you specify in a host aggregate limits the use of that host to any instance that has the same metadata specified in its flavor or image. You can configure weight multipliers for each host aggregate by setting the xxx_weight_multiplier configuration option in the host aggregate metadata. You can use host aggregates to handle load balancing, enforce physical isolation or redundancy, group servers with common attributes, or separate classes of hardware. When you create a host aggregate, you can specify a zone name. This name is presented to cloud users as an availability zone that they can select. Availability zones An availability zone is the cloud user view of a host aggregate. A cloud user cannot view the Compute nodes in the availability zone, or view the metadata of the availability zone. The cloud user can only see the name of the availability zone. You can assign each Compute node to only one availability zone. You can configure a default availability zone where instances will be scheduled when the cloud user does not specify a zone. You can direct cloud users to use availability zones that have specific capabilities. 7.6.1. Enabling scheduling on host aggregates To schedule instances on host aggregates that have specific attributes, update the configuration of the Compute scheduler to enable filtering based on the host aggregate metadata. Procedure Open your Compute environment file. Add the following values to the NovaSchedulerEnabledFilters parameter, if they are not already present: AggregateInstanceExtraSpecsFilter : Add this value to filter Compute nodes by host aggregate metadata that match flavor extra specs. Note For this filter to perform as expected, you must scope the flavor extra specs by prefixing the extra_specs key with the aggregate_instance_extra_specs: namespace. AggregateImagePropertiesIsolation : Add this value to filter Compute nodes by host aggregate metadata that match image metadata properties. Note To filter host aggregate metadata by using image metadata properties, the host aggregate metadata key must match a valid image metadata property. For information about valid image metadata properties, see Image configuration parameters . AvailabilityZoneFilter : Add this value to filter by availability zone when launching an instance. Note Instead of using the AvailabilityZoneFilter Compute scheduler service filter, you can use the Placement service to process availability zone requests. For more information, see Filtering by availability zone using the Placement service . Save the updates to your Compute environment file. 
Add your Compute environment file to the stack with your other environment files and deploy the overcloud: 7.6.2. Creating a host aggregate As a cloud administrator, you can create as many host aggregates as you require. Procedure To create a host aggregate, enter the following command: Replace <aggregate_name> with the name you want to assign to the host aggregate. Add metadata to the host aggregate: Replace <key=value> with the metadata key-value pair. If you are using the AggregateInstanceExtraSpecsFilter filter, the key can be any arbitrary string, for example, ssd=true . If you are using the AggregateImagePropertiesIsolation filter, the key must match a valid image metadata property. For more information about valid image metadata properties, see Image configuration parameters . Replace <aggregate_name> with the name of the host aggregate. Add the Compute nodes to the host aggregate: Replace <aggregate_name> with the name of the host aggregate to add the Compute node to. Replace <host_name> with the name of the Compute node to add to the host aggregate. Create a flavor or image for the host aggregate: Create a flavor: Create an image: Set one or more key-value pairs on the flavor or image that match the key-value pairs on the host aggregate. To set the key-value pairs on a flavor, use the scope aggregate_instance_extra_specs : To set the key-value pairs on an image, use valid image metadata properties as the key: 7.6.3. Creating an availability zone As a cloud administrator, you can create an availability zone that cloud users can select when they create an instance. Procedure To create an availability zone, you can create a new availability zone host aggregate, or make an existing host aggregate an availability zone: To create a new availability zone host aggregate, enter the following command: Replace <availability_zone> with the name you want to assign to the availability zone. Replace <aggregate_name> with the name you want to assign to the host aggregate. To make an existing host aggregate an availability zone, enter the following command: Replace <availability_zone> with the name you want to assign to the availability zone. Replace <aggregate_name> with the name of the host aggregate. Optional: Add metadata to the availability zone: Replace <key=value> with your metadata key-value pair. You can add as many key-value properties as required. Replace <aggregate_name> with the name of the availability zone host aggregate. Add Compute nodes to the availability zone host aggregate: Replace <aggregate_name> with the name of the availability zone host aggregate to add the Compute node to. Replace <host_name> with the name of the Compute node to add to the availability zone. 7.6.4. Deleting a host aggregate To delete a host aggregate, you first remove all the Compute nodes from the host aggregate. Procedure To view a list of all the Compute nodes assigned to the host aggregate, enter the following command: To remove all assigned Compute nodes from the host aggregate, enter the following command for each Compute node: Replace <aggregate_name> with the name of the host aggregate to remove the Compute node from. Replace <host_name> with the name of the Compute node to remove from the host aggregate. After you remove all the Compute nodes from the host aggregate, enter the following command to delete the host aggregate: 7.6.5. Creating a project-isolated host aggregate You can create a host aggregate that is available only to specific projects. 
Only the projects that you assign to the host aggregate can launch instances on the host aggregate. Note Project isolation uses the Placement service to filter host aggregates for each project. This process supersedes the functionality of the AggregateMultiTenancyIsolation filter. You therefore do not need to use the AggregateMultiTenancyIsolation filter. Procedure Open your Compute environment file. To schedule project instances on the project-isolated host aggregate, set the NovaSchedulerLimitTenantsToPlacementAggregate parameter to True in the Compute environment file. Optional: To ensure that only the projects that you assign to a host aggregate can create instances on your cloud, set the NovaSchedulerPlacementAggregateRequiredForTenants parameter to True . Note NovaSchedulerPlacementAggregateRequiredForTenants is False by default. When this parameter is False , projects that are not assigned to a host aggregate can create instances on any host aggregate. Save the updates to your Compute environment file. Add your Compute environment file to the stack with your other environment files and deploy the overcloud: Create the host aggregate. Retrieve the list of project IDs: Use the filter_tenant_id<suffix> metadata key to assign projects to the host aggregate: Replace <ID0> , <ID1> , and all IDs up to <IDn> with unique values for each project filter that you want to create. Replace <project_id0> , <project_id1> , and all project IDs up to <project_idn> with the ID of each project that you want to assign to the host aggregate. Replace <aggregate_name> with the name of the project-isolated host aggregate. For example, use the following syntax to assign projects 78f1 , 9d3t , and aa29 to the host aggregate project-isolated-aggregate : Tip You can create a host aggregate that is available only to a single specific project by omitting the suffix from the filter_tenant_id metadata key: Additional resources For more information on creating a host aggregate, see Creating and managing host aggregates .
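To make the weighing behaviour described in Section 7.4 concrete, the following sketch reproduces the normalization and weighted-sum formulas for the ten-node vCPU example given there. It is an illustration only, not code taken from the Compute scheduler, and the multiplier value is an assumed example matching the documented default.

public class WeigherSketch {
    // Normalize one value into the 0.0 - 1.0 range used by the scheduler:
    // (node_resource_availability - minval) / (maxval - minval)
    static double normalize(double value, double min, double max) {
        if (max == min) {
            return 0.0; // all hosts report the same availability
        }
        return (value - min) / (max - min);
    }

    public static void main(String[] args) {
        // Available vCPUs on the ten Compute nodes from the Section 7.4 example
        double[] vcpus = {5, 5, 10, 10, 15, 20, 20, 15, 10, 5};
        double min = 5, max = 20;
        double cpuWeightMultiplier = 1.0; // assumed example multiplier

        for (int i = 0; i < vcpus.length; i++) {
            double normalized = normalize(vcpus[i], min, max);
            // Final weight contribution: multiplier * norm(w); with more weighers the
            // contributions are summed: (w1_multiplier * norm(w1)) + (w2_multiplier * norm(w2)) + ...
            double weight = cpuWeightMultiplier * normalized;
            System.out.printf("Compute node %d: normalized %.2f, weighted %.2f%n",
                    i + 1, normalized, weight);
        }
    }
}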
[ "(undercloud)USD openstack overcloud deploy --templates -e [your environment files] -e /home/stack/templates/<compute_environment_file>.yaml", "(overcloud)USD openstack image create ... trait-image", "(overcloud)USD openstack --os-placement-api-version 1.6 trait list", "(overcloud)USD openstack --os-placement-api-version 1.6 trait create CUSTOM_TRAIT_NAME", "(overcloud)USD existing_traits=USD(openstack --os-placement-api-version 1.6 resource provider trait list -f value <host_uuid> | sed 's/^/--trait /')", "(overcloud)USD echo USDexisting_traits", "(overcloud)USD openstack --os-placement-api-version 1.6 resource provider trait set USDexisting_traits --trait <TRAIT_NAME> <host_uuid>", "(overcloud)USD openstack image set --property trait:HW_CPU_X86_AVX512BW=required trait-image", "(overcloud)USD openstack image set --property trait:COMPUTE_VOLUME_MULTI_ATTACH=forbidden trait-image", "(overcloud)USD openstack flavor create --vcpus 1 --ram 512 --disk 2 trait-flavor", "(overcloud)USD openstack --os-placement-api-version 1.6 trait list", "(overcloud)USD openstack --os-placement-api-version 1.6 trait create CUSTOM_TRAIT_NAME", "(overcloud)USD existing_traits=USD(openstack --os-placement-api-version 1.6 resource provider trait list -f value <host_uuid> | sed 's/^/--trait /')", "(overcloud)USD echo USDexisting_traits", "(overcloud)USD openstack --os-placement-api-version 1.6 resource provider trait set USDexisting_traits --trait <TRAIT_NAME> <host_uuid>", "(overcloud)USD openstack flavor set --property trait:HW_CPU_X86_AVX512BW=required trait-flavor", "(overcloud)USD openstack flavor set --property trait:COMPUTE_VOLUME_MULTI_ATTACH=forbidden trait-flavor", "(undercloud)USD openstack overcloud deploy --templates -e [your environment files] -e /home/stack/templates/<compute_environment_file>.yaml", "(overcloud)USD openstack --os-placement-api-version 1.6 trait list", "(overcloud)USD openstack --os-placement-api-version 1.6 trait create CUSTOM_TRAIT_NAME", "(overcloud)USD existing_traits=USD(openstack --os-placement-api-version 1.6 resource provider trait list -f value <host_uuid> | sed 's/^/--trait /')", "(overcloud)USD echo USDexisting_traits", "(overcloud)USD openstack --os-placement-api-version 1.6 resource provider trait set USDexisting_traits --trait <TRAIT_NAME> <host_uuid>", "(overcloud)USD openstack --os-compute-api-version 2.53 aggregate set --property trait:<TRAIT_NAME>=required <aggregate_name>", "(overcloud)USD openstack flavor set --property trait:<TRAIT_NAME>=required <flavor> (overcloud)USD openstack image set --property trait:<TRAIT_NAME>=required <image>", "(undercloud)USD openstack overcloud deploy --templates -e [your environment files] -e /home/stack/templates/<compute_environment_file>.yaml", "parameter_defaults: NovaSchedulerEnabledFilters: - AggregateInstanceExtraSpecsFilter - ComputeFilter - ComputeCapabilitiesFilter - ImagePropertiesFilter", "parameter_defaults: ControllerExtraConfig: nova::scheduler::filter::ram_weight_multiplier: '2.0'", "(undercloud)USD openstack overcloud deploy --templates -e [your environment files] -e /home/stack/templates/<compute_environment_file>.yaml", "openstack server create --image cedef40a-ed67-4d10-800e-17455edce175 --flavor 1 --hint different_host=a0cf03a5-d921-4877-bb5c-86d26cf818e1 --hint different_host=8c19174f-4220-44f0-824a-cd1eeef10287 server-1", "parameter_defaults: ComputeExtraConfig: nova::config::nova_config: filter_scheduler/isolated_hosts: value: server1, server2 filter_scheduler/isolated_images: value: 
342b492c-128f-4a42-8d3a-c5088cf27d13, ebd267a6-ca86-4d6c-9a0e-bd132d6b7d09", "parameter_defaults: ComputeExtraConfig: nova::config::nova_config: DEFAULT/compute_monitors: value: 'cpu.virt_driver'", "openstack server create --image cedef40a-ed67-4d10-800e-17455edce175 --flavor 1 --hint same_host=a0cf03a5-d921-4877-bb5c-86d26cf818e1 --hint same_host=8c19174f-4220-44f0-824a-cd1eeef10287 server-1", "openstack server group create --policy affinity <group_name>", "openstack server create --image <image> --flavor <flavor> --hint group=<group_uuid> <instance_name>", "openstack server group create --policy anti-affinity <group_name>", "openstack server create --image <image> --flavor <flavor> --hint group=<group_uuid> <instance_name>", "openstack server create --image <image> --flavor <flavor> --hint build_near_host_ip=<ip_address> --hint cidr=<subnet_mask> <instance_name>", "(node_resource_availability - minval) / (maxval - minval)", "(w1_multiplier * norm(w1)) + (w2_multiplier * norm(w2)) +", "ControllerExtraConfig: nova::scheduler::filter::scheduler_weight_classes: 'nova.scheduler.weights.ram.RAMWeigher' nova::scheduler::filter::ram_weight_multiplier: '2.0'", "openstack --os-compute-api-version 2.15 server group create --policy soft-affinity <group_name>", "openstack --os-compute-api-version 2.15 server group create --policy soft-affinity <group_name>", "meta: schema_version: '1.0' providers: - identification: uuid: <node_uuid>", "meta: schema_version: '1.0' providers: - identification: uuid: <node_uuid> inventories: additional: - CUSTOM_EXAMPLE_RESOURCE_CLASS: total: <total_available> reserved: <reserved> min_unit: <min_unit> max_unit: <max_unit> step_size: <step_size> allocation_ratio: <allocation_ratio>", "meta: schema_version: '1.0' providers: - identification: uuid: <node_uuid> inventories: additional: traits: additional: - 'CUSTOM_EXAMPLE_TRAIT'", "meta: schema_version: 1.0 providers: - identification: uuid: USDCOMPUTE_NODE inventories: additional: CUSTOM_LLC: # Describing LLC on this Compute node total: 22 1 reserved: 2 2 min_unit: 1 3 max_unit: 11 4 step_size: 1 5 allocation_ratio: 1.0 6 traits: additional: # This Compute node enables support for P-state control - CUSTOM_P_STATE_ENABLED", "(undercloud)USD openstack overcloud deploy --templates -e [your environment files] -e /home/stack/templates/provider.yaml", "(undercloud)USD openstack overcloud roles generate -o /home/stack/templates/roles_data_roles_data_custom_traits.yaml Compute:Compute Controller", "########################## # GPU configuration # ########################## ComputeGpuParameters: NovaVGPUTypesDeviceAddressesMapping: {'nvidia-319': ['0000:82:00.0'], 'nvidia-320': ['0000:04:00.0']} CustomProviderInventories: - name: computegpu-0.localdomain_pci_0000_04_00_0 traits: - CUSTOM_NVIDIA_12 - name: computegpu-0.localdomain_pci_0000_82_00_0 traits: - CUSTOM_NVIDIA_11 - name: computegpu-1.localdomain_pci_0000_04_00_0 traits: - CUSTOM_NVIDIA_12 - name: computegpu-1.localdomain_pci_0000_82_00_0 traits: - CUSTOM_NVIDIA_11 - uuid: USDCOMPUTE_NODE inventories: CUSTOM_EXAMPLE_RESOURCE_CLASS: total: 100 1 reserved: 0 2 min_unit: 1 3 max_unit: 10 4 step_size: 1 5 allocation_ratio: 1.0 6 CUSTOM_ANOTHER_EXAMPLE_RESOURCE_CLASS: total: 100 traits: # This Compute node enables support for P-state and C-state control - CUSTOM_P_STATE_ENABLED - CUSTOM_C_STATE_ENABLED", "(undercloud)USD openstack overcloud deploy --templates -e [your environment files] -e /home/stack/templates/roles_data_roles_data_custom_traits.yaml", "(undercloud)USD 
openstack overcloud deploy --templates -e [your environment files] -e /home/stack/templates/<compute_environment_file>.yaml", "(overcloud)# openstack aggregate create <aggregate_name>", "(overcloud)# openstack aggregate set --property <key=value> --property <key=value> <aggregate_name>", "(overcloud)# openstack aggregate add host <aggregate_name> <host_name>", "(overcloud)USD openstack flavor create --ram <size_mb> --disk <size_gb> --vcpus <no_reserved_vcpus> host-agg-flavor", "(overcloud)USD openstack image create host-agg-image", "(overcloud)# openstack flavor set --property aggregate_instance_extra_specs:ssd=true host-agg-flavor", "(overcloud)# openstack image set --property os_type=linux host-agg-image", "(overcloud)# openstack aggregate create --zone <availability_zone> <aggregate_name>", "(overcloud)# openstack aggregate set --zone <availability_zone> <aggregate_name>", "(overcloud)# openstack aggregate set --property <key=value> <aggregate_name>", "(overcloud)# openstack aggregate add host <aggregate_name> <host_name>", "(overcloud)# openstack aggregate show <aggregate_name>", "(overcloud)# openstack aggregate remove host <aggregate_name> <host_name>", "(overcloud)# openstack aggregate delete <aggregate_name>", "(undercloud)USD openstack overcloud deploy --templates -e [your environment files] -e /home/stack/templates/<compute_environment_file>.yaml \\", "(overcloud)# openstack project list", "(overcloud)# openstack aggregate set --property filter_tenant_id<ID0>=<project_id0> --property filter_tenant_id<ID1>=<project_id1> --property filter_tenant_id<IDn>=<project_idn> <aggregate_name>", "(overcloud)# openstack aggregate set --property filter_tenant_id0=78f1 --property filter_tenant_id1=9d3t --property filter_tenant_id2=aa29 project-isolated-aggregate", "(overcloud)# openstack aggregate set --property filter_tenant_id=78f1 single-project-isolated-aggregate" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/configuring_the_compute_service_for_instance_creation/assembly_configuring-instance-scheduling-and-placement_memory
Chapter 11. Using the vSphere Problem Detector Operator
Chapter 11. Using the vSphere Problem Detector Operator 11.1. About the vSphere Problem Detector Operator The vSphere Problem Detector Operator checks clusters that are deployed on vSphere for common installation and misconfiguration issues that are related to storage. The Operator runs in the openshift-cluster-storage-operator namespace and is started by the Cluster Storage Operator when the Cluster Storage Operator detects that the cluster is deployed on vSphere. The vSphere Problem Detector Operator communicates with the vSphere vCenter Server to determine the virtual machines in the cluster, the default datastore, and other information about the vSphere vCenter Server configuration. The Operator uses the credentials from the Cloud Credential Operator to connect to vSphere. The Operator runs the checks according to the following schedule: The checks run every hour. If any check fails, the Operator runs the checks again in intervals of 1 minute, 2 minutes, 4, 8, and so on. The Operator doubles the interval up to a maximum interval of 8 hours. When all checks pass, the schedule returns to an hour interval. The Operator increases the frequency of the checks after a failure so that the Operator can report success quickly after the failure condition is remedied. You can run the Operator manually for immediate troubleshooting information. 11.2. Running the vSphere Problem Detector Operator checks You can override the schedule for running the vSphere Problem Detector Operator checks and run the checks immediately. The vSphere Problem Detector Operator automatically runs the checks every hour. However, when the Operator starts, it runs the checks immediately. The Operator is started by the Cluster Storage Operator when the Cluster Storage Operator starts and determines that the cluster is running on vSphere. To run the checks immediately, you can scale the vSphere Problem Detector Operator to 0 and back to 1 so that it restarts the vSphere Problem Detector Operator. Prerequisites Access to the cluster as a user with the cluster-admin role. Procedure Scale the Operator to 0 : USD oc scale deployment/vsphere-problem-detector-operator --replicas=0 \ -n openshift-cluster-storage-operator Verification Verify that the pods have restarted by running the following command: USD oc -n openshift-cluster-storage-operator get pod -l name=vsphere-problem-detector-operator -w Example output NAME READY STATUS RESTARTS AGE vsphere-problem-detector-operator-77486bd645-9ntpb 1/1 Running 0 11s The AGE field must indicate that the pod is restarted. 11.3. Viewing the events from the vSphere Problem Detector Operator After the vSphere Problem Detector Operator runs and performs the configuration checks, it creates events that can be viewed from the command line or from the OpenShift Container Platform web console. Procedure To view the events by using the command line, run the following command: USD oc get event -n openshift-cluster-storage-operator \ --sort-by={.metadata.creationTimestamp} Example output 16m Normal Started pod/vsphere-problem-detector-operator-xxxxx Started container vsphere-problem-detector 16m Normal Created pod/vsphere-problem-detector-operator-xxxxx Created container vsphere-problem-detector 16m Normal LeaderElection configmap/vsphere-problem-detector-lock vsphere-problem-detector-operator-xxxxx became leader To view the events by using the OpenShift Container Platform web console, navigate to Home Events and select openshift-cluster-storage-operator from the Project menu. 11.4. 
Viewing the logs from the vSphere Problem Detector Operator After the vSphere Problem Detector Operator runs and performs the configuration checks, it creates log records that can be viewed from the command line or from the OpenShift Container Platform web console. Procedure To view the logs by using the command line, run the following command: USD oc logs deployment/vsphere-problem-detector-operator \ -n openshift-cluster-storage-operator Example output I0108 08:32:28.445696 1 operator.go:209] ClusterInfo passed I0108 08:32:28.451029 1 datastore.go:57] CheckStorageClasses checked 1 storage classes, 0 problems found I0108 08:32:28.451047 1 operator.go:209] CheckStorageClasses passed I0108 08:32:28.452160 1 operator.go:209] CheckDefaultDatastore passed I0108 08:32:28.480648 1 operator.go:271] CheckNodeDiskUUID:<host_name> passed I0108 08:32:28.480685 1 operator.go:271] CheckNodeProviderID:<host_name> passed To view the Operator logs with the OpenShift Container Platform web console, perform the following steps: Navigate to Workloads Pods . Select openshift-cluster-storage-operator from the Projects menu. Click the link for the vsphere-problem-detector-operator pod. Click the Logs tab on the Pod details page to view the logs. 11.5. Configuration checks run by the vSphere Problem Detector Operator The following tables identify the configuration checks that the vSphere Problem Detector Operator runs. Some checks verify the configuration of the cluster. Other checks verify the configuration of each node in the cluster. Table 11.1. Cluster configuration checks Name Description CheckDefaultDatastore Verifies that the default datastore name in the vSphere configuration is short enough for use with dynamic provisioning. If this check fails, you can expect the following: systemd logs errors to the journal such as Failed to set up mount unit: Invalid argument . systemd does not unmount volumes if the virtual machine is shut down or rebooted without draining all the pods from the node. If this check fails, reconfigure vSphere with a shorter name for the default datastore. CheckFolderPermissions Verifies the permission to list volumes in the default datastore. This permission is required to create volumes. The Operator verifies the permission by listing the / and /kubevols directories. The root directory must exist. It is acceptable if the /kubevols directory does not exist when the check runs. The /kubevols directory is created when the datastore is used with dynamic provisioning if the directory does not already exist. If this check fails, review the required permissions for the vCenter account that was specified during the OpenShift Container Platform installation. CheckStorageClasses Verifies the following: The fully qualified path to each persistent volume that is provisioned by this storage class is less than 255 characters. If a storage class uses a storage policy, the storage class must use one policy only and that policy must be defined. CheckTaskPermissions Verifies the permission to list recent tasks and datastores. ClusterInfo Collects the cluster version and UUID from vSphere vCenter. Table 11.2. Node configuration checks Name Description CheckNodeDiskUUID Verifies that all the vSphere virtual machines are configured with disk.enableUUID=TRUE . If this check fails, see the How to check 'disk.EnableUUID' parameter from VM in vSphere Red Hat Knowledgebase solution. CheckNodeProviderID Verifies that all nodes are configured with the ProviderID from vSphere vCenter. 
This check fails when the output from the following command does not include a provider ID for each node. USD oc get nodes -o custom-columns=NAME:.metadata.name,PROVIDER_ID:.spec.providerID,UUID:.status.nodeInfo.systemUUID If this check fails, refer to the vSphere product documentation for information about setting the provider ID for each node in the cluster. CollectNodeESXiVersion Reports the version of the ESXi hosts that run nodes. CollectNodeHWVersion Reports the virtual machine hardware version for a node. 11.6. About the storage class configuration check The names for persistent volumes that use vSphere storage are related to the datastore name and cluster ID. When a persistent volume is created, systemd creates a mount unit for the persistent volume. The systemd process has a 255 character limit for the length of the fully qualified path to the VDMK file that is used for the persistent volume. The fully qualified path is based on the naming conventions for systemd and vSphere. The naming conventions use the following pattern: /var/lib/kubelet/plugins/kubernetes.io/vsphere-volume/mounts/[<datastore>] 00000000-0000-0000-0000-000000000000/<cluster_id>-dynamic-pvc-00000000-0000-0000-0000-000000000000.vmdk The naming conventions require 205 characters of the 255 character limit. The datastore name and the cluster ID are determined from the deployment. The datastore name and cluster ID are substituted into the preceding pattern. Then the path is processed with the systemd-escape command to escape special characters. For example, a hyphen character uses four characters after it is escaped. The escaped value is \x2d . After processing with systemd-escape to ensure that systemd can access the fully qualified path to the VDMK file, the length of the path must be less than 255 characters. 11.7. Metrics for the vSphere Problem Detector Operator The vSphere Problem Detector Operator exposes the following metrics for use by the OpenShift Container Platform monitoring stack. Table 11.3. Metrics exposed by the vSphere Problem Detector Operator Name Description vsphere_cluster_check_total Cumulative number of cluster-level checks that the vSphere Problem Detector Operator performed. This count includes both successes and failures. vsphere_cluster_check_errors Number of failed cluster-level checks that the vSphere Problem Detector Operator performed. For example, a value of 1 indicates that one cluster-level check failed. vsphere_esxi_version_total Number of ESXi hosts with a specific version. Be aware that if a host runs more than one node, the host is counted only once. vsphere_node_check_total Cumulative number of node-level checks that the vSphere Problem Detector Operator performed. This count includes both successes and failures. vsphere_node_check_errors Number of failed node-level checks that the vSphere Problem Detector Operator performed. For example, a value of 1 indicates that one node-level check failed. vsphere_node_hw_version_total Number of vSphere nodes with a specific hardware version. vsphere_vcenter_info Information about the vSphere vCenter Server. 11.8. Additional resources Monitoring overview
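To make the path-length limit described in Section 11.6 easier to check by hand, the following is a minimal shell sketch; the datastore name my-datastore and the cluster ID mycluster are hypothetical placeholders rather than values taken from a real deployment, and wc counts a trailing newline, so subtract one from the result:
systemd-escape -p "/var/lib/kubelet/plugins/kubernetes.io/vsphere-volume/mounts/[my-datastore] 00000000-0000-0000-0000-000000000000/mycluster-dynamic-pvc-00000000-0000-0000-0000-000000000000.vmdk" | wc -c
If the escaped path length is close to or over 255 characters, shortening the default datastore name, as recommended by the CheckDefaultDatastore check, resolves the problem.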
[ "oc scale deployment/vsphere-problem-detector-operator --replicas=0 -n openshift-cluster-storage-operator", "oc -n openshift-cluster-storage-operator get pod -l name=vsphere-problem-detector-operator -w", "NAME READY STATUS RESTARTS AGE vsphere-problem-detector-operator-77486bd645-9ntpb 1/1 Running 0 11s", "oc get event -n openshift-cluster-storage-operator --sort-by={.metadata.creationTimestamp}", "16m Normal Started pod/vsphere-problem-detector-operator-xxxxx Started container vsphere-problem-detector 16m Normal Created pod/vsphere-problem-detector-operator-xxxxx Created container vsphere-problem-detector 16m Normal LeaderElection configmap/vsphere-problem-detector-lock vsphere-problem-detector-operator-xxxxx became leader", "oc logs deployment/vsphere-problem-detector-operator -n openshift-cluster-storage-operator", "I0108 08:32:28.445696 1 operator.go:209] ClusterInfo passed I0108 08:32:28.451029 1 datastore.go:57] CheckStorageClasses checked 1 storage classes, 0 problems found I0108 08:32:28.451047 1 operator.go:209] CheckStorageClasses passed I0108 08:32:28.452160 1 operator.go:209] CheckDefaultDatastore passed I0108 08:32:28.480648 1 operator.go:271] CheckNodeDiskUUID:<host_name> passed I0108 08:32:28.480685 1 operator.go:271] CheckNodeProviderID:<host_name> passed", "oc get nodes -o custom-columns=NAME:.metadata.name,PROVIDER_ID:.spec.providerID,UUID:.status.nodeInfo.systemUUID", "/var/lib/kubelet/plugins/kubernetes.io/vsphere-volume/mounts/[<datastore>] 00000000-0000-0000-0000-000000000000/<cluster_id>-dynamic-pvc-00000000-0000-0000-0000-000000000000.vmdk" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/installing_on_vsphere/using-vsphere-problem-detector-operator
4.5. Configuring Static Routes in ifcfg files
4.5. Configuring Static Routes in ifcfg files Static routes set using ip commands at the command prompt will be lost if the system is shut down or restarted. To configure static routes to be persistent after a system restart, they must be placed in per-interface configuration files in the /etc/sysconfig/network-scripts/ directory. The file name should be of the format route- interface . There are two types of commands to use in the configuration files: Static Routes Using the IP Command Arguments Format If required in a per-interface configuration file, for example /etc/sysconfig/network-scripts/route-enp1s0 , define a route to a default gateway on the first line. This is only required if the gateway is not set through DHCP and is not set globally in the /etc/sysconfig/network file: default via 192.168.1.1 dev interface where 192.168.1.1 is the IP address of the default gateway. The interface is the interface that is connected to, or can reach, the default gateway. The dev option is optional and can be omitted. Note that this setting takes precedence over a setting in the /etc/sysconfig/network file. If a route to a remote network is required, a static route can be specified as follows. Each line is parsed as an individual route: 10.10.10.0/24 via 192.168.1.1 [ dev interface ] where 10.10.10.0/24 is the network address and prefix length of the remote or destination network. The address 192.168.1.1 is the IP address leading to the remote network. It is preferably the next hop address but the address of the exit interface will work. The " next hop " means the remote end of a link, for example a gateway or router. The dev option can be used to specify the exit interface interface but it is not required. Add as many static routes as required. The following is an example of a route- interface file using the ip command arguments format. The default gateway is 192.168.0.1 , interface enp1s0 and a leased line or WAN connection is available at 192.168.0.10 . The two static routes are for reaching the 10.10.10.0/24 network and the 172.16.1.10/32 host: In the above example, packets going to the local 192.168.0.0/24 network will be directed out the interface attached to that network. Packets going to the 10.10.10.0/24 network and 172.16.1.10/32 host will be directed to 192.168.0.10 . Packets to unknown, remote networks will use the default gateway; therefore, static routes should only be configured for remote networks or hosts if the default route is not suitable. Remote in this context means any networks or hosts that are not directly attached to the system. For IPv6 configuration, an example of a route6- interface file in ip route format: Specifying an exit interface is optional. It can be useful if you want to force traffic out of a specific interface. For example, in the case of a VPN, you can force traffic to a remote network to pass through a tun0 interface even when the interface is in a different subnet to the destination network. The ip route format can be used to specify a source address. For example: To define an existing policy-based routing configuration, which specifies multiple routing tables, see Section 4.5.1, "Understanding Policy-routing" . Important If the default gateway is already assigned by DHCP and if the same gateway with the same metric is specified in a configuration file, an error during start-up, or when bringing up an interface, will occur. The following error message may be shown: "RTNETLINK answers: File exists". This error may be ignored.
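For reference, the complete /etc/sysconfig/network-scripts/route-enp1s0 file for the example above, combining the default gateway line with the two static routes, contains the following three lines:
default via 192.168.0.1 dev enp1s0
10.10.10.0/24 via 192.168.0.10 dev enp1s0
172.16.1.10/32 via 192.168.0.10 dev enp1s0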
Static Routes Using the Network/Netmask Directives Format You can also use the network/netmask directives format for route- interface files. The following is a template for the network/netmask format, with instructions following afterwards: ADDRESS0=10.10.10.0 NETMASK0=255.255.255.0 GATEWAY0=192.168.1.1 ADDRESS0= 10.10.10.0 is the network address of the remote network or host to be reached. NETMASK0= 255.255.255.0 is the netmask for the network address defined with ADDRESS0= 10.10.10.0 . GATEWAY0= 192.168.1.1 is the default gateway, or an IP address that can be used to reach ADDRESS0= 10.10.10.0 The following is an example of a route- interface file using the network/netmask directives format. The default gateway is 192.168.0.1 but a leased line or WAN connection is available at 192.168.0.10 . The two static routes are for reaching the 10.10.10.0/24 and 172.16.1.0/24 networks: ADDRESS0=10.10.10.0 NETMASK0=255.255.255.0 GATEWAY0=192.168.0.10 ADDRESS1=172.16.1.10 NETMASK1=255.255.255.0 GATEWAY1=192.168.0.10 Subsequent static routes must be numbered sequentially, and must not skip any values. For example, ADDRESS0 , ADDRESS1 , ADDRESS2 , and so on. By default, forwarding packets from one interface to another, or out of the same interface, is disabled for security reasons. This prevents the system acting as a router for external traffic. If you need the system to route external traffic, such as when sharing a connection or configuring a VPN server, you will need to enable IP forwarding. See the Red Hat Enterprise Linux 7 Security Guide for more details. 4.5.1. Understanding Policy-routing Policy-routing also known as source-routing, is a mechanism for more flexible routing configurations. Routing decisions are commonly made based on the destination IP address of a package. Policy-routing allows more flexibility to select routes based on other routing properties, such as source IP address, source port, protocol type. Routing tables stores route information about networks. They are identified by either numeric values or names, which can be configured in the /etc/iproute2/rt_tables file. The default table is identified with 254 . Using policy-routing , you also need rules. Rules are used to select a routing table, based on certain properties of packets. For initscripts, the routing table is a property of the route that can be configured through the table argument. The ip route format can be used to define an existing policy-based routing configuration, which specifies multiple routing tables: To specify routing rules in initscripts, edit them to the /etc/sysconfig/network-scripts/rule- enp1s0 file for IPv4 or to the /etc/sysconfig/network-scripts/rule6- enp1s0 file for IPv6 . NetworkManager supports policy-routing, but rules are not supported yet. The rules must be configured by the user running a custom script. For each manual static route, a routing table can be selected: ipv4.route-table for IPv4 and ipv6.route-table for IPv6 . By setting routes to a particular table, all routes from DHCP , autoconf6 , DHCP6 are placed in that specific table. In addition, all routes for subnets that have already configured addresses, are placed in the corresponding routing table. For example, if you configure the 192.168.1.10/24 address, the 192.168.1.0/24 subnet is contained in ipv4.route-table. For more details about policy-routing rules, see the ip-rule(8) man page. For routing tables, see the ip-route(8) man page.
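As an example of the table argument described above, a route-enp1s0 file that places routes into two different routing tables contains entries such as the following; a corresponding /etc/sysconfig/network-scripts/rule-enp1s0 file (or rule6-enp1s0 for IPv6) then selects which table is consulted for a given packet:
10.10.10.0/24 via 192.168.0.10 table 1
10.10.10.0/24 via 192.168.0.10 table 2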
[ "default via 192.168.0.1 dev enp1s0 10.10.10.0/24 via 192.168.0.10 dev enp1s0 172.16.1.10/32 via 192.168.0.10 dev enp1s0", "2001:db8:1::/48 via 2001:db8::1 metric 2048 2001:db8:2::/48", "10.10.10.0/24 via 192.168.0.10 src 192.168.0.2", "ADDRESS0=10.10.10.0 NETMASK0=255.255.255.0 GATEWAY0=192.168.1.1", "ADDRESS0=10.10.10.0 NETMASK0=255.255.255.0 GATEWAY0=192.168.0.10 ADDRESS1=172.16.1.10 NETMASK1=255.255.255.0 GATEWAY1=192.168.0.10", "10.10.10.0/24 via 192.168.0.10 table 1 10.10.10.0/24 via 192.168.0.10 table 2" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/networking_guide/sec-Configuring_Static_Routes_in_ifcfg_files
Chapter 1. Operator APIs
Chapter 1. Operator APIs 1.1. Authentication [operator.openshift.io/v1] Description Authentication provides information to configure an operator to manage authentication. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.2. CloudCredential [operator.openshift.io/v1] Description CloudCredential provides a means to configure an operator to manage CredentialsRequests. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.3. ClusterCSIDriver [operator.openshift.io/v1] Description ClusterCSIDriver object allows management and configuration of a CSI driver operator installed by default in OpenShift. Name of the object must be name of the CSI driver it operates. See CSIDriverName type for list of allowed values. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.4. Console [operator.openshift.io/v1] Description Console provides a means to configure an operator to manage the console. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.5. Config [operator.openshift.io/v1] Description Config specifies the behavior of the config operator which is responsible for creating the initial configuration of other components on the cluster. The operator also handles installation, migration or synchronization of cloud configurations for AWS and Azure cloud based clusters Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.6. Config [imageregistry.operator.openshift.io/v1] Description Config is the configuration object for a registry instance managed by the registry operator Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.7. Config [samples.operator.openshift.io/v1] Description Config contains the configuration and detailed condition status for the Samples Operator. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.8. CSISnapshotController [operator.openshift.io/v1] Description CSISnapshotController provides a means to configure an operator to manage the CSI snapshots. cluster is the canonical name. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.9. DNS [operator.openshift.io/v1] Description DNS manages the CoreDNS component to provide a name resolution service for pods and services in the cluster. This supports the DNS-based service discovery specification: https://github.com/kubernetes/dns/blob/master/docs/specification.md More details: https://kubernetes.io/docs/tasks/administer-cluster/coredns Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.10. DNSRecord [ingress.operator.openshift.io/v1] Description DNSRecord is a DNS record managed in the zones defined by dns.config.openshift.io/cluster .spec.publicZone and .spec.privateZone. Cluster admin manipulation of this resource is not supported. This resource is only for internal communication of OpenShift operators. 
If DNSManagementPolicy is "Unmanaged", the operator will not be responsible for managing the DNS records on the cloud provider. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.11. Etcd [operator.openshift.io/v1] Description Etcd provides information to configure an operator to manage etcd. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.12. ImageContentSourcePolicy [operator.openshift.io/v1alpha1] Description ImageContentSourcePolicy holds cluster-wide information about how to handle registry mirror rules. When multiple policies are defined, the outcome of the behavior is defined on each field. Compatibility level 4: No compatibility is provided, the API can change at any point for any reason. These capabilities should not be used by applications needing long term support. Type object 1.13. ImagePruner [imageregistry.operator.openshift.io/v1] Description ImagePruner is the configuration object for an image registry pruner managed by the registry operator. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.14. IngressController [operator.openshift.io/v1] Description IngressController describes a managed ingress controller for the cluster. The controller can service OpenShift Route and Kubernetes Ingress resources. When an IngressController is created, a new ingress controller deployment is created to allow external traffic to reach the services that expose Ingress or Route resources. Updating this resource may lead to disruption for public facing network connections as a new ingress controller revision may be rolled out. https://kubernetes.io/docs/concepts/services-networking/ingress-controllers Whenever possible, sensible defaults for the platform are used. See each field for more details. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.15. InsightsOperator [operator.openshift.io/v1] Description InsightsOperator holds cluster-wide information about the Insights Operator. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.16. KubeAPIServer [operator.openshift.io/v1] Description KubeAPIServer provides information to configure an operator to manage kube-apiserver. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.17. KubeControllerManager [operator.openshift.io/v1] Description KubeControllerManager provides information to configure an operator to manage kube-controller-manager. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.18. KubeScheduler [operator.openshift.io/v1] Description KubeScheduler provides information to configure an operator to manage scheduler. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.19. KubeStorageVersionMigrator [operator.openshift.io/v1] Description KubeStorageVersionMigrator provides information to configure an operator to manage kube-storage-version-migrator. 
Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.20. MachineConfiguration [operator.openshift.io/v1] Description MachineConfiguration provides information to configure an operator to manage Machine Configuration. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.21. Network [operator.openshift.io/v1] Description Network describes the cluster's desired network configuration. It is consumed by the cluster-network-operator. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.22. OpenShiftAPIServer [operator.openshift.io/v1] Description OpenShiftAPIServer provides information to configure an operator to manage openshift-apiserver. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.23. OpenShiftControllerManager [operator.openshift.io/v1] Description OpenShiftControllerManager provides information to configure an operator to manage openshift-controller-manager. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.24. OperatorPKI [network.operator.openshift.io/v1] Description OperatorPKI is a simple certificate authority. It is not intended for external use - rather, it is internal to the network operator. The CNO creates a CA and a certificate signed by that CA. The certificate has both ClientAuth and ServerAuth extended usages enabled. A Secret called <name>-ca with two data keys: tls.key - the private key tls.crt - the CA certificate A ConfigMap called <name>-ca with a single data key: cabundle.crt - the CA certificate(s) A Secret called <name>-cert with two data keys: tls.key - the private key tls.crt - the certificate, signed by the CA The CA certificate will have a validity of 10 years, rotated after 9. The target certificate will have a validity of 6 months, rotated after 3 The CA certificate will have a CommonName of "<namespace>_<name>-ca@<timestamp>", where <timestamp> is the last rotation time. Type object 1.25. ServiceCA [operator.openshift.io/v1] Description ServiceCA provides information to configure an operator to manage the service cert controllers Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object 1.26. Storage [operator.openshift.io/v1] Description Storage provides a means to configure an operator to manage the cluster storage operator. cluster is the canonical name. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object
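Each of the resources listed in this chapter can be inspected directly with the oc client on a running cluster. The following commands are an illustrative sketch rather than an excerpt from this reference; they assume the usual conventions that the cluster-scoped singletons are named cluster and that IngressController objects live in the openshift-ingress-operator namespace:
oc get console.operator.openshift.io cluster -o yaml
oc get ingresscontroller default -n openshift-ingress-operator -o yaml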
[ "More specifically, given an OperatorPKI with <name>, the CNO will manage:" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/operator_apis/operator-apis
Chapter 12. Atmosphere Websocket Component
Chapter 12. Atmosphere Websocket Component Available as of Camel version 2.14 The atmosphere-websocket: component provides Websocket based endpoints for a servlet communicating with external clients over Websocket (as a servlet accepting websocket connections from external clients). The component uses the SERVLET component and uses the Atmosphere library to support the Websocket transport in various Servlet containers (e..g., Jetty, Tomcat, ... ). Unlike the Websocket component that starts the embedded Jetty server, this component uses the servlet provider of the container. Maven users will need to add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-atmosphere-websocket</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency> 12.1. Atmosphere-Websocket Options The Atmosphere Websocket component supports 9 options, which are listed below. Name Description Default Type servletName (consumer) Default name of servlet to use. The default name is CamelServlet. CamelServlet String httpRegistry (consumer) To use a custom org.apache.camel.component.servlet.HttpRegistry. HttpRegistry attachmentMultipart Binding (consumer) Whether to automatic bind multipart/form-data as attachments on the Camel Exchange. The options attachmentMultipartBinding=true and disableStreamCache=false cannot work together. Remove disableStreamCache to use AttachmentMultipartBinding. This is turn off by default as this may require servlet specific configuration to enable this when using Servlet's. false boolean fileNameExtWhitelist (consumer) Whitelist of accepted filename extensions for accepting uploaded files. Multiple extensions can be separated by comma, such as txt,xml. String httpBinding (advanced) To use a custom HttpBinding to control the mapping between Camel message and HttpClient. HttpBinding httpConfiguration (advanced) To use the shared HttpConfiguration as base configuration. HttpConfiguration allowJavaSerialized Object (advanced) Whether to allow java serialization when a request uses context-type=application/x-java-serialized-object. This is by default turned off. If you enable this then be aware that Java will deserialize the incoming data from the request to Java and that can be a potential security risk. false boolean headerFilterStrategy (filter) To use a custom org.apache.camel.spi.HeaderFilterStrategy to filter header to and from Camel message. HeaderFilterStrategy resolveProperty Placeholders (advanced) Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true boolean The Atmosphere Websocket endpoint is configured using URI syntax: with the following path and query parameters: 12.1.1. Path Parameters (1 parameters): Name Description Default Type servicePath Required Name of websocket endpoint String 12.1.2. Query Parameters (38 parameters): Name Description Default Type chunked (common) If this option is false the Servlet will disable the HTTP streaming and set the content-length header on the response true boolean disableStreamCache (common) Determines whether or not the raw input stream from Servlet is cached or not (Camel will read the stream into a in memory/overflow to file, Stream caching) cache. By default Camel will cache the Servlet input stream to support reading it multiple times to ensure it Camel can retrieve all data from the stream. 
However you can set this option to true when you for example need to access the raw stream, such as streaming it directly to a file or other persistent store. DefaultHttpBinding will copy the request input stream into a stream cache and put it into message body if this option is false to support reading the stream multiple times. If you use Servlet to bridge/proxy an endpoint then consider enabling this option to improve performance, in case you do not need to read the message payload multiple times. The http/http4 producer will by default cache the response body stream. If setting this option to true, then the producers will not cache the response body stream but use the response stream as-is as the message body. false boolean headerFilterStrategy (common) To use a custom HeaderFilterStrategy to filter header to and from Camel message. HeaderFilterStrategy sendToAll (common) Whether to send to all (broadcast) or send to a single receiver. false boolean transferException (common) If enabled and an Exchange failed processing on the consumer side, and if the caused Exception was send back serialized in the response as a application/x-java-serialized-object content type. On the producer side the exception will be deserialized and thrown as is, instead of the HttpOperationFailedException. The caused exception is required to be serialized. This is by default turned off. If you enable this then be aware that Java will deserialize the incoming data from the request to Java and that can be a potential security risk. false boolean useStreaming (common) To enable streaming to send data as multiple text fragments. false boolean httpBinding (common) To use a custom HttpBinding to control the mapping between Camel message and HttpClient. HttpBinding async (consumer) Configure the consumer to work in async mode false boolean bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean httpMethodRestrict (consumer) Used to only allow consuming if the HttpMethod matches, such as GET/POST/PUT etc. Multiple methods can be specified separated by comma. String matchOnUriPrefix (consumer) Whether or not the consumer should try to find a target consumer by matching the URI prefix if no exact match is found. false boolean responseBufferSize (consumer) To use a custom buffer size on the javax.servlet.ServletResponse. Integer servletName (consumer) Name of the servlet to use CamelServlet String attachmentMultipartBinding (consumer) Whether to automatic bind multipart/form-data as attachments on the Camel Exchange. The options attachmentMultipartBinding=true and disableStreamCache=false cannot work together. Remove disableStreamCache to use AttachmentMultipartBinding. This is turn off by default as this may require servlet specific configuration to enable this when using Servlet's. false boolean eagerCheckContentAvailable (consumer) Whether to eager check whether the HTTP requests has content if the content-length header is 0 or not present. This can be turned on in case HTTP clients do not send streamed data. false boolean exceptionHandler (consumer) To let the consumer use a custom ExceptionHandler. 
Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer) Sets the exchange pattern when the consumer creates an exchange. ExchangePattern fileNameExtWhitelist (consumer) Whitelist of accepted filename extensions for accepting uploaded files. Multiple extensions can be separated by comma, such as txt,xml. String optionsEnabled (consumer) Specifies whether to enable HTTP OPTIONS for this Servlet consumer. By default OPTIONS is turned off. false boolean traceEnabled (consumer) Specifies whether to enable HTTP TRACE for this Servlet consumer. By default TRACE is turned off. false boolean bridgeEndpoint (producer) If the option is true, HttpProducer will ignore the Exchange.HTTP_URI header, and use the endpoint's URI for request. You may also set the option throwExceptionOnFailure to be false to let the HttpProducer send all the fault response back. false boolean connectionClose (producer) Specifies whether a Connection Close header must be added to HTTP Request. By default connectionClose is false. false boolean copyHeaders (producer) If this option is true then IN exchange headers will be copied to OUT exchange headers according to copy strategy. Setting this to false, allows to only include the headers from the HTTP response (not propagating IN headers). true boolean httpMethod (producer) Configure the HTTP method to use. The HttpMethod header cannot override this option if set. HttpMethods ignoreResponseBody (producer) If this option is true, The http producer won't read response body and cache the input stream false boolean preserveHostHeader (producer) If the option is true, HttpProducer will set the Host header to the value contained in the current exchange Host header, useful in reverse proxy applications where you want the Host header received by the downstream server to reflect the URL called by the upstream client, this allows applications which use the Host header to generate accurate URL's for a proxied service false boolean throwExceptionOnFailure (producer) Option to disable throwing the HttpOperationFailedException in case of failed responses from the remote server. This allows you to get all responses regardless of the HTTP status code. true boolean cookieHandler (producer) Configure a cookie handler to maintain a HTTP session CookieHandler okStatusCodeRange (producer) The status codes which are considered a success response. The values are inclusive. Multiple ranges can be defined, separated by comma, e.g. 200-204,209,301-304. Each range must be a single number or from-to with the dash included. 200-299 String urlRewrite (producer) Deprecated Refers to a custom org.apache.camel.component.http.UrlRewrite which allows you to rewrite urls when you bridge/proxy endpoints. See more details at http://camel.apache.org/urlrewrite.html UrlRewrite mapHttpMessageBody (advanced) If this option is true then IN exchange Body of the exchange will be mapped to HTTP body. Setting this to false will avoid the HTTP mapping. true boolean mapHttpMessageFormUrl EncodedBody (advanced) If this option is true then IN exchange Form Encoded body of the exchange will be mapped to HTTP. Setting this to false will avoid the HTTP Form Encoded body mapping. true boolean mapHttpMessageHeaders (advanced) If this option is true then IN exchange Headers of the exchange will be mapped to HTTP headers. 
Setting this to false will avoid the HTTP Headers mapping. true boolean synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean proxyAuthScheme (proxy) Proxy authentication scheme to use String proxyHost (proxy) Proxy hostname to use String proxyPort (proxy) Proxy port to use int authHost (security) Authentication host to use with NTML String 12.2. Spring Boot Auto-Configuration The component supports 10 options, which are listed below. Name Description Default Type camel.component.atmosphere-websocket.allow-java-serialized-object Whether to allow java serialization when a request uses context-type=application/x-java-serialized-object. This is by default turned off. If you enable this then be aware that Java will deserialize the incoming data from the request to Java and that can be a potential security risk. false Boolean camel.component.atmosphere-websocket.attachment-multipart-binding Whether to automatic bind multipart/form-data as attachments on the Camel Exchange. The options attachmentMultipartBinding=true and disableStreamCache=false cannot work together. Remove disableStreamCache to use AttachmentMultipartBinding. This is turn off by default as this may require servlet specific configuration to enable this when using Servlet's. false Boolean camel.component.atmosphere-websocket.enabled Enable atmosphere-websocket component true Boolean camel.component.atmosphere-websocket.file-name-ext-whitelist Whitelist of accepted filename extensions for accepting uploaded files. Multiple extensions can be separated by comma, such as txt,xml. String camel.component.atmosphere-websocket.header-filter-strategy To use a custom org.apache.camel.spi.HeaderFilterStrategy to filter header to and from Camel message. The option is a org.apache.camel.spi.HeaderFilterStrategy type. String camel.component.atmosphere-websocket.http-binding To use a custom HttpBinding to control the mapping between Camel message and HttpClient. The option is a org.apache.camel.http.common.HttpBinding type. String camel.component.atmosphere-websocket.http-configuration To use the shared HttpConfiguration as base configuration. The option is a org.apache.camel.http.common.HttpConfiguration type. String camel.component.atmosphere-websocket.http-registry To use a custom org.apache.camel.component.servlet.HttpRegistry. The option is a org.apache.camel.component.servlet.HttpRegistry type. String camel.component.atmosphere-websocket.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean camel.component.atmosphere-websocket.servlet-name Default name of servlet to use. The default name is CamelServlet. CamelServlet String 12.3. URI Format atmosphere-websocket:///relative path[?options] 12.4. Reading and Writing Data over Websocket An atmopshere-websocket endpoint can either write data to the socket or read from the socket, depending on whether the endpoint is configured as the producer or the consumer, respectively. 12.5. Configuring URI to Read or Write Data In the route below, Camel will read from the specified websocket connection. 
from("atmosphere-websocket:///servicepath") .to("direct:next"); And the equivalent Spring sample: <camelContext xmlns="http://camel.apache.org/schema/spring"> <route> <from uri="atmosphere-websocket:///servicepath"/> <to uri="direct:next"/> </route> </camelContext> In the route below, Camel will write to the specified websocket connection. from("direct:next") .to("atmosphere-websocket:///servicepath"); And the equivalent Spring sample: <camelContext xmlns="http://camel.apache.org/schema/spring"> <route> <from uri="direct:next"/> <to uri="atmosphere-websocket:///servicepath"/> </route> </camelContext> 12.6. See Also Configuring Camel Component Endpoint Getting Started SERVLET AHC-WS Websocket
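As a further illustration of the sendToAll option from the endpoint options table, the following sketch broadcasts every message received on a websocket service path back to all connected clients; the chat path is an arbitrary example name and this route is an illustrative pattern, not an excerpt from the Camel distribution:
from("atmosphere-websocket:///chat")
    .log("Received a message from a websocket client")
    .to("atmosphere-websocket:///chat?sendToAll=true");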
[ "<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-atmosphere-websocket</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>", "atmosphere-websocket:servicePath", "atmosphere-websocket:///relative path[?options]", "from(\"atmosphere-websocket:///servicepath\") .to(\"direct:next\");", "<camelContext xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\"atmosphere-websocket:///servicepath\"/> <to uri=\"direct:next\"/> </route> </camelContext>", "from(\"direct:next\") .to(\"atmosphere-websocket:///servicepath\");", "<camelContext xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\"direct:next\"/> <to uri=\"atmosphere-websocket:///servicepath\"/> </route> </camelContext>" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/atmosphere-websocket-component
Chapter 4. Configuring an identity provider for your OpenShift cluster
Chapter 4. Configuring an identity provider for your OpenShift cluster Configure an identity provider for your OpenShift Dedicated or Red Hat OpenShift Service on Amazon Web Services (ROSA) cluster to manage users and groups. Red Hat OpenShift AI supports the same authentication systems as Red Hat OpenShift Dedicated and ROSA. Check the appropriate documentation for your cluster for more information. Supported identity providers on OpenShift Dedicated Supported identity providers on ROSA Important Adding more than one OpenShift Identity Provider can create problems when the same user name exists in multiple providers. When mappingMethod is set to claim (the default mapping method for identity providers) and multiple providers have credentials associated with the same user name, the first provider used to log in to OpenShift is the one that works for that user, regardless of the order in which identity providers are configured. Refer to Identity provider parameters in the OpenShift Dedicated documentation for more information about mapping methods. Prerequisites Credentials for OpenShift Cluster Manager ( https://console.redhat.com/openshift/ ). An existing OpenShift Dedicated cluster. Procedure Log in to OpenShift Cluster Manager ( https://console.redhat.com/openshift/ ). Click Clusters . The Clusters page opens. Click the name of the cluster to configure. Click the Access control tab. Click Identity providers . Click Add identity provider . Select your provider from the Identity Provider list. Complete the remaining fields relevant to the identity provider that you selected. See Configuring identity providers for more information. Click Confirm . Verification The configured identity providers are visible on the Access control tab of the Cluster details page. Additional resources Configuring identity providers Syncing LDAP groups
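For ROSA clusters, identity providers can also be managed from the command line with the rosa CLI instead of the OpenShift Cluster Manager console. The following is a minimal sketch that assumes the rosa CLI is installed and logged in; my-cluster is a placeholder name, and the exact flags and prompts depend on your rosa version, so verify them against the ROSA documentation:
rosa list idps --cluster=my-cluster
rosa create idp --cluster=my-cluster --type=github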
null
https://docs.redhat.com/en/documentation/red_hat_openshift_ai_cloud_service/1/html/installing_the_openshift_ai_cloud_service/configuring-an-identity-provider_install
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation If you have a suggestion to improve this documentation, or find an error, you can contact technical support at https://access.redhat.com to open a request.
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/containerized_ansible_automation_platform_installation_guide/providing-feedback
Chapter 11. Managing an instance
Chapter 11. Managing an instance You can perform management operations on an instance, such as resizing the instance or shelving the instance. For a complete list of management operations, see Instance management operations . 11.1. Resizing an instance You can resize an instance if you need to increase or decrease the memory or CPU count of the instance. To resize an instance, select a new flavor for the instance that has the required capacity. Resizing an instance rebuilds and restarts the instance. Procedure Retrieve the name or ID of the instance that you want to resize: Retrieve the name or ID of the flavor that you want to use to resize the instance: Resize the instance: Replace <flavor> with the name or ID of the flavor that you retrieved in step 2. Replace <instance> with the name or ID of the instance that you are resizing. Note Resizing can take time. The operating system on the instance performs a controlled shutdown before the instance is powered off and the instance is resized. During this time, the instance status is RESIZE : When the resize completes, the instance status changes to VERIFY_RESIZE . You must now either confirm or revert the resize: To confirm the resize, enter the following command: To revert the resize, enter the following command: The instance is reverted to the original flavor and the status is changed to ACTIVE . Note The cloud might be configured to automatically confirm instance resizes if you do not confirm or revert within a configured time frame. 11.2. Creating an instance snapshot A snapshot is an image that captures the state of the running disk of an instance. You can take a snapshot of an instance to create an image that you can use as a template to create new instances. Snapshots allow you to create new instances from another instance, and restore the state of an instance. If you delete an instance on which a snapshot is based, you can use the snapshot image to create a new instance to the same state as the snapshot. Procedure Retrieve the name or ID of the instance that you want to take a snapshot of: Create the snapshot: Replace <image_name> with a name for the new snapshot image. Replace <instance> with the name or ID of the instance that you want to create the snapshot from. Optional: To ensure that the disk state is consistent when you use the instance snapshot as a template to create new instances, enable the QEMU guest agent and specify that the filesystem must be quiesced during snapshot processing by adding the following metadata to the snapshot image: The QEMU guest agent is a background process that helps management applications execute instance OS level commands. Enabling this agent adds another device to the instance, which consumes a PCI slot, and limits the number of other devices you can allocate to the instance. It also causes Windows instances to display a warning message about an unknown hardware device. 11.3. Rescuing an instance In an emergency such as a system failure or access failure, you can put an instance in rescue mode. This shuts down the instance, reboots it with a new instance disk, and mounts the original instance disk and config drive as a volume on the rebooted instance. You can connect to the rebooted instance to view the original instance disk to repair the system and recover your data. Procedure Perform the instance rescue: Optional: By default, the instance is booted from a rescue image provided by the cloud admin, or a fresh copy of the original instance image. 
Use the --image option to specify an alternative image to use when rebooting the instance in rescue mode. Replace <instance> with the name or ID of the instance that you want to rescue. Connect to the rescued instance to fix the issue. Restart the instance from the normal boot disk: 11.4. Shelving an instance Shelving is useful if you have an instance that you are not using, but that you do not want to delete. When you shelve an instance, you retain the instance data and resource allocations, but clear the instance memory. Depending on the cloud configuration, shelved instances are moved to the SHELVED_OFFLOADED state either immediately or after a timed delay. When SHELVED_OFFLOADED , the instance data and resource allocations are deleted. When you shelve an instance, the Compute service generates a snapshot image that captures the state of the instance, and allocates a name to the image in the following format: <instance>-shelved . This snapshot image is deleted when the instance is unshelved or deleted. If you no longer need a shelved instance, you can delete it. You can shelve more than one instance at a time. Procedure Retrieve the name or ID of the instance or instances that you want to shelve: Shelve the instance or instances: Replace <instance> with the name or ID of the instance that you want to shelve. You can specify more than one instance to shelve, as required. Verify that the instance has been shelved: Shelved instances have status SHELVED_OFFLOADED . 11.5. Instance management operations After you create an instance, you can perform the following management operations. Table 11.1. Management operations Operation Description Command Stop an instance Stops the instance. openstack server stop Start an instance Starts a stopped instance. openstack server start Pause a running instance Immediately pause a running instance. The state of the instance is stored in memory (RAM). The paused instance continues to run in a frozen state. You are not prompted to confirm the pause action. openstack server pause Resume running of a paused instance Immediately resume a paused instance. You are not prompted to confirm the resume action. openstack server unpause Suspend a running instance Immediately suspend a running instance. The state of the instance is stored on the instance disk. You are not prompted to confirm the suspend action. openstack server suspend Resume running of a suspended instance Immediately resume a suspended instance. The state of the instance is stored on the instance disk. You are not prompted to confirm the resume action. openstack server resume Delete an instance Permanently destroy the instance. You are not prompted to confirm the destroy action. Deleted instances are not recoverable unless the cloud has been configured to enable soft delete. Note Deleting an instance does not delete its attached volumes. You must delete attached volumes separately. For more information, see Deleting a Block Storage service volume in the Storage Guide . openstack server delete Edit the instance metadata You can use instance metadata to specify the properties of an instance. For more information, see Creating a customized instance . openstack server set --property <key=value> [--property <key=value>] <instance> Add security groups Adds the specified security group to the instance. openstack server add security group Remove security groups Removes the specified security group from the instance. 
openstack server remove security group Rescue an instance In an emergency such as a system failure or access failure, you can put an instance in rescue mode. This shuts down the instance and mounts the root disk to a temporary server. You can connect to the temporary server to repair the system and recover your data. It is also possible to reboot a running instance into rescue mode. For example, this operation might be required if a filesystem of an instance becomes corrupted. openstack server rescue Restore a rescued instance Reboots the rescued instance. openstack server unrescue View instance logs View the most recent section of the instance console log. openstack console log show Shelve an instance When you shelve an instance, you retain the instance data and resource allocations, but clear the instance memory. Depending on the cloud configuration, shelved instances are moved to the SHELVED_OFFLOADED state either immediately or after a timed delay. When an instance is in the SHELVED_OFFLOADED state, the instance data and resource allocations are deleted. The state of the instance is stored on the instance disk. If the instance was booted from volume, it goes to SHELVED_OFFLOADED immediately. You are not prompted to confirm the shelve action. openstack server shelve Unshelve an instance Restores the instance using the disk image of the shelved instance. openstack server unshelve Lock an instance Lock an instance to prevent non-admin users from executing actions on the instance. openstack server lock openstack server unlock Soft reboot an instance Gracefully stop and restart the instance. A soft reboot attempts to gracefully shut down all processes before restarting the instance. By default, when you reboot an instance it is a soft reboot. openstack server reboot --soft <server> Hard reboot an instance Stop and restart the instance. A hard reboot shuts down the power to the instance and then turns it back on. openstack server reboot --hard <server> Rebuild an instance Use new image and disk-partition options to rebuild the instance, which involves an instance shutdown, re-image, and reboot. Use this option if you encounter operating system issues, rather than terminating the instance and starting over. openstack server rebuild
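As a short worked example that combines several of these operations, the following sequence soft reboots a hypothetical instance named myInstance, checks its console log, and then locks it so that non-admin users cannot perform further actions on it until it is unlocked:
openstack server reboot --soft myInstance
openstack console log show myInstance
openstack server lock myInstance
openstack server unlock myInstance
The instance name is an example only; substitute the name or ID returned by openstack server list.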
[ "openstack server list", "openstack flavor list", "openstack server resize --flavor <flavor> --wait <instance>", "openstack server list +----------------------+----------------+--------+----------------------------+ | ID | Name | Status | Networks | +----------------------+----------------+--------+----------------------------+ | 67bc9a9a-5928-47c... | myCirrosServer | RESIZE | admin_internal_net=192.168.111.139 | +----------------------+----------------+--------+----------------------------+", "openstack server resize confirm <instance>", "openstack server resize revert <instance>", "openstack server list", "openstack server image create --name <image_name> <instance>", "openstack image set --property hw_qemu_guest_agent=yes --property os_require_quiesce=yes <image_name>", "openstack server rescue [--image <image>] <instance>", "openstack server unrescue <instance>", "openstack server list", "openstack server shelve <instance> [<instance> ...]", "openstack server list" ]
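The resize-related commands in the list above pair a resize with an explicit confirmation or revert. As an illustrative sketch only, using the example instance name myCirrosServer from the listing and a hypothetical flavor name m1.large:

$ openstack server resize --flavor m1.large --wait myCirrosServer
$ openstack server list
# the instance reports a RESIZE status while the change is in progress, then waits for confirmation
$ openstack server resize confirm myCirrosServer
# or, to roll back instead:
$ openstack server resize revert myCirrosServer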
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/creating_and_managing_instances/assembly_managing-an-instance_instances
Chapter 2. FIPS settings in Red Hat build of OpenJDK 21
Chapter 2. FIPS settings in Red Hat build of OpenJDK 21 At startup, Red Hat build of OpenJDK 21 checks if the system FIPS policy is enabled. If this policy is enabled, Red Hat build of OpenJDK 21 performs a series of automatic configurations that are intended to help Java applications to comply with FIPS requirements. These automatic configurations include the following actions: Installing a restricted list of security providers that contains the FIPS-certified Network Security Services (NSS) software token module for cryptographic operations Enforcing the Red Hat Enterprise Linux (RHEL) FIPS crypto-policy for Java that limits the algorithms and parameters available Note If FIPS mode is enabled in the system while a JVM instance is running, the JVM instance must be restarted to allow changes to take effect. You can configure Red Hat build of OpenJDK 21 to bypass the described FIPS automation. For example, you might want to achieve FIPS compliance through a Hardware Security Module (HSM) instead of the NSS software token module. You can specify FIPS configurations by using system or security properties. To better understand FIPS properties, you must understand the following JDK property classes: System properties are JVM arguments prefixed with -D , which generally take the form of ‐Dproperty.name=property.value . Privileged access is not required to pass any of these values. Only the launched JVM is affected by the configuration, and persistence depends on the existence of a launcher script. UTF-8 encoded values are valid for system properties. Security properties are available in USDJRE_HOME/conf/security/java.security or in the file that the java.security.properties system property points to. Privileged access is required to modify values in the USDJRE_HOME/conf/security/java.security file. Any modification to this file persists and affects all instances of the same Red Hat build of OpenJDK 21 deployment. Non-Basic Latin Unicode characters must be encoded with \uXXXX . When system and security properties have the same name and are set to different values, the system property takes precedence. Depending on their configuration, properties might affect other properties with different names. For more information about security properties and their default values, see the java.security file. The following list details properties that affect the FIPS configuration for Red Hat build of OpenJDK 21: Property Type Default value Description security.useSystemPropertiesFile Security true When set to false , this property disables the FIPS automation, which includes global crypto-policies alignment. java.security.disableSystemPropertiesFile System false When set to true , this property disables the FIPS automation, which includes global crypto-policies alignment. This has the same effect as a security.useSystemPropertiesFile=false security property. If both properties are set to different behaviors, java.security.disableSystemPropertiesFile takes precedence. com.redhat.fips System true When set to false , this property disables the FIPS automation while still enforcing the FIPS crypto-policy. If any of the preceding properties are set to disable the FIPS automation, this property has no effect. Crypto-policies are a prerequisite for FIPS automation. fips.keystore.type Security PKCS12 This property sets the default keystore type when Red Hat build of OpenJDK 21 is in FIPS mode. Supported values are PKCS12 and PKCS11 . 
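For example, the following launcher invocations sketch how these properties are typically passed on the command line; the application name MyApp.jar and the file path are placeholders, not values from this chapter.

# disable the FIPS automation for this JVM only; the FIPS crypto-policy is still enforced
$ java -Dcom.redhat.fips=false -jar MyApp.jar
# disable the FIPS automation, including global crypto-policies alignment
$ java -Djava.security.disableSystemPropertiesFile=true -jar MyApp.jar
# supply additional security properties from a custom file, as described above
$ java -Djava.security.properties=/path/to/custom.security -jar MyApp.jar

Each flag maps directly to one of the properties in the preceding table.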
In addition to the previously described settings, specific configurations can be applied to use NSS DB keystores in FIPS mode. These keystores are handled by the SunPKCS11 security provider and the NSS software token, which is the security provider's PKCS#11 back end. The following list details the NSS DB FIPS properties for Red Hat build of OpenJDK 21: Property Type Default value Description fips.nssdb.path System or Security sql:/etc/pki/nssdb File-system path that points to the NSS DB location. The syntax for this property is identical to the nssSecmodDirectory attribute available in the SunPKCS11 NSS configuration file. The property allows an sql: prefix to indicate that the referred NSS DB is of SQLite type. fips.nssdb.pin System or Security pin: (empty PIN) PIN (password) for the NSS DB that fips.nssdb.path points to. You can use this property to pass the NSS DB PIN in one of the following forms: pin:<value> In this situation, <value> is a clear text PIN value (for example, pin:1234abc ). env:<value> In this situation, <value> is an environment variable that contains the PIN value (for example, env:NSSDB_PIN_VAR ). file:<value> In this situation, <value> is the path to a UTF-8 encoded file that contains the PIN value in its first line (for example, file:/path/to/pin.txt ). The pin:<value> option accommodates both cases in which the PIN value is passed as a JVM argument or programmatically through a system property. Programmatic setting of the PIN value provides flexibility for applications to decide how to obtain the PIN. The file:<value> option is compatible with NSS modutil -pwfile and -newpwfile arguments, which are used for an NSS DB PIN change. Note If a cryptographic operation requires NSS DB authentication and the status is not authenticated, Red Hat build of OpenJDK 21 performs an implicit login with this PIN value. An application can perform an explicit login by invoking KeyStore::load before any cryptographic operation. Important Perform a security assessment, so that you can decide on a configuration that protects the integrity and confidentiality of the stored keys and certificates. This assessment should consider threats, contextual information, and other security measures in place, such as operating system user isolation and file-system permissions. For example, default configuration values might not be appropriate for an application storing keys and running in a multi-user environment. Use the modutil tool in RHEL to create and manage NSS DB keystores, and use certutil or keytool to import certificates and keys. Additional resources For more information about enabling FIPS mode, see Switching the system to FIPS mode .
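To tie the NSS DB properties together, the following is a minimal sketch; the DB location /opt/myapp/nssdb, the PIN file path, and the application name MyApp.jar are assumptions made for illustration only.

# create an SQLite-type NSS DB at the assumed location (modutil is the RHEL tool mentioned above)
$ modutil -create -dbdir sql:/opt/myapp/nssdb
# run the application against that DB; with the default (empty) PIN, fips.nssdb.pin can be omitted
$ java -Dfips.nssdb.path=sql:/opt/myapp/nssdb -jar MyApp.jar
# if the DB has been given a PIN, it can be supplied from a file rather than on the command line
$ java -Dfips.nssdb.path=sql:/opt/myapp/nssdb -Dfips.nssdb.pin=file:/opt/myapp/pin.txt -jar MyApp.jar

As noted in the security considerations above, the PIN file and the DB directory should be protected with appropriate file-system permissions.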
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/21/html/configuring_red_hat_build_of_openjdk_21_on_rhel_with_fips/fips_settings
B.4. The glock debugfs Interface
B.4. The glock debugfs Interface The glock debugfs interface allows the visualization of the internal state of the glocks and the holders and it also includes some summary details of the objects being locked in some cases. Each line of the file either begins G: with no indentation (which refers to the glock itself) or it begins with a different letter, indented with a single space, and refers to the structures associated with the glock immediately above it in the file (H: is a holder, I: an inode, and R: a resource group) . Here is an example of what the content of this file might look like: The above example is a series of excerpts (from an approximately 18MB file) generated by the command cat /sys/kernel/debug/gfs2/unity:myfs/glocks >my.lock during a run of the postmark benchmark on a single node GFS2 file system. The glocks in the figure have been selected in order to show some of the more interesting features of the glock dumps. The glock states are either EX (exclusive), DF (deferred), SH (shared) or UN (unlocked). These states correspond directly with DLM lock modes except for UN which may represent either the DLM null lock state, or that GFS2 does not hold a DLM lock (depending on the I flag as explained above). The s: field of the glock indicates the current state of the lock and the same field in the holder indicates the requested mode. If the lock is granted, the holder will have the H bit set in its flags (f: field). Otherwise, it will have the W wait bit set. The n: field (number) indicates the number associated with each item. For glocks, that is the type number followed by the glock number so that in the above example, the first glock is n:5/75320; which indicates an iopen glock which relates to inode 75320. In the case of inode and iopen glocks, the glock number is always identical to the inode's disk block number. Note The glock numbers (n: field) in the debugfs glocks file are in hexadecimal, whereas the tracepoints output lists them in decimal. This is for historical reasons; glock numbers were always written in hex, but decimal was chosen for the tracepoints so that the numbers could easily be compared with the other tracepoint output (from blktrace for example) and with output from stat (1). The full listing of all the flags for both the holder and the glock are set out in Table B.4, "Glock flags" and Table B.5, "Glock holder flags" . The content of lock value blocks is not currently available through the glock debugfs interface. Table B.3, "Glock Types" shows the meanings of the different glock types. Table B.3. Glock Types Type number Lock type Use 1 trans Transaction lock 2 inode Inode metadata and data 3 rgrp Resource group metadata 4 meta The superblock 5 iopen Inode last closer detection 6 flock flock (2) syscall 8 quota Quota operations 9 journal Journal mutex One of the more important glock flags is the l (locked) flag. This is the bit lock that is used to arbitrate access to the glock state when a state change is to be performed. It is set when the state machine is about to send a remote lock request through the DLM, and only cleared when the complete operation has been performed. Sometimes this can mean that more than one lock request will have been sent, with various invalidations occurring between times. Table B.4, "Glock flags" shows the meanings of the different glock flags. Table B.4. 
Glock flags Flag Name Meaning d Pending demote A deferred (remote) demote request D Demote A demote request (local or remote) f Log flush The log needs to be committed before releasing this glock F Frozen Replies from remote nodes ignored - recovery is in progress. i Invalidate in progress In the process of invalidating pages under this glock I Initial Set when DLM lock is associated with this glock l Locked The glock is in the process of changing state L LRU Set when the glock is on the LRU list o Object Set when the glock is associated with an object (that is, an inode for type 2 glocks, and a resource group for type 3 glocks) p Demote in progress The glock is in the process of responding to a demote request q Queued Set when a holder is queued to a glock, and cleared when the glock is held, but there are no remaining holders. Used as part of the algorithm that calculates the minimum hold time for a glock. r Reply pending Reply received from remote node is awaiting processing y Dirty Data needs flushing to disk before releasing this glock When a remote callback is received from a node that wants to get a lock in a mode that conflicts with that being held on the local node, then one or other of the two flags D (demote) or d (demote pending) is set. In order to prevent starvation conditions when there is contention on a particular lock, each lock is assigned a minimum hold time. A node which has not yet had the lock for the minimum hold time is allowed to retain that lock until the time interval has expired. If the time interval has expired, then the D (demote) flag will be set and the state required will be recorded. In that case, the next time there are no granted locks on the holders queue, the lock will be demoted. If the time interval has not expired, then the d (demote pending) flag is set instead. This also schedules the state machine to clear d (demote pending) and set D (demote) when the minimum hold time has expired. The I (initial) flag is set when the glock has been assigned a DLM lock. This happens when the glock is first used and the I flag will then remain set until the glock is finally freed (at which point the DLM lock is unlocked).
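Because the dump format is line oriented, it is easy to post-process with standard tools. The following is an illustrative sketch, not part of the original interface description: it captures a dump of the example file system unity:myfs and prints each glock line together with any holder line that is still waiting (a holder whose f: field contains the W wait bit).

$ cat /sys/kernel/debug/gfs2/unity:myfs/glocks > my.lock
$ awk '$1 == "G:" { glock = $0 } $1 == "H:" && $3 ~ /W/ { print glock; print }' my.lock

The third whitespace-separated field of a holder line is its f: flags field, so the match on W relies on the holder format shown above.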
[ "G: s:SH n:5/75320 f:I t:SH d:EX/0 a:0 r:3 H: s:SH f:EH e:0 p:4466 [postmark] gfs2_inode_lookup+0x14e/0x260 [gfs2] G: s:EX n:3/258028 f:yI t:EX d:EX/0 a:3 r:4 H: s:EX f:tH e:0 p:4466 [postmark] gfs2_inplace_reserve_i+0x177/0x780 [gfs2] R: n:258028 f:05 b:22256/22256 i:16800 G: s:EX n:2/219916 f:yfI t:EX d:EX/0 a:0 r:3 I: n:75661/219916 t:8 f:0x10 d:0x00000000 s:7522/7522 G: s:SH n:5/127205 f:I t:SH d:EX/0 a:0 r:3 H: s:SH f:EH e:0 p:4466 [postmark] gfs2_inode_lookup+0x14e/0x260 [gfs2] G: s:EX n:2/50382 f:yfI t:EX d:EX/0 a:0 r:2 G: s:SH n:5/302519 f:I t:SH d:EX/0 a:0 r:3 H: s:SH f:EH e:0 p:4466 [postmark] gfs2_inode_lookup+0x14e/0x260 [gfs2] G: s:SH n:5/313874 f:I t:SH d:EX/0 a:0 r:3 H: s:SH f:EH e:0 p:4466 [postmark] gfs2_inode_lookup+0x14e/0x260 [gfs2] G: s:SH n:5/271916 f:I t:SH d:EX/0 a:0 r:3 H: s:SH f:EH e:0 p:4466 [postmark] gfs2_inode_lookup+0x14e/0x260 [gfs2] G: s:SH n:5/312732 f:I t:SH d:EX/0 a:0 r:3 H: s:SH f:EH e:0 p:4466 [postmark] gfs2_inode_lookup+0x14e/0x260 [gfs2]" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/global_file_system_2/ap-glock-debugfs-gfs2
Appendix A. Managing certificates
Appendix A. Managing certificates A.1. Installing certificate authority certificates SSL/TLS authentication relies on digital certificates issued by trusted Certificate Authorities (CAs). When an SSL/TLS connection is established by a client, the AMQP peer sends a server certificate to the client. This server certificate must be signed by one of the CAs in the client's Trusted Root Certification Authorities certificate store. If the user is creating self-signed certificates for use by Red Hat AMQ Broker, then the user must create a CA to sign the certificates. Then the user can enable the client SSL/TLS handshake by installing the self-signed CA file ca.crt . From an administrator command prompt, run the MMC Certificate Manager plugin, certmgr.msc . Expand the Trusted Root Certification Authorities folder on the left to expose Certificates . Right-click Certificates and select All Tasks and then Import . Click Next . Browse to select file ca.crt . Click Next . Select Place all certificates in the following store . Select certificate store Trusted Root Certification Authorities . Click Next . Click Finish . For more information about installing certificates, see Managing Microsoft Certificate Services and SSL . A.2. Installing client certificates In order to use SSL/TLS and client certificates, the certificates with the client's private keys must be imported into the proper certificate store on the client system. From an administrator command prompt, run the MMC Certificate Manager plugin, certmgr.msc . Expand the Personal folder on the left to expose Certificates . Right-click Certificates and select All Tasks and then Import . Click Next . Click Browse . In the file type pulldown, select Personal Information Exchange (*.pfx;*.p12) . Select file client.p12 and click Open . Click Next . Enter the password for the private key password field. Accept the default import options. Click Next . Select Place all certificates in the following store . Select certificate store Personal . Click Next . Click Finish . A.3. Hello World using client certificates Before a client will return a certificate to the broker, the AMQ .NET library must be told which certificates to use. The client certificate file client.crt is added to the list of certificates to be used during SChannel connection startup. In this example, certfile is the full path to the client.p12 certificate installed in the Personal certificate store. A complete example is found in HelloWorld-client-certs.cs . This source file and the supporting project files are available in the SDK.
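Before importing, you can optionally confirm that the files contain what you expect. The following OpenSSL commands are an illustrative check only and are not part of the procedures above; the file names match the ones used in this appendix.

$ openssl x509 -in ca.crt -noout -subject -dates
$ openssl pkcs12 -info -in client.p12 -noout

The second command prompts for the PKCS#12 import password and verifies that the bundle can be read, printing summary information about its contents without writing out the key or certificate.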
[ "factory.SSL.ClientCertificates.Add( X509Certificate.CreateFromCertFile(certfile) );" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q1/html/using_the_amq_.net_client/managing_certificates
Nodes
Nodes OpenShift Container Platform 4.17 Configuring and managing nodes in OpenShift Container Platform Red Hat OpenShift Documentation Team
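The command reference that follows opens with basic pod inspection. As a quick orientation sketch, the typical flow combines the commands shown below, all of which appear in the listing; the project name my-project is a placeholder.

$ oc project my-project
$ oc get pods -o wide
$ oc adm top pods
$ oc logs -f <pod_name> -c <container_name>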
[ "kind: Pod apiVersion: v1 metadata: name: example labels: environment: production app: abc 1 spec: restartPolicy: Always 2 securityContext: 3 runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: 4 - name: abc args: - sleep - \"1000000\" volumeMounts: 5 - name: cache-volume mountPath: /cache 6 image: registry.access.redhat.com/ubi7/ubi-init:latest 7 securityContext: allowPrivilegeEscalation: false runAsNonRoot: true capabilities: drop: [\"ALL\"] resources: limits: memory: \"100Mi\" cpu: \"1\" requests: memory: \"100Mi\" cpu: \"1\" volumes: 8 - name: cache-volume emptyDir: sizeLimit: 500Mi", "oc project <project-name>", "oc get pods", "oc get pods", "NAME READY STATUS RESTARTS AGE console-698d866b78-bnshf 1/1 Running 2 165m console-698d866b78-m87pm 1/1 Running 2 165m", "oc get pods -o wide", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE console-698d866b78-bnshf 1/1 Running 2 166m 10.128.0.24 ip-10-0-152-71.ec2.internal <none> console-698d866b78-m87pm 1/1 Running 2 166m 10.129.0.23 ip-10-0-173-237.ec2.internal <none>", "oc adm top pods", "oc adm top pods -n openshift-console", "NAME CPU(cores) MEMORY(bytes) console-7f58c69899-q8c8k 0m 22Mi console-7f58c69899-xhbgg 0m 25Mi downloads-594fcccf94-bcxk8 3m 18Mi downloads-594fcccf94-kv4p6 2m 15Mi", "oc adm top pod --selector=''", "oc adm top pod --selector='name=my-pod'", "oc logs -f <pod_name> -c <container_name>", "oc logs ruby-58cd97df55-mww7r", "oc logs -f ruby-57f7f4855b-znl92 -c ruby", "oc logs <object_type>/<resource_name> 1", "oc logs deployment/ruby", "{ \"kind\": \"Pod\", \"spec\": { \"containers\": [ { \"image\": \"openshift/hello-openshift\", \"name\": \"hello-openshift\" } ] }, \"apiVersion\": \"v1\", \"metadata\": { \"name\": \"iperf-slow\", \"annotations\": { \"kubernetes.io/ingress-bandwidth\": \"10M\", \"kubernetes.io/egress-bandwidth\": \"10M\" } } }", "oc create -f <file_or_dir_path>", "oc get poddisruptionbudget --all-namespaces", "NAMESPACE NAME MIN AVAILABLE MAX UNAVAILABLE ALLOWED DISRUPTIONS AGE openshift-apiserver openshift-apiserver-pdb N/A 1 1 121m openshift-cloud-controller-manager aws-cloud-controller-manager 1 N/A 1 125m openshift-cloud-credential-operator pod-identity-webhook 1 N/A 1 117m openshift-cluster-csi-drivers aws-ebs-csi-driver-controller-pdb N/A 1 1 121m openshift-cluster-storage-operator csi-snapshot-controller-pdb N/A 1 1 122m openshift-cluster-storage-operator csi-snapshot-webhook-pdb N/A 1 1 122m openshift-console console N/A 1 1 116m #", "apiVersion: policy/v1 1 kind: PodDisruptionBudget metadata: name: my-pdb spec: minAvailable: 2 2 selector: 3 matchLabels: name: my-pod", "apiVersion: policy/v1 1 kind: PodDisruptionBudget metadata: name: my-pdb spec: maxUnavailable: 25% 2 selector: 3 matchLabels: name: my-pod", "oc create -f </path/to/file> -n <project_name>", "apiVersion: policy/v1 kind: PodDisruptionBudget metadata: name: my-pdb spec: minAvailable: 2 selector: matchLabels: name: my-pod unhealthyPodEvictionPolicy: AlwaysAllow 1", "oc create -f pod-disruption-budget.yaml", "apiVersion: v1 kind: Pod metadata: name: my-pdb spec: template: metadata: name: critical-pod priorityClassName: system-cluster-critical 1", "oc create -f <file-name>.yaml", "oc autoscale deployment/hello-node --min=5 --max=7 --cpu-percent=75", "horizontalpodautoscaler.autoscaling/hello-node autoscaled", "apiVersion: autoscaling/v1 kind: HorizontalPodAutoscaler metadata: name: hello-node namespace: default spec: maxReplicas: 7 minReplicas: 3 scaleTargetRef: apiVersion: apps/v1 kind: Deployment name: 
hello-node targetCPUUtilizationPercentage: 75 status: currentReplicas: 5 desiredReplicas: 0", "oc get deployment hello-node", "NAME REVISION DESIRED CURRENT TRIGGERED BY hello-node 1 5 5 config", "type: Resource resource: name: cpu target: type: Utilization averageUtilization: 60", "behavior: scaleDown: stabilizationWindowSeconds: 300", "apiVersion: autoscaling/v2 kind: HorizontalPodAutoscaler metadata: name: hpa-resource-metrics-memory namespace: default spec: behavior: scaleDown: 1 policies: 2 - type: Pods 3 value: 4 4 periodSeconds: 60 5 - type: Percent value: 10 6 periodSeconds: 60 selectPolicy: Min 7 stabilizationWindowSeconds: 300 8 scaleUp: 9 policies: - type: Pods value: 5 10 periodSeconds: 70 - type: Percent value: 12 11 periodSeconds: 80 selectPolicy: Max stabilizationWindowSeconds: 0", "apiVersion: autoscaling/v2 kind: HorizontalPodAutoscaler metadata: name: hpa-resource-metrics-memory namespace: default spec: minReplicas: 20 behavior: scaleDown: stabilizationWindowSeconds: 300 policies: - type: Pods value: 4 periodSeconds: 30 - type: Percent value: 10 periodSeconds: 60 selectPolicy: Max scaleUp: selectPolicy: Disabled", "oc edit hpa hpa-resource-metrics-memory", "apiVersion: autoscaling/v1 kind: HorizontalPodAutoscaler metadata: annotations: autoscaling.alpha.kubernetes.io/behavior: '{\"ScaleUp\":{\"StabilizationWindowSeconds\":0,\"SelectPolicy\":\"Max\",\"Policies\":[{\"Type\":\"Pods\",\"Value\":4,\"PeriodSeconds\":15},{\"Type\":\"Percent\",\"Value\":100,\"PeriodSeconds\":15}]}, \"ScaleDown\":{\"StabilizationWindowSeconds\":300,\"SelectPolicy\":\"Min\",\"Policies\":[{\"Type\":\"Pods\",\"Value\":4,\"PeriodSeconds\":60},{\"Type\":\"Percent\",\"Value\":10,\"PeriodSeconds\":60}]}}'", "oc describe PodMetrics openshift-kube-scheduler-ip-10-0-135-131.ec2.internal", "Name: openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Namespace: openshift-kube-scheduler Labels: <none> Annotations: <none> API Version: metrics.k8s.io/v1beta1 Containers: Name: wait-for-host-port Usage: Memory: 0 Name: scheduler Usage: Cpu: 8m Memory: 45440Ki Kind: PodMetrics Metadata: Creation Timestamp: 2019-05-23T18:47:56Z Self Link: /apis/metrics.k8s.io/v1beta1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Timestamp: 2019-05-23T18:47:56Z Window: 1m0s Events: <none>", "oc autoscale <object_type>/<name> \\ 1 --min <number> \\ 2 --max <number> \\ 3 --cpu-percent=<percent> 4", "oc autoscale deployment/hello-node --min=5 --max=7 --cpu-percent=75", "apiVersion: autoscaling/v2 1 kind: HorizontalPodAutoscaler metadata: name: cpu-autoscale 2 namespace: default spec: scaleTargetRef: apiVersion: apps/v1 3 kind: Deployment 4 name: example 5 minReplicas: 1 6 maxReplicas: 10 7 metrics: 8 - type: Resource resource: name: cpu 9 target: type: AverageValue 10 averageValue: 500m 11", "oc create -f <file-name>.yaml", "oc get hpa cpu-autoscale", "NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE cpu-autoscale Deployment/example 173m/500m 1 10 1 20m", "oc describe PodMetrics openshift-kube-scheduler-ip-10-0-129-223.compute.internal -n openshift-kube-scheduler", "Name: openshift-kube-scheduler-ip-10-0-129-223.compute.internal Namespace: openshift-kube-scheduler Labels: <none> Annotations: <none> API Version: metrics.k8s.io/v1beta1 Containers: Name: wait-for-host-port Usage: Cpu: 0 Memory: 0 Name: scheduler Usage: Cpu: 8m Memory: 45440Ki Kind: PodMetrics Metadata: Creation Timestamp: 2020-02-14T22:21:14Z Self Link: 
/apis/metrics.k8s.io/v1beta1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-ip-10-0-129-223.compute.internal Timestamp: 2020-02-14T22:21:14Z Window: 5m0s Events: <none>", "apiVersion: autoscaling/v2 1 kind: HorizontalPodAutoscaler metadata: name: hpa-resource-metrics-memory 2 namespace: default spec: scaleTargetRef: apiVersion: apps/v1 3 kind: Deployment 4 name: example 5 minReplicas: 1 6 maxReplicas: 10 7 metrics: 8 - type: Resource resource: name: memory 9 target: type: AverageValue 10 averageValue: 500Mi 11 behavior: 12 scaleDown: stabilizationWindowSeconds: 300 policies: - type: Pods value: 4 periodSeconds: 60 - type: Percent value: 10 periodSeconds: 60 selectPolicy: Max", "apiVersion: autoscaling/v2 1 kind: HorizontalPodAutoscaler metadata: name: memory-autoscale 2 namespace: default spec: scaleTargetRef: apiVersion: apps/v1 3 kind: Deployment 4 name: example 5 minReplicas: 1 6 maxReplicas: 10 7 metrics: 8 - type: Resource resource: name: memory 9 target: type: Utilization 10 averageUtilization: 50 11 behavior: 12 scaleUp: stabilizationWindowSeconds: 180 policies: - type: Pods value: 6 periodSeconds: 120 - type: Percent value: 10 periodSeconds: 120 selectPolicy: Max", "oc create -f <file-name>.yaml", "oc create -f hpa.yaml", "horizontalpodautoscaler.autoscaling/hpa-resource-metrics-memory created", "oc get hpa hpa-resource-metrics-memory", "NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE hpa-resource-metrics-memory Deployment/example 2441216/500Mi 1 10 1 20m", "oc describe hpa hpa-resource-metrics-memory", "Name: hpa-resource-metrics-memory Namespace: default Labels: <none> Annotations: <none> CreationTimestamp: Wed, 04 Mar 2020 16:31:37 +0530 Reference: Deployment/example Metrics: ( current / target ) resource memory on pods: 2441216 / 500Mi Min replicas: 1 Max replicas: 10 ReplicationController pods: 1 current / 1 desired Conditions: Type Status Reason Message ---- ------ ------ ------- AbleToScale True ReadyForNewScale recommended size matches current size ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from memory resource ScalingLimited False DesiredWithinRange the desired count is within the acceptable range Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal SuccessfulRescale 6m34s horizontal-pod-autoscaler New size: 1; reason: All metrics below target", "oc describe hpa cm-test", "Name: cm-test Namespace: prom Labels: <none> Annotations: <none> CreationTimestamp: Fri, 16 Jun 2017 18:09:22 +0000 Reference: ReplicationController/cm-test Metrics: ( current / target ) \"http_requests\" on pods: 66m / 500m Min replicas: 1 Max replicas: 4 ReplicationController pods: 1 current / 1 desired Conditions: 1 Type Status Reason Message ---- ------ ------ ------- AbleToScale True ReadyForNewScale the last scale time was sufficiently old as to warrant a new scale ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from pods metric http_request ScalingLimited False DesiredWithinRange the desired replica count is within the acceptable range Events:", "Conditions: Type Status Reason Message ---- ------ ------ ------- AbleToScale False FailedGetScale the HPA controller was unable to get the target's current scale: no matches for kind \"ReplicationController\" in group \"apps\" Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning FailedGetScale 6s (x3 over 36s) horizontal-pod-autoscaler no matches for kind \"ReplicationController\" in group 
\"apps\"", "Conditions: Type Status Reason Message ---- ------ ------ ------- AbleToScale True SucceededGetScale the HPA controller was able to get the target's current scale ScalingActive False FailedGetResourceMetric the HPA was unable to compute the replica count: failed to get cpu utilization: unable to get metrics for resource cpu: no metrics returned from resource metrics API", "Conditions: Type Status Reason Message ---- ------ ------ ------- AbleToScale True ReadyForNewScale the last scale time was sufficiently old as to warrant a new scale ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from pods metric http_request ScalingLimited False DesiredWithinRange the desired replica count is within the acceptable range", "oc describe PodMetrics openshift-kube-scheduler-ip-10-0-135-131.ec2.internal", "Name: openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Namespace: openshift-kube-scheduler Labels: <none> Annotations: <none> API Version: metrics.k8s.io/v1beta1 Containers: Name: wait-for-host-port Usage: Memory: 0 Name: scheduler Usage: Cpu: 8m Memory: 45440Ki Kind: PodMetrics Metadata: Creation Timestamp: 2019-05-23T18:47:56Z Self Link: /apis/metrics.k8s.io/v1beta1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Timestamp: 2019-05-23T18:47:56Z Window: 1m0s Events: <none>", "oc describe hpa <pod-name>", "oc describe hpa cm-test", "Name: cm-test Namespace: prom Labels: <none> Annotations: <none> CreationTimestamp: Fri, 16 Jun 2017 18:09:22 +0000 Reference: ReplicationController/cm-test Metrics: ( current / target ) \"http_requests\" on pods: 66m / 500m Min replicas: 1 Max replicas: 4 ReplicationController pods: 1 current / 1 desired Conditions: 1 Type Status Reason Message ---- ------ ------ ------- AbleToScale True ReadyForNewScale the last scale time was sufficiently old as to warrant a new scale ScalingActive True ValidMetricFound the HPA was able to successfully calculate a replica count from pods metric http_request ScalingLimited False DesiredWithinRange the desired replica count is within the acceptable range", "oc get all -n openshift-vertical-pod-autoscaler", "NAME READY STATUS RESTARTS AGE pod/vertical-pod-autoscaler-operator-85b4569c47-2gmhc 1/1 Running 0 3m13s pod/vpa-admission-plugin-default-67644fc87f-xq7k9 1/1 Running 0 2m56s pod/vpa-recommender-default-7c54764b59-8gckt 1/1 Running 0 2m56s pod/vpa-updater-default-7f6cc87858-47vw9 1/1 Running 0 2m56s NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/vpa-webhook ClusterIP 172.30.53.206 <none> 443/TCP 2m56s NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/vertical-pod-autoscaler-operator 1/1 1 1 3m13s deployment.apps/vpa-admission-plugin-default 1/1 1 1 2m56s deployment.apps/vpa-recommender-default 1/1 1 1 2m56s deployment.apps/vpa-updater-default 1/1 1 1 2m56s NAME DESIRED CURRENT READY AGE replicaset.apps/vertical-pod-autoscaler-operator-85b4569c47 1 1 1 3m13s replicaset.apps/vpa-admission-plugin-default-67644fc87f 1 1 1 2m56s replicaset.apps/vpa-recommender-default-7c54764b59 1 1 1 2m56s replicaset.apps/vpa-updater-default-7f6cc87858 1 1 1 2m56s", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES vertical-pod-autoscaler-operator-6c75fcc9cd-5pb6z 1/1 Running 0 7m59s 10.128.2.24 c416-tfsbj-master-1 <none> <none> vpa-admission-plugin-default-6cb78d6f8b-rpcrj 1/1 Running 0 5m37s 10.129.2.22 c416-tfsbj-master-1 <none> <none> vpa-recommender-default-66846bd94c-dsmpp 1/1 Running 0 5m37s 10.129.2.20 
c416-tfsbj-master-0 <none> <none> vpa-updater-default-db8b58df-2nkvf 1/1 Running 0 5m37s 10.129.2.21 c416-tfsbj-master-1 <none> <none>", "oc edit Subscription vertical-pod-autoscaler -n openshift-vertical-pod-autoscaler", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: labels: operators.coreos.com/vertical-pod-autoscaler.openshift-vertical-pod-autoscaler: \"\" name: vertical-pod-autoscaler spec: config: nodeSelector: node-role.kubernetes.io/<node_role>: \"\" 1", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: labels: operators.coreos.com/vertical-pod-autoscaler.openshift-vertical-pod-autoscaler: \"\" name: vertical-pod-autoscaler spec: config: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: 1 - key: \"node-role.kubernetes.io/infra\" operator: \"Exists\" effect: \"NoSchedule\"", "oc edit VerticalPodAutoscalerController default -n openshift-vertical-pod-autoscaler", "apiVersion: autoscaling.openshift.io/v1 kind: VerticalPodAutoscalerController metadata: name: default namespace: openshift-vertical-pod-autoscaler spec: deploymentOverrides: admission: container: resources: {} nodeSelector: node-role.kubernetes.io/<node_role>: \"\" 1 recommender: container: resources: {} nodeSelector: node-role.kubernetes.io/<node_role>: \"\" 2 updater: container: resources: {} nodeSelector: node-role.kubernetes.io/<node_role>: \"\" 3", "apiVersion: autoscaling.openshift.io/v1 kind: VerticalPodAutoscalerController metadata: name: default namespace: openshift-vertical-pod-autoscaler spec: deploymentOverrides: admission: container: resources: {} nodeSelector: node-role.kubernetes.io/worker: \"\" tolerations: 1 - key: \"my-example-node-taint-key\" operator: \"Exists\" effect: \"NoSchedule\" recommender: container: resources: {} nodeSelector: node-role.kubernetes.io/worker: \"\" tolerations: 2 - key: \"my-example-node-taint-key\" operator: \"Exists\" effect: \"NoSchedule\" updater: container: resources: {} nodeSelector: node-role.kubernetes.io/worker: \"\" tolerations: 3 - key: \"my-example-node-taint-key\" operator: \"Exists\" effect: \"NoSchedule\"", "oc get pods -n openshift-vertical-pod-autoscaler -o wide", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES vertical-pod-autoscaler-operator-6c75fcc9cd-5pb6z 1/1 Running 0 7m59s 10.128.2.24 c416-tfsbj-infra-eastus3-2bndt <none> <none> vpa-admission-plugin-default-6cb78d6f8b-rpcrj 1/1 Running 0 5m37s 10.129.2.22 c416-tfsbj-infra-eastus1-lrgj8 <none> <none> vpa-recommender-default-66846bd94c-dsmpp 1/1 Running 0 5m37s 10.129.2.20 c416-tfsbj-infra-eastus1-lrgj8 <none> <none> vpa-updater-default-db8b58df-2nkvf 1/1 Running 0 5m37s 10.129.2.21 c416-tfsbj-infra-eastus1-lrgj8 <none> <none>", "resources: limits: cpu: 1 memory: 500Mi requests: cpu: 500m memory: 100Mi", "resources: limits: cpu: 50m memory: 1250Mi requests: cpu: 25m memory: 262144k", "oc get vpa <vpa-name> --output yaml", "status: recommendation: containerRecommendations: - containerName: frontend lowerBound: cpu: 25m memory: 262144k target: cpu: 25m memory: 262144k uncappedTarget: cpu: 25m memory: 262144k upperBound: cpu: 262m memory: \"274357142\" - containerName: backend lowerBound: cpu: 12m memory: 131072k target: cpu: 12m memory: 131072k uncappedTarget: cpu: 12m memory: 131072k upperBound: cpu: 476m memory: \"498558823\"", "apiVersion: autoscaling.openshift.io/v1 kind: VerticalPodAutoscalerController metadata: creationTimestamp: \"2021-04-21T19:29:49Z\" generation: 2 name: default namespace: openshift-vertical-pod-autoscaler 
resourceVersion: \"142172\" uid: 180e17e9-03cc-427f-9955-3b4d7aeb2d59 spec: minReplicas: 3 1 podMinCPUMillicores: 25 podMinMemoryMb: 250 recommendationOnly: false safetyMarginFraction: 0.15", "apiVersion: autoscaling.k8s.io/v1 kind: VerticalPodAutoscaler metadata: name: vpa-recommender spec: targetRef: apiVersion: \"apps/v1\" kind: Deployment 1 name: frontend 2 updatePolicy: updateMode: \"Auto\" 3", "apiVersion: autoscaling.k8s.io/v1 kind: VerticalPodAutoscaler metadata: name: vpa-recommender spec: targetRef: apiVersion: \"apps/v1\" kind: Deployment 1 name: frontend 2 updatePolicy: updateMode: \"Initial\" 3", "apiVersion: autoscaling.k8s.io/v1 kind: VerticalPodAutoscaler metadata: name: vpa-recommender spec: targetRef: apiVersion: \"apps/v1\" kind: Deployment 1 name: frontend 2 updatePolicy: updateMode: \"Off\" 3", "oc get vpa <vpa-name> --output yaml", "apiVersion: autoscaling.k8s.io/v1 kind: VerticalPodAutoscaler metadata: name: vpa-recommender spec: targetRef: apiVersion: \"apps/v1\" kind: Deployment 1 name: frontend 2 updatePolicy: updateMode: \"Auto\" 3 resourcePolicy: 4 containerPolicies: - containerName: my-opt-sidecar mode: \"Off\"", "spec: containers: - name: frontend resources: limits: cpu: 1 memory: 500Mi requests: cpu: 500m memory: 100Mi - name: backend resources: limits: cpu: \"1\" memory: 500Mi requests: cpu: 500m memory: 100Mi", "spec: containers: name: frontend resources: limits: cpu: 50m memory: 1250Mi requests: cpu: 25m memory: 262144k name: backend resources: limits: cpu: \"1\" memory: 500Mi requests: cpu: 500m memory: 100Mi", "apiVersion: autoscaling.openshift.io/v1 kind: VerticalPodAutoscalerController metadata: name: default namespace: openshift-vertical-pod-autoscaler spec: deploymentOverrides: admission: 1 container: args: 2 - '--kube-api-qps=50.0' - '--kube-api-burst=100.0' resources: requests: 3 cpu: 40m memory: 150Mi limits: memory: 300Mi recommender: 4 container: args: - '--kube-api-qps=60.0' - '--kube-api-burst=120.0' - '--memory-saver=true' 5 resources: requests: cpu: 75m memory: 275Mi limits: memory: 550Mi updater: 6 container: args: - '--kube-api-qps=60.0' - '--kube-api-burst=120.0' resources: requests: cpu: 80m memory: 350M limits: memory: 700Mi minReplicas: 2 podMinCPUMillicores: 25 podMinMemoryMb: 250 recommendationOnly: false safetyMarginFraction: 0.15", "apiVersion: v1 kind: Pod metadata: name: vpa-updater-default-d65ffb9dc-hgw44 namespace: openshift-vertical-pod-autoscaler spec: containers: - args: - --logtostderr - --v=1 - --min-replicas=2 - --kube-api-qps=60.0 - --kube-api-burst=120.0 resources: requests: cpu: 80m memory: 350M", "apiVersion: v1 kind: Pod metadata: name: vpa-admission-plugin-default-756999448c-l7tsd namespace: openshift-vertical-pod-autoscaler spec: containers: - args: - --logtostderr - --v=1 - --tls-cert-file=/data/tls-certs/tls.crt - --tls-private-key=/data/tls-certs/tls.key - --client-ca-file=/data/tls-ca-certs/service-ca.crt - --webhook-timeout-seconds=10 - --kube-api-qps=50.0 - --kube-api-burst=100.0 resources: requests: cpu: 40m memory: 150Mi", "apiVersion: v1 kind: Pod metadata: name: vpa-recommender-default-74c979dbbc-znrd2 namespace: openshift-vertical-pod-autoscaler spec: containers: - args: - --logtostderr - --v=1 - --recommendation-margin-fraction=0.15 - --pod-recommendation-min-cpu-millicores=25 - --pod-recommendation-min-memory-mb=250 - --kube-api-qps=60.0 - --kube-api-burst=120.0 - --memory-saver=true resources: requests: cpu: 75m memory: 275Mi", "apiVersion: v1 1 kind: ServiceAccount metadata: name: 
alt-vpa-recommender-sa namespace: <namespace_name> --- apiVersion: rbac.authorization.k8s.io/v1 2 kind: ClusterRoleBinding metadata: name: system:example-metrics-reader roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:metrics-reader subjects: - kind: ServiceAccount name: alt-vpa-recommender-sa namespace: <namespace_name> --- apiVersion: rbac.authorization.k8s.io/v1 3 kind: ClusterRoleBinding metadata: name: system:example-vpa-actor roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:vpa-actor subjects: - kind: ServiceAccount name: alt-vpa-recommender-sa namespace: <namespace_name> --- apiVersion: rbac.authorization.k8s.io/v1 4 kind: ClusterRoleBinding metadata: name: system:example-vpa-target-reader-binding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: system:vpa-target-reader subjects: - kind: ServiceAccount name: alt-vpa-recommender-sa namespace: <namespace_name>", "apiVersion: apps/v1 kind: Deployment metadata: name: alt-vpa-recommender namespace: <namespace_name> spec: replicas: 1 selector: matchLabels: app: alt-vpa-recommender template: metadata: labels: app: alt-vpa-recommender spec: containers: 1 - name: recommender image: quay.io/example/alt-recommender:latest 2 imagePullPolicy: Always resources: limits: cpu: 200m memory: 1000Mi requests: cpu: 50m memory: 500Mi ports: - name: prometheus containerPort: 8942 securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL seccompProfile: type: RuntimeDefault serviceAccountName: alt-vpa-recommender-sa 3 securityContext: runAsNonRoot: true", "oc get pods", "NAME READY STATUS RESTARTS AGE frontend-845d5478d-558zf 1/1 Running 0 4m25s frontend-845d5478d-7z9gx 1/1 Running 0 4m25s frontend-845d5478d-b7l4j 1/1 Running 0 4m25s vpa-alt-recommender-55878867f9-6tp5v 1/1 Running 0 9s", "apiVersion: autoscaling.k8s.io/v1 kind: VerticalPodAutoscaler metadata: name: vpa-recommender namespace: <namespace_name> spec: recommenders: - name: alt-vpa-recommender 1 targetRef: apiVersion: \"apps/v1\" kind: Deployment 2 name: frontend", "apiVersion: autoscaling.k8s.io/v1 kind: VerticalPodAutoscaler metadata: name: vpa-recommender spec: targetRef: apiVersion: \"apps/v1\" kind: Deployment 1 name: frontend 2 updatePolicy: updateMode: \"Auto\" 3 resourcePolicy: 4 containerPolicies: - containerName: my-opt-sidecar mode: \"Off\" recommenders: 5 - name: my-recommender", "oc create -f <file-name>.yaml", "oc get vpa <vpa-name> --output yaml", "status: recommendation: containerRecommendations: - containerName: frontend lowerBound: 1 cpu: 25m memory: 262144k target: 2 cpu: 25m memory: 262144k uncappedTarget: 3 cpu: 25m memory: 262144k upperBound: 4 cpu: 262m memory: \"274357142\" - containerName: backend lowerBound: cpu: 12m memory: 131072k target: cpu: 12m memory: 131072k uncappedTarget: cpu: 12m memory: 131072k upperBound: cpu: 476m memory: \"498558823\"", "apiVersion: apiextensions.k8s.io/v1 kind: CustomResourceDefinition metadata: name: scalablepods.testing.openshift.io spec: group: testing.openshift.io versions: - name: v1 served: true storage: true schema: openAPIV3Schema: type: object properties: spec: type: object properties: replicas: type: integer minimum: 0 selector: type: string status: type: object properties: replicas: type: integer subresources: status: {} scale: specReplicasPath: .spec.replicas statusReplicasPath: .status.replicas labelSelectorPath: .spec.selector 1 scope: Namespaced names: plural: scalablepods singular: scalablepod kind: ScalablePod shortNames: - spod", 
"apiVersion: testing.openshift.io/v1 kind: ScalablePod metadata: name: scalable-cr namespace: default spec: selector: \"app=scalable-cr\" 1 replicas: 1", "apiVersion: autoscaling.k8s.io/v1 kind: VerticalPodAutoscaler metadata: name: scalable-cr namespace: default spec: targetRef: apiVersion: testing.openshift.io/v1 kind: ScalablePod name: scalable-cr updatePolicy: updateMode: \"Auto\"", "oc delete namespace openshift-vertical-pod-autoscaler", "oc delete crd verticalpodautoscalercheckpoints.autoscaling.k8s.io", "oc delete crd verticalpodautoscalercontrollers.autoscaling.openshift.io", "oc delete crd verticalpodautoscalers.autoscaling.k8s.io", "oc delete MutatingWebhookConfiguration vpa-webhook-config", "oc delete operator/vertical-pod-autoscaler.openshift-vertical-pod-autoscaler", "apiVersion: v1 kind: Secret metadata: name: test-secret namespace: my-namespace type: Opaque 1 data: 2 username: <username> 3 password: <password> stringData: 4 hostname: myapp.mydomain.com 5", "apiVersion: v1 kind: Secret metadata: name: test-secret type: Opaque 1 data: 2 username: <username> password: <password> stringData: 3 hostname: myapp.mydomain.com secret.properties: | property1=valueA property2=valueB", "apiVersion: v1 kind: ServiceAccount secrets: - name: test-secret", "apiVersion: v1 kind: Pod metadata: name: secret-example-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: secret-test-container image: busybox command: [ \"/bin/sh\", \"-c\", \"cat /etc/secret-volume/*\" ] volumeMounts: 1 - name: secret-volume mountPath: /etc/secret-volume 2 readOnly: true 3 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: secret-volume secret: secretName: test-secret 4 restartPolicy: Never", "apiVersion: v1 kind: Pod metadata: name: secret-example-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: secret-test-container image: busybox command: [ \"/bin/sh\", \"-c\", \"export\" ] env: - name: TEST_SECRET_USERNAME_ENV_VAR valueFrom: secretKeyRef: 1 name: test-secret key: username securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never", "apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: secret-example-bc spec: strategy: sourceStrategy: env: - name: TEST_SECRET_USERNAME_ENV_VAR valueFrom: secretKeyRef: 1 name: test-secret key: username from: kind: ImageStreamTag namespace: openshift name: 'cli:latest'", "apiVersion: v1 kind: Secret metadata: name: mysecret type: Opaque 1 data: username: <username> password: <password>", "oc create -f <filename>.yaml", "apiVersion: v1 kind: Secret metadata: name: secret-sa-sample annotations: kubernetes.io/service-account.name: \"sa-name\" 1 type: kubernetes.io/service-account-token 2", "oc create -f <filename>.yaml", "apiVersion: v1 kind: Secret metadata: name: secret-basic-auth type: kubernetes.io/basic-auth 1 data: stringData: 2 username: admin password: <password>", "oc create -f <filename>.yaml", "apiVersion: v1 kind: Secret metadata: name: secret-ssh-auth type: kubernetes.io/ssh-auth 1 data: ssh-privatekey: | 2 MIIEpQIBAAKCAQEAulqb/Y", "oc create -f <filename>.yaml", "apiVersion: v1 kind: Secret metadata: name: secret-docker-cfg namespace: my-project type: kubernetes.io/dockerconfig 1 data: .dockerconfig:bm5ubm5ubm5ubm5ubm5ubm5ubm5ubmdnZ2dnZ2dnZ2dnZ2dnZ2dnZ2cgYXV0aCBrZXlzCg== 2", "apiVersion: v1 kind: Secret metadata: name: secret-docker-json namespace: my-project type: 
kubernetes.io/dockerconfig 1 data: .dockerconfigjson:bm5ubm5ubm5ubm5ubm5ubm5ubm5ubmdnZ2dnZ2dnZ2dnZ2dnZ2dnZ2cgYXV0aCBrZXlzCg== 2", "oc create -f <filename>.yaml", "apiVersion: v1 kind: Secret metadata: name: example namespace: <namespace> type: Opaque 1 data: username: <base64 encoded username> password: <base64 encoded password> stringData: 2 hostname: myapp.mydomain.com", "oc create sa <service_account_name> -n <your_namespace>", "apiVersion: v1 kind: Secret metadata: name: <secret_name> 1 annotations: kubernetes.io/service-account.name: \"sa-name\" 2 type: kubernetes.io/service-account-token 3", "oc apply -f service-account-token-secret.yaml", "oc get secret <sa_token_secret> -o jsonpath='{.data.token}' | base64 --decode 1", "ayJhbGciOiJSUzI1NiIsImtpZCI6IklOb2dtck1qZ3hCSWpoNnh5YnZhSE9QMkk3YnRZMVZoclFfQTZfRFp1YlUifQ.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJkZWZhdWx0Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZWNyZXQubmFtZSI6ImJ1aWxkZXItdG9rZW4tdHZrbnIiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiYnVpbGRlciIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6IjNmZGU2MGZmLTA1NGYtNDkyZi04YzhjLTNlZjE0NDk3MmFmNyIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDpkZWZhdWx0OmJ1aWxkZXIifQ.OmqFTDuMHC_lYvvEUrjr1x453hlEEHYcxS9VKSzmRkP1SiVZWPNPkTWlfNRp6bIUZD3U6aN3N7dMSN0eI5hu36xPgpKTdvuckKLTCnelMx6cxOdAbrcw1mCmOClNscwjS1KO1kzMtYnnq8rXHiMJELsNlhnRyyIXRTtNBsy4t64T3283s3SLsancyx0gy0ujx-Ch3uKAKdZi5iT-I8jnnQ-ds5THDs2h65RJhgglQEmSxpHrLGZFmyHAQI-_SjvmHZPXEc482x3SkaQHNLqpmrpJorNqh1M8ZHKzlujhZgVooMvJmWPXTb2vnvi3DGn2XI-hZxl1yD2yGH1RBpYUHA", "curl -X GET <openshift_cluster_api> --header \"Authorization: Bearer <token>\" 1 2", "apiVersion: v1 kind: Service metadata: name: registry annotations: service.beta.openshift.io/serving-cert-secret-name: registry-cert 1", "kind: Service apiVersion: v1 metadata: name: my-service annotations: service.beta.openshift.io/serving-cert-secret-name: my-cert 1 spec: selector: app: MyApp ports: - protocol: TCP port: 80 targetPort: 9376", "oc create -f <file-name>.yaml", "oc get secrets", "NAME TYPE DATA AGE my-cert kubernetes.io/tls 2 9m", "oc describe secret my-cert", "Name: my-cert Namespace: openshift-console Labels: <none> Annotations: service.beta.openshift.io/expiry: 2023-03-08T23:22:40Z service.beta.openshift.io/originating-service-name: my-service service.beta.openshift.io/originating-service-uid: 640f0ec3-afc2-4380-bf31-a8c784846a11 service.beta.openshift.io/expiry: 2023-03-08T23:22:40Z Type: kubernetes.io/tls Data ==== tls.key: 1679 bytes tls.crt: 2595 bytes", "apiVersion: v1 kind: Pod metadata: name: my-service-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: mypod image: redis volumeMounts: - name: my-container mountPath: \"/etc/my-path\" securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: my-volume secret: secretName: my-cert items: - key: username path: my-group/my-username mode: 511", "secret/ssl-key references serviceUID 62ad25ca-d703-11e6-9d6f-0e9c0057b608, which does not match 77b6dd80-d716-11e6-9d6f-0e9c0057b60", "oc delete secret <secret_name>", "oc annotate service <service_name> service.beta.openshift.io/serving-cert-generation-error-", "oc annotate service <service_name> service.beta.openshift.io/serving-cert-generation-error-num-", "apiVersion: operator.openshift.io/v1 kind: ClusterCSIDriver metadata: name: secrets-store.csi.k8s.io spec: managementState: Managed", 
"apiVersion: v1 kind: ServiceAccount metadata: name: csi-secrets-store-provider-aws namespace: openshift-cluster-csi-drivers --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: csi-secrets-store-provider-aws-cluster-role rules: - apiGroups: [\"\"] resources: [\"serviceaccounts/token\"] verbs: [\"create\"] - apiGroups: [\"\"] resources: [\"serviceaccounts\"] verbs: [\"get\"] - apiGroups: [\"\"] resources: [\"pods\"] verbs: [\"get\"] - apiGroups: [\"\"] resources: [\"nodes\"] verbs: [\"get\"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: csi-secrets-store-provider-aws-cluster-rolebinding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: csi-secrets-store-provider-aws-cluster-role subjects: - kind: ServiceAccount name: csi-secrets-store-provider-aws namespace: openshift-cluster-csi-drivers --- apiVersion: apps/v1 kind: DaemonSet metadata: namespace: openshift-cluster-csi-drivers name: csi-secrets-store-provider-aws labels: app: csi-secrets-store-provider-aws spec: updateStrategy: type: RollingUpdate selector: matchLabels: app: csi-secrets-store-provider-aws template: metadata: labels: app: csi-secrets-store-provider-aws spec: serviceAccountName: csi-secrets-store-provider-aws hostNetwork: false containers: - name: provider-aws-installer image: public.ecr.aws/aws-secrets-manager/secrets-store-csi-driver-provider-aws:1.0.r2-50-g5b4aca1-2023.06.09.21.19 imagePullPolicy: Always args: - --provider-volume=/etc/kubernetes/secrets-store-csi-providers resources: requests: cpu: 50m memory: 100Mi limits: cpu: 50m memory: 100Mi securityContext: privileged: true volumeMounts: - mountPath: \"/etc/kubernetes/secrets-store-csi-providers\" name: providervol - name: mountpoint-dir mountPath: /var/lib/kubelet/pods mountPropagation: HostToContainer tolerations: - operator: Exists volumes: - name: providervol hostPath: path: \"/etc/kubernetes/secrets-store-csi-providers\" - name: mountpoint-dir hostPath: path: /var/lib/kubelet/pods type: DirectoryOrCreate nodeSelector: kubernetes.io/os: linux", "oc adm policy add-scc-to-user privileged -z csi-secrets-store-provider-aws -n openshift-cluster-csi-drivers", "oc apply -f aws-provider.yaml", "mkdir credentialsrequest-dir-aws", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: aws-provider-test namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - action: - \"secretsmanager:GetSecretValue\" - \"secretsmanager:DescribeSecret\" effect: Allow resource: \"arn:*:secretsmanager:*:*:secret:testSecret-??????\" secretRef: name: aws-creds namespace: my-namespace serviceAccountNames: - aws-provider", "oc get --raw=/.well-known/openid-configuration | jq -r '.issuer'", "https://<oidc_provider_name>", "ccoctl aws create-iam-roles --name my-role --region=<aws_region> --credentials-requests-dir=credentialsrequest-dir-aws --identity-provider-arn arn:aws:iam::<aws_account>:oidc-provider/<oidc_provider_name> --output-dir=credrequests-ccoctl-output", "2023/05/15 18:10:34 Role arn:aws:iam::<aws_account_id>:role/my-role-my-namespace-aws-creds created 2023/05/15 18:10:34 Saved credentials configuration to: credrequests-ccoctl-output/manifests/my-namespace-aws-creds-credentials.yaml 2023/05/15 18:10:35 Updated Role policy for Role my-role-my-namespace-aws-creds", "oc annotate -n my-namespace sa/aws-provider eks.amazonaws.com/role-arn=\"<aws_role_arn>\"", "apiVersion: 
secrets-store.csi.x-k8s.io/v1 kind: SecretProviderClass metadata: name: my-aws-provider 1 namespace: my-namespace 2 spec: provider: aws 3 parameters: 4 objects: | - objectName: \"testSecret\" objectType: \"secretsmanager\"", "oc create -f secret-provider-class-aws.yaml", "apiVersion: apps/v1 kind: Deployment metadata: name: my-aws-deployment 1 namespace: my-namespace 2 spec: replicas: 1 selector: matchLabels: app: my-storage template: metadata: labels: app: my-storage spec: serviceAccountName: aws-provider containers: - name: busybox image: k8s.gcr.io/e2e-test-images/busybox:1.29 command: - \"/bin/sleep\" - \"10000\" volumeMounts: - name: secrets-store-inline mountPath: \"/mnt/secrets-store\" readOnly: true volumes: - name: secrets-store-inline csi: driver: secrets-store.csi.k8s.io readOnly: true volumeAttributes: secretProviderClass: \"my-aws-provider\" 3", "oc create -f deployment.yaml", "oc exec my-aws-deployment-<hash> -n my-namespace -- ls /mnt/secrets-store/", "testSecret", "oc exec my-aws-deployment-<hash> -n my-namespace -- cat /mnt/secrets-store/testSecret", "<secret_value>", "apiVersion: v1 kind: ServiceAccount metadata: name: csi-secrets-store-provider-aws namespace: openshift-cluster-csi-drivers --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: csi-secrets-store-provider-aws-cluster-role rules: - apiGroups: [\"\"] resources: [\"serviceaccounts/token\"] verbs: [\"create\"] - apiGroups: [\"\"] resources: [\"serviceaccounts\"] verbs: [\"get\"] - apiGroups: [\"\"] resources: [\"pods\"] verbs: [\"get\"] - apiGroups: [\"\"] resources: [\"nodes\"] verbs: [\"get\"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: csi-secrets-store-provider-aws-cluster-rolebinding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: csi-secrets-store-provider-aws-cluster-role subjects: - kind: ServiceAccount name: csi-secrets-store-provider-aws namespace: openshift-cluster-csi-drivers --- apiVersion: apps/v1 kind: DaemonSet metadata: namespace: openshift-cluster-csi-drivers name: csi-secrets-store-provider-aws labels: app: csi-secrets-store-provider-aws spec: updateStrategy: type: RollingUpdate selector: matchLabels: app: csi-secrets-store-provider-aws template: metadata: labels: app: csi-secrets-store-provider-aws spec: serviceAccountName: csi-secrets-store-provider-aws hostNetwork: false containers: - name: provider-aws-installer image: public.ecr.aws/aws-secrets-manager/secrets-store-csi-driver-provider-aws:1.0.r2-50-g5b4aca1-2023.06.09.21.19 imagePullPolicy: Always args: - --provider-volume=/etc/kubernetes/secrets-store-csi-providers resources: requests: cpu: 50m memory: 100Mi limits: cpu: 50m memory: 100Mi securityContext: privileged: true volumeMounts: - mountPath: \"/etc/kubernetes/secrets-store-csi-providers\" name: providervol - name: mountpoint-dir mountPath: /var/lib/kubelet/pods mountPropagation: HostToContainer tolerations: - operator: Exists volumes: - name: providervol hostPath: path: \"/etc/kubernetes/secrets-store-csi-providers\" - name: mountpoint-dir hostPath: path: /var/lib/kubelet/pods type: DirectoryOrCreate nodeSelector: kubernetes.io/os: linux", "oc adm policy add-scc-to-user privileged -z csi-secrets-store-provider-aws -n openshift-cluster-csi-drivers", "oc apply -f aws-provider.yaml", "mkdir credentialsrequest-dir-aws", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: aws-provider-test namespace: openshift-cloud-credential-operator spec: providerSpec: 
apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - action: - \"ssm:GetParameter\" - \"ssm:GetParameters\" effect: Allow resource: \"arn:*:ssm:*:*:parameter/testParameter*\" secretRef: name: aws-creds namespace: my-namespace serviceAccountNames: - aws-provider", "oc get --raw=/.well-known/openid-configuration | jq -r '.issuer'", "https://<oidc_provider_name>", "ccoctl aws create-iam-roles --name my-role --region=<aws_region> --credentials-requests-dir=credentialsrequest-dir-aws --identity-provider-arn arn:aws:iam::<aws_account>:oidc-provider/<oidc_provider_name> --output-dir=credrequests-ccoctl-output", "2023/05/15 18:10:34 Role arn:aws:iam::<aws_account_id>:role/my-role-my-namespace-aws-creds created 2023/05/15 18:10:34 Saved credentials configuration to: credrequests-ccoctl-output/manifests/my-namespace-aws-creds-credentials.yaml 2023/05/15 18:10:35 Updated Role policy for Role my-role-my-namespace-aws-creds", "oc annotate -n my-namespace sa/aws-provider eks.amazonaws.com/role-arn=\"<aws_role_arn>\"", "apiVersion: secrets-store.csi.x-k8s.io/v1 kind: SecretProviderClass metadata: name: my-aws-provider 1 namespace: my-namespace 2 spec: provider: aws 3 parameters: 4 objects: | - objectName: \"testParameter\" objectType: \"ssmparameter\"", "oc create -f secret-provider-class-aws.yaml", "apiVersion: apps/v1 kind: Deployment metadata: name: my-aws-deployment 1 namespace: my-namespace 2 spec: replicas: 1 selector: matchLabels: app: my-storage template: metadata: labels: app: my-storage spec: serviceAccountName: aws-provider containers: - name: busybox image: k8s.gcr.io/e2e-test-images/busybox:1.29 command: - \"/bin/sleep\" - \"10000\" volumeMounts: - name: secrets-store-inline mountPath: \"/mnt/secrets-store\" readOnly: true volumes: - name: secrets-store-inline csi: driver: secrets-store.csi.k8s.io readOnly: true volumeAttributes: secretProviderClass: \"my-aws-provider\" 3", "oc create -f deployment.yaml", "oc exec my-aws-deployment-<hash> -n my-namespace -- ls /mnt/secrets-store/", "testParameter", "oc exec my-aws-deployment-<hash> -n my-namespace -- cat /mnt/secrets-store/testSecret", "<secret_value>", "apiVersion: v1 kind: ServiceAccount metadata: name: csi-secrets-store-provider-azure namespace: openshift-cluster-csi-drivers --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: csi-secrets-store-provider-azure-cluster-role rules: - apiGroups: [\"\"] resources: [\"serviceaccounts/token\"] verbs: [\"create\"] - apiGroups: [\"\"] resources: [\"serviceaccounts\"] verbs: [\"get\"] - apiGroups: [\"\"] resources: [\"pods\"] verbs: [\"get\"] - apiGroups: [\"\"] resources: [\"nodes\"] verbs: [\"get\"] --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: csi-secrets-store-provider-azure-cluster-rolebinding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: csi-secrets-store-provider-azure-cluster-role subjects: - kind: ServiceAccount name: csi-secrets-store-provider-azure namespace: openshift-cluster-csi-drivers --- apiVersion: apps/v1 kind: DaemonSet metadata: namespace: openshift-cluster-csi-drivers name: csi-secrets-store-provider-azure labels: app: csi-secrets-store-provider-azure spec: updateStrategy: type: RollingUpdate selector: matchLabels: app: csi-secrets-store-provider-azure template: metadata: labels: app: csi-secrets-store-provider-azure spec: serviceAccountName: csi-secrets-store-provider-azure hostNetwork: true containers: - name: provider-azure-installer image: 
mcr.microsoft.com/oss/azure/secrets-store/provider-azure:v1.4.1 imagePullPolicy: IfNotPresent args: - --endpoint=unix:///provider/azure.sock - --construct-pem-chain=true - --healthz-port=8989 - --healthz-path=/healthz - --healthz-timeout=5s livenessProbe: httpGet: path: /healthz port: 8989 failureThreshold: 3 initialDelaySeconds: 5 timeoutSeconds: 10 periodSeconds: 30 resources: requests: cpu: 50m memory: 100Mi limits: cpu: 50m memory: 100Mi securityContext: allowPrivilegeEscalation: false readOnlyRootFilesystem: true runAsUser: 0 capabilities: drop: - ALL volumeMounts: - mountPath: \"/provider\" name: providervol affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: type operator: NotIn values: - virtual-kubelet volumes: - name: providervol hostPath: path: \"/var/run/secrets-store-csi-providers\" tolerations: - operator: Exists nodeSelector: kubernetes.io/os: linux", "oc adm policy add-scc-to-user privileged -z csi-secrets-store-provider-azure -n openshift-cluster-csi-drivers", "oc apply -f azure-provider.yaml", "SERVICE_PRINCIPAL_CLIENT_SECRET=\"USD(az ad sp create-for-rbac --name https://USDKEYVAULT_NAME --query 'password' -otsv)\"", "SERVICE_PRINCIPAL_CLIENT_ID=\"USD(az ad sp list --display-name https://USDKEYVAULT_NAME --query '[0].appId' -otsv)\"", "oc create secret generic secrets-store-creds -n my-namespace --from-literal clientid=USD{SERVICE_PRINCIPAL_CLIENT_ID} --from-literal clientsecret=USD{SERVICE_PRINCIPAL_CLIENT_SECRET}", "oc -n my-namespace label secret secrets-store-creds secrets-store.csi.k8s.io/used=true", "apiVersion: secrets-store.csi.x-k8s.io/v1 kind: SecretProviderClass metadata: name: my-azure-provider 1 namespace: my-namespace 2 spec: provider: azure 3 parameters: 4 usePodIdentity: \"false\" useVMManagedIdentity: \"false\" userAssignedIdentityID: \"\" keyvaultName: \"kvname\" objects: | array: - | objectName: secret1 objectType: secret tenantId: \"tid\"", "oc create -f secret-provider-class-azure.yaml", "apiVersion: apps/v1 kind: Deployment metadata: name: my-azure-deployment 1 namespace: my-namespace 2 spec: replicas: 1 selector: matchLabels: app: my-storage template: metadata: labels: app: my-storage spec: containers: - name: busybox image: k8s.gcr.io/e2e-test-images/busybox:1.29 command: - \"/bin/sleep\" - \"10000\" volumeMounts: - name: secrets-store-inline mountPath: \"/mnt/secrets-store\" readOnly: true volumes: - name: secrets-store-inline csi: driver: secrets-store.csi.k8s.io readOnly: true volumeAttributes: secretProviderClass: \"my-azure-provider\" 3 nodePublishSecretRef: name: secrets-store-creds 4", "oc create -f deployment.yaml", "oc exec my-azure-deployment-<hash> -n my-namespace -- ls /mnt/secrets-store/", "secret1", "oc exec my-azure-deployment-<hash> -n my-namespace -- cat /mnt/secrets-store/secret1", "my-secret-value", "apiVersion: v1 kind: ServiceAccount metadata: name: csi-secrets-store-provider-gcp namespace: openshift-cluster-csi-drivers --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: csi-secrets-store-provider-gcp-rolebinding roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: csi-secrets-store-provider-gcp-role subjects: - kind: ServiceAccount name: csi-secrets-store-provider-gcp namespace: openshift-cluster-csi-drivers --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: csi-secrets-store-provider-gcp-role rules: - apiGroups: - \"\" resources: - serviceaccounts/token verbs: - create - apiGroups: - 
\"\" resources: - serviceaccounts verbs: - get --- apiVersion: apps/v1 kind: DaemonSet metadata: name: csi-secrets-store-provider-gcp namespace: openshift-cluster-csi-drivers labels: app: csi-secrets-store-provider-gcp spec: updateStrategy: type: RollingUpdate selector: matchLabels: app: csi-secrets-store-provider-gcp template: metadata: labels: app: csi-secrets-store-provider-gcp spec: serviceAccountName: csi-secrets-store-provider-gcp initContainers: - name: chown-provider-mount image: busybox command: - chown - \"1000:1000\" - /etc/kubernetes/secrets-store-csi-providers volumeMounts: - mountPath: \"/etc/kubernetes/secrets-store-csi-providers\" name: providervol securityContext: privileged: true hostNetwork: false hostPID: false hostIPC: false containers: - name: provider image: us-docker.pkg.dev/secretmanager-csi/secrets-store-csi-driver-provider-gcp/plugin@sha256:a493a78bbb4ebce5f5de15acdccc6f4d19486eae9aa4fa529bb60ac112dd6650 securityContext: privileged: true imagePullPolicy: IfNotPresent resources: requests: cpu: 50m memory: 100Mi limits: cpu: 50m memory: 100Mi env: - name: TARGET_DIR value: \"/etc/kubernetes/secrets-store-csi-providers\" volumeMounts: - mountPath: \"/etc/kubernetes/secrets-store-csi-providers\" name: providervol mountPropagation: None readOnly: false livenessProbe: failureThreshold: 3 httpGet: path: /live port: 8095 initialDelaySeconds: 5 timeoutSeconds: 10 periodSeconds: 30 volumes: - name: providervol hostPath: path: /etc/kubernetes/secrets-store-csi-providers tolerations: - key: kubernetes.io/arch operator: Equal value: amd64 effect: NoSchedule nodeSelector: kubernetes.io/os: linux", "oc adm policy add-scc-to-user privileged -z csi-secrets-store-provider-gcp -n openshift-cluster-csi-drivers", "oc apply -f gcp-provider.yaml", "oc new-project my-namespace", "oc label ns my-namespace security.openshift.io/scc.podSecurityLabelSync=false pod-security.kubernetes.io/enforce=privileged pod-security.kubernetes.io/audit=privileged pod-security.kubernetes.io/warn=privileged --overwrite", "oc create serviceaccount my-service-account --namespace=my-namespace", "oc create secret generic secrets-store-creds -n my-namespace --from-file=key.json 1", "oc -n my-namespace label secret secrets-store-creds secrets-store.csi.k8s.io/used=true", "apiVersion: secrets-store.csi.x-k8s.io/v1 kind: SecretProviderClass metadata: name: my-gcp-provider 1 namespace: my-namespace 2 spec: provider: gcp 3 parameters: 4 secrets: | - resourceName: \"projects/my-project/secrets/testsecret1/versions/1\" path: \"testsecret1.txt\"", "oc create -f secret-provider-class-gcp.yaml", "apiVersion: apps/v1 kind: Deployment metadata: name: my-gcp-deployment 1 namespace: my-namespace 2 spec: replicas: 1 selector: matchLabels: app: my-storage template: metadata: labels: app: my-storage spec: serviceAccountName: my-service-account 3 containers: - name: busybox image: k8s.gcr.io/e2e-test-images/busybox:1.29 command: - \"/bin/sleep\" - \"10000\" volumeMounts: - name: secrets-store-inline mountPath: \"/mnt/secrets-store\" readOnly: true volumes: - name: secrets-store-inline csi: driver: secrets-store.csi.k8s.io readOnly: true volumeAttributes: secretProviderClass: \"my-gcp-provider\" 4 nodePublishSecretRef: name: secrets-store-creds 5", "oc create -f deployment.yaml", "oc exec my-gcp-deployment-<hash> -n my-namespace -- ls /mnt/secrets-store/", "testsecret1", "oc exec my-gcp-deployment-<hash> -n my-namespace -- cat /mnt/secrets-store/testsecret1", "<secret_value>", "helm repo add hashicorp 
https://helm.releases.hashicorp.com", "helm repo update", "oc new-project vault", "oc label ns vault security.openshift.io/scc.podSecurityLabelSync=false pod-security.kubernetes.io/enforce=privileged pod-security.kubernetes.io/audit=privileged pod-security.kubernetes.io/warn=privileged --overwrite", "oc adm policy add-scc-to-user privileged -z vault -n vault", "oc adm policy add-scc-to-user privileged -z vault-csi-provider -n vault", "helm install vault hashicorp/vault --namespace=vault --set \"server.dev.enabled=true\" --set \"injector.enabled=false\" --set \"csi.enabled=true\" --set \"global.openshift=true\" --set \"injector.agentImage.repository=docker.io/hashicorp/vault\" --set \"server.image.repository=docker.io/hashicorp/vault\" --set \"csi.image.repository=docker.io/hashicorp/vault-csi-provider\" --set \"csi.agent.image.repository=docker.io/hashicorp/vault\" --set \"csi.daemonSet.providersDir=/var/run/secrets-store-csi-providers\"", "oc patch daemonset -n vault vault-csi-provider --type='json' -p='[{\"op\": \"add\", \"path\": \"/spec/template/spec/containers/0/securityContext\", \"value\": {\"privileged\": true} }]'", "oc get pods -n vault", "NAME READY STATUS RESTARTS AGE vault-0 1/1 Running 0 24m vault-csi-provider-87rgw 1/2 Running 0 5s vault-csi-provider-bd6hp 1/2 Running 0 4s vault-csi-provider-smlv7 1/2 Running 0 5s", "oc exec vault-0 --namespace=vault -- vault kv put secret/example1 testSecret1=my-secret-value", "oc exec vault-0 --namespace=vault -- vault kv get secret/example1", "= Secret Path = secret/data/example1 ======= Metadata ======= Key Value --- ----- created_time 2024-04-05T07:05:16.713911211Z custom_metadata <nil> deletion_time n/a destroyed false version 1 === Data === Key Value --- ----- testSecret1 my-secret-value", "oc exec vault-0 --namespace=vault -- vault auth enable kubernetes", "Success! Enabled kubernetes auth method at: kubernetes/", "TOKEN_REVIEWER_JWT=\"USD(oc exec vault-0 --namespace=vault -- cat /var/run/secrets/kubernetes.io/serviceaccount/token)\"", "KUBERNETES_SERVICE_IP=\"USD(oc get svc kubernetes --namespace=default -o go-template=\"{{ .spec.clusterIP }}\")\"", "oc exec -i vault-0 --namespace=vault -- vault write auth/kubernetes/config issuer=\"https://kubernetes.default.svc.cluster.local\" token_reviewer_jwt=\"USD{TOKEN_REVIEWER_JWT}\" kubernetes_host=\"https://USD{KUBERNETES_SERVICE_IP}:443\" kubernetes_ca_cert=@/var/run/secrets/kubernetes.io/serviceaccount/ca.crt", "Success! Data written to: auth/kubernetes/config", "oc exec -i vault-0 --namespace=vault -- vault policy write csi -<<EOF path \"secret/data/*\" { capabilities = [\"read\"] } EOF", "Success! Uploaded policy: csi", "oc exec -i vault-0 --namespace=vault -- vault write auth/kubernetes/role/csi bound_service_account_names=default bound_service_account_namespaces=default,test-ns,negative-test-ns,my-namespace policies=csi ttl=20m", "Success! 
Data written to: auth/kubernetes/role/csi", "oc get pods -n vault", "NAME READY STATUS RESTARTS AGE vault-0 1/1 Running 0 43m vault-csi-provider-87rgw 2/2 Running 0 19m vault-csi-provider-bd6hp 2/2 Running 0 19m vault-csi-provider-smlv7 2/2 Running 0 19m", "oc get pods -n openshift-cluster-csi-drivers | grep -E \"secrets\"", "secrets-store-csi-driver-node-46d2g 3/3 Running 0 45m secrets-store-csi-driver-node-d2jjn 3/3 Running 0 45m secrets-store-csi-driver-node-drmt4 3/3 Running 0 45m secrets-store-csi-driver-node-j2wlt 3/3 Running 0 45m secrets-store-csi-driver-node-v9xv4 3/3 Running 0 45m secrets-store-csi-driver-node-vlz28 3/3 Running 0 45m secrets-store-csi-driver-operator-84bd699478-fpxrw 1/1 Running 0 47m", "apiVersion: secrets-store.csi.x-k8s.io/v1 kind: SecretProviderClass metadata: name: my-vault-provider 1 namespace: my-namespace 2 spec: provider: vault 3 parameters: 4 roleName: \"csi\" vaultAddress: \"http://vault.vault:8200\" objects: | - secretPath: \"secret/data/example1\" objectName: \"testSecret1\" secretKey: \"testSecret1", "oc create -f secret-provider-class-vault.yaml", "apiVersion: apps/v1 kind: Deployment metadata: name: busybox-deployment 1 namespace: my-namespace 2 labels: app: busybox spec: replicas: 1 selector: matchLabels: app: busybox template: metadata: labels: app: busybox spec: terminationGracePeriodSeconds: 0 containers: - image: registry.k8s.io/e2e-test-images/busybox:1.29-4 name: busybox imagePullPolicy: IfNotPresent command: - \"/bin/sleep\" - \"10000\" volumeMounts: - name: secrets-store-inline mountPath: \"/mnt/secrets-store\" readOnly: true volumes: - name: secrets-store-inline csi: driver: secrets-store.csi.k8s.io readOnly: true volumeAttributes: secretProviderClass: \"my-vault-provider\" 3", "oc create -f deployment.yaml", "oc exec busybox-deployment-<hash> -n my-namespace -- ls /mnt/secrets-store/", "testSecret1", "oc exec busybox-deployment-<hash> -n my-namespace -- cat /mnt/secrets-store/testSecret1", "my-secret-value", "oc edit secretproviderclass my-azure-provider 1", "apiVersion: secrets-store.csi.x-k8s.io/v1 kind: SecretProviderClass metadata: name: my-azure-provider namespace: my-namespace spec: provider: azure secretObjects: 1 - secretName: tlssecret 2 type: kubernetes.io/tls 3 labels: environment: \"test\" data: - objectName: tlskey 4 key: tls.key 5 - objectName: tlscrt key: tls.crt parameters: usePodIdentity: \"false\" keyvaultName: \"kvname\" objects: | array: - | objectName: tlskey objectType: secret - | objectName: tlscrt objectType: secret tenantId: \"tid\"", "oc get secretproviderclasspodstatus <secret_provider_class_pod_status_name> -o yaml 1", "status: mounted: true objects: - id: secret/tlscrt version: f352293b97da4fa18d96a9528534cb33 - id: secret/tlskey version: 02534bc3d5df481cb138f8b2a13951ef podName: busybox-<hash> secretProviderClassName: my-azure-provider targetPath: /var/lib/kubelet/pods/f0d49c1e-c87a-4beb-888f-37798456a3e7/volumes/kubernetes.io~csi/secrets-store-inline/mount", "oc create serviceaccount <service_account_name>", "oc patch serviceaccount <service_account_name> -p '{\"metadata\": {\"annotations\": {\"cloud.google.com/workload-identity-provider\": \"projects/<project_number>/locations/global/workloadIdentityPools/<identity_pool>/providers/<identity_provider>\"}}}'", "oc patch serviceaccount <service_account_name> -p '{\"metadata\": {\"annotations\": {\"cloud.google.com/service-account-email\": \"<service_account_email>\"}}}'", "oc patch serviceaccount <service_account_name> -p '{\"metadata\": {\"annotations\": 
{\"cloud.google.com/injection-mode\": \"direct\"}}}'", "gcloud projects add-iam-policy-binding <project_id> --member \"<service_account_email>\" --role \"projects/<project_id>/roles/<role_for_workload_permissions>\"", "oc get serviceaccount <service_account_name>", "apiVersion: v1 kind: ServiceAccount metadata: name: app-x namespace: service-a annotations: cloud.google.com/workload-identity-provider: \"projects/<project_number>/locations/global/workloadIdentityPools/<identity_pool>/providers/<identity_provider>\" 1 cloud.google.com/service-account-email: \"[email protected]\" cloud.google.com/audience: \"sts.googleapis.com\" 2 cloud.google.com/token-expiration: \"86400\" 3 cloud.google.com/gcloud-run-as-user: \"1000\" cloud.google.com/injection-mode: \"direct\" 4", "apiVersion: apps/v1 kind: Deployment metadata: name: ubi9 spec: replicas: 1 selector: matchLabels: app: ubi9 template: metadata: labels: app: ubi9 spec: serviceAccountName: \"<service_account_name>\" 1 containers: - name: ubi image: 'registry.access.redhat.com/ubi9/ubi-micro:latest' command: - /bin/sh - '-c' - | sleep infinity", "oc apply -f deployment.yaml", "oc get pods -o json | jq -r '.items[0].spec.containers[0].env[] | select(.name==\"GOOGLE_APPLICATION_CREDENTIALS\")'", "{ \"name\": \"GOOGLE_APPLICATION_CREDENTIALS\", \"value\": \"/var/run/secrets/workload-identity/federation.json\" }", "apiVersion: v1 kind: Pod metadata: name: app-x-pod namespace: service-a annotations: cloud.google.com/skip-containers: \"init-first,sidecar\" cloud.google.com/external-credentials-json: |- 1 { \"type\": \"external_account\", \"audience\": \"//iam.googleapis.com/projects/<project_number>/locations/global/workloadIdentityPools/on-prem-kubernetes/providers/<identity_provider>\", \"subject_token_type\": \"urn:ietf:params:oauth:token-type:jwt\", \"token_url\": \"https://sts.googleapis.com/v1/token\", \"service_account_impersonation_url\": \"https://iamcredentials.googleapis.com/v1/projects/-/serviceAccounts/[email protected]:generateAccessToken\", \"credential_source\": { \"file\": \"/var/run/secrets/sts.googleapis.com/serviceaccount/token\", \"format\": { \"type\": \"text\" } } } spec: serviceAccountName: app-x initContainers: - name: init-first image: container-image:version containers: - name: sidecar image: container-image:version - name: container-name image: container-image:version env: 2 - name: GOOGLE_APPLICATION_CREDENTIALS value: /var/run/secrets/gcloud/config/federation.json - name: CLOUDSDK_COMPUTE_REGION value: asia-northeast1 volumeMounts: - name: gcp-iam-token readOnly: true mountPath: /var/run/secrets/sts.googleapis.com/serviceaccount - mountPath: /var/run/secrets/gcloud/config name: external-credential-config readOnly: true volumes: - name: gcp-iam-token projected: sources: - serviceAccountToken: audience: sts.googleapis.com expirationSeconds: 86400 path: token - downwardAPI: defaultMode: 288 items: - fieldRef: apiVersion: v1 fieldPath: metadata.annotations['cloud.google.com/external-credentials-json'] path: federation.json name: external-credential-config", "kind: ConfigMap apiVersion: v1 metadata: creationTimestamp: 2016-02-18T19:14:38Z name: example-config namespace: my-namespace data: 1 example.property.1: hello example.property.2: world example.property.file: |- property.1=value-1 property.2=value-2 property.3=value-3 binaryData: bar: L3Jvb3QvMTAw 2", "oc create configmap <configmap_name> [options]", "oc create configmap game-config --from-file=example-files/", "oc describe configmaps game-config", "Name: game-config 
Namespace: default Labels: <none> Annotations: <none> Data game.properties: 158 bytes ui.properties: 83 bytes", "cat example-files/game.properties", "enemies=aliens lives=3 enemies.cheat=true enemies.cheat.level=noGoodRotten secret.code.passphrase=UUDDLRLRBABAS secret.code.allowed=true secret.code.lives=30", "cat example-files/ui.properties", "color.good=purple color.bad=yellow allow.textmode=true how.nice.to.look=fairlyNice", "oc create configmap game-config --from-file=example-files/", "oc get configmaps game-config -o yaml", "apiVersion: v1 data: game.properties: |- enemies=aliens lives=3 enemies.cheat=true enemies.cheat.level=noGoodRotten secret.code.passphrase=UUDDLRLRBABAS secret.code.allowed=true secret.code.lives=30 ui.properties: | color.good=purple color.bad=yellow allow.textmode=true how.nice.to.look=fairlyNice kind: ConfigMap metadata: creationTimestamp: 2016-02-18T18:34:05Z name: game-config namespace: default resourceVersion: \"407\" selflink: /api/v1/namespaces/default/configmaps/game-config uid: 30944725-d66e-11e5-8cd0-68f728db1985", "oc create configmap game-config-3 --from-file=game-special-key=example-files/game.properties", "cat example-files/game.properties", "enemies=aliens lives=3 enemies.cheat=true enemies.cheat.level=noGoodRotten secret.code.passphrase=UUDDLRLRBABAS secret.code.allowed=true secret.code.lives=30", "cat example-files/ui.properties", "color.good=purple color.bad=yellow allow.textmode=true how.nice.to.look=fairlyNice", "oc create configmap game-config-2 --from-file=example-files/game.properties --from-file=example-files/ui.properties", "oc create configmap game-config-3 --from-file=game-special-key=example-files/game.properties", "oc get configmaps game-config-2 -o yaml", "apiVersion: v1 data: game.properties: |- enemies=aliens lives=3 enemies.cheat=true enemies.cheat.level=noGoodRotten secret.code.passphrase=UUDDLRLRBABAS secret.code.allowed=true secret.code.lives=30 ui.properties: | color.good=purple color.bad=yellow allow.textmode=true how.nice.to.look=fairlyNice kind: ConfigMap metadata: creationTimestamp: 2016-02-18T18:52:05Z name: game-config-2 namespace: default resourceVersion: \"516\" selflink: /api/v1/namespaces/default/configmaps/game-config-2 uid: b4952dc3-d670-11e5-8cd0-68f728db1985", "oc get configmaps game-config-3 -o yaml", "apiVersion: v1 data: game-special-key: |- 1 enemies=aliens lives=3 enemies.cheat=true enemies.cheat.level=noGoodRotten secret.code.passphrase=UUDDLRLRBABAS secret.code.allowed=true secret.code.lives=30 kind: ConfigMap metadata: creationTimestamp: 2016-02-18T18:54:22Z name: game-config-3 namespace: default resourceVersion: \"530\" selflink: /api/v1/namespaces/default/configmaps/game-config-3 uid: 05f8da22-d671-11e5-8cd0-68f728db1985", "oc create configmap special-config --from-literal=special.how=very --from-literal=special.type=charm", "oc get configmaps special-config -o yaml", "apiVersion: v1 data: special.how: very special.type: charm kind: ConfigMap metadata: creationTimestamp: 2016-02-18T19:14:38Z name: special-config namespace: default resourceVersion: \"651\" selflink: /api/v1/namespaces/default/configmaps/special-config uid: dadce046-d673-11e5-8cd0-68f728db1985", "apiVersion: v1 kind: ConfigMap metadata: name: special-config 1 namespace: default 2 data: special.how: very 3 special.type: charm 4", "apiVersion: v1 kind: ConfigMap metadata: name: env-config 1 namespace: default data: log_level: INFO 2", "apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: securityContext: runAsNonRoot: true 
seccompProfile: type: RuntimeDefault containers: - name: test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"env\" ] env: 1 - name: SPECIAL_LEVEL_KEY 2 valueFrom: configMapKeyRef: name: special-config 3 key: special.how 4 - name: SPECIAL_TYPE_KEY valueFrom: configMapKeyRef: name: special-config 5 key: special.type 6 optional: true 7 envFrom: 8 - configMapRef: name: env-config 9 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never", "SPECIAL_LEVEL_KEY=very log_level=INFO", "apiVersion: v1 kind: ConfigMap metadata: name: special-config namespace: default data: special.how: very special.type: charm", "apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"echo USD(SPECIAL_LEVEL_KEY) USD(SPECIAL_TYPE_KEY)\" ] 1 env: - name: SPECIAL_LEVEL_KEY valueFrom: configMapKeyRef: name: special-config key: special.how - name: SPECIAL_TYPE_KEY valueFrom: configMapKeyRef: name: special-config key: special.type securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never", "very charm", "apiVersion: v1 kind: ConfigMap metadata: name: special-config namespace: default data: special.how: very special.type: charm", "apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"cat\", \"/etc/config/special.how\" ] volumeMounts: - name: config-volume mountPath: /etc/config securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: config-volume configMap: name: special-config 1 restartPolicy: Never", "very", "apiVersion: v1 kind: Pod metadata: name: dapi-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"cat\", \"/etc/config/path/to/special-key\" ] volumeMounts: - name: config-volume mountPath: /etc/config securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: config-volume configMap: name: special-config items: - key: special.how path: path/to/special-key 1 restartPolicy: Never", "very", "service DevicePlugin { // GetDevicePluginOptions returns options to be communicated with Device // Manager rpc GetDevicePluginOptions(Empty) returns (DevicePluginOptions) {} // ListAndWatch returns a stream of List of Devices // Whenever a Device state change or a Device disappears, ListAndWatch // returns the new list rpc ListAndWatch(Empty) returns (stream ListAndWatchResponse) {} // Allocate is called during container creation so that the Device // Plug-in can run device specific operations and instruct Kubelet // of the steps to make the Device available in the container rpc Allocate(AllocateRequest) returns (AllocateResponse) {} // PreStartcontainer is called, if indicated by Device Plug-in during // registration phase, before each container start. 
Device plug-in // can run device specific operations such as resetting the device // before making devices available to the container rpc PreStartcontainer(PreStartcontainerRequest) returns (PreStartcontainerResponse) {} }", "oc describe machineconfig <name>", "oc describe machineconfig 00-worker", "Name: 00-worker Namespace: Labels: machineconfiguration.openshift.io/role=worker 1", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: devicemgr 1 spec: machineConfigPoolSelector: matchLabels: machineconfiguration.openshift.io: devicemgr 2 kubeletConfig: feature-gates: - DevicePlugins=true 3", "oc create -f devicemgr.yaml", "kubeletconfig.machineconfiguration.openshift.io/devicemgr created", "oc get priorityclasses", "NAME VALUE GLOBAL-DEFAULT AGE system-node-critical 2000001000 false 72m system-cluster-critical 2000000000 false 72m openshift-user-critical 1000000000 false 3d13h cluster-logging 1000000 false 29s", "apiVersion: scheduling.k8s.io/v1 kind: PriorityClass metadata: name: high-priority 1 value: 1000000 2 preemptionPolicy: PreemptLowerPriority 3 globalDefault: false 4 description: \"This priority class should be used for XYZ service pods only.\" 5", "oc create -f <file-name>.yaml", "apiVersion: v1 kind: Pod metadata: name: nginx labels: env: test spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: nginx image: nginx imagePullPolicy: IfNotPresent securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] priorityClassName: high-priority 1", "oc create -f <file-name>.yaml", "oc describe pod router-default-66d5cf9464-7pwkc", "kind: Pod apiVersion: v1 metadata: Name: router-default-66d5cf9464-7pwkc Namespace: openshift-ingress Controlled By: ReplicaSet/router-default-66d5cf9464", "apiVersion: v1 kind: Pod metadata: name: router-default-66d5cf9464-7pwkc ownerReferences: - apiVersion: apps/v1 kind: ReplicaSet name: router-default-66d5cf9464 uid: d81dd094-da26-11e9-a48a-128e7edf0312 controller: true blockOwnerDeletion: true", "oc patch MachineSet <name> --type='json' -p='[{\"op\":\"add\",\"path\":\"/spec/template/spec/metadata/labels\", \"value\":{\"<key>\"=\"<value>\",\"<key>\"=\"<value>\"}}]' -n openshift-machine-api", "oc patch MachineSet abc612-msrtw-worker-us-east-1c --type='json' -p='[{\"op\":\"add\",\"path\":\"/spec/template/spec/metadata/labels\", \"value\":{\"type\":\"user-node\",\"region\":\"east\"}}]' -n openshift-machine-api", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: xf2bd-infra-us-east-2a namespace: openshift-machine-api spec: template: spec: metadata: labels: region: \"east\" type: \"user-node\"", "oc edit MachineSet abc612-msrtw-worker-us-east-1c -n openshift-machine-api", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: metadata: spec: metadata: labels: region: east type: user-node", "oc label nodes <name> <key>=<value>", "oc label nodes ip-10-0-142-25.ec2.internal type=user-node region=east", "kind: Node apiVersion: v1 metadata: name: hello-node-6fbccf8d9 labels: type: \"user-node\" region: \"east\"", "oc get nodes -l type=user-node,region=east", "NAME STATUS ROLES AGE VERSION ip-10-0-142-25.ec2.internal Ready worker 17m v1.30.3", "kind: ReplicaSet apiVersion: apps/v1 metadata: name: hello-node-6fbccf8d9 spec: template: metadata: creationTimestamp: null labels: ingresscontroller.operator.openshift.io/deployment-ingresscontroller: default pod-template-hash: 66d5cf9464 spec: nodeSelector: kubernetes.io/os: linux 
node-role.kubernetes.io/worker: '' type: user-node 1", "apiVersion: v1 kind: Pod metadata: name: hello-node-6fbccf8d9 spec: nodeSelector: region: east type: user-node", "oc get pods -n openshift-run-once-duration-override-operator", "NAME READY STATUS RESTARTS AGE run-once-duration-override-operator-7b88c676f6-lcxgc 1/1 Running 0 7m46s runoncedurationoverride-62blp 1/1 Running 0 41s runoncedurationoverride-h8h8b 1/1 Running 0 41s runoncedurationoverride-tdsqk 1/1 Running 0 41s", "oc label namespace <namespace> \\ 1 runoncedurationoverrides.admission.runoncedurationoverride.openshift.io/enabled=true", "apiVersion: v1 kind: Pod metadata: name: example namespace: <namespace> 1 spec: restartPolicy: Never 2 securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: busybox securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] image: busybox:1.25 command: - /bin/sh - -ec - | while sleep 5; do date; done", "oc get pods -n <namespace> -o yaml | grep activeDeadlineSeconds", "activeDeadlineSeconds: 3600", "oc edit runoncedurationoverride cluster", "apiVersion: operator.openshift.io/v1 kind: RunOnceDurationOverride metadata: spec: runOnceDurationOverride: spec: activeDeadlineSeconds: 1800 1", "oc edit featuregate cluster", "apiVersion: config.openshift.io/v1 kind: FeatureGate metadata: name: cluster spec: featureSet: TechPreviewNoUpgrade 1", "apiVersion: machineconfiguration.openshift.io/v1 kind: ContainerRuntimeConfig metadata: name: enable-crun-worker spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1 containerRuntimeConfig: defaultRuntime: crun 2", "oc edit ns/<namespace_name>", "apiVersion: v1 kind: Namespace metadata: annotations: openshift.io/description: \"\" openshift.io/display-name: \"\" openshift.io/requester: system:admin openshift.io/sa.scc.mcs: s0:c27,c24 openshift.io/sa.scc.supplemental-groups: 1000/10000 1 openshift.io/sa.scc.uid-range: 1000/10000 2 name: userns", "apiVersion: v1 kind: Pod metadata: name: userns-pod spec: containers: - name: userns-container image: registry.access.redhat.com/ubi9 command: [\"sleep\", \"1000\"] securityContext: capabilities: drop: [\"ALL\"] allowPrivilegeEscalation: false 1 runAsNonRoot: true 2 seccompProfile: type: RuntimeDefault runAsUser: 1000 3 runAsGroup: 1000 4 hostUsers: false 5", "oc create -f <file_name>.yaml", "oc rsh -c <container_name> pod/<pod_name>", "oc rsh -c userns-container_name pod/userns-pod", "sh-5.1USD id", "uid=1000(1000) gid=1000(1000) groups=1000(1000)", "sh-5.1USD lsns -t user", "NS TYPE NPROCS PID USER COMMAND 4026532447 user 3 1 1000 /usr/bin/coreutils --coreutils-prog-shebang=sleep /usr/bin/sleep 1000 1", "oc debug node/ci-ln-z5vppzb-72292-8zp2b-worker-c-q8sh9", "oc debug node/ci-ln-z5vppzb-72292-8zp2b-worker-c-q8sh9", "sh-5.1# chroot /host", "sh-5.1# lsns -t user", "NS TYPE NPROCS PID USER COMMAND 4026531837 user 233 1 root /usr/lib/systemd/systemd --switched-root --system --deserialize 28 4026532447 user 1 4767 2908816384 /usr/bin/coreutils --coreutils-prog-shebang=sleep /usr/bin/sleep 1000 1", "oc delete crd scaledobjects.keda.k8s.io", "oc delete crd triggerauthentications.keda.k8s.io", "oc create configmap -n openshift-keda thanos-cert --from-file=ca-cert.pem", "oc get all -n openshift-keda", "NAME READY STATUS RESTARTS AGE pod/custom-metrics-autoscaler-operator-5fd8d9ffd8-xt4xp 1/1 Running 0 18m NAME READY UP-TO-DATE AVAILABLE AGE deployment.apps/custom-metrics-autoscaler-operator 1/1 1 1 18m NAME DESIRED 
CURRENT READY AGE replicaset.apps/custom-metrics-autoscaler-operator-5fd8d9ffd8 1 1 1 18m", "kind: KedaController apiVersion: keda.sh/v1alpha1 metadata: name: keda namespace: openshift-keda spec: watchNamespace: '' 1 operator: logLevel: info 2 logEncoder: console 3 caConfigMaps: 4 - thanos-cert - kafka-cert metricsServer: logLevel: '0' 5 auditConfig: 6 logFormat: \"json\" logOutputVolumeClaim: \"persistentVolumeClaimName\" policy: rules: - level: Metadata omitStages: [\"RequestReceived\"] omitManagedFields: false lifetime: maxAge: \"2\" maxBackup: \"1\" maxSize: \"50\" serviceAccount: {}", "apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: prom-scaledobject namespace: my-namespace spec: triggers: - type: prometheus 1 metadata: serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092 2 namespace: kedatest 3 metricName: http_requests_total 4 threshold: '5' 5 query: sum(rate(http_requests_total{job=\"test-app\"}[1m])) 6 authModes: basic 7 cortexOrgID: my-org 8 ignoreNullValues: \"false\" 9 unsafeSsl: \"false\" 10", "oc project <project_name> 1", "oc create serviceaccount thanos 1", "apiVersion: v1 kind: Secret metadata: name: thanos-token annotations: kubernetes.io/service-account.name: thanos 1 type: kubernetes.io/service-account-token", "oc create -f <file_name>.yaml", "oc describe serviceaccount thanos 1", "Name: thanos Namespace: <namespace_name> Labels: <none> Annotations: <none> Image pull secrets: thanos-dockercfg-nnwgj Mountable secrets: thanos-dockercfg-nnwgj Tokens: thanos-token 1 Events: <none>", "apiVersion: keda.sh/v1alpha1 kind: <authentication_method> 1 metadata: name: keda-trigger-auth-prometheus spec: secretTargetRef: 2 - parameter: bearerToken 3 name: thanos-token 4 key: token 5 - parameter: ca name: thanos-token key: ca.crt", "oc create -f <file-name>.yaml", "apiVersion: rbac.authorization.k8s.io/v1 kind: Role metadata: name: thanos-metrics-reader rules: - apiGroups: - \"\" resources: - pods verbs: - get - apiGroups: - metrics.k8s.io resources: - pods - nodes verbs: - get - list - watch", "oc create -f <file-name>.yaml", "apiVersion: rbac.authorization.k8s.io/v1 kind: <binding_type> 1 metadata: name: thanos-metrics-reader 2 namespace: my-project 3 roleRef: apiGroup: rbac.authorization.k8s.io kind: Role name: thanos-metrics-reader subjects: - kind: ServiceAccount name: thanos 4 namespace: <namespace_name> 5", "oc create -f <file-name>.yaml", "apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: cpu-scaledobject namespace: my-namespace spec: triggers: - type: cpu 1 metricType: Utilization 2 metadata: value: '60' 3 minReplicaCount: 1 4", "apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: memory-scaledobject namespace: my-namespace spec: triggers: - type: memory 1 metricType: Utilization 2 metadata: value: '60' 3 containerName: api 4", "apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: kafka-scaledobject namespace: my-namespace spec: triggers: - type: kafka 1 metadata: topic: my-topic 2 bootstrapServers: my-cluster-kafka-bootstrap.openshift-operators.svc:9092 3 consumerGroup: my-group 4 lagThreshold: '10' 5 activationLagThreshold: '5' 6 offsetResetPolicy: latest 7 allowIdleConsumers: true 8 scaleToZeroOnInvalidOffset: false 9 excludePersistentLag: false 10 version: '1.0.0' 11 partitionLimitation: '1,2,10-20,31' 12 tls: enable 13", "apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: cron-scaledobject namespace: default spec: scaleTargetRef: name: my-deployment minReplicaCount: 0 1 
maxReplicaCount: 100 2 cooldownPeriod: 300 triggers: - type: cron 3 metadata: timezone: Asia/Kolkata 4 start: \"0 6 * * *\" 5 end: \"30 18 * * *\" 6 desiredReplicas: \"100\" 7", "apiVersion: v1 kind: Secret metadata: name: my-basic-secret namespace: default data: username: \"dXNlcm5hbWU=\" 1 password: \"cGFzc3dvcmQ=\"", "kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: secret-triggerauthentication namespace: my-namespace 1 spec: secretTargetRef: 2 - parameter: username 3 name: my-basic-secret 4 key: username 5 - parameter: password name: my-basic-secret key: password", "kind: ClusterTriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: 1 name: secret-cluster-triggerauthentication spec: secretTargetRef: 2 - parameter: username 3 name: my-basic-secret 4 key: username 5 - parameter: password name: my-basic-secret key: password", "apiVersion: v1 kind: Secret metadata: name: my-secret namespace: my-namespace data: ca-cert.pem: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0... 1 client-cert.pem: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0... 2 client-key.pem: LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0t", "kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: secret-triggerauthentication namespace: my-namespace 1 spec: secretTargetRef: 2 - parameter: key 3 name: my-secret 4 key: client-key.pem 5 - parameter: ca 6 name: my-secret 7 key: ca-cert.pem 8", "apiVersion: v1 kind: Secret metadata: name: my-secret namespace: my-namespace data: bearerToken: \"eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXV\" 1", "kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: token-triggerauthentication namespace: my-namespace 1 spec: secretTargetRef: 2 - parameter: bearerToken 3 name: my-secret 4 key: bearerToken 5", "kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: env-var-triggerauthentication namespace: my-namespace 1 spec: env: 2 - parameter: access_key 3 name: ACCESS_KEY 4 containerName: my-container 5", "kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: pod-id-triggerauthentication namespace: my-namespace 1 spec: podIdentity: 2 provider: aws-eks 3", "apiVersion: v1 kind: Secret metadata: name: my-secret data: user-name: <base64_USER_NAME> password: <base64_USER_PASSWORD>", "kind: TriggerAuthentication apiVersion: keda.sh/v1alpha1 metadata: name: prom-triggerauthentication namespace: my-namespace spec: secretTargetRef: - parameter: user-name name: my-secret key: USER_NAME - parameter: password name: my-secret key: USER_PASSWORD", "oc create -f <filename>.yaml", "apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: scaledobject namespace: my-namespace spec: scaleTargetRef: name: example-deployment maxReplicaCount: 100 minReplicaCount: 0 pollingInterval: 30 triggers: - type: prometheus metadata: serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092 namespace: kedatest # replace <NAMESPACE> metricName: http_requests_total threshold: '5' query: sum(rate(http_requests_total{job=\"test-app\"}[1m])) authModes: \"basic\" authenticationRef: name: prom-triggerauthentication 1 kind: TriggerAuthentication 2", "apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: name: scaledobject namespace: my-namespace spec: scaleTargetRef: name: example-deployment maxReplicaCount: 100 minReplicaCount: 0 pollingInterval: 30 triggers: - type: prometheus metadata: serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092 namespace: kedatest # replace <NAMESPACE> metricName: http_requests_total threshold: 
'5' query: sum(rate(http_requests_total{job=\"test-app\"}[1m])) authModes: \"basic\" authenticationRef: name: prom-cluster-triggerauthentication 1 kind: ClusterTriggerAuthentication 2", "oc apply -f <filename>", "apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: annotations: autoscaling.keda.sh/paused-replicas: \"4\"", "oc edit ScaledObject scaledobject", "apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: annotations: autoscaling.keda.sh/paused-replicas: \"4\" 1 creationTimestamp: \"2023-02-08T14:41:01Z\" generation: 1 name: scaledobject namespace: my-project resourceVersion: '65729' uid: f5aec682-acdf-4232-a783-58b5b82f5dd0", "apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: annotations: autoscaling.keda.sh/paused-replicas: \"4\"", "oc edit ScaledObject scaledobject", "apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: annotations: autoscaling.keda.sh/paused-replicas: \"4\" 1 creationTimestamp: \"2023-02-08T14:41:01Z\" generation: 1 name: scaledobject namespace: my-project resourceVersion: '65729' uid: f5aec682-acdf-4232-a783-58b5b82f5dd0", "kind: KedaController apiVersion: keda.sh/v1alpha1 metadata: name: keda namespace: openshift-keda spec: metricsServer: auditConfig: logFormat: \"json\" 1 logOutputVolumeClaim: \"pvc-audit-log\" 2 policy: rules: 3 - level: Metadata omitStages: \"RequestReceived\" 4 omitManagedFields: false 5 lifetime: 6 maxAge: \"2\" maxBackup: \"1\" maxSize: \"50\"", "get pod -n openshift-keda", "NAME READY STATUS RESTARTS AGE custom-metrics-autoscaler-operator-5cb44cd75d-9v4lv 1/1 Running 0 8m20s keda-metrics-apiserver-65c7cc44fd-rrl4r 1/1 Running 0 2m55s keda-operator-776cbb6768-zpj5b 1/1 Running 0 2m55s", "oc logs keda-metrics-apiserver-<hash>|grep -i metadata 1", "oc logs keda-metrics-apiserver-65c7cc44fd-rrl4r|grep -i metadata", "{\"kind\":\"Event\",\"apiVersion\":\"audit.k8s.io/v1\",\"level\":\"Metadata\",\"auditID\":\"4c81d41b-3dab-4675-90ce-20b87ce24013\",\"stage\":\"ResponseComplete\",\"requestURI\":\"/healthz\",\"verb\":\"get\",\"user\":{\"username\":\"system:anonymous\",\"groups\":[\"system:unauthenticated\"]},\"sourceIPs\":[\"10.131.0.1\"],\"userAgent\":\"kube-probe/1.28\",\"responseStatus\":{\"metadata\":{},\"code\":200},\"requestReceivedTimestamp\":\"2023-02-16T13:00:03.554567Z\",\"stageTimestamp\":\"2023-02-16T13:00:03.555032Z\",\"annotations\":{\"authorization.k8s.io/decision\":\"allow\",\"authorization.k8s.io/reason\":\"\"}}", "oc rsh pod/keda-metrics-apiserver-<hash> -n openshift-keda", "oc rsh pod/keda-metrics-apiserver-65c7cc44fd-rrl4r -n openshift-keda", "sh-4.4USD cd /var/audit-policy/", "sh-4.4USD ls", "log-2023.02.17-14:50 policy.yaml", "sh-4.4USD cat <log_name>/<pvc_name>|grep -i <log_level> 1", "sh-4.4USD cat log-2023.02.17-14:50/pvc-audit-log|grep -i Request", "{\"kind\":\"Event\",\"apiVersion\":\"audit.k8s.io/v1\",\"level\":\"Request\",\"auditID\":\"63e7f68c-04ec-4f4d-8749-bf1656572a41\",\"stage\":\"ResponseComplete\",\"requestURI\":\"/openapi/v2\",\"verb\":\"get\",\"user\":{\"username\":\"system:aggregator\",\"groups\":[\"system:authenticated\"]},\"sourceIPs\":[\"10.128.0.1\"],\"responseStatus\":{\"metadata\":{},\"code\":304},\"requestReceivedTimestamp\":\"2023-02-17T13:12:55.035478Z\",\"stageTimestamp\":\"2023-02-17T13:12:55.038346Z\",\"annotations\":{\"authorization.k8s.io/decision\":\"allow\",\"authorization.k8s.io/reason\":\"RBAC: allowed by ClusterRoleBinding \\\"system:discovery\\\" of ClusterRole \\\"system:discovery\\\" to Group \\\"system:authenticated\\\"\"}}", "oc adm must-gather --image=\"USD(oc 
get packagemanifests openshift-custom-metrics-autoscaler-operator -n openshift-marketplace -o jsonpath='{.status.channels[?(@.name==\"stable\")].currentCSVDesc.annotations.containerImage}')\"", "oc import-image is/must-gather -n openshift", "oc adm must-gather --image=\"USD(oc get packagemanifests openshift-custom-metrics-autoscaler-operator -n openshift-marketplace -o jsonpath='{.status.channels[?(@.name==\"stable\")].currentCSVDesc.annotations.containerImage}')\"", "IMAGE=\"USD(oc get packagemanifests openshift-custom-metrics-autoscaler-operator -n openshift-marketplace -o jsonpath='{.status.channels[?(@.name==\"stable\")].currentCSVDesc.annotations.containerImage}')\"", "oc adm must-gather --image-stream=openshift/must-gather --image=USD{IMAGE}", "└── openshift-keda ├── apps │ ├── daemonsets.yaml │ ├── deployments.yaml │ ├── replicasets.yaml │ └── statefulsets.yaml ├── apps.openshift.io │ └── deploymentconfigs.yaml ├── autoscaling │ └── horizontalpodautoscalers.yaml ├── batch │ ├── cronjobs.yaml │ └── jobs.yaml ├── build.openshift.io │ ├── buildconfigs.yaml │ └── builds.yaml ├── core │ ├── configmaps.yaml │ ├── endpoints.yaml │ ├── events.yaml │ ├── persistentvolumeclaims.yaml │ ├── pods.yaml │ ├── replicationcontrollers.yaml │ ├── secrets.yaml │ └── services.yaml ├── discovery.k8s.io │ └── endpointslices.yaml ├── image.openshift.io │ └── imagestreams.yaml ├── k8s.ovn.org │ ├── egressfirewalls.yaml │ └── egressqoses.yaml ├── keda.sh │ ├── kedacontrollers │ │ └── keda.yaml │ ├── scaledobjects │ │ └── example-scaledobject.yaml │ └── triggerauthentications │ └── example-triggerauthentication.yaml ├── monitoring.coreos.com │ └── servicemonitors.yaml ├── networking.k8s.io │ └── networkpolicies.yaml ├── openshift-keda.yaml ├── pods │ ├── custom-metrics-autoscaler-operator-58bd9f458-ptgwx │ │ ├── custom-metrics-autoscaler-operator │ │ │ └── custom-metrics-autoscaler-operator │ │ │ └── logs │ │ │ ├── current.log │ │ │ ├── previous.insecure.log │ │ │ └── previous.log │ │ └── custom-metrics-autoscaler-operator-58bd9f458-ptgwx.yaml │ ├── custom-metrics-autoscaler-operator-58bd9f458-thbsh │ │ └── custom-metrics-autoscaler-operator │ │ └── custom-metrics-autoscaler-operator │ │ └── logs │ ├── keda-metrics-apiserver-65c7cc44fd-6wq4g │ │ ├── keda-metrics-apiserver │ │ │ └── keda-metrics-apiserver │ │ │ └── logs │ │ │ ├── current.log │ │ │ ├── previous.insecure.log │ │ │ └── previous.log │ │ └── keda-metrics-apiserver-65c7cc44fd-6wq4g.yaml │ └── keda-operator-776cbb6768-fb6m5 │ ├── keda-operator │ │ └── keda-operator │ │ └── logs │ │ ├── current.log │ │ ├── previous.insecure.log │ │ └── previous.log │ └── keda-operator-776cbb6768-fb6m5.yaml ├── policy │ └── poddisruptionbudgets.yaml └── route.openshift.io └── routes.yaml", "tar cvaf must-gather.tar.gz must-gather.local.5421342344627712289/ 1", "oc describe PodMetrics openshift-kube-scheduler-ip-10-0-135-131.ec2.internal", "Name: openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Namespace: openshift-kube-scheduler Labels: <none> Annotations: <none> API Version: metrics.k8s.io/v1beta1 Containers: Name: wait-for-host-port Usage: Memory: 0 Name: scheduler Usage: Cpu: 8m Memory: 45440Ki Kind: PodMetrics Metadata: Creation Timestamp: 2019-05-23T18:47:56Z Self Link: /apis/metrics.k8s.io/v1beta1/namespaces/openshift-kube-scheduler/pods/openshift-kube-scheduler-ip-10-0-135-131.ec2.internal Timestamp: 2019-05-23T18:47:56Z Window: 1m0s Events: <none>", "apiVersion: v1 kind: Pod spec: containers: - name: app image: images.my-company.example/app:v4 resources: 
limits: memory: \"128Mi\" cpu: \"500m\"", "apiVersion: keda.sh/v1alpha1 kind: ScaledObject metadata: annotations: autoscaling.keda.sh/paused-replicas: \"0\" 1 name: scaledobject 2 namespace: my-namespace spec: scaleTargetRef: apiVersion: apps/v1 3 name: example-deployment 4 kind: Deployment 5 envSourceContainerName: .spec.template.spec.containers[0] 6 cooldownPeriod: 200 7 maxReplicaCount: 100 8 minReplicaCount: 0 9 metricsServer: 10 auditConfig: logFormat: \"json\" logOutputVolumeClaim: \"persistentVolumeClaimName\" policy: rules: - level: Metadata omitStages: \"RequestReceived\" omitManagedFields: false lifetime: maxAge: \"2\" maxBackup: \"1\" maxSize: \"50\" fallback: 11 failureThreshold: 3 replicas: 6 pollingInterval: 30 12 advanced: restoreToOriginalReplicaCount: false 13 horizontalPodAutoscalerConfig: name: keda-hpa-scale-down 14 behavior: 15 scaleDown: stabilizationWindowSeconds: 300 policies: - type: Percent value: 100 periodSeconds: 15 triggers: - type: prometheus 16 metadata: serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092 namespace: kedatest metricName: http_requests_total threshold: '5' query: sum(rate(http_requests_total{job=\"test-app\"}[1m])) authModes: basic authenticationRef: 17 name: prom-triggerauthentication kind: TriggerAuthentication", "oc create -f <filename>.yaml", "oc get scaledobject <scaled_object_name>", "NAME SCALETARGETKIND SCALETARGETNAME MIN MAX TRIGGERS AUTHENTICATION READY ACTIVE FALLBACK AGE scaledobject apps/v1.Deployment example-deployment 0 50 prometheus prom-triggerauthentication True True True 17s", "kind: ScaledJob apiVersion: keda.sh/v1alpha1 metadata: name: scaledjob namespace: my-namespace spec: failedJobsHistoryLimit: 5 jobTargetRef: activeDeadlineSeconds: 600 1 backoffLimit: 6 2 parallelism: 1 3 completions: 1 4 template: 5 metadata: name: pi spec: containers: - name: pi image: perl command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"] maxReplicaCount: 100 6 pollingInterval: 30 7 successfulJobsHistoryLimit: 5 8 failedJobsHistoryLimit: 5 9 envSourceContainerName: 10 rolloutStrategy: gradual 11 scalingStrategy: 12 strategy: \"custom\" customScalingQueueLengthDeduction: 1 customScalingRunningJobPercentage: \"0.5\" pendingPodConditions: - \"Ready\" - \"PodScheduled\" - \"AnyOtherCustomPodCondition\" multipleScalersCalculation : \"max\" triggers: - type: prometheus 13 metadata: serverAddress: https://thanos-querier.openshift-monitoring.svc.cluster.local:9092 namespace: kedatest metricName: http_requests_total threshold: '5' query: sum(rate(http_requests_total{job=\"test-app\"}[1m])) authModes: \"bearer\" authenticationRef: 14 name: prom-cluster-triggerauthentication", "oc create -f <filename>.yaml", "oc get scaledjob <scaled_job_name>", "NAME MAX TRIGGERS AUTHENTICATION READY ACTIVE AGE scaledjob 100 prometheus prom-triggerauthentication True True 8s", "oc delete crd clustertriggerauthentications.keda.sh kedacontrollers.keda.sh scaledjobs.keda.sh scaledobjects.keda.sh triggerauthentications.keda.sh", "oc get clusterrole | grep keda.sh", "oc delete clusterrole.keda.sh-v1alpha1-admin", "oc get clusterrolebinding | grep keda.sh", "oc delete clusterrolebinding.keda.sh-v1alpha1-admin", "oc delete project openshift-keda", "oc delete operator/openshift-custom-metrics-autoscaler-operator.openshift-keda", "oc edit scheduler cluster", "apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster # spec: mastersSchedulable: false profile: HighNodeUtilization 1 #", "apiVersion: v1 kind: Pod 
metadata: name: with-pod-affinity spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault affinity: podAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: 2 - labelSelector: matchExpressions: - key: security 3 operator: In 4 values: - S1 5 topologyKey: topology.kubernetes.io/zone containers: - name: with-pod-affinity image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]", "apiVersion: v1 kind: Pod metadata: name: with-pod-antiaffinity spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault affinity: podAntiAffinity: 1 preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 100 3 podAffinityTerm: labelSelector: matchExpressions: - key: security 4 operator: In 5 values: - S2 topologyKey: kubernetes.io/hostname containers: - name: with-pod-affinity image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]", "apiVersion: v1 kind: Pod metadata: name: security-s1 labels: security: S1 spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: security-s1 image: docker.io/ocpqe/hello-pod securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault", "oc create -f <pod-spec>.yaml", "apiVersion: v1 kind: Pod metadata: name: security-s1-east spec: affinity: 1 podAffinity: requiredDuringSchedulingIgnoredDuringExecution: 2 - labelSelector: matchExpressions: - key: security 3 values: - S1 operator: In 4 topologyKey: topology.kubernetes.io/zone 5", "oc create -f <pod-spec>.yaml", "apiVersion: v1 kind: Pod metadata: name: security-s1 labels: security: S1 spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: security-s1 image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]", "oc create -f <pod-spec>.yaml", "apiVersion: v1 kind: Pod metadata: name: security-s2-east spec: affinity: 1 podAntiAffinity: preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 100 3 podAffinityTerm: labelSelector: matchExpressions: - key: security 4 values: - S1 operator: In 5 topologyKey: kubernetes.io/hostname 6", "oc create -f <pod-spec>.yaml", "apiVersion: v1 kind: Pod metadata: name: team4 labels: team: \"4\" spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: ocp image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]", "apiVersion: v1 kind: Pod metadata: name: team4a spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault affinity: podAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: team operator: In values: - \"4\" topologyKey: kubernetes.io/hostname containers: - name: pod-affinity image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]", "apiVersion: v1 kind: Pod metadata: name: pod-s1 labels: security: s1 spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: ocp image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]", "apiVersion: v1 kind: Pod metadata: name: pod-s2 spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: security operator: In values: - 
s1 topologyKey: kubernetes.io/hostname containers: - name: pod-antiaffinity image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]", "apiVersion: v1 kind: Pod metadata: name: pod-s1 labels: security: s1 spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: ocp image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]", "apiVersion: v1 kind: Pod metadata: name: pod-s2 spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault affinity: podAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: security operator: In values: - s2 topologyKey: kubernetes.io/hostname containers: - name: pod-affinity image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]", "NAME READY STATUS RESTARTS AGE IP NODE pod-s2 0/1 Pending 0 32s <none>", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-custom-metrics-autoscaler-operator namespace: openshift-keda spec: name: my-package source: my-operators sourceNamespace: operator-registries config: affinity: podAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: app operator: In values: - test topologyKey: kubernetes.io/hostname #", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-custom-metrics-autoscaler-operator namespace: openshift-keda spec: name: my-package source: my-operators sourceNamespace: operator-registries config: affinity: podAntiAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: cpu operator: In values: - high topologyKey: kubernetes.io/hostname #", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-custom-metrics-autoscaler-operator namespace: openshift-keda spec: name: my-package source: my-operators sourceNamespace: operator-registries config: affinity: podAntiAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: podAffinityTerm: labelSelector: matchExpressions: - key: kubernetes.io/hostname operator: In values: - ip-10-0-185-229.ec2.internal topologyKey: topology.kubernetes.io/zone #", "oc get pods -o wide", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES custom-metrics-autoscaler-operator-5dcc45d656-bhshg 1/1 Running 0 50s 10.131.0.20 ip-10-0-185-229.ec2.internal <none> <none>", "apiVersion: v1 kind: Pod metadata: name: with-node-affinity spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault affinity: nodeAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: 2 nodeSelectorTerms: - matchExpressions: - key: e2e-az-NorthSouth 3 operator: In 4 values: - e2e-az-North 5 - e2e-az-South 6 containers: - name: with-node-affinity image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]", "apiVersion: v1 kind: Pod metadata: name: with-node-affinity spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault affinity: nodeAffinity: 1 preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 1 3 preference: matchExpressions: - key: e2e-az-EastWest 4 operator: In 5 values: - e2e-az-East 6 - e2e-az-West 7 containers: - name: with-node-affinity image: docker.io/ocpqe/hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]", "oc label 
node node1 e2e-az-name=e2e-az1", "kind: Node apiVersion: v1 metadata: name: <node_name> labels: e2e-az-name: e2e-az1 #", "apiVersion: v1 kind: Pod metadata: name: s1 spec: affinity: 1 nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: 2 nodeSelectorTerms: - matchExpressions: - key: e2e-az-name 3 values: - e2e-az1 - e2e-az2 operator: In 4 #", "oc create -f <file-name>.yaml", "oc label node node1 e2e-az-name=e2e-az3", "apiVersion: v1 kind: Pod metadata: name: s1 spec: affinity: 1 nodeAffinity: preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 3 preference: matchExpressions: - key: e2e-az-name 4 values: - e2e-az3 operator: In 5 #", "oc create -f <file-name>.yaml", "oc label node node1 zone=us", "kind: Node apiVersion: v1 metadata: name: <node_name> labels: zone: us #", "cat pod-s1.yaml", "apiVersion: v1 kind: Pod metadata: name: pod-s1 spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - image: \"docker.io/ocpqe/hello-pod\" name: hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: \"zone\" operator: In values: - us #", "oc get pod -o wide", "NAME READY STATUS RESTARTS AGE IP NODE pod-s1 1/1 Running 0 4m IP1 node1", "oc label node node1 zone=emea", "kind: Node apiVersion: v1 metadata: name: <node_name> labels: zone: emea #", "cat pod-s1.yaml", "apiVersion: v1 kind: Pod metadata: name: pod-s1 spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - image: \"docker.io/ocpqe/hello-pod\" name: hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: \"zone\" operator: In values: - us #", "oc describe pod pod-s1", "Events: FirstSeen LastSeen Count From SubObjectPath Type Reason --------- -------- ----- ---- ------------- -------- ------ 1m 33s 8 default-scheduler Warning FailedScheduling No nodes are available that match all of the following predicates:: MatchNodeSelector (1).", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-custom-metrics-autoscaler-operator namespace: openshift-keda spec: name: my-package source: my-operators sourceNamespace: operator-registries config: affinity: nodeAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - ip-10-0-163-94.us-west-2.compute.internal #", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-custom-metrics-autoscaler-operator namespace: openshift-keda spec: name: my-package source: my-operators sourceNamespace: operator-registries config: affinity: nodeAffinity: 1 requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/arch operator: In values: - arm64 - key: kubernetes.io/os operator: In values: - linux #", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-custom-metrics-autoscaler-operator namespace: openshift-keda spec: name: my-package source: my-operators sourceNamespace: operator-registries config: affinity: 1 nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: kubernetes.io/hostname operator: In values: - ip-10-0-185-229.ec2.internal #", "oc get 
pods -o wide", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES custom-metrics-autoscaler-operator-5dcc45d656-bhshg 1/1 Running 0 50s 10.131.0.20 ip-10-0-185-229.ec2.internal <none> <none>", "sysctl -a |grep commit", "# vm.overcommit_memory = 0 #", "sysctl -a |grep panic", "# vm.panic_on_oom = 0 #", "apiVersion: v1 kind: Node metadata: name: my-node # spec: taints: - effect: NoExecute key: key1 value: value1 #", "apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoExecute\" tolerationSeconds: 3600 #", "apiVersion: v1 kind: Node metadata: annotations: machine.openshift.io/machine: openshift-machine-api/ci-ln-62s7gtb-f76d1-v8jxv-master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-cdc1ab7da414629332cc4c3926e6e59c name: my-node # spec: taints: - effect: NoSchedule key: node-role.kubernetes.io/master #", "apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoExecute\" tolerationSeconds: 3600 #", "oc adm taint nodes node1 key1=value1:NoSchedule", "oc adm taint nodes node1 key1=value1:NoExecute", "oc adm taint nodes node1 key2=value2:NoSchedule", "apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoSchedule\" - key: \"key1\" operator: \"Equal\" value: \"value1\" effect: \"NoExecute\" #", "apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: node.kubernetes.io/not-ready operator: Exists effect: NoExecute tolerationSeconds: 300 1 - key: node.kubernetes.io/unreachable operator: Exists effect: NoExecute tolerationSeconds: 300 #", "apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - operator: \"Exists\" #", "apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" 1 value: \"value1\" operator: \"Equal\" effect: \"NoExecute\" tolerationSeconds: 3600 2 #", "apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" operator: \"Exists\" 1 effect: \"NoExecute\" tolerationSeconds: 3600 #", "oc adm taint nodes <node_name> <key>=<value>:<effect>", "oc adm taint nodes node1 key1=value1:NoExecute", "apiVersion: v1 kind: Node metadata: annotations: machine.openshift.io/machine: openshift-machine-api/ci-ln-62s7gtb-f76d1-v8jxv-master-0 machineconfiguration.openshift.io/currentConfig: rendered-master-cdc1ab7da414629332cc4c3926e6e59c name: my-node # spec: taints: - effect: NoSchedule key: node-role.kubernetes.io/master #", "apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" 1 value: \"value1\" operator: \"Equal\" effect: \"NoExecute\" tolerationSeconds: 3600 2 #", "apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key1\" operator: \"Exists\" effect: \"NoExecute\" tolerationSeconds: 3600 #", "oc edit machineset <machineset>", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: my-machineset # spec: # template: # spec: taints: - effect: NoExecute key: key1 value: value1 #", "oc scale --replicas=0 machineset <machineset> -n openshift-machine-api", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: replicas: 0", "oc scale --replicas=2 machineset <machineset> -n openshift-machine-api", "oc edit machineset <machineset> -n openshift-machine-api", "oc adm taint nodes node1 
dedicated=groupName:NoSchedule", "kind: Node apiVersion: v1 metadata: name: my-node # spec: taints: - key: dedicated value: groupName effect: NoSchedule #", "kind: Project apiVersion: project.openshift.io/v1 metadata: name: <project_name> 1 annotations: openshift.io/node-selector: '<label>' 2 scheduler.alpha.kubernetes.io/defaultTolerations: >- [{\"operator\": \"Exists\", \"effect\": \"NoSchedule\", \"key\": \"<key_name>\"} 3 ]", "oc apply -f project.yaml", "apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"disktype\" value: \"ssd\" operator: \"Equal\" effect: \"NoSchedule\" tolerationSeconds: 3600 #", "oc adm taint nodes <node-name> disktype=ssd:NoSchedule", "oc adm taint nodes <node-name> disktype=ssd:PreferNoSchedule", "kind: Node apiVersion: v1 metadata: name: my_node # spec: taints: - key: disktype value: ssd effect: PreferNoSchedule #", "oc adm taint nodes <node-name> <key>-", "oc adm taint nodes ip-10-0-132-248.ec2.internal key1-", "node/ip-10-0-132-248.ec2.internal untainted", "apiVersion: v1 kind: Pod metadata: name: my-pod # spec: tolerations: - key: \"key2\" operator: \"Exists\" effect: \"NoExecute\" tolerationSeconds: 3600 #", "kind: Node apiVersion: v1 metadata: name: ip-10-0-131-14.ec2.internal selfLink: /api/v1/nodes/ip-10-0-131-14.ec2.internal uid: 7bc2580a-8b8e-11e9-8e01-021ab4174c74 resourceVersion: '478704' creationTimestamp: '2019-06-10T14:46:08Z' labels: kubernetes.io/os: linux topology.kubernetes.io/zone: us-east-1a node.openshift.io/os_version: '4.5' node-role.kubernetes.io/worker: '' topology.kubernetes.io/region: us-east-1 node.openshift.io/os_id: rhcos node.kubernetes.io/instance-type: m4.large kubernetes.io/hostname: ip-10-0-131-14 kubernetes.io/arch: amd64 region: east 1 type: user-node #", "apiVersion: v1 kind: Pod metadata: name: s1 # spec: nodeSelector: 1 region: east type: user-node #", "apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster # spec: defaultNodeSelector: type=user-node,region=east #", "apiVersion: v1 kind: Node metadata: name: ci-ln-qg1il3k-f76d1-hlmhl-worker-b-df2s4 # labels: region: east type: user-node #", "apiVersion: v1 kind: Pod metadata: name: s1 # spec: nodeSelector: region: east #", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES pod-s1 1/1 Running 0 20s 10.131.2.6 ci-ln-qg1il3k-f76d1-hlmhl-worker-b-df2s4 <none> <none>", "apiVersion: v1 kind: Namespace metadata: name: east-region annotations: openshift.io/node-selector: \"region=east\" #", "apiVersion: v1 kind: Node metadata: name: ci-ln-qg1il3k-f76d1-hlmhl-worker-b-df2s4 # labels: region: east type: user-node #", "apiVersion: v1 kind: Pod metadata: namespace: east-region # spec: nodeSelector: region: east type: user-node #", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES pod-s1 1/1 Running 0 20s 10.131.2.6 ci-ln-qg1il3k-f76d1-hlmhl-worker-b-df2s4 <none> <none>", "apiVersion: v1 kind: Pod metadata: name: west-region # spec: nodeSelector: region: west #", "oc describe pod router-default-66d5cf9464-7pwkc", "kind: Pod apiVersion: v1 metadata: Name: router-default-66d5cf9464-7pwkc Namespace: openshift-ingress Controlled By: ReplicaSet/router-default-66d5cf9464", "apiVersion: v1 kind: Pod metadata: name: router-default-66d5cf9464-7pwkc ownerReferences: - apiVersion: apps/v1 kind: ReplicaSet name: router-default-66d5cf9464 uid: d81dd094-da26-11e9-a48a-128e7edf0312 controller: true blockOwnerDeletion: true", "oc patch MachineSet <name> --type='json' 
-p='[{\"op\":\"add\",\"path\":\"/spec/template/spec/metadata/labels\", \"value\":{\"<key>\"=\"<value>\",\"<key>\"=\"<value>\"}}]' -n openshift-machine-api", "oc patch MachineSet abc612-msrtw-worker-us-east-1c --type='json' -p='[{\"op\":\"add\",\"path\":\"/spec/template/spec/metadata/labels\", \"value\":{\"type\":\"user-node\",\"region\":\"east\"}}]' -n openshift-machine-api", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: xf2bd-infra-us-east-2a namespace: openshift-machine-api spec: template: spec: metadata: labels: region: \"east\" type: \"user-node\"", "oc edit MachineSet abc612-msrtw-worker-us-east-1c -n openshift-machine-api", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: metadata: spec: metadata: labels: region: east type: user-node", "oc label nodes <name> <key>=<value>", "oc label nodes ip-10-0-142-25.ec2.internal type=user-node region=east", "kind: Node apiVersion: v1 metadata: name: hello-node-6fbccf8d9 labels: type: \"user-node\" region: \"east\"", "oc get nodes -l type=user-node,region=east", "NAME STATUS ROLES AGE VERSION ip-10-0-142-25.ec2.internal Ready worker 17m v1.30.3", "kind: ReplicaSet apiVersion: apps/v1 metadata: name: hello-node-6fbccf8d9 spec: template: metadata: creationTimestamp: null labels: ingresscontroller.operator.openshift.io/deployment-ingresscontroller: default pod-template-hash: 66d5cf9464 spec: nodeSelector: kubernetes.io/os: linux node-role.kubernetes.io/worker: '' type: user-node 1", "apiVersion: v1 kind: Pod metadata: name: hello-node-6fbccf8d9 spec: nodeSelector: region: east type: user-node", "oc edit scheduler cluster", "apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster spec: defaultNodeSelector: type=user-node,region=east 1 mastersSchedulable: false", "oc patch MachineSet <name> --type='json' -p='[{\"op\":\"add\",\"path\":\"/spec/template/spec/metadata/labels\", \"value\":{\"<key>\"=\"<value>\",\"<key>\"=\"<value>\"}}]' -n openshift-machine-api 1", "oc patch MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c --type='json' -p='[{\"op\":\"add\",\"path\":\"/spec/template/spec/metadata/labels\", \"value\":{\"type\":\"user-node\",\"region\":\"east\"}}]' -n openshift-machine-api", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: template: spec: metadata: labels: region: \"east\" type: \"user-node\"", "oc edit MachineSet abc612-msrtw-worker-us-east-1c -n openshift-machine-api", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet spec: template: metadata: spec: metadata: labels: region: east type: user-node", "oc scale --replicas=0 MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api", "oc scale --replicas=1 MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api", "oc get nodes -l <key>=<value>", "oc get nodes -l type=user-node", "NAME STATUS ROLES AGE VERSION ci-ln-l8nry52-f76d1-hl7m7-worker-c-vmqzp Ready worker 61s v1.30.3", "oc label nodes <name> <key>=<value>", "oc label nodes ci-ln-l8nry52-f76d1-hl7m7-worker-b-tgq49 type=user-node region=east", "kind: Node apiVersion: v1 metadata: name: <node_name> labels: type: \"user-node\" region: \"east\"", "oc get nodes -l <key>=<value>,<key>=<value>", "oc get nodes -l type=user-node,region=east", "NAME STATUS ROLES AGE VERSION ci-ln-l8nry52-f76d1-hl7m7-worker-b-tgq49 Ready worker 17m v1.30.3", "Error from server (Forbidden): error when creating \"pod.yaml\": pods \"pod-4\" is forbidden: pod node label selector 
conflicts with its project node label selector", "oc edit namespace <name>", "apiVersion: v1 kind: Namespace metadata: annotations: openshift.io/node-selector: \"type=user-node,region=east\" 1 openshift.io/description: \"\" openshift.io/display-name: \"\" openshift.io/requester: kube:admin openshift.io/sa.scc.mcs: s0:c30,c5 openshift.io/sa.scc.supplemental-groups: 1000880000/10000 openshift.io/sa.scc.uid-range: 1000880000/10000 creationTimestamp: \"2021-05-10T12:35:04Z\" labels: kubernetes.io/metadata.name: demo name: demo resourceVersion: \"145537\" uid: 3f8786e3-1fcb-42e3-a0e3-e2ac54d15001 spec: finalizers: - kubernetes", "oc patch MachineSet <name> --type='json' -p='[{\"op\":\"add\",\"path\":\"/spec/template/spec/metadata/labels\", \"value\":{\"<key>\":\"<value>\",\"<key>\":\"<value>\"}}]' -n openshift-machine-api", "oc patch MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c --type='json' -p='[{\"op\":\"add\",\"path\":\"/spec/template/spec/metadata/labels\", \"value\":{\"type\":\"user-node\",\"region\":\"east\"}}]' -n openshift-machine-api", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: <machineset> namespace: openshift-machine-api spec: template: spec: metadata: labels: region: \"east\" type: \"user-node\"", "oc edit MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: spec: template: metadata: spec: metadata: labels: region: east type: user-node", "oc scale --replicas=0 MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api", "oc scale --replicas=1 MachineSet ci-ln-l8nry52-f76d1-hl7m7-worker-c -n openshift-machine-api", "oc get nodes -l <key>=<value>", "oc get nodes -l type=user-node,region=east", "NAME STATUS ROLES AGE VERSION ci-ln-l8nry52-f76d1-hl7m7-worker-c-vmqzp Ready worker 61s v1.30.3", "oc label <resource> <name> <key>=<value>", "oc label nodes ci-ln-l8nry52-f76d1-hl7m7-worker-c-tgq49 type=user-node region=east", "kind: Node apiVersion: v1 metadata: name: <node_name> labels: type: \"user-node\" region: \"east\"", "oc get nodes -l <key>=<value>", "oc get nodes -l type=user-node,region=east", "NAME STATUS ROLES AGE VERSION ci-ln-l8nry52-f76d1-hl7m7-worker-b-tgq49 Ready worker 17m v1.30.3", "apiVersion: v1 kind: Pod metadata: name: my-pod labels: region: us-east spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault topologySpreadConstraints: - maxSkew: 1 1 topologyKey: topology.kubernetes.io/zone 2 whenUnsatisfiable: DoNotSchedule 3 labelSelector: 4 matchLabels: region: us-east 5 matchLabelKeys: - my-pod-label 6 containers: - image: \"docker.io/ocpqe/hello-pod\" name: hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]", "kind: Pod apiVersion: v1 metadata: name: my-pod labels: region: us-east spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault topologySpreadConstraints: - maxSkew: 1 topologyKey: topology.kubernetes.io/zone whenUnsatisfiable: DoNotSchedule labelSelector: matchLabels: region: us-east containers: - image: \"docker.io/ocpqe/hello-pod\" name: hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]", "kind: Pod apiVersion: v1 metadata: name: my-pod-2 labels: region: us-east spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault topologySpreadConstraints: - maxSkew: 1 topologyKey: node whenUnsatisfiable: DoNotSchedule labelSelector: matchLabels: region: us-east - maxSkew: 1 topologyKey: rack 
whenUnsatisfiable: DoNotSchedule labelSelector: matchLabels: region: us-east containers: - image: \"docker.io/ocpqe/hello-pod\" name: hello-pod securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]", "oc edit kubedeschedulers.operator.openshift.io cluster -n openshift-kube-descheduler-operator", "apiVersion: operator.openshift.io/v1 kind: KubeDescheduler metadata: name: cluster namespace: openshift-kube-descheduler-operator spec: deschedulingIntervalSeconds: 3600 logLevel: Normal managementState: Managed operatorLogLevel: Normal mode: Predictive 1 profileCustomizations: namespaces: 2 excluded: - my-namespace podLifetime: 48h 3 thresholdPriorityClassName: my-priority-class-name 4 evictionLimits: total: 20 5 profiles: 6 - AffinityAndTaints - TopologyAndDuplicates - LifecycleAndUtilization - EvictPodsWithLocalStorage - EvictPodsWithPVC", "oc edit kubedeschedulers.operator.openshift.io cluster -n openshift-kube-descheduler-operator", "apiVersion: operator.openshift.io/v1 kind: KubeDescheduler metadata: name: cluster namespace: openshift-kube-descheduler-operator spec: deschedulingIntervalSeconds: 3600 1", "apiVersion: v1 kind: ConfigMap metadata: name: \"secondary-scheduler-config\" 1 namespace: \"openshift-secondary-scheduler-operator\" 2 data: \"config.yaml\": | apiVersion: kubescheduler.config.k8s.io/v1 kind: KubeSchedulerConfiguration 3 leaderElection: leaderElect: false profiles: - schedulerName: secondary-scheduler 4 plugins: 5 score: disabled: - name: NodeResourcesBalancedAllocation - name: NodeResourcesLeastAllocated", "apiVersion: v1 kind: Pod metadata: name: nginx namespace: default spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: nginx image: nginx:1.14.2 ports: - containerPort: 80 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] schedulerName: secondary-scheduler 1", "oc describe pod nginx -n default", "Name: nginx Namespace: default Priority: 0 Node: ci-ln-t0w4r1k-72292-xkqs4-worker-b-xqkxp/10.0.128.3 Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Scheduled 12s secondary-scheduler Successfully assigned default/nginx to ci-ln-t0w4r1k-72292-xkqs4-worker-b-xqkxp", "kind: Pod apiVersion: v1 metadata: name: hello-node-6fbccf8d9-9tmzr # spec: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchFields: - key: metadata.name operator: In values: - target-host-name #", "oc patch namespace myproject -p '{\"metadata\": {\"annotations\": {\"openshift.io/node-selector\": \"\"}}}'", "apiVersion: v1 kind: Namespace metadata: name: <namespace> annotations: openshift.io/node-selector: '' #", "oc adm new-project <name> --node-selector=\"\"", "apiVersion: apps/v1 kind: DaemonSet metadata: name: hello-daemonset spec: selector: matchLabels: name: hello-daemonset 1 template: metadata: labels: name: hello-daemonset 2 spec: nodeSelector: 3 role: worker containers: - image: openshift/hello-openshift imagePullPolicy: Always name: registry ports: - containerPort: 80 protocol: TCP resources: {} terminationMessagePath: /dev/termination-log serviceAccount: default terminationGracePeriodSeconds: 10 #", "oc create -f daemonset.yaml", "oc get pods", "hello-daemonset-cx6md 1/1 Running 0 2m hello-daemonset-e3md9 1/1 Running 0 2m", "oc describe pod/hello-daemonset-cx6md|grep Node", "Node: openshift-node01.hostname.com/10.14.20.134", "oc describe pod/hello-daemonset-e3md9|grep Node", "Node: openshift-node02.hostname.com/10.14.20.137", "apiVersion: batch/v1 
kind: Job metadata: name: pi spec: parallelism: 1 1 completions: 1 2 activeDeadlineSeconds: 1800 3 backoffLimit: 6 4 template: 5 metadata: name: pi spec: containers: - name: pi image: perl command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"] restartPolicy: OnFailure 6 #", "oc delete cronjob/<cron_job_name>", "apiVersion: batch/v1 kind: Job metadata: name: pi spec: parallelism: 1 1 completions: 1 2 activeDeadlineSeconds: 1800 3 backoffLimit: 6 4 template: 5 metadata: name: pi spec: containers: - name: pi image: perl command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"] restartPolicy: OnFailure 6 #", "oc create -f <file-name>.yaml", "oc create job pi --image=perl -- perl -Mbignum=bpi -wle 'print bpi(2000)'", "apiVersion: batch/v1 kind: CronJob metadata: name: pi spec: schedule: \"*/1 * * * *\" 1 timeZone: Etc/UTC 2 concurrencyPolicy: \"Replace\" 3 startingDeadlineSeconds: 200 4 suspend: true 5 successfulJobsHistoryLimit: 3 6 failedJobsHistoryLimit: 1 7 jobTemplate: 8 spec: template: metadata: labels: 9 parent: \"cronjobpi\" spec: containers: - name: pi image: perl command: [\"perl\", \"-Mbignum=bpi\", \"-wle\", \"print bpi(2000)\"] restartPolicy: OnFailure 10 #", "oc create -f <file-name>.yaml", "oc create cronjob pi --image=perl --schedule='*/1 * * * *' -- perl -Mbignum=bpi -wle 'print bpi(2000)'", "oc get nodes", "oc get nodes", "NAME STATUS ROLES AGE VERSION master.example.com Ready master 7h v1.30.3 node1.example.com Ready worker 7h v1.30.3 node2.example.com Ready worker 7h v1.30.3", "oc get nodes", "NAME STATUS ROLES AGE VERSION master.example.com Ready master 7h v1.30.3 node1.example.com NotReady,SchedulingDisabled worker 7h v1.30.3 node2.example.com Ready worker 7h v1.30.3", "oc get nodes -o wide", "NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME master.example.com Ready master 171m v1.30.3 10.0.129.108 <none> Red Hat Enterprise Linux CoreOS 48.83.202103210901-0 (Ootpa) 4.18.0-240.15.1.el8_3.x86_64 cri-o://1.30.3-30.rhaos4.10.gitf2f339d.el8-dev node1.example.com Ready worker 72m v1.30.3 10.0.129.222 <none> Red Hat Enterprise Linux CoreOS 48.83.202103210901-0 (Ootpa) 4.18.0-240.15.1.el8_3.x86_64 cri-o://1.30.3-30.rhaos4.10.gitf2f339d.el8-dev node2.example.com Ready worker 164m v1.30.3 10.0.142.150 <none> Red Hat Enterprise Linux CoreOS 48.83.202103210901-0 (Ootpa) 4.18.0-240.15.1.el8_3.x86_64 cri-o://1.30.3-30.rhaos4.10.gitf2f339d.el8-dev", "oc get node <node>", "oc get node node1.example.com", "NAME STATUS ROLES AGE VERSION node1.example.com Ready worker 7h v1.30.3", "oc describe node <node>", "oc describe node node1.example.com", "Name: node1.example.com 1 Roles: worker 2 Labels: kubernetes.io/os=linux kubernetes.io/hostname=ip-10-0-131-14 kubernetes.io/arch=amd64 3 node-role.kubernetes.io/worker= node.kubernetes.io/instance-type=m4.large node.openshift.io/os_id=rhcos node.openshift.io/os_version=4.5 region=east topology.kubernetes.io/region=us-east-1 topology.kubernetes.io/zone=us-east-1a Annotations: cluster.k8s.io/machine: openshift-machine-api/ahardin-worker-us-east-2a-q5dzc 4 machineconfiguration.openshift.io/currentConfig: worker-309c228e8b3a92e2235edd544c62fea8 machineconfiguration.openshift.io/desiredConfig: worker-309c228e8b3a92e2235edd544c62fea8 machineconfiguration.openshift.io/state: Done volumes.kubernetes.io/controller-managed-attach-detach: true CreationTimestamp: Wed, 13 Feb 2019 11:05:57 -0500 Taints: <none> 5 Unschedulable: false Conditions: 6 Type Status LastHeartbeatTime 
LastTransitionTime Reason Message ---- ------ ----------------- ------------------ ------ ------- OutOfDisk False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasSufficientDisk kubelet has sufficient disk space available MemoryPressure False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasSufficientMemory kubelet has sufficient memory available DiskPressure False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasNoDiskPressure kubelet has no disk pressure PIDPressure False Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:05:57 -0500 KubeletHasSufficientPID kubelet has sufficient PID available Ready True Wed, 13 Feb 2019 15:09:42 -0500 Wed, 13 Feb 2019 11:07:09 -0500 KubeletReady kubelet is posting ready status Addresses: 7 InternalIP: 10.0.140.16 InternalDNS: ip-10-0-140-16.us-east-2.compute.internal Hostname: ip-10-0-140-16.us-east-2.compute.internal Capacity: 8 attachable-volumes-aws-ebs: 39 cpu: 2 hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 8172516Ki pods: 250 Allocatable: attachable-volumes-aws-ebs: 39 cpu: 1500m hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 7558116Ki pods: 250 System Info: 9 Machine ID: 63787c9534c24fde9a0cde35c13f1f66 System UUID: EC22BF97-A006-4A58-6AF8-0A38DEEA122A Boot ID: f24ad37d-2594-46b4-8830-7f7555918325 Kernel Version: 3.10.0-957.5.1.el7.x86_64 OS Image: Red Hat Enterprise Linux CoreOS 410.8.20190520.0 (Ootpa) Operating System: linux Architecture: amd64 Container Runtime Version: cri-o://1.30.3-0.6.dev.rhaos4.3.git9ad059b.el8-rc2 Kubelet Version: v1.30.3 Kube-Proxy Version: v1.30.3 PodCIDR: 10.128.4.0/24 ProviderID: aws:///us-east-2a/i-04e87b31dc6b3e171 Non-terminated Pods: (12 in total) 10 Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits --------- ---- ------------ ---------- --------------- ------------- openshift-cluster-node-tuning-operator tuned-hdl5q 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-dns dns-default-l69zr 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-image-registry node-ca-9hmcg 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-ingress router-default-76455c45c-c5ptv 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-machine-config-operator machine-config-daemon-cvqw9 20m (1%) 0 (0%) 50Mi (0%) 0 (0%) openshift-marketplace community-operators-f67fh 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-monitoring alertmanager-main-0 50m (3%) 50m (3%) 210Mi (2%) 10Mi (0%) openshift-monitoring node-exporter-l7q8d 10m (0%) 20m (1%) 20Mi (0%) 40Mi (0%) openshift-monitoring prometheus-adapter-75d769c874-hvb85 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-multus multus-kw8w5 0 (0%) 0 (0%) 0 (0%) 0 (0%) openshift-ovn-kubernetes ovnkube-node-t4dsn 80m (0%) 0 (0%) 1630Mi (0%) 0 (0%) Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) 
Resource Requests Limits -------- -------- ------ cpu 380m (25%) 270m (18%) memory 880Mi (11%) 250Mi (3%) attachable-volumes-aws-ebs 0 0 Events: 11 Type Reason Age From Message ---- ------ ---- ---- ------- Normal NodeHasSufficientPID 6d (x5 over 6d) kubelet, m01.example.com Node m01.example.com status is now: NodeHasSufficientPID Normal NodeAllocatableEnforced 6d kubelet, m01.example.com Updated Node Allocatable limit across pods Normal NodeHasSufficientMemory 6d (x6 over 6d) kubelet, m01.example.com Node m01.example.com status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 6d (x6 over 6d) kubelet, m01.example.com Node m01.example.com status is now: NodeHasNoDiskPressure Normal NodeHasSufficientDisk 6d (x6 over 6d) kubelet, m01.example.com Node m01.example.com status is now: NodeHasSufficientDisk Normal NodeHasSufficientPID 6d kubelet, m01.example.com Node m01.example.com status is now: NodeHasSufficientPID Normal Starting 6d kubelet, m01.example.com Starting kubelet. #", "oc get pod --selector=<nodeSelector>", "oc get pod --selector=kubernetes.io/os", "oc get pod -l=<nodeSelector>", "oc get pod -l kubernetes.io/os=linux", "oc get pod --all-namespaces --field-selector=spec.nodeName=<nodename>", "oc adm top nodes", "NAME CPU(cores) CPU% MEMORY(bytes) MEMORY% ip-10-0-12-143.ec2.compute.internal 1503m 100% 4533Mi 61% ip-10-0-132-16.ec2.compute.internal 76m 5% 1391Mi 18% ip-10-0-140-137.ec2.compute.internal 398m 26% 2473Mi 33% ip-10-0-142-44.ec2.compute.internal 656m 43% 6119Mi 82% ip-10-0-146-165.ec2.compute.internal 188m 12% 3367Mi 45% ip-10-0-19-62.ec2.compute.internal 896m 59% 5754Mi 77% ip-10-0-44-193.ec2.compute.internal 632m 42% 5349Mi 72%", "oc adm top node --selector=''", "oc adm cordon <node1>", "node/<node1> cordoned", "oc get node <node1>", "NAME STATUS ROLES AGE VERSION <node1> Ready,SchedulingDisabled worker 1d v1.30.3", "oc adm drain <node1> <node2> [--pod-selector=<pod_selector>]", "oc adm drain <node1> <node2> --force=true", "oc adm drain <node1> <node2> --grace-period=-1", "oc adm drain <node1> <node2> --ignore-daemonsets=true", "oc adm drain <node1> <node2> --timeout=5s", "oc adm drain <node1> <node2> --delete-emptydir-data=true", "oc adm drain <node1> <node2> --dry-run=true", "oc adm uncordon <node1>", "oc label node <node> <key_1>=<value_1> ... 
<key_n>=<value_n>", "oc label nodes webconsole-7f7f6 unhealthy=true", "kind: Node apiVersion: v1 metadata: name: webconsole-7f7f6 labels: unhealthy: 'true' #", "oc label pods --all <key_1>=<value_1>", "oc label pods --all status=unhealthy", "oc adm cordon <node>", "oc adm cordon node1.example.com", "node/node1.example.com cordoned NAME LABELS STATUS node1.example.com kubernetes.io/hostname=node1.example.com Ready,SchedulingDisabled", "oc adm uncordon <node1>", "oc delete pods --field-selector status.phase=Failed -n <POD_NAMESPACE>", "oc get machinesets -n openshift-machine-api", "oc scale --replicas=2 machineset <machine-set-name> -n openshift-machine-api", "oc edit machineset <machine-set-name> -n openshift-machine-api", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: # name: <machine-set-name> namespace: openshift-machine-api # spec: replicas: 2 1 #", "oc adm cordon <node_name>", "oc adm drain <node_name> --force=true", "oc delete node <node_name>", "oc get machineconfigpool --show-labels", "NAME CONFIG UPDATED UPDATING DEGRADED LABELS master rendered-master-e05b81f5ca4db1d249a1bf32f9ec24fd True False False operator.machineconfiguration.openshift.io/required-for-upgrade= worker rendered-worker-f50e78e1bc06d8e82327763145bfcf62 True False False", "oc label machineconfigpool worker custom-kubelet=enabled", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: custom-config 1 spec: machineConfigPoolSelector: matchLabels: custom-kubelet: enabled 2 kubeletConfig: 3 podsPerCore: 10 maxPods: 250 systemReserved: cpu: 2000m memory: 1Gi #", "oc create -f <file-name>", "oc create -f master-kube-config.yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: name: ci-ln-hmy310k-72292-5f87z-worker-a namespace: openshift-machine-api spec: template: spec: providerSpec: value: disks: - autoDelete: true boot: true image: projects/rhcos-cloud/global/images/rhcos-412-85-202203181601-0-gcp-x86-64 1", "oc edit MachineConfiguration cluster", "apiVersion: operator.openshift.io/v1 kind: MachineConfiguration metadata: name: cluster namespace: openshift-machine-config-operator spec: managedBootImages: 1 machineManagers: - resource: machinesets apiGroup: machine.openshift.io selection: mode: All 2", "apiVersion: operator.openshift.io/v1 kind: MachineConfiguration metadata: name: cluster namespace: openshift-machine-config-operator spec: managedBootImages: 1 machineManagers: - resource: machinesets apiGroup: machine.openshift.io selection: mode: Partial partial: machineResourceSelector: matchLabels: update-boot-image: \"true\" 2", "oc label machineset.machine ci-ln-hmy310k-72292-5f87z-worker-a update-boot-image=true -n openshift-machine-api", "oc get machinesets <machineset_name> -n openshift-machine-api -o yaml", "apiVersion: machine.openshift.io/v1beta1 kind: MachineSet metadata: labels: machine.openshift.io/cluster-api-cluster: ci-ln-77hmkpt-72292-d4pxp update-boot-image: \"true\" name: ci-ln-77hmkpt-72292-d4pxp-worker-a namespace: openshift-machine-api spec: template: spec: providerSpec: value: disks: - autoDelete: true boot: true image: projects/rhcos-cloud/global/images/rhcos-416-92-202402201450-0-gcp-x86-64 1", "oc edit MachineConfiguration cluster", "apiVersion: operator.openshift.io/v1 kind: MachineConfiguration metadata: name: cluster namespace: openshift-machine-config-operator spec: managedBootImages: 1 machineManagers: - resource: machinesets apiGroup: machine.openshift.io selection: mode: All", "oc edit 
schedulers.config.openshift.io cluster", "apiVersion: config.openshift.io/v1 kind: Scheduler metadata: creationTimestamp: \"2019-09-10T03:04:05Z\" generation: 1 name: cluster resourceVersion: \"433\" selfLink: /apis/config.openshift.io/v1/schedulers/cluster uid: a636d30a-d377-11e9-88d4-0a60097bee62 spec: mastersSchedulable: false 1 status: {} #", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 99-worker-setsebool spec: config: ignition: version: 3.2.0 systemd: units: - contents: | [Unit] Description=Set SELinux booleans Before=kubelet.service [Service] Type=oneshot ExecStart=/sbin/setsebool container_manage_cgroup=on RemainAfterExit=true [Install] WantedBy=multi-user.target graphical.target enabled: true name: setsebool.service #", "oc create -f 99-worker-setsebool.yaml", "oc get MachineConfig", "NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 00-worker 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-ssh 3.2.0 40m 99-worker-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-ssh 3.2.0 40m rendered-master-23e785de7587df95a4b517e0647e5ab7 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-5d596d9293ca3ea80c896a1191735bb1 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker 1 name: 05-worker-kernelarg-selinuxpermissive 2 spec: kernelArguments: - enforcing=0 3", "oc create -f 05-worker-kernelarg-selinuxpermissive.yaml", "oc get MachineConfig", "NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 00-worker 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 05-worker-kernelarg-selinuxpermissive 3.2.0 105s 99-master-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-ssh 3.2.0 40m 99-worker-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-ssh 3.2.0 40m rendered-master-23e785de7587df95a4b517e0647e5ab7 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-5d596d9293ca3ea80c896a1191735bb1 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m", "oc get nodes", "NAME STATUS ROLES AGE VERSION ip-10-0-136-161.ec2.internal Ready worker 28m v1.30.3 ip-10-0-136-243.ec2.internal Ready master 34m v1.30.3 ip-10-0-141-105.ec2.internal Ready,SchedulingDisabled worker 28m v1.30.3 ip-10-0-142-249.ec2.internal Ready master 34m v1.30.3 ip-10-0-153-11.ec2.internal Ready worker 28m v1.30.3 ip-10-0-153-150.ec2.internal Ready master 34m v1.30.3", "oc debug node/ip-10-0-141-105.ec2.internal", "Starting pod/ip-10-0-141-105ec2internal-debug To use host binaries, run `chroot /host` sh-4.2# cat 
/host/proc/cmdline BOOT_IMAGE=/ostree/rhcos-... console=tty0 console=ttyS0,115200n8 rootflags=defaults,prjquota rw root=UUID=fd0... ostree=/ostree/boot.0/rhcos/16 coreos.oem.id=qemu coreos.oem.id=ec2 ignition.platform.id=ec2 enforcing=0 sh-4.2# exit", "oc label machineconfigpool worker kubelet-swap=enabled", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: swap-config spec: machineConfigPoolSelector: matchLabels: kubelet-swap: enabled kubeletConfig: failSwapOn: false 1 memorySwap: swapBehavior: LimitedSwap 2 #", "#!/usr/bin/env bash set -Eeuo pipefail if [ USD# -lt 1 ]; then echo \"Usage: 'USD0 node_name'\" exit 64 fi Check for admin OpenStack credentials openstack server list --all-projects >/dev/null || { >&2 echo \"The script needs OpenStack admin credentials. Exiting\"; exit 77; } Check for admin OpenShift credentials adm top node >/dev/null || { >&2 echo \"The script needs OpenShift admin credentials. Exiting\"; exit 77; } set -x declare -r node_name=\"USD1\" declare server_id server_id=\"USD(openstack server list --all-projects -f value -c ID -c Name | grep \"USDnode_name\" | cut -d' ' -f1)\" readonly server_id Drain the node adm cordon \"USDnode_name\" adm drain \"USDnode_name\" --delete-emptydir-data --ignore-daemonsets --force Power off the server debug \"node/USD{node_name}\" -- chroot /host shutdown -h 1 Verify the server is shut off until openstack server show \"USDserver_id\" -f value -c status | grep -q 'SHUTOFF'; do sleep 5; done Migrate the node openstack server migrate --wait \"USDserver_id\" Resize the VM openstack server resize confirm \"USDserver_id\" Wait for the resize confirm to finish until openstack server show \"USDserver_id\" -f value -c status | grep -q 'SHUTOFF'; do sleep 5; done Restart the VM openstack server start \"USDserver_id\" Wait for the node to show up as Ready: until oc get node \"USDnode_name\" | grep -q \"^USD{node_name}[[:space:]]\\+Ready\"; do sleep 5; done Uncordon the node adm uncordon \"USDnode_name\" Wait for cluster operators to stabilize until oc get co -o go-template='statuses: {{ range .items }}{{ range .status.conditions }}{{ if eq .type \"Degraded\" }}{{ if ne .status \"False\" }}DEGRADED{{ end }}{{ else if eq .type \"Progressing\"}}{{ if ne .status \"False\" }}PROGRESSING{{ end }}{{ else if eq .type \"Available\"}}{{ if ne .status \"True\" }}NOTAVAILABLE{{ end }}{{ end }}{{ end }}{{ end }}' | grep -qv '\\(DEGRADED\\|PROGRESSING\\|NOTAVAILABLE\\)'; do sleep 5; done", "hosts: - hostname: extra-worker-1 rootDeviceHints: deviceName: /dev/sda interfaces: - macAddress: 00:00:00:00:00:00 name: eth0 networkConfig: interfaces: - name: eth0 type: ethernet state: up mac-address: 00:00:00:00:00:00 ipv4: enabled: true address: - ip: 192.168.122.2 prefix-length: 23 dhcp: false - hostname: extra-worker-2 rootDeviceHints: deviceName: /dev/sda interfaces: - macAddress: 00:00:00:00:00:02 name: eth0 networkConfig: interfaces: - name: eth0 type: ethernet state: up mac-address: 00:00:00:00:00:02 ipv4: enabled: true address: - ip: 192.168.122.3 prefix-length: 23 dhcp: false", "oc adm node-image create nodes-config.yaml", "oc adm node-image monitor --ip-addresses <ip_addresses>", "oc adm certificate approve <csr_name>", "oc adm node-image create --mac-address=<mac_address>", "oc adm node-image monitor --ip-addresses <ip_address>", "oc adm certificate approve <csr_name>", "hosts:", "hosts: hostname:", "hosts: interfaces:", "hosts: interfaces: name:", "hosts: interfaces: macAddress:", "hosts: rootDeviceHints:", "hosts: 
rootDeviceHints: deviceName:", "hosts: networkConfig:", "cpuArchitecture", "sshKey", "kubeletConfig: podsPerCore: 10", "kubeletConfig: maxPods: 250", "oc edit machineconfigpool <name>", "oc edit machineconfigpool worker", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: \"2022-11-16T15:34:25Z\" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1 name: worker #", "oc label machineconfigpool worker custom-kubelet=small-pods", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-max-pods 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 2 kubeletConfig: podsPerCore: 10 3 maxPods: 250 4 #", "oc create -f <file_name>.yaml", "oc get machineconfigpools", "NAME CONFIG UPDATED UPDATING DEGRADED master master-9cc2c72f205e103bb534 False False False worker worker-8cecd1236b33ee3f8a5e False True False", "oc get machineconfigpools", "NAME CONFIG UPDATED UPDATING DEGRADED master master-9cc2c72f205e103bb534 False True False worker worker-8cecd1236b33ee3f8a5e True False False", "get tuned.tuned.openshift.io/default -o yaml -n openshift-cluster-node-tuning-operator", "profile: - name: tuned_profile_1 data: | # TuneD profile specification [main] summary=Description of tuned_profile_1 profile [sysctl] net.ipv4.ip_forward=1 # ... other sysctl's or other TuneD daemon plugins supported by the containerized TuneD - name: tuned_profile_n data: | # TuneD profile specification [main] summary=Description of tuned_profile_n profile # tuned_profile_n profile settings", "recommend: <recommend-item-1> <recommend-item-n>", "- machineConfigLabels: 1 <mcLabels> 2 match: 3 <match> 4 priority: <priority> 5 profile: <tuned_profile_name> 6 operand: 7 debug: <bool> 8 tunedConfig: reapply_sysctl: <bool> 9", "- label: <label_name> 1 value: <label_value> 2 type: <label_type> 3 <match> 4", "- match: - label: tuned.openshift.io/elasticsearch match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra type: pod priority: 10 profile: openshift-control-plane-es - match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra priority: 20 profile: openshift-control-plane - priority: 30 profile: openshift-node", "apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: openshift-node-custom namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Custom OpenShift node profile with an additional kernel parameter include=openshift-node [bootloader] cmdline_openshift_node_custom=+skew_tick=1 name: openshift-node-custom recommend: - machineConfigLabels: machineconfiguration.openshift.io/role: \"worker-custom\" priority: 20 profile: openshift-node-custom", "apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: provider-gce namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=GCE Cloud provider-specific profile # Your tuning for GCE Cloud provider goes here. 
name: provider-gce", "apiVersion: tuned.openshift.io/v1 kind: Tuned metadata: name: default namespace: openshift-cluster-node-tuning-operator spec: profile: - data: | [main] summary=Optimize systems running OpenShift (provider specific parent profile) include=-provider-USD{f:exec:cat:/var/lib/ocp-tuned/provider},openshift name: openshift recommend: - profile: openshift-control-plane priority: 30 match: - label: node-role.kubernetes.io/master - label: node-role.kubernetes.io/infra - profile: openshift-node priority: 40", "oc exec USDtuned_pod -n openshift-cluster-node-tuning-operator -- find /usr/lib/tuned/openshift{,-control-plane,-node} -name tuned.conf -exec grep -H ^ {} \\;", "apiVersion: v1 kind: Pod metadata: name: with-pod-antiaffinity spec: affinity: podAntiAffinity: 1 preferredDuringSchedulingIgnoredDuringExecution: 2 - weight: 100 3 podAffinityTerm: labelSelector: matchExpressions: - key: registry 4 operator: In 5 values: - default topologyKey: kubernetes.io/hostname #", "oc adm cordon <node1>", "oc adm drain <node1> --ignore-daemonsets --delete-emptydir-data --force", "error when evicting pods/\"rails-postgresql-example-1-72v2w\" -n \"rails\" (will retry after 5s): Cannot evict pod as it would violate the pod's disruption budget.", "oc adm drain <node1> --ignore-daemonsets --delete-emptydir-data --force --disable-eviction", "oc debug node/<node1>", "chroot /host", "systemctl reboot", "ssh core@<master-node>.<cluster_name>.<base_domain>", "sudo systemctl reboot", "oc adm uncordon <node1>", "ssh core@<target_node>", "sudo oc adm uncordon <node> --kubeconfig /etc/kubernetes/static-pod-resources/kube-apiserver-certs/secrets/node-kubeconfigs/localhost.kubeconfig", "oc get node <node1>", "NAME STATUS ROLES AGE VERSION <node1> Ready worker 6d22h v1.18.3+b0068a8", "oc edit machineconfigpool <name>", "oc edit machineconfigpool worker", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: \"2022-11-16T15:34:25Z\" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1 name: worker #", "oc label machineconfigpool worker custom-kubelet=small-pods", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: worker-kubeconfig 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 2 kubeletConfig: evictionSoft: 3 memory.available: \"500Mi\" 4 nodefs.available: \"10%\" nodefs.inodesFree: \"5%\" imagefs.available: \"15%\" imagefs.inodesFree: \"10%\" evictionSoftGracePeriod: 5 memory.available: \"1m30s\" nodefs.available: \"1m30s\" nodefs.inodesFree: \"1m30s\" imagefs.available: \"1m30s\" imagefs.inodesFree: \"1m30s\" evictionHard: 6 memory.available: \"200Mi\" nodefs.available: \"5%\" nodefs.inodesFree: \"4%\" imagefs.available: \"10%\" imagefs.inodesFree: \"5%\" evictionPressureTransitionPeriod: 3m 7 imageMinimumGCAge: 5m 8 imageGCHighThresholdPercent: 80 9 imageGCLowThresholdPercent: 75 10 #", "oc create -f <file_name>.yaml", "oc create -f gc-container.yaml", "kubeletconfig.machineconfiguration.openshift.io/gc-container created", "oc get machineconfigpool", "NAME CONFIG UPDATED UPDATING master rendered-master-546383f80705bd5aeaba93 True False worker rendered-worker-b4c51bb33ccaae6fc4a6a5 False True", "[Allocatable] = [Node Capacity] - [system-reserved] - [Hard-Eviction-Thresholds]", "oc edit machineconfigpool <name>", "oc edit machineconfigpool worker", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: 
creationTimestamp: \"2022-11-16T15:34:25Z\" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1 name: worker #", "oc label machineconfigpool worker custom-kubelet=small-pods", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: dynamic-node 1 spec: autoSizingReserved: true 2 machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 3 #", "oc create -f <file_name>.yaml", "oc debug node/<node_name>", "chroot /host", "SYSTEM_RESERVED_MEMORY=3Gi SYSTEM_RESERVED_CPU=0.08", "oc edit machineconfigpool <name>", "oc edit machineconfigpool worker", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: \"2022-11-16T15:34:25Z\" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1 name: worker #", "oc label machineconfigpool worker custom-kubelet=small-pods", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-allocatable 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 2 kubeletConfig: systemReserved: 3 cpu: 1000m memory: 1Gi #", "oc create -f <file_name>.yaml", "oc describe machineconfigpool <name>", "oc describe machineconfigpool worker", "Name: worker Namespace: Labels: machineconfiguration.openshift.io/mco-built-in= pools.operator.machineconfiguration.openshift.io/worker= 1 Annotations: <none> API Version: machineconfiguration.openshift.io/v1 Kind: MachineConfigPool #", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-reserved-cpus 1 spec: kubeletConfig: reservedSystemCPUs: \"0,1,2,3\" 2 machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 3 #", "oc create -f <file_name>.yaml", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig spec: tlsSecurityProfile: old: {} type: Old machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\"", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: set-kubelet-tls-security-profile spec: tlsSecurityProfile: type: Custom 1 custom: 2 ciphers: 3 - ECDHE-ECDSA-CHACHA20-POLY1305 - ECDHE-RSA-CHACHA20-POLY1305 - ECDHE-RSA-AES128-GCM-SHA256 - ECDHE-ECDSA-AES128-GCM-SHA256 minTLSVersion: VersionTLS11 machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 4 #", "oc create -f <filename>", "oc debug node/<node_name>", "sh-4.4# chroot /host", "sh-4.4# cat /etc/kubernetes/kubelet.conf", "\"kind\": \"KubeletConfiguration\", \"apiVersion\": \"kubelet.config.k8s.io/v1beta1\", # \"tlsCipherSuites\": [ \"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256\", \"TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256\", \"TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384\", \"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\", \"TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305_SHA256\", \"TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256\" ], \"tlsMinVersion\": \"VersionTLS12\", #", "oc get infrastructure cluster -o jsonpath='{.status.platform}'", "oc label node <node-name> node-role.kubernetes.io/app=\"\"", "oc label node <node-name> node-role.kubernetes.io/infra=\"\"", "oc get nodes", "oc edit scheduler cluster", "apiVersion: config.openshift.io/v1 kind: Scheduler metadata: name: cluster spec: defaultNodeSelector: node-role.kubernetes.io/infra=\"\" 1", "USD(nproc) X 1/2 MiB", "for i in {1..100}; do sleep 1; if dig myservice; then 
exit 0; fi; done; exit 1", "curl -X POST http://USDMANAGEMENT_SERVICE_HOST:USDMANAGEMENT_SERVICE_PORT/register -d 'instance=USD()&ip=USD()'", "apiVersion: v1 kind: Pod metadata: name: myapp-pod labels: app: myapp spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: myapp-container image: registry.access.redhat.com/ubi9/ubi:latest command: ['sh', '-c', 'echo The app is running! && sleep 3600'] securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] initContainers: - name: init-myservice image: registry.access.redhat.com/ubi9/ubi:latest command: ['sh', '-c', 'until getent hosts myservice; do echo waiting for myservice; sleep 2; done;'] securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] - name: init-mydb image: registry.access.redhat.com/ubi9/ubi:latest command: ['sh', '-c', 'until getent hosts mydb; do echo waiting for mydb; sleep 2; done;'] securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]", "oc create -f myapp.yaml", "oc get pods", "NAME READY STATUS RESTARTS AGE myapp-pod 0/1 Init:0/2 0 5s", "kind: Service apiVersion: v1 metadata: name: myservice spec: ports: - protocol: TCP port: 80 targetPort: 9376", "oc create -f myservice.yaml", "oc get pods", "NAME READY STATUS RESTARTS AGE myapp-pod 0/1 Init:1/2 0 5s", "kind: Service apiVersion: v1 metadata: name: mydb spec: ports: - protocol: TCP port: 80 targetPort: 9377", "oc create -f mydb.yaml", "oc get pods", "NAME READY STATUS RESTARTS AGE myapp-pod 1/1 Running 0 2m", "oc set volume <object_selection> <operation> <mandatory_parameters> <options>", "oc set volume <object_type>/<name> [options]", "oc set volume pod/p1", "oc set volume dc --all --name=v1", "oc set volume <object_type>/<name> --add [options]", "oc set volume dc/registry --add", "kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: registry namespace: registry spec: replicas: 3 selector: app: httpd template: metadata: labels: app: httpd spec: volumes: 1 - name: volume-pppsw emptyDir: {} containers: - name: httpd image: >- image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest ports: - containerPort: 8080 protocol: TCP", "oc set volume rc/r1 --add --name=v1 --type=secret --secret-name='secret1' --mount-path=/data", "kind: ReplicationController apiVersion: v1 metadata: name: example-1 namespace: example spec: replicas: 0 selector: app: httpd deployment: example-1 deploymentconfig: example template: metadata: creationTimestamp: null labels: app: httpd deployment: example-1 deploymentconfig: example spec: volumes: 1 - name: v1 secret: secretName: secret1 defaultMode: 420 containers: - name: httpd image: >- image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest volumeMounts: 2 - name: v1 mountPath: /data", "oc set volume -f dc.json --add --name=v1 --type=persistentVolumeClaim --claim-name=pvc1 --mount-path=/data --containers=c1", "kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example namespace: example spec: replicas: 3 selector: app: httpd template: metadata: labels: app: httpd spec: volumes: - name: volume-pppsw emptyDir: {} - name: v1 1 persistentVolumeClaim: claimName: pvc1 containers: - name: httpd image: >- image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest ports: - containerPort: 8080 protocol: TCP volumeMounts: 2 - name: v1 mountPath: /data", "oc set volume rc --all --add --name=v1 --source='{\"gitRepo\": { \"repository\": \"https://github.com/namespace1/project1\", 
\"revision\": \"5125c45f9f563\" }}'", "oc set volume <object_type>/<name> --add --overwrite [options]", "oc set volume rc/r1 --add --overwrite --name=v1 --type=persistentVolumeClaim --claim-name=pvc1", "kind: ReplicationController apiVersion: v1 metadata: name: example-1 namespace: example spec: replicas: 0 selector: app: httpd deployment: example-1 deploymentconfig: example template: metadata: labels: app: httpd deployment: example-1 deploymentconfig: example spec: volumes: - name: v1 1 persistentVolumeClaim: claimName: pvc1 containers: - name: httpd image: >- image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest ports: - containerPort: 8080 protocol: TCP volumeMounts: - name: v1 mountPath: /data", "oc set volume dc/d1 --add --overwrite --name=v1 --mount-path=/opt", "kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example namespace: example spec: replicas: 3 selector: app: httpd template: metadata: labels: app: httpd spec: volumes: - name: volume-pppsw emptyDir: {} - name: v2 persistentVolumeClaim: claimName: pvc1 - name: v1 persistentVolumeClaim: claimName: pvc1 containers: - name: httpd image: >- image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest ports: - containerPort: 8080 protocol: TCP volumeMounts: 1 - name: v1 mountPath: /opt", "oc set volume <object_type>/<name> --remove [options]", "oc set volume dc/d1 --remove --name=v1", "oc set volume dc/d1 --remove --name=v1 --containers=c1", "oc set volume rc/r1 --remove --confirm", "oc rsh <pod>", "sh-4.2USD ls /path/to/volume/subpath/mount example_file1 example_file2 example_file3", "apiVersion: v1 kind: Pod metadata: name: my-site spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: mysql image: mysql volumeMounts: - mountPath: /var/lib/mysql name: site-data subPath: mysql 1 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] - name: php image: php volumeMounts: - mountPath: /var/www/html name: site-data subPath: html 2 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: site-data persistentVolumeClaim: claimName: my-site-data", "apiVersion: v1 kind: Pod metadata: name: volume-test spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: container-test image: busybox volumeMounts: 1 - name: all-in-one mountPath: \"/projected-volume\" 2 readOnly: true 3 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: 4 - name: all-in-one 5 projected: defaultMode: 0400 6 sources: - secret: name: mysecret 7 items: - key: username path: my-group/my-username 8 - downwardAPI: 9 items: - path: \"labels\" fieldRef: fieldPath: metadata.labels - path: \"cpu_limit\" resourceFieldRef: containerName: container-test resource: limits.cpu - configMap: 10 name: myconfigmap items: - key: config path: my-group/my-config mode: 0777 11", "apiVersion: v1 kind: Pod metadata: name: volume-test spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: container-test image: busybox volumeMounts: - name: all-in-one mountPath: \"/projected-volume\" readOnly: true securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: all-in-one projected: defaultMode: 0755 sources: - secret: name: mysecret items: - key: username path: my-group/my-username - secret: name: mysecret2 items: - key: password path: my-group/my-password mode: 511", "apiVersion: v1 kind: Pod metadata: 
name: volume-test spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: container-test image: busybox volumeMounts: - name: all-in-one mountPath: \"/projected-volume\" readOnly: true securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: all-in-one projected: sources: - secret: name: mysecret items: - key: username path: my-group/data - configMap: name: myconfigmap items: - key: config path: my-group/data", "echo -n \"admin\" | base64", "YWRtaW4=", "echo -n \"1f2d1e2e67df\" | base64", "MWYyZDFlMmU2N2Rm", "apiVersion: v1 kind: Secret metadata: name: mysecret type: Opaque data: pass: MWYyZDFlMmU2N2Rm user: YWRtaW4=", "oc create -f <secrets-filename>", "oc create -f secret.yaml", "secret \"mysecret\" created", "oc get secret <secret-name>", "oc get secret mysecret", "NAME TYPE DATA AGE mysecret Opaque 2 17h", "oc get secret <secret-name> -o yaml", "oc get secret mysecret -o yaml", "apiVersion: v1 data: pass: MWYyZDFlMmU2N2Rm user: YWRtaW4= kind: Secret metadata: creationTimestamp: 2017-05-30T20:21:38Z name: mysecret namespace: default resourceVersion: \"2107\" selfLink: /api/v1/namespaces/default/secrets/mysecret uid: 959e0424-4575-11e7-9f97-fa163e4bd54c type: Opaque", "kind: Pod metadata: name: test-projected-volume spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-projected-volume image: busybox args: - sleep - \"86400\" volumeMounts: - name: all-in-one mountPath: \"/projected-volume\" readOnly: true securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: all-in-one projected: sources: - secret: name: mysecret 1", "oc create -f <your_yaml_file>.yaml", "oc create -f secret-pod.yaml", "pod \"test-projected-volume\" created", "oc get pod <name>", "oc get pod test-projected-volume", "NAME READY STATUS RESTARTS AGE test-projected-volume 1/1 Running 0 14s", "oc exec -it <pod> <command>", "oc exec -it test-projected-volume -- /bin/sh", "/ # ls", "bin home root tmp dev proc run usr etc projected-volume sys var", "apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: env-test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"env\" ] env: - name: MY_POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: MY_POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never", "oc create -f pod.yaml", "oc logs -p dapi-env-test-pod", "kind: Pod apiVersion: v1 metadata: labels: zone: us-east-coast cluster: downward-api-test-cluster1 rack: rack-123 name: dapi-volume-test-pod annotations: annotation1: \"345\" annotation2: \"456\" spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: volume-test-container image: gcr.io/google_containers/busybox command: [\"sh\", \"-c\", \"cat /tmp/etc/pod_labels /tmp/etc/pod_annotations\"] volumeMounts: - name: podinfo mountPath: /tmp/etc readOnly: false securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] volumes: - name: podinfo downwardAPI: defaultMode: 420 items: - fieldRef: fieldPath: metadata.name path: pod_name - fieldRef: fieldPath: metadata.namespace path: pod_namespace - fieldRef: fieldPath: metadata.labels path: pod_labels - fieldRef: fieldPath: metadata.annotations path: pod_annotations 
restartPolicy: Never", "oc create -f volume-pod.yaml", "oc logs -p dapi-volume-test-pod", "cluster=downward-api-test-cluster1 rack=rack-123 zone=us-east-coast annotation1=345 annotation2=456 kubernetes.io/config.source=api", "apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: containers: - name: test-container image: gcr.io/google_containers/busybox:1.24 command: [ \"/bin/sh\", \"-c\", \"env\" ] resources: requests: memory: \"32Mi\" cpu: \"125m\" limits: memory: \"64Mi\" cpu: \"250m\" env: - name: MY_CPU_REQUEST valueFrom: resourceFieldRef: resource: requests.cpu - name: MY_CPU_LIMIT valueFrom: resourceFieldRef: resource: limits.cpu - name: MY_MEM_REQUEST valueFrom: resourceFieldRef: resource: requests.memory - name: MY_MEM_LIMIT valueFrom: resourceFieldRef: resource: limits.memory", "oc create -f pod.yaml", "apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: containers: - name: client-container image: gcr.io/google_containers/busybox:1.24 command: [\"sh\", \"-c\", \"while true; do echo; if [[ -e /etc/cpu_limit ]]; then cat /etc/cpu_limit; fi; if [[ -e /etc/cpu_request ]]; then cat /etc/cpu_request; fi; if [[ -e /etc/mem_limit ]]; then cat /etc/mem_limit; fi; if [[ -e /etc/mem_request ]]; then cat /etc/mem_request; fi; sleep 5; done\"] resources: requests: memory: \"32Mi\" cpu: \"125m\" limits: memory: \"64Mi\" cpu: \"250m\" volumeMounts: - name: podinfo mountPath: /etc readOnly: false volumes: - name: podinfo downwardAPI: items: - path: \"cpu_limit\" resourceFieldRef: containerName: client-container resource: limits.cpu - path: \"cpu_request\" resourceFieldRef: containerName: client-container resource: requests.cpu - path: \"mem_limit\" resourceFieldRef: containerName: client-container resource: limits.memory - path: \"mem_request\" resourceFieldRef: containerName: client-container resource: requests.memory", "oc create -f volume-pod.yaml", "apiVersion: v1 kind: Secret metadata: name: mysecret data: password: <password> username: <username> type: kubernetes.io/basic-auth", "oc create -f secret.yaml", "apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: env-test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"env\" ] env: - name: MY_SECRET_USERNAME valueFrom: secretKeyRef: name: mysecret key: username securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never", "oc create -f pod.yaml", "oc logs -p dapi-env-test-pod", "apiVersion: v1 kind: ConfigMap metadata: name: myconfigmap data: mykey: myvalue", "oc create -f configmap.yaml", "apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: env-test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"env\" ] env: - name: MY_CONFIGMAP_VALUE valueFrom: configMapKeyRef: name: myconfigmap key: mykey securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Always", "oc create -f pod.yaml", "oc logs -p dapi-env-test-pod", "apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: env-test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"env\" ] env: - name: MY_EXISTING_ENV value: my_value - name: MY_ENV_VAR_REF_ENV value: USD(MY_EXISTING_ENV) 
securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never", "oc create -f pod.yaml", "oc logs -p dapi-env-test-pod", "apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: env-test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"env\" ] env: - name: MY_NEW_ENV value: USDUSD(SOME_OTHER_ENV) securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] restartPolicy: Never", "oc create -f pod.yaml", "oc logs -p dapi-env-test-pod", "oc rsync <source> <destination> [-c <container>]", "<pod name>:<dir>", "oc rsync <local-dir> <pod-name>:/<remote-dir> -c <container-name>", "oc rsync /home/user/source devpod1234:/src -c user-container", "oc rsync devpod1234:/src /home/user/source", "oc rsync devpod1234:/src/status.txt /home/user/", "rsync --rsh='oc rsh' --exclude-from=<file_name> <local-dir> <pod-name>:/<remote-dir>", "export RSYNC_RSH='oc rsh'", "rsync --exclude-from=<file_name> <local-dir> <pod-name>:/<remote-dir>", "oc exec <pod> [-c <container>] -- <command> [<arg_1> ... <arg_n>]", "oc exec mypod date", "Thu Apr 9 02:21:53 UTC 2015", "/proxy/nodes/<node_name>/exec/<namespace>/<pod>/<container>?command=<command>", "/proxy/nodes/node123.openshift.com/exec/myns/mypod/mycontainer?command=date", "oc port-forward <pod> [<local_port>:]<remote_port> [...[<local_port_n>:]<remote_port_n>]", "oc port-forward <pod> [<local_port>:]<remote_port> [...[<local_port_n>:]<remote_port_n>]", "oc port-forward <pod> 5000 6000", "Forwarding from 127.0.0.1:5000 -> 5000 Forwarding from [::1]:5000 -> 5000 Forwarding from 127.0.0.1:6000 -> 6000 Forwarding from [::1]:6000 -> 6000", "oc port-forward <pod> 8888:5000", "Forwarding from 127.0.0.1:8888 -> 5000 Forwarding from [::1]:8888 -> 5000", "oc port-forward <pod> :5000", "Forwarding from 127.0.0.1:42390 -> 5000 Forwarding from [::1]:42390 -> 5000", "oc port-forward <pod> 0:5000", "/proxy/nodes/<node_name>/portForward/<namespace>/<pod>", "/proxy/nodes/node123.openshift.com/portForward/myns/mypod", "sudo sysctl -a", "oc get cm -n openshift-multus cni-sysctl-allowlist -oyaml", "apiVersion: v1 data: allowlist.conf: |- ^net.ipv4.conf.IFNAME.accept_redirectsUSD ^net.ipv4.conf.IFNAME.accept_source_routeUSD ^net.ipv4.conf.IFNAME.arp_acceptUSD ^net.ipv4.conf.IFNAME.arp_notifyUSD ^net.ipv4.conf.IFNAME.disable_policyUSD ^net.ipv4.conf.IFNAME.secure_redirectsUSD ^net.ipv4.conf.IFNAME.send_redirectsUSD ^net.ipv6.conf.IFNAME.accept_raUSD ^net.ipv6.conf.IFNAME.accept_redirectsUSD ^net.ipv6.conf.IFNAME.accept_source_routeUSD ^net.ipv6.conf.IFNAME.arp_acceptUSD ^net.ipv6.conf.IFNAME.arp_notifyUSD ^net.ipv6.neigh.IFNAME.base_reachable_time_msUSD ^net.ipv6.neigh.IFNAME.retrans_time_msUSD kind: ConfigMap metadata: annotations: kubernetes.io/description: | Sysctl allowlist for nodes. release.openshift.io/version: 4.17.0-0.nightly-2022-11-16-003434 creationTimestamp: \"2022-11-17T14:09:27Z\" name: cni-sysctl-allowlist namespace: openshift-multus resourceVersion: \"2422\" uid: 96d138a3-160e-4943-90ff-6108fa7c50c3", "oc edit cm -n openshift-multus cni-sysctl-allowlist -oyaml", "Please edit the object below. Lines beginning with a '#' will be ignored, and an empty file will abort the edit. If an error occurs while saving this file will be reopened with the relevant failures. 
# apiVersion: v1 data: allowlist.conf: |- ^net.ipv4.conf.IFNAME.accept_redirectsUSD ^net.ipv4.conf.IFNAME.accept_source_routeUSD ^net.ipv4.conf.IFNAME.arp_acceptUSD ^net.ipv4.conf.IFNAME.arp_notifyUSD ^net.ipv4.conf.IFNAME.disable_policyUSD ^net.ipv4.conf.IFNAME.secure_redirectsUSD ^net.ipv4.conf.IFNAME.send_redirectsUSD ^net.ipv4.conf.IFNAME.rp_filterUSD ^net.ipv6.conf.IFNAME.accept_raUSD ^net.ipv6.conf.IFNAME.accept_redirectsUSD ^net.ipv6.conf.IFNAME.accept_source_routeUSD ^net.ipv6.conf.IFNAME.arp_acceptUSD ^net.ipv6.conf.IFNAME.arp_notifyUSD ^net.ipv6.neigh.IFNAME.base_reachable_time_msUSD ^net.ipv6.neigh.IFNAME.retrans_time_msUSD ^net.ipv6.conf.IFNAME.rp_filterUSD", "apiVersion: \"k8s.cni.cncf.io/v1\" kind: NetworkAttachmentDefinition metadata: name: tuningnad namespace: default spec: config: '{ \"cniVersion\": \"0.4.0\", \"name\": \"tuningnad\", \"plugins\": [{ \"type\": \"bridge\" }, { \"type\": \"tuning\", \"sysctl\": { \"net.ipv4.conf.IFNAME.rp_filter\": \"1\" } } ] }'", "oc apply -f reverse-path-fwd-example.yaml", "networkattachmentdefinition.k8.cni.cncf.io/tuningnad created", "apiVersion: v1 kind: Pod metadata: name: example labels: app: httpd namespace: default annotations: k8s.v1.cni.cncf.io/networks: tuningnad 1 spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: httpd image: 'image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest' ports: - containerPort: 8080 securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL", "oc apply -f examplepod.yaml", "oc get pod", "NAME READY STATUS RESTARTS AGE example 1/1 Running 0 47s", "oc rsh example", "sh-4.4# sysctl net.ipv4.conf.net1.rp_filter", "net.ipv4.conf.net1.rp_filter = 1", "apiVersion: v1 kind: Pod metadata: name: sysctl-example namespace: default spec: containers: - name: podexample image: centos command: [\"bin/bash\", \"-c\", \"sleep INF\"] securityContext: runAsUser: 2000 1 runAsGroup: 3000 2 allowPrivilegeEscalation: false 3 capabilities: 4 drop: [\"ALL\"] securityContext: runAsNonRoot: true 5 seccompProfile: 6 type: RuntimeDefault sysctls: - name: kernel.shm_rmid_forced value: \"1\" - name: net.ipv4.ip_local_port_range value: \"32770 60666\" - name: net.ipv4.tcp_syncookies value: \"0\" - name: net.ipv4.ping_group_range value: \"0 200000000\"", "oc apply -f sysctl_pod.yaml", "oc get pod", "NAME READY STATUS RESTARTS AGE sysctl-example 1/1 Running 0 14s", "oc rsh sysctl-example", "sh-4.4# sysctl kernel.shm_rmid_forced", "kernel.shm_rmid_forced = 1", "apiVersion: v1 kind: Pod metadata: name: sysctl-example-unsafe spec: containers: - name: podexample image: centos command: [\"bin/bash\", \"-c\", \"sleep INF\"] securityContext: runAsUser: 2000 runAsGroup: 3000 allowPrivilegeEscalation: false capabilities: drop: [\"ALL\"] securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault sysctls: - name: kernel.shm_rmid_forced value: \"0\" - name: net.core.somaxconn value: \"1024\" - name: kernel.msgmax value: \"65536\"", "oc apply -f sysctl-example-unsafe.yaml", "oc get pod", "NAME READY STATUS RESTARTS AGE sysctl-example-unsafe 0/1 SysctlForbidden 0 14s", "oc get machineconfigpool", "NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-bfb92f0cd1684e54d8e234ab7423cc96 True False False 3 3 3 0 42m worker rendered-worker-21b6cb9a0f8919c88caf39db80ac1fce True False False 3 3 3 0 42m", "oc label machineconfigpool worker custom-kubelet=sysctl", "apiVersion: 
machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: custom-kubelet spec: machineConfigPoolSelector: matchLabels: custom-kubelet: sysctl 1 kubeletConfig: allowedUnsafeSysctls: 2 - \"kernel.msg*\" - \"net.core.somaxconn\"", "oc apply -f set-sysctl-worker.yaml", "oc get machineconfigpool worker -w", "NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE worker rendered-worker-f1704a00fc6f30d3a7de9a15fd68a800 False True False 3 2 2 0 71m worker rendered-worker-f1704a00fc6f30d3a7de9a15fd68a800 False True False 3 2 3 0 72m worker rendered-worker-0188658afe1f3a183ec8c4f14186f4d5 True False False 3 3 3 0 72m", "apiVersion: v1 kind: Pod metadata: name: sysctl-example-safe-unsafe spec: containers: - name: podexample image: centos command: [\"bin/bash\", \"-c\", \"sleep INF\"] securityContext: runAsUser: 2000 runAsGroup: 3000 allowPrivilegeEscalation: false capabilities: drop: [\"ALL\"] securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault sysctls: - name: kernel.shm_rmid_forced value: \"0\" - name: net.core.somaxconn value: \"1024\" - name: kernel.msgmax value: \"65536\"", "oc apply -f sysctl-example-safe-unsafe.yaml", "Warning: would violate PodSecurity \"restricted:latest\": forbidden sysctls (net.core.somaxconn, kernel.msgmax) pod/sysctl-example-safe-unsafe created", "oc get pod", "NAME READY STATUS RESTARTS AGE sysctl-example-safe-unsafe 1/1 Running 0 19s", "oc rsh sysctl-example-safe-unsafe", "sh-4.4# sysctl net.core.somaxconn", "net.core.somaxconn = 1024", "oc exec -ti no-priv -- /bin/bash", "cat >> Dockerfile <<EOF FROM registry.access.redhat.com/ubi9 EOF", "podman build .", "io.kubernetes.cri-o.Devices: \"/dev/fuse\"", "apiVersion: v1 kind: Pod metadata: name: podman-pod annotations: io.kubernetes.cri-o.Devices: \"/dev/fuse\"", "spec: containers: - name: podman-container image: quay.io/podman/stable args: - sleep - \"1000000\" securityContext: runAsUser: 1000", "oc get events [-n <project>] 1", "oc get events -n openshift-config", "LAST SEEN TYPE REASON OBJECT MESSAGE 97m Normal Scheduled pod/dapi-env-test-pod Successfully assigned openshift-config/dapi-env-test-pod to ip-10-0-171-202.ec2.internal 97m Normal Pulling pod/dapi-env-test-pod pulling image \"gcr.io/google_containers/busybox\" 97m Normal Pulled pod/dapi-env-test-pod Successfully pulled image \"gcr.io/google_containers/busybox\" 97m Normal Created pod/dapi-env-test-pod Created container 9m5s Warning FailedCreatePodSandBox pod/dapi-volume-test-pod Failed create pod sandbox: rpc error: code = Unknown desc = failed to create pod network sandbox k8s_dapi-volume-test-pod_openshift-config_6bc60c1f-452e-11e9-9140-0eec59c23068_0(748c7a40db3d08c07fb4f9eba774bd5effe5f0d5090a242432a73eee66ba9e22): Multus: Err adding pod to network \"ovn-kubernetes\": cannot set \"ovn-kubernetes\" ifname to \"eth0\": no netns: failed to Statfs \"/proc/33366/ns/net\": no such file or directory 8m31s Normal Scheduled pod/dapi-volume-test-pod Successfully assigned openshift-config/dapi-volume-test-pod to ip-10-0-171-202.ec2.internal #", "apiVersion: v1 kind: Pod metadata: name: small-pod labels: app: guestbook tier: frontend spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v4 imagePullPolicy: Always resources: limits: cpu: 150m memory: 100Mi requests: cpu: 150m memory: 100Mi securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]", "oc create -f 
<file_name>.yaml", "oc create -f pod-spec.yaml", "podman login registry.redhat.io", "podman pull registry.redhat.io/openshift4/ose-cluster-capacity", "podman run -v USDHOME/.kube:/kube:Z -v USD(pwd):/cc:Z ose-cluster-capacity /bin/cluster-capacity --kubeconfig /kube/config --<pod_spec>.yaml /cc/<pod_spec>.yaml --verbose", "small-pod pod requirements: - CPU: 150m - Memory: 100Mi The cluster can schedule 88 instance(s) of the pod small-pod. Termination reason: Unschedulable: 0/5 nodes are available: 2 Insufficient cpu, 3 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate. Pod distribution among nodes: small-pod - 192.168.124.214: 45 instance(s) - 192.168.124.120: 43 instance(s)", "kind: ClusterRole apiVersion: rbac.authorization.k8s.io/v1 metadata: name: cluster-capacity-role rules: - apiGroups: [\"\"] resources: [\"pods\", \"nodes\", \"persistentvolumeclaims\", \"persistentvolumes\", \"services\", \"replicationcontrollers\"] verbs: [\"get\", \"watch\", \"list\"] - apiGroups: [\"apps\"] resources: [\"replicasets\", \"statefulsets\"] verbs: [\"get\", \"watch\", \"list\"] - apiGroups: [\"policy\"] resources: [\"poddisruptionbudgets\"] verbs: [\"get\", \"watch\", \"list\"] - apiGroups: [\"storage.k8s.io\"] resources: [\"storageclasses\"] verbs: [\"get\", \"watch\", \"list\"]", "oc create -f <file_name>.yaml", "oc create sa cluster-capacity-sa", "oc create sa cluster-capacity-sa -n default", "oc adm policy add-cluster-role-to-user cluster-capacity-role system:serviceaccount:<namespace>:cluster-capacity-sa", "apiVersion: v1 kind: Pod metadata: name: small-pod labels: app: guestbook tier: frontend spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: php-redis image: gcr.io/google-samples/gb-frontend:v4 imagePullPolicy: Always resources: limits: cpu: 150m memory: 100Mi requests: cpu: 150m memory: 100Mi securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL]", "oc create -f <file_name>.yaml", "oc create -f pod.yaml", "oc create configmap cluster-capacity-configmap --from-file=pod.yaml=pod.yaml", "apiVersion: batch/v1 kind: Job metadata: name: cluster-capacity-job spec: parallelism: 1 completions: 1 template: metadata: name: cluster-capacity-pod spec: containers: - name: cluster-capacity image: openshift/origin-cluster-capacity imagePullPolicy: \"Always\" volumeMounts: - mountPath: /test-pod name: test-volume env: - name: CC_INCLUSTER 1 value: \"true\" command: - \"/bin/sh\" - \"-ec\" - | /bin/cluster-capacity --podspec=/test-pod/pod.yaml --verbose restartPolicy: \"Never\" serviceAccountName: cluster-capacity-sa volumes: - name: test-volume configMap: name: cluster-capacity-configmap", "oc create -f cluster-capacity-job.yaml", "oc logs jobs/cluster-capacity-job", "small-pod pod requirements: - CPU: 150m - Memory: 100Mi The cluster can schedule 52 instance(s) of the pod small-pod. Termination reason: Unschedulable: No nodes are available that match all of the following predicates:: Insufficient cpu (2). 
Pod distribution among nodes: small-pod - 192.168.124.214: 26 instance(s) - 192.168.124.120: 26 instance(s)", "apiVersion: \"v1\" kind: \"LimitRange\" metadata: name: \"resource-limits\" spec: limits: - type: \"Container\" max: cpu: \"2\" memory: \"1Gi\" min: cpu: \"100m\" memory: \"4Mi\" default: cpu: \"300m\" memory: \"200Mi\" defaultRequest: cpu: \"200m\" memory: \"100Mi\" maxLimitRequestRatio: cpu: \"10\"", "apiVersion: \"v1\" kind: \"LimitRange\" metadata: name: \"resource-limits\" 1 spec: limits: - type: \"Container\" max: cpu: \"2\" 2 memory: \"1Gi\" 3 min: cpu: \"100m\" 4 memory: \"4Mi\" 5 default: cpu: \"300m\" 6 memory: \"200Mi\" 7 defaultRequest: cpu: \"200m\" 8 memory: \"100Mi\" 9 maxLimitRequestRatio: cpu: \"10\" 10", "apiVersion: \"v1\" kind: \"LimitRange\" metadata: name: \"resource-limits\" 1 spec: limits: - type: \"Pod\" max: cpu: \"2\" 2 memory: \"1Gi\" 3 min: cpu: \"200m\" 4 memory: \"6Mi\" 5 maxLimitRequestRatio: cpu: \"10\" 6", "apiVersion: \"v1\" kind: \"LimitRange\" metadata: name: \"resource-limits\" 1 spec: limits: - type: openshift.io/Image max: storage: 1Gi 2", "apiVersion: \"v1\" kind: \"LimitRange\" metadata: name: \"resource-limits\" 1 spec: limits: - type: openshift.io/ImageStream max: openshift.io/image-tags: 20 2 openshift.io/images: 30 3", "apiVersion: \"v1\" kind: \"LimitRange\" metadata: name: \"resource-limits\" 1 spec: limits: - type: \"PersistentVolumeClaim\" min: storage: \"2Gi\" 2 max: storage: \"50Gi\" 3", "apiVersion: \"v1\" kind: \"LimitRange\" metadata: name: \"resource-limits\" 1 spec: limits: - type: \"Pod\" 2 max: cpu: \"2\" memory: \"1Gi\" min: cpu: \"200m\" memory: \"6Mi\" - type: \"Container\" 3 max: cpu: \"2\" memory: \"1Gi\" min: cpu: \"100m\" memory: \"4Mi\" default: 4 cpu: \"300m\" memory: \"200Mi\" defaultRequest: 5 cpu: \"200m\" memory: \"100Mi\" maxLimitRequestRatio: 6 cpu: \"10\" - type: openshift.io/Image 7 max: storage: 1Gi - type: openshift.io/ImageStream 8 max: openshift.io/image-tags: 20 openshift.io/images: 30 - type: \"PersistentVolumeClaim\" 9 min: storage: \"2Gi\" max: storage: \"50Gi\"", "oc create -f <limit_range_file> -n <project> 1", "oc get limits -n demoproject", "NAME CREATED AT resource-limits 2020-07-15T17:14:23Z", "oc describe limits resource-limits -n demoproject", "Name: resource-limits Namespace: demoproject Type Resource Min Max Default Request Default Limit Max Limit/Request Ratio ---- -------- --- --- --------------- ------------- ----------------------- Pod cpu 200m 2 - - - Pod memory 6Mi 1Gi - - - Container cpu 100m 2 200m 300m 10 Container memory 4Mi 1Gi 100Mi 200Mi - openshift.io/Image storage - 1Gi - - - openshift.io/ImageStream openshift.io/image - 12 - - - openshift.io/ImageStream openshift.io/image-tags - 10 - - - PersistentVolumeClaim storage - 50Gi - - -", "oc delete limits <limit_name>", "-XX:+UseParallelGC -XX:MinHeapFreeRatio=5 -XX:MaxHeapFreeRatio=10 -XX:GCTimeRatio=4 -XX:AdaptiveSizePolicyWeight=90.", "JAVA_TOOL_OPTIONS=\"-XX:+UnlockExperimentalVMOptions -XX:+UseCGroupMemoryLimitForHeap -Dsun.zip.disableMemoryMapping=true\"", "apiVersion: v1 kind: Pod metadata: name: test spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test image: fedora:latest command: - sleep - \"3600\" env: - name: MEMORY_REQUEST 1 valueFrom: resourceFieldRef: containerName: test resource: requests.memory - name: MEMORY_LIMIT 2 valueFrom: resourceFieldRef: containerName: test resource: limits.memory resources: requests: memory: 384Mi limits: memory: 512Mi securityContext: 
allowPrivilegeEscalation: false capabilities: drop: [ALL]", "oc create -f <file_name>.yaml", "oc rsh test", "env | grep MEMORY | sort", "MEMORY_LIMIT=536870912 MEMORY_REQUEST=402653184", "oc rsh test", "grep '^oom_kill ' /sys/fs/cgroup/memory/memory.oom_control", "oom_kill 0", "sed -e '' </dev/zero", "Killed", "echo USD?", "137", "grep '^oom_kill ' /sys/fs/cgroup/memory/memory.oom_control", "oom_kill 1", "oc get pod test", "NAME READY STATUS RESTARTS AGE test 0/1 OOMKilled 0 1m", "oc get pod test -o yaml", "status: containerStatuses: - name: test ready: false restartCount: 0 state: terminated: exitCode: 137 reason: OOMKilled phase: Failed", "oc get pod test -o yaml", "status: containerStatuses: - name: test ready: true restartCount: 1 lastState: terminated: exitCode: 137 reason: OOMKilled state: running: phase: Running", "oc get pod test", "NAME READY STATUS RESTARTS AGE test 0/1 Evicted 0 1m", "oc get pod test -o yaml", "status: message: 'Pod The node was low on resource: [MemoryPressure].' phase: Failed reason: Evicted", "apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster 1 spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 2 cpuRequestToLimitPercent: 25 3 limitCPUToMemoryPercent: 200 4", "apiVersion: v1 kind: Namespace metadata: labels: clusterresourceoverrides.admission.autoscaling.openshift.io/enabled: \"true\"", "apiVersion: v1 kind: Pod metadata: name: my-pod namespace: my-namespace spec: containers: - name: hello-openshift image: openshift/hello-openshift resources: limits: memory: \"512Mi\" cpu: \"2000m\"", "apiVersion: v1 kind: Pod metadata: name: my-pod namespace: my-namespace spec: containers: - image: openshift/hello-openshift name: hello-openshift resources: limits: cpu: \"1\" 1 memory: 512Mi requests: cpu: 250m 2 memory: 256Mi", "apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster 1 spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 2 cpuRequestToLimitPercent: 25 3 limitCPUToMemoryPercent: 200 4", "apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {\"apiVersion\":\"operator.autoscaling.openshift.io/v1\",\"kind\":\"ClusterResourceOverride\",\"metadata\":{\"annotations\":{},\"name\":\"cluster\"},\"spec\":{\"podResourceOverride\":{\"spec\":{\"cpuRequestToLimitPercent\":25,\"limitCPUToMemoryPercent\":200,\"memoryRequestToLimitPercent\":50}}}} creationTimestamp: \"2019-12-18T22:35:02Z\" generation: 1 name: cluster resourceVersion: \"127622\" selfLink: /apis/operator.autoscaling.openshift.io/v1/clusterresourceoverrides/cluster uid: 978fc959-1717-4bd1-97d0-ae00ee111e8d spec: podResourceOverride: spec: cpuRequestToLimitPercent: 25 limitCPUToMemoryPercent: 200 memoryRequestToLimitPercent: 50 status: mutatingWebhookConfigurationRef: 1 apiVersion: admissionregistration.k8s.io/v1 kind: MutatingWebhookConfiguration name: clusterresourceoverrides.admission.autoscaling.openshift.io resourceVersion: \"127621\" uid: 98b3b8ae-d5ce-462b-8ab5-a729ea8f38f3", "apiVersion: v1 kind: Namespace metadata: name: clusterresourceoverride-operator", "oc create -f <file-name>.yaml", "oc create -f cro-namespace.yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: clusterresourceoverride-operator namespace: clusterresourceoverride-operator spec: targetNamespaces: - clusterresourceoverride-operator", "oc create -f <file-name>.yaml", "oc create -f 
cro-og.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: clusterresourceoverride namespace: clusterresourceoverride-operator spec: channel: \"stable\" name: clusterresourceoverride source: redhat-operators sourceNamespace: openshift-marketplace", "oc create -f <file-name>.yaml", "oc create -f cro-sub.yaml", "oc project clusterresourceoverride-operator", "apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster 1 spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 2 cpuRequestToLimitPercent: 25 3 limitCPUToMemoryPercent: 200 4", "oc create -f <file-name>.yaml", "oc create -f cro-cr.yaml", "oc get clusterresourceoverride cluster -n clusterresourceoverride-operator -o yaml", "apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: annotations: kubectl.kubernetes.io/last-applied-configuration: | {\"apiVersion\":\"operator.autoscaling.openshift.io/v1\",\"kind\":\"ClusterResourceOverride\",\"metadata\":{\"annotations\":{},\"name\":\"cluster\"},\"spec\":{\"podResourceOverride\":{\"spec\":{\"cpuRequestToLimitPercent\":25,\"limitCPUToMemoryPercent\":200,\"memoryRequestToLimitPercent\":50}}}} creationTimestamp: \"2019-12-18T22:35:02Z\" generation: 1 name: cluster resourceVersion: \"127622\" selfLink: /apis/operator.autoscaling.openshift.io/v1/clusterresourceoverrides/cluster uid: 978fc959-1717-4bd1-97d0-ae00ee111e8d spec: podResourceOverride: spec: cpuRequestToLimitPercent: 25 limitCPUToMemoryPercent: 200 memoryRequestToLimitPercent: 50 status: mutatingWebhookConfigurationRef: 1 apiVersion: admissionregistration.k8s.io/v1 kind: MutatingWebhookConfiguration name: clusterresourceoverrides.admission.autoscaling.openshift.io resourceVersion: \"127621\" uid: 98b3b8ae-d5ce-462b-8ab5-a729ea8f38f3", "apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 1 cpuRequestToLimitPercent: 25 2 limitCPUToMemoryPercent: 200 3", "apiVersion: v1 kind: Namespace metadata: labels: clusterresourceoverrides.admission.autoscaling.openshift.io/enabled: \"true\" 1", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES clusterresourceoverride-786b8c898c-9wrdq 1/1 Running 0 23s 10.128.2.32 ip-10-0-14-183.us-west-2.compute.internal <none> <none> clusterresourceoverride-786b8c898c-vn2lf 1/1 Running 0 26s 10.130.2.10 ip-10-0-20-140.us-west-2.compute.internal <none> <none> clusterresourceoverride-operator-6b8b8b656b-lvr62 1/1 Running 0 56m 10.131.0.33 ip-10-0-2-39.us-west-2.compute.internal <none> <none>", "NAME STATUS ROLES AGE VERSION ip-10-0-14-183.us-west-2.compute.internal Ready control-plane,master 65m v1.30.4 ip-10-0-2-39.us-west-2.compute.internal Ready worker 58m v1.30.4 ip-10-0-20-140.us-west-2.compute.internal Ready control-plane,master 65m v1.30.4 ip-10-0-23-244.us-west-2.compute.internal Ready infra 55m v1.30.4 ip-10-0-77-153.us-west-2.compute.internal Ready control-plane,master 65m v1.30.4 ip-10-0-99-108.us-west-2.compute.internal Ready worker 24m v1.30.4 ip-10-0-24-233.us-west-2.compute.internal Ready infra 55m v1.30.4 ip-10-0-88-109.us-west-2.compute.internal Ready worker 24m v1.30.4 ip-10-0-67-453.us-west-2.compute.internal Ready infra 55m v1.30.4", "oc edit -n clusterresourceoverride-operator subscriptions.operators.coreos.com clusterresourceoverride", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: clusterresourceoverride 
namespace: clusterresourceoverride-operator spec: config: nodeSelector: node-role.kubernetes.io/infra: \"\" 1", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: clusterresourceoverride namespace: clusterresourceoverride-operator spec: config: nodeSelector: node-role.kubernetes.io/infra: \"\" tolerations: 1 - key: \"node-role.kubernetes.io/infra\" operator: \"Exists\" effect: \"NoSchedule\"", "oc edit ClusterResourceOverride cluster -n clusterresourceoverride-operator", "apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster resourceVersion: \"37952\" spec: podResourceOverride: spec: cpuRequestToLimitPercent: 25 limitCPUToMemoryPercent: 200 memoryRequestToLimitPercent: 50 deploymentOverrides: replicas: 1 1 nodeSelector: node-role.kubernetes.io/infra: \"\" 2", "apiVersion: operator.autoscaling.openshift.io/v1 kind: ClusterResourceOverride metadata: name: cluster spec: podResourceOverride: spec: memoryRequestToLimitPercent: 50 cpuRequestToLimitPercent: 25 limitCPUToMemoryPercent: 200 deploymentOverrides: replicas: 3 nodeSelector: node-role.kubernetes.io/worker: \"\" tolerations: 1 - key: \"key\" operator: \"Equal\" value: \"value\" effect: \"NoSchedule\"", "oc get pods -n clusterresourceoverride-operator -o wide", "NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES clusterresourceoverride-786b8c898c-9wrdq 1/1 Running 0 23s 10.127.2.25 ip-10-0-23-244.us-west-2.compute.internal <none> <none> clusterresourceoverride-786b8c898c-vn2lf 1/1 Running 0 26s 10.128.0.80 ip-10-0-24-233.us-west-2.compute.internal <none> <none> clusterresourceoverride-operator-6b8b8b656b-lvr62 1/1 Running 0 56m 10.129.0.71 ip-10-0-67-453.us-west-2.compute.internal <none> <none>", "sysctl -a |grep commit", "# vm.overcommit_memory = 0 #", "sysctl -a |grep panic", "# vm.panic_on_oom = 0 #", "oc edit machineconfigpool <name>", "oc edit machineconfigpool worker", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: \"2022-11-16T15:34:25Z\" generation: 4 labels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 1 name: worker", "oc label machineconfigpool worker custom-kubelet=small-pods", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: disable-cpu-units 1 spec: machineConfigPoolSelector: matchLabels: pools.operator.machineconfiguration.openshift.io/worker: \"\" 2 kubeletConfig: cpuCfsQuota: false 3", "oc create -f <file_name>.yaml", "sysctl -w vm.overcommit_memory=0", "apiVersion: v1 kind: Namespace metadata: annotations: quota.openshift.io/cluster-resource-override-enabled: \"false\" <.>", "oc edit nodes.config/cluster", "apiVersion: config.openshift.io/v1 kind: Node metadata: annotations: include.release.openshift.io/ibm-cloud-managed: \"true\" include.release.openshift.io/self-managed-high-availability: \"true\" include.release.openshift.io/single-node-developer: \"true\" release.openshift.io/create-only: \"true\" creationTimestamp: \"2022-07-08T16:02:51Z\" generation: 1 name: cluster ownerReferences: - apiVersion: config.openshift.io/v1 kind: ClusterVersion name: version uid: 36282574-bf9f-409e-a6cd-3032939293eb resourceVersion: \"1865\" uid: 0c0f7a4c-4307-4187-b591-6155695ac85b spec: cgroupMode: \"v1\" 1", "oc get mc", "NAME GENERATEDBYCONTROLLER IGNITIONVERSION AGE 00-master 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 00-worker 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-container-runtime 
52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-master-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-container-runtime 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 01-worker-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 97-master-generated-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-generated-kubelet 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-master-ssh 3.2.0 40m 99-worker-generated-registries 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m 99-worker-ssh 3.2.0 40m rendered-master-23d4317815a5f854bd3553d689cfe2e9 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 10s 1 rendered-master-23e785de7587df95a4b517e0647e5ab7 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-5d596d9293ca3ea80c896a1191735bb1 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 33m rendered-worker-dcc7f1b92892d34db74d6832bcc9ccd4 52dd3ba6a9a527fc3ab42afac8d12b693534c8c9 3.2.0 10s", "oc describe mc <name>", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 05-worker-kernelarg-selinuxpermissive spec: kernelArguments: systemd_unified_cgroup_hierarchy=1 1 cgroup_no_v1=\"all\" 2 psi=0", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 05-worker-kernelarg-selinuxpermissive spec: kernelArguments: systemd.unified_cgroup_hierarchy=0 1 systemd.legacy_systemd_cgroup_controller=1 2 psi=1 3", "oc get nodes", "NAME STATUS ROLES AGE VERSION ci-ln-fm1qnwt-72292-99kt6-master-0 Ready,SchedulingDisabled master 58m v1.30.3 ci-ln-fm1qnwt-72292-99kt6-master-1 Ready master 58m v1.30.3 ci-ln-fm1qnwt-72292-99kt6-master-2 Ready master 58m v1.30.3 ci-ln-fm1qnwt-72292-99kt6-worker-a-h5gt4 Ready,SchedulingDisabled worker 48m v1.30.3 ci-ln-fm1qnwt-72292-99kt6-worker-b-7vtmd Ready worker 48m v1.30.3 ci-ln-fm1qnwt-72292-99kt6-worker-c-rhzkv Ready worker 48m v1.30.3", "oc debug node/<node_name>", "sh-4.4# chroot /host", "stat -c %T -f /sys/fs/cgroup", "cgroup2fs", "tmpfs", "compute: - hyperthreading: Enabled name: worker platform: aws: rootVolume: iops: 2000 size: 500 type: io1 metadataService: authentication: Optional type: c5.4xlarge zones: - us-west-2c replicas: 3 featureSet: TechPreviewNoUpgrade", "sh-4.2# chroot /host", "sh-4.2# cat /etc/kubernetes/kubelet.conf", "featureGates: InsightsOperatorPullingSCA: true, LegacyNodeRoleBehavior: false", "apiVersion: config.openshift.io/v1 kind: FeatureGate metadata: name: cluster 1 spec: featureSet: TechPreviewNoUpgrade 2", "sh-4.2# chroot /host", "sh-4.2# cat /etc/kubernetes/kubelet.conf", "featureGates: InsightsOperatorPullingSCA: true, LegacyNodeRoleBehavior: false", "oc edit featuregate cluster", "apiVersion: config.openshift.io/v1 kind: FeatureGate metadata: name: cluster 1 spec: featureSet: TechPreviewNoUpgrade 2", "sh-4.2# chroot /host", "sh-4.2# cat /etc/kubernetes/kubelet.conf", "featureGates: InsightsOperatorPullingSCA: true, LegacyNodeRoleBehavior: false", "oc edit nodes.config/cluster", "apiVersion: config.openshift.io/v1 kind: Node metadata: annotations: include.release.openshift.io/ibm-cloud-managed: \"true\" include.release.openshift.io/self-managed-high-availability: \"true\" include.release.openshift.io/single-node-developer: \"true\" release.openshift.io/create-only: \"true\" creationTimestamp: \"2022-07-08T16:02:51Z\" generation: 1 
name: cluster ownerReferences: - apiVersion: config.openshift.io/v1 kind: ClusterVersion name: version uid: 36282574-bf9f-409e-a6cd-3032939293eb resourceVersion: \"1865\" uid: 0c0f7a4c-4307-4187-b591-6155695ac85b spec: workerLatencyProfile: MediumUpdateAverageReaction 1", "oc edit nodes.config/cluster", "apiVersion: config.openshift.io/v1 kind: Node metadata: annotations: include.release.openshift.io/ibm-cloud-managed: \"true\" include.release.openshift.io/self-managed-high-availability: \"true\" include.release.openshift.io/single-node-developer: \"true\" release.openshift.io/create-only: \"true\" creationTimestamp: \"2022-07-08T16:02:51Z\" generation: 1 name: cluster ownerReferences: - apiVersion: config.openshift.io/v1 kind: ClusterVersion name: version uid: 36282574-bf9f-409e-a6cd-3032939293eb resourceVersion: \"1865\" uid: 0c0f7a4c-4307-4187-b591-6155695ac85b spec: workerLatencyProfile: LowUpdateSlowReaction 1", "oc get KubeControllerManager -o yaml | grep -i workerlatency -A 5 -B 5", "- lastTransitionTime: \"2022-07-11T19:47:10Z\" reason: ProfileUpdated status: \"False\" type: WorkerLatencyProfileProgressing - lastTransitionTime: \"2022-07-11T19:47:10Z\" 1 message: all static pod revision(s) have updated latency profile reason: ProfileUpdated status: \"True\" type: WorkerLatencyProfileComplete - lastTransitionTime: \"2022-07-11T19:20:11Z\" reason: AsExpected status: \"False\" type: WorkerLatencyProfileDegraded - lastTransitionTime: \"2022-07-11T19:20:36Z\" status: \"False\"", "tolerations: - key: node.kubernetes.io/not-ready operator: Exists effect: NoExecute - key: node.kubernetes.io/unreachable operator: Exists effect: NoExecute - key: node.kubernetes.io/disk-pressure operator: Exists effect: NoSchedule - key: node.kubernetes.io/memory-pressure operator: Exists effect: NoSchedule - key: node.kubernetes.io/pid-pressure operator: Exists effect: NoSchedule - key: node.kubernetes.io/unschedulable operator: Exists effect: NoSchedule", "kind: Node apiVersion: v1 metadata: labels: topology.kubernetes.io/region=east", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: disable-cpu-units spec: machineConfigPoolSelector: matchLabels: machineconfiguration.openshift.io/role: worker 1 kubeletConfig: node-status-update-frequency: 2 - \"10s\" node-status-report-frequency: 3 - \"1m\"", "tolerations: - key: \"node.kubernetes.io/unreachable\" operator: \"Exists\" effect: \"NoExecute\" 1 - key: \"node.kubernetes.io/not-ready\" operator: \"Exists\" effect: \"NoExecute\" 2 tolerationSeconds: 600 3", "export OFFLINE_TOKEN=<copied_api_token>", "export JWT_TOKEN=USD( curl --silent --header \"Accept: application/json\" --header \"Content-Type: application/x-www-form-urlencoded\" --data-urlencode \"grant_type=refresh_token\" --data-urlencode \"client_id=cloud-services\" --data-urlencode \"refresh_token=USD{OFFLINE_TOKEN}\" \"https://sso.redhat.com/auth/realms/redhat-external/protocol/openid-connect/token\" | jq --raw-output \".access_token\" )", "curl -s https://api.openshift.com/api/assisted-install/v2/component-versions -H \"Authorization: Bearer USD{JWT_TOKEN}\" | jq", "{ \"release_tag\": \"v2.5.1\", \"versions\": { \"assisted-installer\": \"registry.redhat.io/rhai-tech-preview/assisted-installer-rhel8:v1.0.0-175\", \"assisted-installer-controller\": \"registry.redhat.io/rhai-tech-preview/assisted-installer-reporter-rhel8:v1.0.0-223\", \"assisted-installer-service\": \"quay.io/app-sre/assisted-service:ac87f93\", \"discovery-agent\": 
\"registry.redhat.io/rhai-tech-preview/assisted-installer-agent-rhel8:v1.0.0-156\" } }", "export API_URL=<api_url> 1", "export OPENSHIFT_CLUSTER_ID=USD(oc get clusterversion -o jsonpath='{.items[].spec.clusterID}')", "export CLUSTER_REQUEST=USD(jq --null-input --arg openshift_cluster_id \"USDOPENSHIFT_CLUSTER_ID\" '{ \"api_vip_dnsname\": \"<api_vip>\", 1 \"openshift_cluster_id\": USDopenshift_cluster_id, \"name\": \"<openshift_cluster_name>\" 2 }')", "CLUSTER_ID=USD(curl \"USDAPI_URL/api/assisted-install/v2/clusters/import\" -H \"Authorization: Bearer USD{JWT_TOKEN}\" -H 'accept: application/json' -H 'Content-Type: application/json' -d \"USDCLUSTER_REQUEST\" | tee /dev/stderr | jq -r '.id')", "export INFRA_ENV_REQUEST=USD(jq --null-input --slurpfile pull_secret <path_to_pull_secret_file> \\ 1 --arg ssh_pub_key \"USD(cat <path_to_ssh_pub_key>)\" \\ 2 --arg cluster_id \"USDCLUSTER_ID\" '{ \"name\": \"<infraenv_name>\", 3 \"pull_secret\": USDpull_secret[0] | tojson, \"cluster_id\": USDcluster_id, \"ssh_authorized_key\": USDssh_pub_key, \"image_type\": \"<iso_image_type>\" 4 }')", "INFRA_ENV_ID=USD(curl \"USDAPI_URL/api/assisted-install/v2/infra-envs\" -H \"Authorization: Bearer USD{JWT_TOKEN}\" -H 'accept: application/json' -H 'Content-Type: application/json' -d \"USDINFRA_ENV_REQUEST\" | tee /dev/stderr | jq -r '.id')", "curl -s \"USDAPI_URL/api/assisted-install/v2/infra-envs/USDINFRA_ENV_ID\" -H \"Authorization: Bearer USD{JWT_TOKEN}\" | jq -r '.download_url'", "https://api.openshift.com/api/assisted-images/images/41b91e72-c33e-42ee-b80f-b5c5bbf6431a?arch=x86_64&image_token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE2NTYwMjYzNzEsInN1YiI6IjQxYjkxZTcyLWMzM2UtNDJlZS1iODBmLWI1YzViYmY2NDMxYSJ9.1EX_VGaMNejMhrAvVRBS7PDPIQtbOOc8LtG8OukE1a4&type=minimal-iso&version=USDVERSION", "curl -L -s '<iso_url>' --output rhcos-live-minimal.iso 1", "curl -s \"USDAPI_URL/api/assisted-install/v2/clusters/USDCLUSTER_ID\" -H \"Authorization: Bearer USD{JWT_TOKEN}\" | jq -r '.hosts[] | select(.status != \"installed\").id'", "2294ba03-c264-4f11-ac08-2f1bb2f8c296", "HOST_ID=<host_id> 1", "curl -s USDAPI_URL/api/assisted-install/v2/clusters/USDCLUSTER_ID -H \"Authorization: Bearer USD{JWT_TOKEN}\" | jq ' def host_name(USDhost): if (.suggested_hostname // \"\") == \"\" then if (.inventory // \"\") == \"\" then \"Unknown hostname, please wait\" else .inventory | fromjson | .hostname end else .suggested_hostname end; def is_notable(USDvalidation): [\"failure\", \"pending\", \"error\"] | any(. 
== USDvalidation.status); def notable_validations(USDvalidations_info): [ USDvalidations_info // \"{}\" | fromjson | to_entries[].value[] | select(is_notable(.)) ]; { \"Hosts validations\": { \"Hosts\": [ .hosts[] | select(.status != \"installed\") | { \"id\": .id, \"name\": host_name(.), \"status\": .status, \"notable_validations\": notable_validations(.validations_info) } ] }, \"Cluster validations info\": { \"notable_validations\": notable_validations(.validations_info) } } ' -r", "{ \"Hosts validations\": { \"Hosts\": [ { \"id\": \"97ec378c-3568-460c-bc22-df54534ff08f\", \"name\": \"localhost.localdomain\", \"status\": \"insufficient\", \"notable_validations\": [ { \"id\": \"ntp-synced\", \"status\": \"failure\", \"message\": \"Host couldn't synchronize with any NTP server\" }, { \"id\": \"api-domain-name-resolved-correctly\", \"status\": \"error\", \"message\": \"Parse error for domain name resolutions result\" }, { \"id\": \"api-int-domain-name-resolved-correctly\", \"status\": \"error\", \"message\": \"Parse error for domain name resolutions result\" }, { \"id\": \"apps-domain-name-resolved-correctly\", \"status\": \"error\", \"message\": \"Parse error for domain name resolutions result\" } ] } ] }, \"Cluster validations info\": { \"notable_validations\": [] } }", "curl -X POST -s \"USDAPI_URL/api/assisted-install/v2/infra-envs/USDINFRA_ENV_ID/hosts/USDHOST_ID/actions/install\" -H \"Authorization: Bearer USD{JWT_TOKEN}\"", "curl -s \"USDAPI_URL/api/assisted-install/v2/clusters/USDCLUSTER_ID\" -H \"Authorization: Bearer USD{JWT_TOKEN}\" | jq '{ \"Cluster day-2 hosts\": [ .hosts[] | select(.status != \"installed\") | {id, requested_hostname, status, status_info, progress, status_updated_at, updated_at, infra_env_id, cluster_id, created_at} ] }'", "{ \"Cluster day-2 hosts\": [ { \"id\": \"a1c52dde-3432-4f59-b2ae-0a530c851480\", \"requested_hostname\": \"control-plane-1\", \"status\": \"added-to-existing-cluster\", \"status_info\": \"Host has rebooted and no further updates will be posted. 
Please check console for progress and to possibly approve pending CSRs\", \"progress\": { \"current_stage\": \"Done\", \"installation_percentage\": 100, \"stage_started_at\": \"2022-07-08T10:56:20.476Z\", \"stage_updated_at\": \"2022-07-08T10:56:20.476Z\" }, \"status_updated_at\": \"2022-07-08T10:56:20.476Z\", \"updated_at\": \"2022-07-08T10:57:15.306369Z\", \"infra_env_id\": \"b74ec0c3-d5b5-4717-a866-5b6854791bd3\", \"cluster_id\": \"8f721322-419d-4eed-aa5b-61b50ea586ae\", \"created_at\": \"2022-07-06T22:54:57.161614Z\" } ] }", "curl -s \"USDAPI_URL/api/assisted-install/v2/events?cluster_id=USDCLUSTER_ID\" -H \"Authorization: Bearer USD{JWT_TOKEN}\" | jq -c '.[] | {severity, message, event_time, host_id}'", "{\"severity\":\"info\",\"message\":\"Host compute-0: updated status from insufficient to known (Host is ready to be installed)\",\"event_time\":\"2022-07-08T11:21:46.346Z\",\"host_id\":\"9d7b3b44-1125-4ad0-9b14-76550087b445\"} {\"severity\":\"info\",\"message\":\"Host compute-0: updated status from known to installing (Installation is in progress)\",\"event_time\":\"2022-07-08T11:28:28.647Z\",\"host_id\":\"9d7b3b44-1125-4ad0-9b14-76550087b445\"} {\"severity\":\"info\",\"message\":\"Host compute-0: updated status from installing to installing-in-progress (Starting installation)\",\"event_time\":\"2022-07-08T11:28:52.068Z\",\"host_id\":\"9d7b3b44-1125-4ad0-9b14-76550087b445\"} {\"severity\":\"info\",\"message\":\"Uploaded logs for host compute-0 cluster 8f721322-419d-4eed-aa5b-61b50ea586ae\",\"event_time\":\"2022-07-08T11:29:47.802Z\",\"host_id\":\"9d7b3b44-1125-4ad0-9b14-76550087b445\"} {\"severity\":\"info\",\"message\":\"Host compute-0: updated status from installing-in-progress to added-to-existing-cluster (Host has rebooted and no further updates will be posted. 
Please check console for progress and to possibly approve pending CSRs)\",\"event_time\":\"2022-07-08T11:29:48.259Z\",\"host_id\":\"9d7b3b44-1125-4ad0-9b14-76550087b445\"} {\"severity\":\"info\",\"message\":\"Host: compute-0, reached installation stage Rebooting\",\"event_time\":\"2022-07-08T11:29:48.261Z\",\"host_id\":\"9d7b3b44-1125-4ad0-9b14-76550087b445\"}", "oc get nodes", "NAME STATUS ROLES AGE VERSION control-plane-1.example.com Ready master,worker 56m v1.30.3 compute-1.example.com Ready worker 11m v1.30.3", "OCP_VERSION=<ocp_version> 1", "ARCH=<architecture> 1", "oc extract -n openshift-machine-api secret/worker-user-data-managed --keys=userData --to=- > worker.ign", "curl -k https://mirror.openshift.com/pub/openshift-v4/clients/ocp/USDOCP_VERSION/openshift-install-linux.tar.gz > openshift-install-linux.tar.gz", "tar zxvf openshift-install-linux.tar.gz", "chmod +x openshift-install", "ISO_URL=USD(./openshift-install coreos print-stream-json | grep location | grep USDARCH | grep iso | cut -d\\\" -f4)", "curl -L USDISO_URL -o rhcos-live.iso", "nmcli con mod <network_interface> ipv4.method manual / ipv4.addresses <static_ip> ipv4.gateway <network_gateway> ipv4.dns <dns_server> / 802-3-ethernet.mtu 9000", "nmcli con up <network_interface>", "{ \"ignition\":{ \"version\":\"3.2.0\", \"config\":{ \"merge\":[ { \"source\":\"<hosted_worker_ign_file>\" 1 } ] } }, \"storage\":{ \"files\":[ { \"path\":\"/etc/hostname\", \"contents\":{ \"source\":\"data:,<new_fqdn>\" 2 }, \"mode\":420, \"overwrite\":true, \"path\":\"/etc/hostname\" } ] } }", "sudo coreos-installer install --copy-network / --ignition-url=<new_worker_ign_file> <hard_disk> --insecure-ignition", "coreos-installer install --ignition-url=<hosted_worker_ign_file> <hard_disk>", "apiVersion: agent-install.openshift.io/v1 kind: NMStateConfig metadata: name: nmstateconfig-dhcp namespace: example-sno labels: nmstate_config_cluster_name: <nmstate_config_cluster_label> spec: config: interfaces: - name: eth0 type: ethernet state: up ipv4: enabled: true dhcp: true ipv6: enabled: false interfaces: - name: \"eth0\" macAddress: \"AA:BB:CC:DD:EE:11\"", "oc get nodes", "NAME STATUS ROLES AGE VERSION control-plane-1.example.com Ready master,worker 56m v1.30.3 compute-1.example.com Ready worker 11m v1.30.3", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.30.3 master-1 Ready master 63m v1.30.3 master-2 Ready master 64m v1.30.3", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.30.3 master-1 Ready master 73m v1.30.3 master-2 Ready master 74m v1.30.3 worker-0 Ready worker 11m v1.30.3 worker-1 Ready worker 11m v1.30.3", "topk(3, 
sum(increase(container_runtime_crio_containers_oom_count_total[1d])) by (name))", "rate(container_runtime_crio_image_pulls_failure_total[1h]) / (rate(container_runtime_crio_image_pulls_success_total[1h]) + rate(container_runtime_crio_image_pulls_failure_total[1h]))", "sum by (node) (container_memory_rss{id=\"/system.slice\"}) / sum by (node) (kube_node_status_capacity{resource=\"memory\"} - kube_node_status_allocatable{resource=\"memory\"}) * 100 >= 80", "sum by (node) (container_memory_rss{id=\"/system.slice/kubelet.service\"}) / sum by (node) (kube_node_status_capacity{resource=\"memory\"} - kube_node_status_allocatable{resource=\"memory\"}) * 100 >= 50", "sum by (node) (container_memory_rss{id=\"/system.slice/crio.service\"}) / sum by (node) (kube_node_status_capacity{resource=\"memory\"} - kube_node_status_allocatable{resource=\"memory\"}) * 100 >= 50", "sum by (node) (rate(container_cpu_usage_seconds_total{id=\"/system.slice\"}[5m]) * 100) / sum by (node) (kube_node_status_capacity{resource=\"cpu\"} - kube_node_status_allocatable{resource=\"cpu\"}) >= 80", "sum by (node) (rate(container_cpu_usage_seconds_total{id=\"/system.slice/kubelet.service\"}[5m]) * 100) / sum by (node) (kube_node_status_capacity{resource=\"cpu\"} - kube_node_status_allocatable{resource=\"cpu\"}) >= 50", "sum by (node) (rate(container_cpu_usage_seconds_total{id=\"/system.slice/crio.service\"}[5m]) * 100) / sum by (node) (kube_node_status_capacity{resource=\"cpu\"} - kube_node_status_allocatable{resource=\"cpu\"}) >= 50" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html-single/nodes/index
7.191. pykickstart
7.191. pykickstart 7.191.1. RHBA-2013:0507 - pykickstart bug fix and enhancement update Updated pykickstart packages that fix four bugs and add one enhancement are now available for Red Hat Enterprise Linux 6. The pykickstart packages contain a python library for manipulating kickstart files. Bug Fixes BZ# 823856 , BZ# 832688 , BZ# 837440 Previously, when using the volgroup command with the --useexisting option without specifying the physical volume (PV), the system installation failed with the following message: volgroup must be given a list of partitions With this update, the library scripts have been set to check if the PVs are defined prior to the installation. In case of undefined PVs, the scripts raise a warning message to notify the user. BZ#815573 Previously, the kickstart command options marked as deprecated were not allowed to carry a value. Consequently, a kickstart file containing a deprecated command option with an assigned value, such as --videoram="value", could not be validated. The ksvalidator tool terminated with the following message: --videoram option does not take a value With this update, the deprecated options have been allowed to take values and the error no longer occurs in the aforementioned scenario. Enhancement BZ# 843174 The "autopart", "logvol", "part", and "raid" commands can now take the --cipher option to specify the encryption algorithm to be used for encrypting devices. If this option is not provided, the installer will use the default algorithm. All users of pykickstart are advised to upgrade to these updated packages, which fix these bugs and add this enhancement.
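A minimal sketch of how the new --cipher option might appear in a kickstart file; the passphrase, volume names, and cipher string below are illustrative placeholders rather than values taken from the advisory:
# Encrypt automatically created partitions with an explicitly chosen cipher
autopart --encrypted --passphrase=changeme --cipher=aes-xts-plain64
# The same option is accepted by part, raid, and logvol, for example:
logvol /home --name=home --vgname=vg00 --size=4096 --encrypted --cipher=aes-xts-plain64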
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/pykickstart
Chapter 4. Release components
Chapter 4. Release components 4.1. Supported Artifacts introduced in this release No artifacts have been moved from Technology Preview to fully supported in this release. 4.2. Technology Preview artifacts introduced in this release This section describes the Technology Preview artifacts introduced in this release. 4.2.1. Technology Preview artifacts introduced in the 4.3 release The following artifacts are provided as Technology Preview in the 4.3 release. vertx-grpc-client The Eclipse Vert.x gRPC Client is a new Google Remote Procedure Call (gRPC) client that relies on the Eclipse Vert.x HTTP client. The Eclipse Vert.x gRPC Client provides two alternative ways to interact with a server: A gRPC request-and-response-oriented API that does not require a generated stub A generated stub with a gRPC channel Note The Eclipse Vert.x gRPC Client supersedes the integrated Netty-based gRPC client. vertx-grpc-server The Eclipse Vert.x gRPC Server is a new Google Remote Procedure Call (gRPC) server that relies on the Eclipse Vert.x HTTP server. The Eclipse Vert.x gRPC Server provides two alternative ways to interact with a client: A gRPC request-and-response-oriented API that does not require a generated stub A generated stub with a service bridge Note The Eclipse Vert.x gRPC Server supersedes the integrated Netty-based gRPC server. vertx-grpc-common The Eclipse Vert.x gRPC Common artifact provides common functionality that the Eclipse Vert.x gRPC Client and the Eclipse Vert.x gRPC Server both use. vertx-grpc-aggregator The Eclipse Vert.x gRPC Aggregator consists of a Project Object Model (POM) file. The Eclipse Vert.x gRPC Aggregator does not provide any additional functionality. 4.2.2. Technology Preview artifacts introduced in earlier 4.x releases The following artifacts that were available as Technology Preview from 4.x releases continue to be Technology Preview in this release. vertx-auth-otp The Eclipse Vert.x OTP Auth provider is an implementation of the AuthenticationProvider interface that uses one-time passwords to perform authentication. The Eclipse Vert.x OTP Auth provider supports the Google Authenticator. You can use any convenient library to create the quick response (QR) code with a key. You can also transfer the key in base32 format. vertx-oracle-client The Eclipse Vert.x reactive Oracle client is a client for the Oracle server. It is an API that helps in database scalability and has low overhead. Because the API is reactive and non-blocking, you can handle multiple database connections with a single thread. Note The Eclipse Vert.x reactive Oracle client requires that you use the Oracle JDBC driver. Red Hat does not provide support for the Oracle JDBC driver. The Eclipse Vert.x reactive Oracle client requires that you use JDK 11 or JDK 17. vertx-http-proxy The Eclipse Vert.x HTTP proxy is a reverse proxy. Using this module, you can easily create proxies. The proxy server can also dynamically resolve DNS queries for the origin server. vertx-web-proxy The Eclipse Vert.x web proxy enables you to mount an Eclipse Vert.x HTTP proxy in an Eclipse Vert.x web router. vertx-opentelemetry Open Telemetry tracing is supported. You can use Open Telemetry for HTTP and event bus tracing. 4.3. Artifacts removed in this release No artifacts are removed in this release. 4.4. Artifacts deprecated in this release No artifacts are marked as deprecated in this release.
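As a usage sketch, assuming the upstream io.vertx Maven coordinates and a project that imports the product BOM to manage versions, the new gRPC artifacts would be added to a POM roughly as follows:
<!-- Versions are assumed to be managed by the imported Eclipse Vert.x BOM -->
<dependency>
  <groupId>io.vertx</groupId>
  <artifactId>vertx-grpc-client</artifactId>
</dependency>
<dependency>
  <groupId>io.vertx</groupId>
  <artifactId>vertx-grpc-server</artifactId>
</dependency>
<!-- vertx-grpc-common is expected to be resolved transitively by the client and server artifacts -->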
null
https://docs.redhat.com/en/documentation/red_hat_build_of_eclipse_vert.x/4.3/html/release_notes_for_eclipse_vert.x_4.3/release-components-vertx
Chapter 25. Security management
Chapter 25. Security management Security management is the process of managing users, groups, and permissions. You can control access to Business Central resources and features from the Business Central Security management page. Business Central defines three types of entities for security management: users, groups, and roles. You can assign permissions to both roles and groups. A user inherits permissions from the groups and roles that the user is a member of. 25.1. Security management providers In the context of security management, a realm restricts access to different application resources. Realms contain information about users, groups, roles, and permissions. A concrete user and group management service implementation for a specific realm is called a security management provider. If the built-in security management providers do not meet the requirements of your application security realm, then you can build and register your own security management provider. Note If the security management provider is not installed, the user interface for managing the security realm is not available. After you install and configure a security management provider, the user and group management features are automatically enabled in the security management user interface. Business Central includes the Red Hat JBoss EAP security management provider which supports realm types based on the contents of the application-users.properties or application-roles.properties property file. 25.1.1. Configuring the Red Hat JBoss EAP security management provider based on property files You can build and register your own Red Hat JBoss EAP security management provider. To use the Red Hat JBoss EAP security management provider based on property files, complete the steps in this procedure. Prerequisites Red Hat JBoss EAP is installed. Procedure To use an existing users or roles property file from the Red Hat JBoss EAP instance, include the following system properties in the EAP_HOME/standalone/configuration/application-users.properties and EAP_HOME/standalone/configuration/application-roles.properties files, as shown in the following example: <property name="org.uberfire.ext.security.management.wildfly.properties.realm" value="ApplicationRealm"/> <property name="org.uberfire.ext.security.management.wildfly.properties.users-file-path" value="/standalone/configuration/application-users.properties"/> <property name="org.uberfire.ext.security.management.wildfly.properties.groups-file-path" value="/standalone/configuration/application-roles.properties"/> The following table provides a description and default value for these properties: Table 25.1. Red Hat JBoss EAP security management provider based on property files Property Description Default value org.uberfire.ext.security.management.wildfly.properties.realm The name of the realm. This property is not mandatory. ApplicationRealm org.uberfire.ext.security.management.wildfly.properties.users-file-path The absolute file path for the users property file. This property is mandatory. ./standalone/configuration/application-users.properties org.uberfire.ext.security.management.wildfly.properties.groups-file-path The absolute file path for the groups property file. This property is mandatory. ./standalone/configuration/application-roles.properties Create the security-management.properties file in the root directory of your application. 
For example, create the following file: Enter the following system property and security provider name as a value in the security-management.properties file: <property name="org.uberfire.ext.security.management.api.userManagementServices" value="WildflyUserManagementService"/> 25.1.2. Configuring the Red Hat JBoss EAP security management provider based on property files and CLI mode To use the Red Hat JBoss EAP security management provider based on property files and CLI mode, complete the steps in this procedure. Prerequisites Red Hat JBoss EAP is installed. Procedure To use an existing users or roles property file from the Red Hat JBoss EAP instance, include the following system properties in the EAP_HOME/standalone/configuration/application-users.properties and EAP_HOME/standalone/configuration/application-roles.properties files, as shown in the following example: <property name="org.uberfire.ext.security.management.wildfly.cli.host" value="localhost"/> <property name="org.uberfire.ext.security.management.wildfly.cli.port" value="9990"/> <property name="org.uberfire.ext.security.management.wildfly.cli.user" value="<USERNAME>"/> <property name="org.uberfire.ext.security.management.wildfly.cli.password" value="<USER_PWD>"/> <property name="org.uberfire.ext.security.management.wildfly.cli.realm" value="ApplicationRealm"/> The following table provides a description and default value for these properties: Table 25.2. Red Hat JBoss EAP security management provider based on property files and CLI mode Property Description Default value org.uberfire.ext.security.management.wildfly.cli.host The native administration interface host. localhost org.uberfire.ext.security.management.wildfly.cli.port The native administration interface port. 9990 org.uberfire.ext.security.management.wildfly.cli.user The native administration interface username. NA org.uberfire.ext.security.management.wildfly.cli.password The native administration interface user's password. NA org.uberfire.ext.security.management.wildfly.cli.realm The realm used by the application's security context. ApplicationRealm Create the security-management.properties file in the root directory of your application. For example, create the following file: Enter the following system property and security provider name as a value in the security-management.properties file: <property name="org.uberfire.ext.security.management.api.userManagementServices" value="WildflyCLIUserManagementService"/> 25.2. Permissions and settings A permission is an authorization granted to a user to perform actions related to a specific resource within the application. For example, a user can have following permissions: View a page. Save the project. View a repository. Delete a dashboard. You can grant or deny a permission and a permission can be global or resource specific. You can use permissions to protect access to resources and customize features within the application. 25.2.1. Changing permissions for groups and roles in Business Central In Business Central, you cannot change permissions for an individual user. However, you can change permissions for groups and roles. The changed permissions apply to users with the role or that belong to a group that you changed. Note Any changes that you make to roles or groups affect all of the users associated with that role or group. Prerequisites You are logged in to Business Central with the admin user role. 
Procedure To access the Security management page in Business Central, select the Admin icon in the top-right corner of the screen. Click Roles , Groups , or Users on the Business Central Settings page. The Security management page opens on the tab for the icon that you clicked. From the list, click the role or group you want to edit. All details are displayed in the right panel. Set the Home Page or Priority under the Settings section. Set the Business Central, page, editor, space, and project permissions under the Permissions section. Figure 25.1. Setting the permissions Click the arrow to a resource type to expand the resource type whose permissions you want to change. Optional: To add an exception for a resource type, click Add Exception and then set the permissions as required. Note You cannot add an exception to the Business Central resource type. Click Save . 25.2.2. Changing the Business Central home page The home page is the page that appears after you log in to Business Central. By default, the home page is set to Home . You can specify a different home page for each role and group. Procedure In Business Central, select the Admin icon in the top-right corner of the screen and select Roles or Groups . Select a role or group. Select a page from the Home Page list. Click Save . Note The role or group must have read access to a page before you can make it the home page. 25.2.3. Setting priorities A user can have multiple roles and belong to multiple groups. The Priority setting determines the order of precedence of a role or group. Prerequisites You are logged in to Business Central with the admin user role. Procedure In Business Central, select the Admin icon in the top-right corner of the screen and select Roles or Groups . Select a role or group. Select a priority from the Priority menu, and then click Save . Note If a user has a role or belongs to a group that has conflicting settings, the settings of the role or group with the highest priority applies.
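The priority rule described above can be illustrated with a short sketch. This is not Business Central code — the product resolves conflicting role and group settings internally — and the role names, priorities, and home pages below are made-up examples.

// Illustrative only: Business Central applies this rule internally. All names and
// values below are hypothetical.
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

public class PriorityResolution {

    // A role or group a user belongs to, with its configured priority and home page.
    record Membership(String name, int priority, String homePage) {}

    // When settings conflict, the role or group with the highest priority wins.
    static Optional<String> effectiveHomePage(List<Membership> memberships) {
        return memberships.stream()
                .max(Comparator.comparingInt(Membership::priority))
                .map(Membership::homePage);
    }

    public static void main(String[] args) {
        List<Membership> user = List.of(
                new Membership("analyst", 1, "Home"),
                new Membership("process-admins", 5, "Process Definitions"));
        // Prints "Process Definitions" because that group has the higher priority.
        System.out.println(effectiveHomePage(user).orElse("Home"));
    }
}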
[ "<property name=\"org.uberfire.ext.security.management.wildfly.properties.realm\" value=\"ApplicationRealm\"/> <property name=\"org.uberfire.ext.security.management.wildfly.properties.users-file-path\" value=\"/standalone/configuration/application-users.properties\"/> <property name=\"org.uberfire.ext.security.management.wildfly.properties.groups-file-path\" value=\"/standalone/configuration/application-roles.properties\"/>", "src/main/resources/security-management.properties", "<property name=\"org.uberfire.ext.security.management.api.userManagementServices\" value=\"WildflyUserManagementService\"/>", "<property name=\"org.uberfire.ext.security.management.wildfly.cli.host\" value=\"localhost\"/> <property name=\"org.uberfire.ext.security.management.wildfly.cli.port\" value=\"9990\"/> <property name=\"org.uberfire.ext.security.management.wildfly.cli.user\" value=\"<USERNAME>\"/> <property name=\"org.uberfire.ext.security.management.wildfly.cli.password\" value=\"<USER_PWD>\"/> <property name=\"org.uberfire.ext.security.management.wildfly.cli.realm\" value=\"ApplicationRealm\"/>", "src/main/resources/security-management.properties", "<property name=\"org.uberfire.ext.security.management.api.userManagementServices\" value=\"WildflyCLIUserManagementService\"/>" ]
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/managing_red_hat_process_automation_manager_and_kie_server_settings/con-business-central-security-management_configuring-central
Project APIs
Project APIs OpenShift Container Platform 4.13 Reference guide for project APIs Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html-single/project_apis/index
Registry
Registry OpenShift Container Platform 4.10 Configuring registries for OpenShift Container Platform Red Hat OpenShift Documentation Team
[ "podman login registry.redhat.io Username:<your_registry_account_username> Password:<your_registry_account_password>", "podman pull registry.redhat.io/<repository_name>", "oc patch configs.imageregistry.operator.openshift.io/cluster --type merge -p '{\"spec\":{\"defaultRoute\":true}}'", "apiVersion: v1 kind: ConfigMap metadata: name: my-registry-ca data: registry.example.com: | -----BEGIN CERTIFICATE----- -----END CERTIFICATE----- registry-with-port.example.com..5000: | 1 -----BEGIN CERTIFICATE----- -----END CERTIFICATE-----", "oc create configmap registry-config --from-file=<external_registry_address>=ca.crt -n openshift-config", "oc edit image.config.openshift.io cluster", "spec: additionalTrustedCA: name: registry-config", "oc create secret generic image-registry-private-configuration-user --from-literal=KEY1=value1 --from-literal=KEY2=value2 --namespace openshift-image-registry", "oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_S3_ACCESSKEY=myaccesskey --from-literal=REGISTRY_STORAGE_S3_SECRETKEY=mysecretkey --namespace openshift-image-registry", "oc edit configs.imageregistry.operator.openshift.io/cluster", "storage: s3: bucket: <bucket-name> region: <region-name>", "regionEndpoint: http://rook-ceph-rgw-ocs-storagecluster-cephobjectstore.openshift-storage.svc.cluster.local", "oc create secret generic image-registry-private-configuration-user --from-file=REGISTRY_STORAGE_GCS_KEYFILE=<path_to_keyfile> --namespace openshift-image-registry", "oc edit configs.imageregistry.operator.openshift.io/cluster", "storage: gcs: bucket: <bucket-name> projectID: <project-id> region: <region-name>", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"disableRedirect\":true}}'", "oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_SWIFT_USER=<username> --from-literal=REGISTRY_STORAGE_SWIFT_PASSWORD=<password> -n openshift-image-registry", "oc edit configs.imageregistry.operator.openshift.io/cluster", "storage: swift: container: <container-id>", "oc create secret generic image-registry-private-configuration-user --from-literal=REGISTRY_STORAGE_AZURE_ACCOUNTKEY=<accountkey> --namespace openshift-image-registry", "oc edit configs.imageregistry.operator.openshift.io/cluster", "storage: azure: accountName: <storage-account-name> container: <container-name>", "oc edit configs.imageregistry.operator.openshift.io/cluster", "storage: azure: accountName: <storage-account-name> container: <container-name> cloudName: AzureUSGovernmentCloud 1", "apiVersion: storage.k8s.io/v1 kind: StorageClass metadata: name: custom-csi-storageclass provisioner: cinder.csi.openstack.org volumeBindingMode: WaitForFirstConsumer allowVolumeExpansion: true parameters: availability: <availability_zone_name>", "oc apply -f <storage_class_file_name>", "storageclass.storage.k8s.io/custom-csi-storageclass created", "apiVersion: v1 kind: PersistentVolumeClaim metadata: name: csi-pvc-imageregistry namespace: openshift-image-registry 1 annotations: imageregistry.openshift.io: \"true\" spec: accessModes: - ReadWriteOnce volumeMode: Filesystem resources: requests: storage: 100Gi 2 storageClassName: <your_custom_storage_class> 3", "oc apply -f <pvc_file_name>", "persistentvolumeclaim/csi-pvc-imageregistry created", "oc patch configs.imageregistry.operator.openshift.io/cluster --type 'json' -p='[{\"op\": \"replace\", \"path\": \"/spec/storage/pvc/claim\", \"value\": \"csi-pvc-imageregistry\"}]'", 
"config.imageregistry.operator.openshift.io/cluster patched", "oc get configs.imageregistry.operator.openshift.io/cluster -o yaml", "status: managementState: Managed pvc: claim: csi-pvc-imageregistry", "oc get pvc -n openshift-image-registry csi-pvc-imageregistry", "NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE csi-pvc-imageregistry Bound pvc-72a8f9c9-f462-11e8-b6b6-fa163e18b7b5 100Gi RWO custom-csi-storageclass 11m", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"managementState\":\"Managed\"}}'", "oc get pod -n openshift-image-registry -l docker-registry=default", "No resourses found in openshift-image-registry namespace", "oc edit configs.imageregistry.operator.openshift.io", "storage: pvc: claim:", "oc get clusteroperator image-registry", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.10 True False False 6h50m", "oc edit configs.imageregistry/cluster", "managementState: Removed", "managementState: Managed", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'", "Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found", "oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"managementState\":\"Managed\"}}'", "oc get pod -n openshift-image-registry -l docker-registry=default", "No resourses found in openshift-image-registry namespace", "oc edit configs.imageregistry.operator.openshift.io", "storage: pvc: claim: 1", "oc get clusteroperator image-registry", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m", "oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{\"spec\":{\"storage\":{\"emptyDir\":{}}}}'", "Error from server (NotFound): configs.imageregistry.operator.openshift.io \"cluster\" not found", "oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4", "oc create -f pvc.yaml -n openshift-image-registry", "oc edit config.imageregistry.operator.openshift.io -o yaml", "storage: pvc: claim: 1", "oc policy add-role-to-user registry-viewer <user_name>", "oc policy add-role-to-user registry-editor <user_name>", "oc get nodes", "oc debug nodes/<node_name>", "sh-4.2# chroot /host", "sh-4.2# oc login -u kubeadmin -p <password_from_install_log> https://api-int.<cluster_name>.<base_domain>:6443", "sh-4.2# podman login -u kubeadmin -p USD(oc whoami -t) image-registry.openshift-image-registry.svc:5000", "Login Succeeded!", "sh-4.2# podman pull <name.io>/<image>", "sh-4.2# podman tag <name.io>/<image> image-registry.openshift-image-registry.svc:5000/openshift/<image>", "sh-4.2# podman push image-registry.openshift-image-registry.svc:5000/openshift/<image>", "oc get pods -n openshift-image-registry", "NAME READY STATUS RESTARTS AGE cluster-image-registry-operator-764bd7f846-qqtpb 1/1 Running 0 78m image-registry-79fb4469f6-llrln 1/1 Running 0 77m node-ca-hjksc 1/1 Running 0 73m node-ca-tftj6 1/1 Running 0 77m node-ca-wb6ht 1/1 Running 0 77m node-ca-zvt9q 
1/1 Running 0 74m", "oc logs deployments/image-registry -n openshift-image-registry", "2015-05-01T19:48:36.300593110Z time=\"2015-05-01T19:48:36Z\" level=info msg=\"version=v2.0.0+unknown\" 2015-05-01T19:48:36.303294724Z time=\"2015-05-01T19:48:36Z\" level=info msg=\"redis not configured\" instance.id=9ed6c43d-23ee-453f-9a4b-031fea646002 2015-05-01T19:48:36.303422845Z time=\"2015-05-01T19:48:36Z\" level=info msg=\"using inmemory layerinfo cache\" instance.id=9ed6c43d-23ee-453f-9a4b-031fea646002 2015-05-01T19:48:36.303433991Z time=\"2015-05-01T19:48:36Z\" level=info msg=\"Using OpenShift Auth handler\" 2015-05-01T19:48:36.303439084Z time=\"2015-05-01T19:48:36Z\" level=info msg=\"listening on :5000\" instance.id=9ed6c43d-23ee-453f-9a4b-031fea646002", "cat <<EOF | oc create -f - apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: prometheus-scraper rules: - apiGroups: - image.openshift.io resources: - registry/metrics verbs: - get EOF", "oc adm policy add-cluster-role-to-user prometheus-scraper <username>", "openshift: oc whoami -t", "curl --insecure -s -u <user>:<secret> \\ 1 https://image-registry.openshift-image-registry.svc:5000/extensions/v2/metrics | grep imageregistry | head -n 20", "HELP imageregistry_build_info A metric with a constant '1' value labeled by major, minor, git commit & git version from which the image registry was built. TYPE imageregistry_build_info gauge imageregistry_build_info{gitCommit=\"9f72191\",gitVersion=\"v3.11.0+9f72191-135-dirty\",major=\"3\",minor=\"11+\"} 1 HELP imageregistry_digest_cache_requests_total Total number of requests without scope to the digest cache. TYPE imageregistry_digest_cache_requests_total counter imageregistry_digest_cache_requests_total{type=\"Hit\"} 5 imageregistry_digest_cache_requests_total{type=\"Miss\"} 24 HELP imageregistry_digest_cache_scoped_requests_total Total number of scoped requests to the digest cache. TYPE imageregistry_digest_cache_scoped_requests_total counter imageregistry_digest_cache_scoped_requests_total{type=\"Hit\"} 33 imageregistry_digest_cache_scoped_requests_total{type=\"Miss\"} 44 HELP imageregistry_http_in_flight_requests A gauge of requests currently being served by the registry. TYPE imageregistry_http_in_flight_requests gauge imageregistry_http_in_flight_requests 1 HELP imageregistry_http_request_duration_seconds A histogram of latencies for requests to the registry. 
TYPE imageregistry_http_request_duration_seconds summary imageregistry_http_request_duration_seconds{method=\"get\",quantile=\"0.5\"} 0.01296087 imageregistry_http_request_duration_seconds{method=\"get\",quantile=\"0.9\"} 0.014847248 imageregistry_http_request_duration_seconds{method=\"get\",quantile=\"0.99\"} 0.015981195 imageregistry_http_request_duration_seconds_sum{method=\"get\"} 12.260727916000022", "oc patch configs.imageregistry.operator.openshift.io/cluster --patch '{\"spec\":{\"defaultRoute\":true}}' --type=merge", "HOST=USD(oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}')", "oc get secret -n openshift-ingress router-certs-default -o go-template='{{index .data \"tls.crt\"}}' | base64 -d | sudo tee /etc/pki/ca-trust/source/anchors/USD{HOST}.crt > /dev/null", "sudo update-ca-trust enable", "sudo podman login -u kubeadmin -p USD(oc whoami -t) USDHOST", "oc patch configs.imageregistry.operator.openshift.io/cluster --patch '{\"spec\":{\"defaultRoute\":true}}' --type=merge", "HOST=USD(oc get route default-route -n openshift-image-registry --template='{{ .spec.host }}')", "podman login -u kubeadmin -p USD(oc whoami -t) --tls-verify=false USDHOST 1", "oc create secret tls public-route-tls -n openshift-image-registry --cert=</path/to/tls.crt> --key=</path/to/tls.key>", "spec: routes: - name: public-routes hostname: myregistry.mycorp.organization secretName: public-route-tls" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html-single/registry/index
23.15.3. Create Software RAID
23.15.3. Create Software RAID Note On System z, the storage subsystem uses RAID transparently. There is no need to set up a software RAID. Redundant arrays of independent disks (RAIDs) are constructed from multiple storage devices that are arranged to provide increased performance and - in some configurations - greater fault tolerance. Refer to the Red Hat Enterprise Linux Storage Administration Guide for a description of different kinds of RAIDs. To make a RAID device, you must first create software RAID partitions. Once you have created two or more software RAID partitions, select RAID to join the software RAID partitions into a RAID device. RAID Partition Choose this option to configure a partition for software RAID. This option is the only choice available if your disk contains no software RAID partitions. This is the same dialog that appears when you add a standard partition - refer to Section 23.15.2, "Adding Partitions" for a description of the available options. Note, however, that File System Type must be set to software RAID Figure 23.40. Create a software RAID partition RAID Device Choose this option to construct a RAID device from two or more existing software RAID partitions. This option is available if two or more software RAID partitions have been configured. Figure 23.41. Create a RAID device Select the file system type as for a standard partition. Anaconda automatically suggests a name for the RAID device, but you can manually select names from md0 to md15 . Click the checkboxes beside individual storage devices to include or remove them from this RAID. The RAID Level corresponds to a particular type of RAID. Choose from the following options: RAID 0 - distributes data across multiple storage devices. Level 0 RAIDs offer increased performance over standard partitions, and can be used to pool the storage of multiple devices into one large virtual device. Note that Level 0 RAIDS offer no redundancy and that the failure of one device in the array destroys the entire array. RAID 0 requires at least two RAID partitions. RAID 1 - mirrors the data on one storage device onto one or more other storage devices. Additional devices in the array provide increasing levels of redundancy. RAID 1 requires at least two RAID partitions. RAID 4 - distributes data across multiple storage devices, but uses one device in the array to store parity information that safeguards the array in case any device within the array fails. Because all parity information is stored on the one device, access to this device creates a bottleneck in the performance of the array. RAID 4 requires at least three RAID partitions. RAID 5 - distributes data and parity information across multiple storage devices. Level 5 RAIDs therefore offer the performance advantages of distributing data across multiple devices, but do not share the performance bottleneck of level 4 RAIDs because the parity information is also distributed through the array. RAID 5 requires at least three RAID partitions. RAID 6 - level 6 RAIDs are similar to level 5 RAIDs, but instead of storing only one set of parity data, they store two sets. RAID 6 requires at least four RAID partitions. RAID 10 - level 10 RAIDs are nested RAIDs or hybrid RAIDs . Level 10 RAIDs are constructed by distributing data over mirrored sets of storage devices. For example, a level 10 RAID constructed from four RAID partitions consists of two pairs of partitions in which one partition mirrors the other. 
Data is then distributed across both pairs of storage devices, as in a level 0 RAID. RAID 10 requires at least four RAID partitions.
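The minimum partition counts listed above can be captured in a small helper, which may be handy when planning a partition layout before installation. This is only an illustration of the documented rules, not Anaconda code, and the level labels are names chosen for the sketch.

// Minimum software RAID partition counts, as described in the text above.
import java.util.Map;

public class RaidMinimums {

    private static final Map<String, Integer> MIN_PARTITIONS = Map.of(
            "RAID0", 2,   // striping, no redundancy
            "RAID1", 2,   // mirroring
            "RAID4", 3,   // dedicated parity disk
            "RAID5", 3,   // distributed parity
            "RAID6", 4,   // two sets of parity data
            "RAID10", 4); // mirrored pairs, then striped

    static boolean canCreate(String level, int availablePartitions) {
        Integer min = MIN_PARTITIONS.get(level);
        if (min == null) {
            throw new IllegalArgumentException("Unknown RAID level: " + level);
        }
        return availablePartitions >= min;
    }

    public static void main(String[] args) {
        System.out.println(canCreate("RAID5", 2));  // false: RAID 5 needs at least 3
        System.out.println(canCreate("RAID10", 4)); // true
    }
}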
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/create_software_raid-s390
9.4. RAID Levels and Linear Support
9.4. RAID Levels and Linear Support RAID supports various configurations, including levels 0, 1, 4, 5, and linear. These RAID types are defined as follows: Level 0 - RAID level 0, often called "striping," is a performance-oriented striped data mapping technique. This means the data being written to the array is broken down into strips and written across the member disks of the array, allowing high I/O performance at low inherent cost but provides no redundancy. The storage capacity of a level 0 array is equal to the total capacity of the member disks in a Hardware RAID or the total capacity of member partitions in a Software RAID. Level 1 - RAID level 1, or "mirroring," has been used longer than any other form of RAID. Level 1 provides redundancy by writing identical data to each member disk of the array, leaving a "mirrored" copy on each disk. Mirroring remains popular due to its simplicity and high level of data availability. Level 1 operates with two or more disks that may use parallel access for high data-transfer rates when reading but more commonly operate independently to provide high I/O transaction rates. Level 1 provides very good data reliability and improves performance for read-intensive applications but at a relatively high cost. [3] The storage capacity of the level 1 array is equal to the capacity of one of the mirrored hard disks in a Hardware RAID or one of the mirrored partitions in a Software RAID. Level 4 - Level 4 uses parity [4] concentrated on a single disk drive to protect data. It is better suited to transaction I/O rather than large file transfers. Because the dedicated parity disk represents an inherent bottleneck, level 4 is seldom used without accompanying technologies such as write-back caching. Although RAID level 4 is an option in some RAID partitioning schemes, it is not an option allowed in Red Hat Enterprise Linux RAID installations. [5] The storage capacity of Hardware RAID level 4 is equal to the capacity of member disks, minus the capacity of one member disk. The storage capacity of Software RAID level 4 is equal to the capacity of the member partitions, minus the size of one of the partitions if they are of equal size. Level 5 - This is the most common type of RAID. By distributing parity across some or all of an array's member disk drives, RAID level 5 eliminates the write bottleneck inherent in level 4. The only performance bottleneck is the parity calculation process. With modern CPUs and Software RAID, that usually is not a very big problem. As with level 4, the result is asymmetrical performance, with reads substantially outperforming writes. Level 5 is often used with write-back caching to reduce the asymmetry. The storage capacity of Hardware RAID level 5 is equal to the capacity of member disks, minus the capacity of one member disk. The storage capacity of Software RAID level 5 is equal to the capacity of the member partitions, minus the size of one of the partitions if they are of equal size. Linear RAID - Linear RAID is a simple grouping of drives to create a larger virtual drive. In linear RAID, the chunks are allocated sequentially from one member drive, going to the drive only when the first is completely filled. This grouping provides no performance benefit, as it is unlikely that any I/O operations will be split between member drives. Linear RAID also offers no redundancy and, in fact, decreases reliability - if any one member drive fails, the entire array cannot be used. The capacity is the total of all member disks. 
[3] RAID level 1 comes at a high cost because you write the same information to all of the disks in the array, which wastes drive space. For example, if you have RAID level 1 set up so that your root ( / ) partition exists on two 40G drives, you have 80G total but are only able to access 40G of that 80G. The other 40G acts like a mirror of the first 40G. [4] Parity information is calculated based on the contents of the rest of the member disks in the array. This information can then be used to reconstruct data when one disk in the array fails. The reconstructed data can then be used to satisfy I/O requests to the failed disk before it is replaced and to repopulate the failed disk after it has been replaced. [5] RAID level 4 takes up the same amount of space as RAID level 5, but level 5 has more advantages. For this reason, level 4 is not supported.
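The capacity rules above reduce to simple arithmetic when all members are the same size. The following sketch illustrates them for equally sized members; the 40 GiB member size in the example is arbitrary, and the code is not part of any RAID tooling.

// Usable capacity per RAID level, assuming equally sized members.
public class RaidCapacity {

    static long usableCapacityGiB(String level, int members, long memberSizeGiB) {
        return switch (level) {
            case "RAID0", "LINEAR" -> members * memberSizeGiB;       // total of all members
            case "RAID1" -> memberSizeGiB;                           // one mirrored copy is usable
            case "RAID4", "RAID5" -> (members - 1) * memberSizeGiB;  // minus one member for parity
            default -> throw new IllegalArgumentException("Unhandled level: " + level);
        };
    }

    public static void main(String[] args) {
        // Four 40 GiB members: RAID 0 gives 160 GiB, RAID 1 gives 40 GiB, RAID 5 gives 120 GiB.
        for (String level : new String[] {"RAID0", "RAID1", "RAID5", "LINEAR"}) {
            System.out.printf("%-6s -> %d GiB%n", level, usableCapacityGiB(level, 4, 40));
        }
    }
}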
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/redundant_array_of_independent_disks_raid-raid_levels_and_linear_support
function::ns_uid
function::ns_uid Name function::ns_uid - Returns the user ID of a target process as seen in a user namespace Synopsis Arguments None Description This function returns the user ID of the target process as seen in the target user namespace if provided, or the stap process namespace.
[ "ns_uid:long()" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-ns-uid
Chapter 4. Configuration
Chapter 4. Configuration This chapter describes the process for binding the AMQ OpenWire JMS implementation to your JMS application and setting configuration options. JMS uses the Java Naming Directory Interface (JNDI) to register and look up API implementations and other resources. This enables you to write code to the JMS API without tying it to a particular implementation. Configuration options are exposed as query parameters on the connection URI. For more information about configuring AMQ OpenWire JMS, see the ActiveMQ user guide . 4.1. Configuring the JNDI initial context JMS applications use a JNDI InitialContext object obtained from an InitialContextFactory to look up JMS objects such as the connection factory. AMQ OpenWire JMS provides an implementation of the InitialContextFactory in the org.apache.activemq.jndi.ActiveMQInitialContextFactory class. The InitialContextFactory implementation is discovered when the InitialContext object is instantiated: javax.naming.Context context = new javax.naming.InitialContext(); To find an implementation, JNDI must be configured in your environment. There are three ways of achieving this: using a jndi.properties file, using a system property, or using the initial context API. Using a jndi.properties file Create a file named jndi.properties and place it on the Java classpath. Add a property with the key java.naming.factory.initial . Example: Setting the JNDI initial context factory using a jndi.properties file java.naming.factory.initial = org.apache.activemq.jndi.ActiveMQInitialContextFactory In Maven-based projects, the jndi.properties file is placed in the <project-dir> /src/main/resources directory. Using a system property Set the java.naming.factory.initial system property. Example: Setting the JNDI initial context factory using a system property USD java -Djava.naming.factory.initial=org.apache.activemq.jndi.ActiveMQInitialContextFactory ... Using the initial context API Use the JNDI initial context API to set properties programmatically. Example: Setting JNDI properties programmatically Hashtable<Object, Object> env = new Hashtable<>(); env.put("java.naming.factory.initial", "org.apache.activemq.jndi.ActiveMQInitialContextFactory"); InitialContext context = new InitialContext(env); Note that you can use the same API to set the JNDI properties for connection factories, queues, and topics. 4.2. Configuring the connection factory The JMS connection factory is the entry point for creating connections. It uses a connection URI that encodes your application-specific configuration settings. To set the factory name and connection URI, create a property in the format below. You can store this configuration in a jndi.properties file or set the corresponding system property. The JNDI property format for connection factories connectionFactory. <lookup-name> = <connection-uri> For example, this is how you might configure a factory named app1 : Example: Setting the connection factory in a jndi.properties file connectionFactory.app1 = tcp://example.net:61616?jms.clientID=backend You can then use the JNDI context to look up your configured connection factory using the name app1 : ConnectionFactory factory = (ConnectionFactory) context.lookup("app1"); 4.3. Connection URIs Connections are configured using a connection URI. The connection URI specifies the remote host, port, and a set of configuration options, which are set as query parameters. For more information about the available options, see Chapter 5, Configuration options .
The connection URI format The scheme is tcp for unencrypted connections and ssl for SSL/TLS connections. For example, the following is a connection URI that connects to host example.net at port 61616 and sets the client ID to backend : Example: A connection URI Failover URIs URIs used for reconnect and failover can contain multiple connection URIs. They take the following form: The failover URI format Transport options prefixed with nested. are applied to each connection URI in the list. 4.4. Configuring queue and topic names JMS provides the option of using JNDI to look up deployment-specific queue and topic resources. To set queue and topic names in JNDI, create properties in the following format. Either place this configuration in a jndi.properties file or set corresponding system properties. The JNDI property format for queues and topics queue. <lookup-name> = <queue-name> topic. <lookup-name> = <topic-name> For example, the following properties define the names jobs and notifications for two deployment-specific resources: Example: Setting queue and topic names in a jndi.properties file queue.jobs = app1/work-items topic.notifications = app1/updates You can then look up the resources by their JNDI names: Queue queue = (Queue) context.lookup("jobs"); Topic topic = (Topic) context.lookup("notifications");
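Putting the pieces above together, the following sketch configures JNDI programmatically, looks up the app1 connection factory and the jobs queue, and opens a connection. It reuses the example broker address, client ID, and queue name shown earlier; error handling is omitted for brevity.

// A minimal end-to-end sketch combining the JNDI fragments above.
import java.util.Hashtable;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Queue;
import javax.naming.InitialContext;

public class OpenWireJndiExample {
    public static void main(String[] args) throws Exception {
        Hashtable<Object, Object> env = new Hashtable<>();
        env.put("java.naming.factory.initial",
                "org.apache.activemq.jndi.ActiveMQInitialContextFactory");
        // Connection factory and queue entries use the same property formats
        // shown above for jndi.properties.
        env.put("connectionFactory.app1", "tcp://example.net:61616?jms.clientID=backend");
        env.put("queue.jobs", "app1/work-items");

        InitialContext context = new InitialContext(env);
        ConnectionFactory factory = (ConnectionFactory) context.lookup("app1");
        Queue queue = (Queue) context.lookup("jobs");

        Connection connection = factory.createConnection();
        connection.start();
        System.out.println("Connected; queue is " + queue.getQueueName());
        connection.close();
        context.close();
    }
}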
[ "javax.naming.Context context = new javax.naming.InitialContext();", "java.naming.factory.initial = org.apache.activemq.jndi.ActiveMQInitialContextFactory", "java -Djava.naming.factory.initial=org.apache.activemq.jndi.ActiveMQInitialContextFactory", "Hashtable<Object, Object> env = new Hashtable<>(); env.put(\"java.naming.factory.initial\", \"org.apache.activemq.jndi.ActiveMQInitialContextFactory\"); InitialContext context = new InitialContext(env);", "connectionFactory. <lookup-name> = <connection-uri>", "connectionFactory.app1 = tcp://example.net:61616?jms.clientID=backend", "ConnectionFactory factory = (ConnectionFactory) context.lookup(\"app1\");", "<scheme>://<host>:<port>[?<option>=<value>[&<option>=<value>...]]", "tcp://example.net:61616?jms.clientID=backend", "failover:(<connection-uri>[,<connection-uri>])[?<option>=<value>[&<option>=<value>...]]", "queue. <lookup-name> = <queue-name> topic. <lookup-name> = <topic-name>", "queue.jobs = app1/work-items topic.notifications = app1/updates", "Queue queue = (Queue) context.lookup(\"jobs\"); Topic topic = (Topic) context.lookup(\"notifications\");" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q1/html/using_the_amq_openwire_jms_client/configuration
Chapter 4. Deploying functions
Chapter 4. Deploying functions You can deploy your functions to the cluster by using the kn func tool. 4.1. Deploying a function You can deploy a function to your cluster as a Knative service by using the kn func deploy command. If the targeted function is already deployed, it is updated with a new container image that is pushed to a container image registry, and the Knative service is updated. Prerequisites The OpenShift Serverless Operator and Knative Serving are installed on the cluster. You have installed the Knative ( kn ) CLI. You have created a project or have access to a project with the appropriate roles and permissions to create applications and other workloads in OpenShift Container Platform. You must have already created and initialized the function that you want to deploy. Procedure Deploy a function: USD kn func deploy [-n <namespace> -p <path> -i <image>] Example output Function deployed at: http://func.example.com If no namespace is specified, the function is deployed in the current namespace. The function is deployed from the current directory, unless a path is specified. The Knative service name is derived from the project name, and cannot be changed using this command. Note You can create a serverless function with a Git repository URL by using Import from Git or Create Serverless Function in the +Add view of the Developer perspective.
[ "kn func deploy [-n <namespace> -p <path> -i <image>]", "Function deployed at: http://func.example.com" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_serverless/1.33/html/functions/serverless-functions-deploying
Chapter 115. KafkaUserQuotas schema reference
Chapter 115. KafkaUserQuotas schema reference Used in: KafkaUserSpec Full list of KafkaUserQuotas schema properties Kafka allows a user to set quotas to control the use of resources by clients. 115.1. quotas You can configure your clients to use the following types of quotas: Network usage quotas specify the byte rate threshold for each group of clients sharing a quota. CPU utilization quotas specify a window for broker requests from clients. The window is the percentage of time for clients to make requests. A client makes requests on the I/O threads and network threads of the broker. Partition mutation quotas limit the number of partition mutations which clients are allowed to make per second. A partition mutation quota prevents Kafka clusters from being overwhelmed by concurrent topic operations. Partition mutations occur in response to the following types of user requests: Creating partitions for a new topic Adding partitions to an existing topic Deleting partitions from a topic You can configure a partition mutation quota to control the rate at which mutations are accepted for user requests. Using quotas for Kafka clients might be useful in a number of situations. Consider a wrongly configured Kafka producer which is sending requests at too high a rate. Such misconfiguration can cause a denial of service to other clients, so the problematic client ought to be blocked. By using a network limiting quota, it is possible to prevent this situation from significantly impacting other clients. Streams for Apache Kafka supports user-level quotas, but not client-level quotas. Example Kafka user quota configuration spec: quotas: producerByteRate: 1048576 consumerByteRate: 2097152 requestPercentage: 55 controllerMutationRate: 10 For more information about Kafka user quotas, refer to the Apache Kafka documentation . 115.2. KafkaUserQuotas schema properties Property Property type Description consumerByteRate integer A quota on the maximum bytes per-second that each client group can fetch from a broker before the clients in the group are throttled. Defined on a per-broker basis. controllerMutationRate number A quota on the rate at which mutations are accepted for the create topics request, the create partitions request and the delete topics request. The rate is accumulated by the number of partitions created or deleted. producerByteRate integer A quota on the maximum bytes per-second that each client group can publish to a broker before the clients in the group are throttled. Defined on a per-broker basis. requestPercentage integer A quota on the maximum CPU utilization of each client group as a percentage of network and I/O threads.
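Streams for Apache Kafka applies these quotas for you when they are set on the KafkaUser resource, so no client code is required. Purely as an illustration of what is configured under the hood, the following sketch sets the same four quotas through the plain Kafka Admin API (available in Apache Kafka 2.6 and later); the bootstrap address and user name are placeholders, not values from this documentation.

// Hedged illustration only: the operator normally manages these quotas from the
// KafkaUser resource shown above.
import java.util.List;
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.common.quota.ClientQuotaAlteration;
import org.apache.kafka.common.quota.ClientQuotaEntity;

public class UserQuotaExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "my-cluster-kafka-bootstrap:9092");

        try (Admin admin = Admin.create(props)) {
            // Quotas are attached to a user entity, matching the KafkaUser above.
            ClientQuotaEntity user = new ClientQuotaEntity(
                    Map.of(ClientQuotaEntity.USER, "my-user"));
            ClientQuotaAlteration alteration = new ClientQuotaAlteration(user, List.of(
                    new ClientQuotaAlteration.Op("producer_byte_rate", 1048576.0),
                    new ClientQuotaAlteration.Op("consumer_byte_rate", 2097152.0),
                    new ClientQuotaAlteration.Op("request_percentage", 55.0),
                    new ClientQuotaAlteration.Op("controller_mutation_rate", 10.0)));
            admin.alterClientQuotas(List.of(alteration)).all().get();
        }
    }
}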
[ "spec: quotas: producerByteRate: 1048576 consumerByteRate: 2097152 requestPercentage: 55 controllerMutationRate: 10" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-KafkaUserQuotas-reference
5.4. Determining the Requirements for Subsystem Certificates
5.4. Determining the Requirements for Subsystem Certificates The CA configuration determines many of the characteristics of the certificates which it issues, regardless of the actual type of certificate being issued. Constraints on the CA's own validity period, distinguished name, and allowed encryption algorithms impact the same characteristics in their issued certificates. Additionally, the Certificate Managers have predefined profiles that set rules for different kinds of certificates that they issue, and additional profiles can be added or modified. These profile configurations also impact issued certificates. 5.4.1. Determining Which Certificates to Install When a Certificate System subsystem is first installed and configured, the certificates necessary to access and administer it are automatically created. These include an agent's certificate, server certificate, and subsystem-specific certificates. These initial certificates are shown in Table 5.1, "Initial Subsystem Certificates" . Table 5.1. Initial Subsystem Certificates Subsystem Certificates Certificate Manager CA signing certificate OCSP signing certificate SSL/TLS server certificate Subsystem certificate User's (agent/administrator) certificate Audit log signing certificate OCSP OCSP signing certificate SSL/TLS server certificate Subsystem certificate User's (agent/administrator) certificate Audit log signing certificate KRA Transport certificate Storage certificate SSL/TLS server certificate Subsystem certificate User's (agent/administrator) certificate Audit log signing certificate TKS SSL/TLS server certificate User's (agent/administrator) certificate Audit log signing certificate TPS SSL/TLS server certificate User's (agent/administrator) certificate Audit log signing certificate There are some cautionary considerations about replacing existing subsystem certificates. Generating new key pairs when creating a new self-signed CA certificate for a root CA will invalidate all certificates issued under the CA certificate. This means none of the certificates issued or signed by the CA using its old key will work; subordinate Certificate Managers, KRAs, OCSPs, TKSs, and TPSs will no longer function, and agents can no longer access agent interfaces. This same situation occurs if a subordinate CA's CA certificate is replaced by one with a new key pair; all certificates issued by that CA are invalidated and will no longer work. Instead of creating new certificates from new key pairs, consider renewing the existing CA signing certificate. If the CA is configured to publish to the OCSP and it has a new CA signing certificate or a new CRL signing certificate, the CA must be identified again to the OCSP. If a new transport certificate is created for the KRA, the KRA information must be updated in the CA's configuration file, CS.cfg . The existing transport certificate must be replaced with the new one in the ca.connector.KRA.transportCert parameter. If a CA is cloned, then when creating a new SSL/TLS server certificate for the master Certificate Manager, the clone CAs' certificate databases all need to be updated with the new SSL/TLS server certificate. If the Certificate Manager is configured to publish certificates and CRLs to an LDAP directory and uses the SSL/TLS server certificate for SSL/TLS client authentication, then the new SSL/TLS server certificate must be requested with the appropriate extensions. After installing the certificate, the publishing directory must be configured to use the new server certificate.
Any number of SSL/TLS server certificates can be issued for a subsystem instance, but it really only needs one SSL/TLS certificate. This certificate can be renewed or replaced as many times as necessary. 5.4.2. Planning the CA Distinguished Name The core elements of a CA are a signing unit and the Certificate Manager identity. The signing unit digitally signs certificates requested by end entities. A Certificate Manager must have its own distinguished name (DN), which is listed in every certificate it issues. Like any other certificate, a CA certificate binds a DN to a public key. A DN is a series of name-value pairs that in combination uniquely identify an entity. For example, the following DN identifies a Certificate Manager for the Engineering department of a corporation named Example Corporation: Many combinations of name-value pairs are possible for the Certificate Manager's DN. The DN must be unique and readily identifiable, since any end entity can examine it. 5.4.3. Setting the CA Signing Certificate Validity Period Every certificate, including a Certificate Manager signing certificate, must have a validity period. The Certificate System does not restrict the validity period that can be specified. Set as long a validity period as possible, depending on the requirements for certificate renewal, the place of the CA in the certificate hierarchy, and the requirements of any public CAs that are included in the PKI. A Certificate Manager cannot issue a certificate that has a validity period longer than the validity period of its CA signing certificate. If a request is made for a period longer than the CA certificate's validity period, the requested validity date is ignored and the CA signing certificate validity period is used. 5.4.4. Choosing the Signing Key Type and Length A signing key is used by a subsystem to verify and "seal" something. CAs use a CA signing certificate to sign certificates or CRLs that it issues; OCSPs use signing certificates to verify their responses to certificate status requests; all subsystems use log file signing certificates to sign their audit logs. The signing key must be cryptographically strong to provide protection and security for its signing operations. The following signing algorithms are considered secure: SHA256withRSA SHA512withRSA SHA256withEC SHA512withEC Note Certificate System includes native ECC support. It is also possible to load and use a third-party PKCS #11 module with ECC-enabled. This is covered in Chapter 9, Installing an Instance with ECC System Certificates . Along with a key type , each key has a specific bit length . Longer keys are considered cryptographically stronger than shorter keys. However, longer keys require more time for signing operations. The default RSA key length in the configuration wizard is 2048 bits; for certificates that provide access to highly sensitive data or services, consider increasing the length to 4096 bits. ECC keys are much stronger than RSA keys, so the recommended length for ECC keys is 256 bits, which is equivalent in strength to a 2048-bit RSA key. 5.4.5. Using Certificate Extensions An X.509 v3 certificate contains an extension field that permits any number of additional fields to be added to the certificate. Certificate extensions provide a way of adding information such as alternative subject names and usage restrictions to certificates. 
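Before moving on to extensions, the signing key choices discussed above (RSA at 2048 or 4096 bits, EC at 256 bits) can be illustrated with the standard Java security API. This is only a sketch of the key types and lengths — it is not Certificate System code, and the curve name secp256r1 is an assumption standing in for a 256-bit EC key.

// Minimal sketch of the signing key sizes recommended above, using only the
// standard Java security API.
import java.security.KeyPair;
import java.security.KeyPairGenerator;
import java.security.spec.ECGenParameterSpec;

public class SigningKeySketch {
    public static void main(String[] args) throws Exception {
        // Default recommendation: 2048-bit RSA; raise to 4096 for highly sensitive CAs.
        KeyPairGenerator rsa = KeyPairGenerator.getInstance("RSA");
        rsa.initialize(4096);
        KeyPair rsaKeys = rsa.generateKeyPair();

        // 256-bit EC key, roughly equivalent in strength to 2048-bit RSA.
        KeyPairGenerator ec = KeyPairGenerator.getInstance("EC");
        ec.initialize(new ECGenParameterSpec("secp256r1"));
        KeyPair ecKeys = ec.generateKeyPair();

        System.out.println(rsaKeys.getPublic().getAlgorithm() + " / "
                + ecKeys.getPublic().getAlgorithm());
    }
}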
Older Netscape servers, such as Red Hat Directory Server and Red Hat Certificate System, require Netscape-specific extensions because they were developed before PKIX part 1 standards were defined. The X.509 v1 certificate specification was originally designed to bind public keys to names in an X.500 directory. As certificates began to be used on the Internet and extranets and directory lookups could not always be performed, problem areas emerged that were not covered by the original specification. Trust . The X.500 specification establishes trust by means of a strict directory hierarchy. By contrast, Internet and extranet deployments frequently involve distributed trust models that do not conform to the hierarchical X.500 approach. Certificate use . Some organizations restrict how certificates are used. For example, some certificates may be restricted to client authentication only. Multiple certificates . It is not uncommon for certificate users to possess multiple certificates with identical subject names but different key material. In this case, it is necessary to identify which key and certificate should be used for what purpose. Alternate names . For some purposes, it is useful to have alternative subject names that are also bound to the public key in the certificate. Additional attributes . Some organizations store additional information in certificates, such as when it is not possible to look up information in a directory. Relationship with CA . When certificate chaining involves intermediate CAs, it is useful to have information about the relationships among CAs embedded in their certificates. CRL checking . Since it is not always possible to check a certificate's revocation status against a directory or with the original certificate authority, it is useful for certificates to include information about where to check CRLs. The X.509 v3 specification addressed these issues by altering the certificate format to include additional information within a certificate by defining a general format for certificate extensions and specifying extensions that can be included in the certificate. The extensions defined for X.509 v3 certificates enable additional attributes to be associated with users or public keys and manage the certification hierarchy. The Internet X.509 Public Key Infrastructure Certificate and CRL Profile recommends a set of extensions to use for Internet certificates and standard locations for certificate or CA information. These extensions are called standard extensions . Note For more information on standard extensions, see RFC 2459 , RFC 3280 , and RFC 3279 . The X.509 v3 standard for certificates allows organizations to define custom extensions and include them in certificates. These extensions are called private , proprietary , or custom extensions, and they carry information unique to an organization or business. Applications may not be able to validate certificates that contain private critical extensions, so it is not recommended that these be used in widespread situations. The X.500 and X.509 specifications are controlled by the International Telecommunication Union (ITU), an international organization that primarily serves large telecommunication companies, government organizations, and other entities concerned with the international telecommunications network. The Internet Engineering Task Force (IETF), which controls many of the standards that underlie the Internet, is currently developing public-key infrastructure X.509 (PKIX) standards.
These proposed standards further refine the X.509 v3 approach to extensions for use on the Internet. The recommendations for certificates and CRLs have reached proposed standard status and are in a document referred to as PKIX Part 1 . Two other standards, Abstract Syntax Notation One (ASN.1) and Distinguished Encoding Rules (DER), are used with Certificate System and certificates in general. These are specified in the CCITT Recommendations X.208 and X.209. For a quick summary of ASN.1 and DER, see A Layman's Guide to a Subset of ASN.1, BER, and DER , which is available at RSA Laboratories' web site, http://www.rsa.com . 5.4.5.1. Structure of Certificate Extensions In RFC 3280, an X.509 certificate extension is defined as follows: This means a certificate extension consists of the following: The object identifier (OID) for the extension. This identifier uniquely identifies the extension. It also determines the ASN.1 type of value in the value field and how the value is interpreted. When an extension appears in a certificate, the OID appears as the extension ID field ( extnID ) and the corresponding ASN.1 encoded structure appears as the value of the octet string ( extnValue ). A flag or Boolean field called critical . The value, which can be either true or false , assigned to this field indicates whether the extension is critical or noncritical to the certificate. If the extension is critical and the certificate is sent to an application that does not understand the extension based on the extension's ID, the application must reject the certificate. If the extension is not critical and the certificate is sent to an application that does not understand the extension based on the extension's ID, the application can ignore the extension and accept the certificate. An octet string containing the DER encoding of the value of the extension. Typically, the application receiving the certificate checks the extension ID to determine if it can recognize the ID. If it can, it uses the extension ID to determine the type of value used. Some of the standard extensions defined in the X.509 v3 standard include the following: Authority Key Identifier extension, which identifies the CA's public key, the key used to sign the certificate. Subject Key Identifier extension, which identifies the subject's public key, the key being certified. Note Not all applications support certificates with version 3 extensions. Applications that do support these extensions may not be able to interpret some or all of these specific extensions. 5.4.6. Using and Customizing Certificate Profiles Certificates have different types and different applications. They can be used to establish a single sign-on environment for a corporate network, to set up VPNs, to encrypt email, or to authenticate to a website. The requirements for all of these certificates can be different, just as there may also be different requirements for the same type of certificate for different kinds of users. These certificate characteristics are set in certificate profiles . The Certificate Manager defines a set of certificate profiles that it uses as enrollment forms when users or machines request certificates. Certificate Profiles A certificate profile defines everything associated with issuing a particular type of certificate, including the authentication method, the certificate content (defaults), constraints for the values of the content, and the contents of the input and output for the certificate profile.
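Returning briefly to the extension structure described in Section 5.4.5.1, the three fields — the OID, the criticality flag, and the DER-encoded value — are visible through the standard Java certificate API. The following sketch is illustrative only; the certificate file name is a placeholder, and 2.5.29.35 is the OID of the Authority Key Identifier extension mentioned above.

// Dumps the extension OIDs, criticality, and raw DER-encoded values of a certificate.
import java.io.FileInputStream;
import java.security.cert.CertificateFactory;
import java.security.cert.X509Certificate;
import java.util.Set;

public class ExtensionDump {
    public static void main(String[] args) throws Exception {
        CertificateFactory cf = CertificateFactory.getInstance("X.509");
        X509Certificate cert;
        try (FileInputStream in = new FileInputStream("server-cert.pem")) {
            cert = (X509Certificate) cf.generateCertificate(in);
        }

        // Critical extensions: an application that does not recognize one of these
        // OIDs must reject the certificate.
        Set<String> critical = cert.getCriticalExtensionOIDs();
        // Non-critical extensions may be ignored by applications that do not
        // understand them.
        Set<String> nonCritical = cert.getNonCriticalExtensionOIDs();

        System.out.println("Critical: " + critical);
        System.out.println("Non-critical: " + nonCritical);

        // The value is returned as the DER encoding of the extnValue OCTET STRING.
        byte[] akiDer = cert.getExtensionValue("2.5.29.35");
        System.out.println("Authority Key Identifier present: " + (akiDer != null));
    }
}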
Enrollment requests are submitted to a certificate profile and are then subject to the defaults and constraints set in that certificate profile. These constraints are in place whether the request is submitted through the input form associated with the certificate profile or through other means. The certificate that is issued from a certificate profile request contains the content required by the defaults with the information required by the default parameters. The constraints provide rules for what content is allowed in the certificate. For example, a certificate profile for user certificates defines all aspects of that certificate, including the validity period of the certificate. The default validity period can be set to two years, and a constraint can be set on the profile that the validity period for certificates requested through this certificate profile cannot exceed two years. When a user requests a certificate using the input form associated with this certificate profile, the issued certificate contains the information specified in the defaults and will be valid for two years. If the user submits a pre-formatted request for a certificate with a validity period of four years, the request is rejected since the constraints allow a maximum validity period of two years for this type of certificate. A set of certificate profiles has been predefined for the most common certificates issued. These certificate profiles define defaults and constraints, associate the authentication method, and define the needed inputs and outputs for the certificate profile. Modifying the Certificate Profile Parameters The parameters of the default certificate profiles can be modified; this includes the authentication method, the defaults, the constraints used in each profile, the values assigned to any of the parameters in a profile, the input, and the output. It is also possible to create new certificate profiles for other types of certificates or for creating more than one certificate profile for a certificate type. There can be multiple certificate profiles for a particular type of certificate to issue the same type of certificate with a different authentication method or different definitions for the defaults and constraints. For example, there can be two certificate profiles for enrollment of SSL/TLS server certificates where one certificate profile issues certificates with a validity period of six months and another certificate profile issues certificates with a validity period of two years. An input sets a text field in the enrollment form and defines what kind of information needs to be gathered from the end entity; this includes setting the text area for a certificate request to be pasted, which allows a request to be created outside the input form with any of the request information required. The input values are set as values in the certificate. The default inputs are not configurable in the Certificate System. An output specifies how the response page to a successful enrollment is presented. It usually displays the certificate in a user-readable format. The default output shows a printable version of the resultant certificate; other outputs set the type of information generated at the end of the enrollment, such as PKCS #7. Policy sets are sets of constraints and default extensions attached to every certificate processed through the profile. The extensions define certificate content such as validity periods and subject name requirements.
A profile handles one certificate request, but a single request can contain information for multiple certificates. A PKCS#10 request contains a single public key. One CRMF request can contain multiple public keys, meaning multiple certificate requests. A profile may contain multiple sets of policies, with each set specifying how to handle one certificate request within a CRMF request. Certificate Profile Administration An administrator sets up a certificate profile by associating an existing authentication plug-in, or method, with the certificate profile; enabling and configuring defaults and constraints; and defining inputs and outputs. The administrator can use the existing certificate profiles, modify the existing certificate profiles, create new certificate profiles, and delete any certificate profile that will not be used in this PKI. Once a certificate profile is set up, it appears on the Manage Certificate Profiles page of the agent services page where an agent can approve, and thus enable, a certificate profile. Once the certificate profile is enabled, it appears on the Certificate Profile tab of the end-entities page where end entities can enroll for a certificate using the certificate profile. The certificate profile enrollment page in the end-entities interface contains links to each certificate profile that has been enabled by the agents. When an end entity selects one of those links, an enrollment page appears containing an enrollment form specific to that certificate profile. The enrollment page is dynamically generated from the inputs defined for the profile. If an authentication plug-in is configured, additional fields may be added to authenticate the user. When an end entity submits a certificate profile request that is associated with an agent-approved (manual) enrollment, an enrollment where no authentication plug-in is configured, the certificate request is queued in the agent services interface. The agent can change some aspects of the enrollment, request, validate it, cancel it, reject it, update it, or approve it. The agent is able to update the request without submitting it or validate that the request adheres to the profile's defaults and constraints. This validation procedure is only for verification and does not result in the request being submitted. The agent is bound by the constraints set; they cannot change the request in such a way that a constraint is violated. The signed approval is immediately processed, and a certificate is issued. When a certificate profile is associated with an authentication method, the request is approved immediately and generates a certificate automatically if the user successfully authenticates, all the information required is provided, and the request does not violate any of the constraints set up for the certificate profile. There are profile policies which allow user-supplied settings like subject names or validity periods. The certificate profile framework can also preserve user-defined content set in the original certificate request in the issued certificate. The issued certificate contains the content defined in the defaults for this certificate profile, such as the extensions and validity period for the certificate. The content of the certificate is constrained by the constraints set for each default. Multiple policies (defaults and constraints) can be set for one profile, distinguishing each set by using the same value in the policy set ID. 
This is particularly useful for dealing with dual-key enrollment, where encryption keys and signing keys are submitted to the same profile. The server evaluates each set with each request it receives. When a single certificate is issued, one set is evaluated, and any other sets are ignored. When dual-key pairs are issued, the first set is evaluated with the first certificate request, and the second set is evaluated with the second certificate request. There is no need for more than one set for issuing a single certificate or more than two sets for issuing dual-key pairs. Guidelines for Customizing Certificate Profiles Tailor the profiles to the real needs and anticipated certificate types of the organization: Decide which certificate profiles are needed in the PKI. There should be at least one profile for each type of certificate issued. There can be more than one certificate profile for each type of certificate to set different authentication methods or different defaults and constraints for a particular type of certificate. Any certificate profile available in the administrative interface can be approved by an agent and then used by an end entity to enroll. Delete any certificate profiles that will not be used. Modify the existing certificate profiles to match the specific characteristics of the company's certificates. Change the defaults set up in the certificate profile, the values of the parameters set in the defaults, or the constraints that control the certificate content. Change the constraints set up by changing the value of the parameters. Change the authentication method. Change the inputs by adding or deleting inputs in the certificate profile, which control the fields on the input page. Add or delete the output. 5.4.6.1. Adding SAN Extensions to the SSL Server Certificate Certificate System enables adding Subject Alternative Name (SAN) extensions to the SSL server certificate during the installation of a non-root CA or other Certificate System instances. To do so, follow the instructions in the /usr/share/pki/ca/profiles/ca/caInternalAuthServerCert.cfg file and add the following parameters to the configuration file supplied to the pkispawn utility: pki_san_inject Set the value of this parameter to True . pki_san_for_server_cert Provide a list of the required SAN extensions separated by commas (,). For example: 5.4.7. Planning Authentication Methods As implied in Section 5.4.6, "Using and Customizing Certificate Profiles" , authentication for the certificate process means the way that a user or entity requesting a certificate proves that they are who they say they are. There are three ways that the Certificate System can authenticate an entity: In agent-approved enrollment, end-entity requests are sent to an agent for approval. The agent approves the certificate request. In automatic enrollment, end-entity requests are authenticated using a plug-in, and then the certificate request is processed; an agent is not involved in the enrollment process. In CMC enrollment , a third-party application can create a request that is signed by an agent and then automatically processed. A Certificate Manager is initially configured for agent-approved enrollment and for CMC authentication. Automated enrollment is enabled by configuring one of the authentication plug-in modules. More than one authentication method can be configured in a single instance of a subsystem. The HTML registration pages contain hidden values specifying the method used. 
With certificate profiles, the end-entity enrollment pages are dynamically generated for each enabled profile. The authentication method associated with this certificate profile is specified in the dynamically generated enrollment page. The authentication process is simple. An end entity submits a request for enrollment. The form used to submit the request identifies the method of authentication and enrollment. All HTML forms are dynamically generated by the profiles, which automatically associate the appropriate authentication method with the form. If the authentication method is an agent-approved enrollment, the request is sent to the request queue of the CA agent. If the automated notification for a request in queue is set, an email is sent to the appropriate agent that a new request has been received. The agent can modify the request as allowed for that form and the profile constraints. Once approved, the request must pass the certificate profiles set for the Certificate Manager, and then the certificate is issued. When the certificate is issued, it is stored in the internal database and can be retrieved by the end entity from the end-entities page by serial number or by request ID. If the authentication method is automated, the end entity submits the request along with required information to authenticate the user, such as an LDAP user name and password. When the user is successfully authenticated, the request is processed without being sent to an agent's queue. If the request passes the certificate profile configuration of the Certificate Manager, the certificate is issued and stored in the internal database. It is delivered to the end entity immediately through the HTML forms. The requirements for how a certificate request is authenticated can have a direct impact on the necessary subsystems and profile settings. For example, if an agent-approved enrollment requires that an agent meet the requester in person and verify their identity through supported documentation, the authentication process can be time-intensive, as well as constrained by the physical availability of both the agent and the requester. 5.4.8. Publishing Certificates and CRLs A CA can publish both certificates and CRLs. Certificates can be published to a plain file or to an LDAP directory; CRLs can be published to a file or to an LDAP directory as well, and can also be published to an OCSP responder to handle certificate verification. Configuring publishing is fairly straightforward and is easily adjusted. For continuity and accessibility, though, it is good to plan out where certificates and CRLs need to be published and what clients need to be able to access them. Publishing to an LDAP directory requires special configuration in the directory for publishing to work: If certificates are published to the directory, then every user or server to which a certificate is issued must have a corresponding entry in the LDAP directory. If CRLs are published to the directory, then they must be published to an entry for the CA that issued them. For SSL/TLS, the directory service has to be configured in SSL/TLS and, optionally, be configured to allow the Certificate Manager to use certificate-based authentication. The directory administrator should configure appropriate access control rules to control DN (entry name) and password-based authentication to the LDAP directory. 5.4.9. 
Renewing or Reissuing CA Signing Certificates When a CA signing certificate expires, all certificates signed with the CA's corresponding signing key become invalid. End entities use information in the CA certificate to verify the certificate's authenticity. If the CA certificate itself has expired, applications cannot chain the certificate to a trusted CA. There are two ways of resolving CA certificate expiration: Renewing a CA certificate involves issuing a new CA certificate with the same subject name and public and private key material as the old CA certificate, but with an extended validity period. As long as the new CA certificate is distributed to all users before the old CA certificate expires, renewing the certificate allows certificates issued under the old CA certificate to continue working for the full duration of their validity periods. Reissuing a CA certificate involves issuing a new CA certificate with a new name, public and private key material, and validity period. This avoids some problems associated with renewing a CA certificate, but it requires more work for both administrators and users to implement. All certificates issued by the old CA, including those that have not yet expired, must be renewed by the new CA. There are problems and advantages with either renewing or reissuing a CA certificate. Begin planning the CA certificate renewal or re-issuance before installing any Certificate Managers, and consider the ramifications the planned procedures may have for extensions, policies, and other aspects of the PKI deployment. Note Correct use of extensions, for example the authorityKeyIdentifier extension, can affect the transition from an old CA certificate to a new one.
[ "cn=demoCA, o=Example Corporation, ou=Engineering, c=US", "Extension ::= SEQUENCE { extnID OBJECT IDENTIFIER, critical BOOLEAN DEFAULT FALSE, extnValue OCTET STRING }", "pki_san_inject=True pki_san_for_server_cert=intca01.example.com,intca02.example.com,intca.example.com" ]
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/planning_installation_and_deployment_guide/sect-deployment_guide-planning_your_crts-determining_the_requirements_for_subsystem_certificates
Chapter 3. An active/passive NFS Server in a Red Hat High Availability Cluster
Chapter 3. An active/passive NFS Server in a Red Hat High Availability Cluster This chapter describes how to configure a highly available active/passive NFS server on a two-node Red Hat Enterprise Linux High Availability Add-On cluster using shared storage. The procedure uses pcs to configure Pacemaker cluster resources. In this use case, clients access the NFS file system through a floating IP address. The NFS server runs on one of two nodes in the cluster. If the node on which the NFS server is running becomes inoperative, the NFS server starts up again on the second node of the cluster with minimal service interruption. This use case requires that your system include the following components: Two nodes, which will be used to create the cluster running the NFS server. In this example, the nodes used are z1.example.com and z2.example.com . A power fencing device for each node of the cluster. This example uses two ports of the APC power switch with a host name of zapc.example.com . A public virtual IP address, required for the NFS server. Shared storage for the nodes in the cluster, using iSCSI, Fibre Channel, or another shared network block device. Configuring a highly available active/passive NFS server on a two-node Red Hat Enterprise Linux High Availability Add-On cluster requires that you perform the following steps. Create the cluster that will run the NFS server and configure fencing for each node in the cluster, as described in Section 3.1, "Creating the NFS Cluster" . Configure an ext4 file system mounted on the LVM logical volume my_lv on the shared storage for the nodes in the cluster, as described in Section 3.2, "Configuring an LVM Volume with an ext4 File System" . Configure an NFS share on the shared storage on the LVM logical volume, as described in Section 3.3, "NFS Share Setup" . Ensure that only the cluster is capable of activating the LVM volume group that contains the logical volume my_lv , and that the volume group will not be activated outside of the cluster on startup, as described in Section 3.4, "Exclusive Activation of a Volume Group in a Cluster" . Create the cluster resources as described in Section 3.5, "Configuring the Cluster Resources" . Test the NFS server you have configured, as described in Section 3.6, "Testing the Resource Configuration" . 3.1. Creating the NFS Cluster Use the following procedure to install and create the NFS cluster. Install the cluster software on nodes z1.example.com and z2.example.com , using the procedure provided in Section 1.1, "Cluster Software Installation" . Create the two-node cluster that consists of z1.example.com and z2.example.com , using the procedure provided in Section 1.2, "Cluster Creation" . As in that example procedure, this use case names the cluster my_cluster . Configure fencing devices for each node of the cluster, using the procedure provided in Section 1.3, "Fencing Configuration" . This example configures fencing using two ports of the APC power switch with a host name of zapc.example.com .
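The detailed commands are given in the sections referenced above. As a hedged sketch only, cluster creation and fencing configuration with pcs on RHEL 7 generally follow the pattern below, run as root; the host names, cluster name, APC ports, and credentials are placeholders from this example rather than values to copy verbatim.

On both nodes, install the cluster software and start the pcs daemon:

yum install -y pcs pacemaker fence-agents-apc
systemctl enable --now pcsd.service

On one node, authenticate the nodes as the hacluster user, then create and start the cluster:

pcs cluster auth z1.example.com z2.example.com
pcs cluster setup --start --name my_cluster z1.example.com z2.example.com

Create a fencing device that maps each node to its APC switch port; agent options vary by device and firmware, so verify them for your hardware:

pcs stonith create myapc fence_apc_snmp ipaddr="zapc.example.com" pcmk_host_map="z1.example.com:1;z2.example.com:2" login="apc" passwd="apc"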
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/high_availability_add-on_administration/ch-nfsserver-HAAA
Migrating from version 3 to 4
Migrating from version 3 to 4 OpenShift Container Platform 4.7 Migrating to OpenShift Container Platform 4 Red Hat OpenShift Documentation Team
[ "oc expose svc <app1-svc> --hostname <app1.apps.source.example.com> -n <app1-namespace>", "sudo podman login registry.redhat.io", "sudo podman cp USD(sudo podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/operator.yml ./", "sudo podman cp USD(sudo podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/controller.yml ./", "oc run test --image registry.redhat.io/ubi8 --command sleep infinity", "oc create -f operator.yml", "namespace/openshift-migration created rolebinding.rbac.authorization.k8s.io/system:deployers created serviceaccount/migration-operator created customresourcedefinition.apiextensions.k8s.io/migrationcontrollers.migration.openshift.io created role.rbac.authorization.k8s.io/migration-operator created rolebinding.rbac.authorization.k8s.io/migration-operator created clusterrolebinding.rbac.authorization.k8s.io/migration-operator created deployment.apps/migration-operator created Error from server (AlreadyExists): error when creating \"./operator.yml\": rolebindings.rbac.authorization.k8s.io \"system:image-builders\" already exists 1 Error from server (AlreadyExists): error when creating \"./operator.yml\": rolebindings.rbac.authorization.k8s.io \"system:image-pullers\" already exists", "oc create -f controller.yml", "oc get pods -n openshift-migration", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] stunnel_tcp_proxy: http://username:password@ip:port", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer egress: - {} policyTypes: - Egress", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer ingress: - {} policyTypes: - Ingress", "apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: test-egress-policy namespace: <namespace> spec: egress: - to: cidrSelector: <cidr_of_source_or_target_cluster> type: Deny", "spec: src_supplemental_groups: \"1000,2000\" target_supplemental_groups: \"2000,3000\"", "oc get migrationcontroller <migration_controller> -n openshift-migration", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: <migration_controller> namespace: openshift-migration spec: stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port> 1 noProxy: example.com 2", "oc replace -f migration-controller.yaml -n openshift-migration", "BUCKET=<your_bucket>", "REGION=<your_region>", "aws s3api create-bucket --bucket USDBUCKET --region USDREGION --create-bucket-configuration LocationConstraint=USDREGION 1", "aws iam create-user --user-name velero 1", "cat > velero-policy.json <<EOF { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"ec2:DescribeVolumes\", \"ec2:DescribeSnapshots\", \"ec2:CreateTags\", \"ec2:CreateVolume\", \"ec2:CreateSnapshot\", \"ec2:DeleteSnapshot\" ], \"Resource\": \"*\" }, { \"Effect\": \"Allow\", \"Action\": [ \"s3:GetObject\", \"s3:DeleteObject\", \"s3:PutObject\", \"s3:AbortMultipartUpload\", \"s3:ListMultipartUploadParts\" ], \"Resource\": [ \"arn:aws:s3:::USD{BUCKET}/*\" ] }, { \"Effect\": \"Allow\", \"Action\": [ \"s3:ListBucket\", \"s3:GetBucketLocation\", \"s3:ListBucketMultipartUploads\" 
], \"Resource\": [ \"arn:aws:s3:::USD{BUCKET}\" ] } ] } EOF", "aws iam put-user-policy --user-name velero --policy-name velero --policy-document file://velero-policy.json", "aws iam create-access-key --user-name velero", "{ \"AccessKey\": { \"UserName\": \"velero\", \"Status\": \"Active\", \"CreateDate\": \"2017-07-31T22:24:41.576Z\", \"SecretAccessKey\": <AWS_SECRET_ACCESS_KEY>, \"AccessKeyId\": <AWS_ACCESS_KEY_ID> } }", "gcloud auth login", "BUCKET=<bucket> 1", "gsutil mb gs://USDBUCKET/", "PROJECT_ID=USD(gcloud config get-value project)", "gcloud iam service-accounts create velero --display-name \"Velero service account\"", "gcloud iam service-accounts list", "SERVICE_ACCOUNT_EMAIL=USD(gcloud iam service-accounts list --filter=\"displayName:Velero service account\" --format 'value(email)')", "ROLE_PERMISSIONS=( compute.disks.get compute.disks.create compute.disks.createSnapshot compute.snapshots.get compute.snapshots.create compute.snapshots.useReadOnly compute.snapshots.delete compute.zones.get )", "gcloud iam roles create velero.server --project USDPROJECT_ID --title \"Velero Server\" --permissions \"USD(IFS=\",\"; echo \"USD{ROLE_PERMISSIONS[*]}\")\"", "gcloud projects add-iam-policy-binding USDPROJECT_ID --member serviceAccount:USDSERVICE_ACCOUNT_EMAIL --role projects/USDPROJECT_ID/roles/velero.server", "gsutil iam ch serviceAccount:USDSERVICE_ACCOUNT_EMAIL:objectAdmin gs://USD{BUCKET}", "gcloud iam service-accounts keys create credentials-velero --iam-account USDSERVICE_ACCOUNT_EMAIL", "az login", "AZURE_RESOURCE_GROUP=Velero_Backups", "az group create -n USDAZURE_RESOURCE_GROUP --location CentralUS 1", "AZURE_STORAGE_ACCOUNT_ID=\"veleroUSD(uuidgen | cut -d '-' -f5 | tr '[A-Z]' '[a-z]')\"", "az storage account create --name USDAZURE_STORAGE_ACCOUNT_ID --resource-group USDAZURE_RESOURCE_GROUP --sku Standard_GRS --encryption-services blob --https-only true --kind BlobStorage --access-tier Hot", "BLOB_CONTAINER=velero", "az storage container create -n USDBLOB_CONTAINER --public-access off --account-name USDAZURE_STORAGE_ACCOUNT_ID", "AZURE_SUBSCRIPTION_ID=`az account list --query '[?isDefault].id' -o tsv` AZURE_TENANT_ID=`az account list --query '[?isDefault].tenantId' -o tsv` AZURE_CLIENT_SECRET=`az ad sp create-for-rbac --name \"velero\" --role \"Contributor\" --query 'password' -o tsv` AZURE_CLIENT_ID=`az ad sp list --display-name \"velero\" --query '[0].appId' -o tsv`", "cat << EOF > ./credentials-velero AZURE_SUBSCRIPTION_ID=USD{AZURE_SUBSCRIPTION_ID} AZURE_TENANT_ID=USD{AZURE_TENANT_ID} AZURE_CLIENT_ID=USD{AZURE_CLIENT_ID} AZURE_CLIENT_SECRET=USD{AZURE_CLIENT_SECRET} AZURE_RESOURCE_GROUP=USD{AZURE_RESOURCE_GROUP} AZURE_CLOUD_NAME=AzurePublicCloud EOF", "oc delete migrationcontroller <migration_controller>", "oc delete USD(oc get crds -o name | grep 'migration.openshift.io')", "oc delete USD(oc get crds -o name | grep 'velero')", "oc delete USD(oc get clusterroles -o name | grep 'migration.openshift.io')", "oc delete clusterrole migration-operator", "oc delete USD(oc get clusterroles -o name | grep 'velero')", "oc delete USD(oc get clusterrolebindings -o name | grep 'migration.openshift.io')", "oc delete clusterrolebindings migration-operator", "oc delete USD(oc get clusterrolebindings -o name | grep 'velero')", "sudo podman login registry.redhat.io", "sudo podman cp USD(sudo podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/operator.yml ./", "sudo podman cp USD(sudo podman create 
registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/controller.yml ./", "grep openshift-migration-legacy-rhel8-operator ./mapping.txt | grep rhmtc", "registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a=<registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator", "containers: - name: ansible image: <registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:<468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a> 1 - name: operator image: <registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:<468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a> 2 env: - name: REGISTRY value: <registry.apps.example.com> 3", "oc create -f operator.yml", "namespace/openshift-migration created rolebinding.rbac.authorization.k8s.io/system:deployers created serviceaccount/migration-operator created customresourcedefinition.apiextensions.k8s.io/migrationcontrollers.migration.openshift.io created role.rbac.authorization.k8s.io/migration-operator created rolebinding.rbac.authorization.k8s.io/migration-operator created clusterrolebinding.rbac.authorization.k8s.io/migration-operator created deployment.apps/migration-operator created Error from server (AlreadyExists): error when creating \"./operator.yml\": rolebindings.rbac.authorization.k8s.io \"system:image-builders\" already exists 1 Error from server (AlreadyExists): error when creating \"./operator.yml\": rolebindings.rbac.authorization.k8s.io \"system:image-pullers\" already exists", "oc create -f controller.yml", "oc get pods -n openshift-migration", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] 
stunnel_tcp_proxy: http://username:password@ip:port", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer egress: - {} policyTypes: - Egress", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer ingress: - {} policyTypes: - Ingress", "apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: test-egress-policy namespace: <namespace> spec: egress: - to: cidrSelector: <cidr_of_source_or_target_cluster> type: Deny", "spec: src_supplemental_groups: \"1000,2000\" target_supplemental_groups: \"2000,3000\"", "oc get migrationcontroller <migration_controller> -n openshift-migration", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: <migration_controller> namespace: openshift-migration spec: stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port> 1 noProxy: example.com 2", "oc replace -f migration-controller.yaml -n openshift-migration", "oc delete migrationcontroller <migration_controller>", "oc delete USD(oc get crds -o name | grep 'migration.openshift.io')", "oc delete USD(oc get crds -o name | grep 'velero')", "oc delete USD(oc get clusterroles -o name | grep 'migration.openshift.io')", "oc delete clusterrole migration-operator", "oc delete USD(oc get clusterroles -o name | grep 'velero')", "oc delete USD(oc get clusterrolebindings -o name | grep 'migration.openshift.io')", "oc delete clusterrolebindings migration-operator", "oc delete USD(oc get clusterrolebindings -o name | grep 'velero')", "sudo podman login registry.redhat.io", "sudo podman cp USD(sudo podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/operator.yml ./", "oc replace --force -f operator.yml", "oc scale -n openshift-migration --replicas=0 deployment/migration-operator", "oc scale -n openshift-migration --replicas=1 deployment/migration-operator", "oc -o yaml -n openshift-migration get deployment/migration-operator | grep image: | awk -F \":\" '{ print USDNF }'", "sudo podman cp USD(sudo podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/controller.yml ./", "oc create -f controller.yml", "oc sa get-token migration-controller -n openshift-migration", "oc get pods -n openshift-migration", "oc get migplan <migplan> -o yaml -n openshift-migration", "spec: indirectImageMigration: true indirectVolumeMigration: true", "oc replace -f migplan.yaml -n openshift-migration", "oc get migplan <migplan> -o yaml -n openshift-migration", "oc get pv", "oc get pods --all-namespaces | egrep -v 'Running | Completed'", "oc get pods --all-namespaces --field-selector=status.phase=Running -o json | jq '.items[]|select(any( .status.containerStatuses[]; .restartCount > 3))|.metadata.name'", "oc get csr -A | grep pending -i", "oc get -n openshift-migration route/migration -o go-template='https://{{ .spec.host }}'", "oc sa get-token migration-controller -n openshift-migration", 
"eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJtaWciLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlY3JldC5uYW1lIjoibWlnLXRva2VuLWs4dDJyIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQubmFtZSI6Im1pZyIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VydmljZS1hY2NvdW50LnVpZCI6ImE1YjFiYWMwLWMxYmYtMTFlOS05Y2NiLTAyOWRmODYwYjMwOCIsInN1YiI6InN5c3RlbTpzZXJ2aWNlYWNjb3VudDptaWc6bWlnIn0.xqeeAINK7UXpdRqAtOj70qhBJPeMwmgLomV9iFxr5RoqUgKchZRG2J2rkqmPm6vr7K-cm7ibD1IBpdQJCcVDuoHYsFgV4mp9vgOfn9osSDp2TGikwNz4Az95e81xnjVUmzh-NjDsEpw71DH92iHV_xt2sTwtzftS49LpPW2LjrV0evtNBP_t_RfskdArt5VSv25eORl7zScqfe1CiMkcVbf2UqACQjo3LbkpfN26HAioO2oH0ECPiRzT0Xyh-KwFutJLS9Xgghyw-LD9kPKcE_xbbJ9Y4Rqajh7WdPYuB0Jd9DPVrslmzK-F6cgHHYoZEv0SvLQi-PO0rpDrcjOEQQ", "oc create route passthrough --service=docker-registry -n default", "oc create route passthrough --service=image-registry -n openshift-image-registry", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] stunnel_tcp_proxy: http://username:password@ip:port", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer egress: - {} policyTypes: - Egress", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer ingress: - {} policyTypes: - Ingress", "apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: test-egress-policy namespace: <namespace> spec: egress: - to: cidrSelector: <cidr_of_source_or_target_cluster> type: Deny", "spec: src_supplemental_groups: \"1000,2000\" target_supplemental_groups: \"2000,3000\"", "oc get migrationcontroller <migration_controller> -n openshift-migration", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: <migration_controller> namespace: openshift-migration spec: stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port> 1 noProxy: example.com 2", "oc replace -f migration-controller.yaml -n openshift-migration", "cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigCluster metadata: name: <host_cluster> namespace: openshift-migration spec: isHostCluster: true EOF", "cat << EOF | oc apply -f - apiVersion: v1 kind: Secret metadata: name: <cluster_secret> namespace: openshift-config type: Opaque data: saToken: <sa_token> 1 EOF", "oc sa get-token migration-controller -n openshift-migration | base64 -w 0", "cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigCluster metadata: name: <remote_cluster> 1 namespace: openshift-migration spec: exposedRegistryPath: <exposed_registry_route> 2 insecure: false 3 isHostCluster: false serviceAccountSecretRef: name: <remote_cluster_secret> 4 namespace: openshift-config url: <remote_cluster_url> 5 EOF", "oc describe cluster <cluster>", "cat << EOF | oc apply -f - apiVersion: v1 kind: Secret metadata: namespace: openshift-config name: <migstorage_creds> type: Opaque data: aws-access-key-id: <key_id_base64> 1 aws-secret-access-key: <secret_key_base64> 2 EOF", "echo -n \"<key>\" | base64 -w 0 1", "cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigStorage metadata: name: <migstorage> namespace: 
openshift-migration spec: backupStorageConfig: awsBucketName: <bucket> 1 credsSecretRef: name: <storage_secret> 2 namespace: openshift-config backupStorageProvider: <storage_provider> 3 volumeSnapshotConfig: credsSecretRef: name: <storage_secret> 4 namespace: openshift-config volumeSnapshotProvider: <storage_provider> 5 EOF", "oc describe migstorage <migstorage>", "cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: destMigClusterRef: name: <host_cluster> namespace: openshift-migration indirectImageMigration: true 1 indirectVolumeMigration: true 2 migStorageRef: name: <migstorage> 3 namespace: openshift-migration namespaces: - <application_namespace> 4 srcMigClusterRef: name: <remote_cluster> 5 namespace: openshift-migration EOF", "oc describe migplan <migplan> -n openshift-migration", "cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: name: <migmigration> namespace: openshift-migration spec: migPlanRef: name: <migplan> 1 namespace: openshift-migration quiescePods: true 2 stage: false 3 rollback: false 4 EOF", "oc watch migmigration <migmigration> -n openshift-migration", "Name: c8b034c0-6567-11eb-9a4f-0bc004db0fbc Namespace: openshift-migration Labels: migration.openshift.io/migplan-name=django Annotations: openshift.io/touch: e99f9083-6567-11eb-8420-0a580a81020c API Version: migration.openshift.io/v1alpha1 Kind: MigMigration Spec: Mig Plan Ref: Name: migplan Namespace: openshift-migration Stage: false Status: Conditions: Category: Advisory Last Transition Time: 2021-02-02T15:04:09Z Message: Step: 19/47 Reason: InitialBackupCreated Status: True Type: Running Category: Required Last Transition Time: 2021-02-02T15:03:19Z Message: The migration is ready. Status: True Type: Ready Category: Required Durable: true Last Transition Time: 2021-02-02T15:04:05Z Message: The migration registries are healthy. Status: True Type: RegistriesHealthy Itinerary: Final Observed Digest: 7fae9d21f15979c71ddc7dd075cb97061895caac5b936d92fae967019ab616d5 Phase: InitialBackupCreated Pipeline: Completed: 2021-02-02T15:04:07Z Message: Completed Name: Prepare Started: 2021-02-02T15:03:18Z Message: Waiting for initial Velero backup to complete. Name: Backup Phase: InitialBackupCreated Progress: Backup openshift-migration/c8b034c0-6567-11eb-9a4f-0bc004db0fbc-wpc44: 0 out of estimated total of 0 objects backed up (5s) Started: 2021-02-02T15:04:07Z Message: Not started Name: StageBackup Message: Not started Name: StageRestore Message: Not started Name: DirectImage Message: Not started Name: DirectVolume Message: Not started Name: Restore Message: Not started Name: Cleanup Start Timestamp: 2021-02-02T15:03:18Z Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Running 57s migmigration_controller Step: 2/47 Normal Running 57s migmigration_controller Step: 3/47 Normal Running 57s (x3 over 57s) migmigration_controller Step: 4/47 Normal Running 54s migmigration_controller Step: 5/47 Normal Running 54s migmigration_controller Step: 6/47 Normal Running 52s (x2 over 53s) migmigration_controller Step: 7/47 Normal Running 51s (x2 over 51s) migmigration_controller Step: 8/47 Normal Ready 50s (x12 over 57s) migmigration_controller The migration is ready. 
Normal Running 50s migmigration_controller Step: 9/47 Normal Running 50s migmigration_controller Step: 10/47", "- hosts: localhost gather_facts: false tasks: - name: get pod name shell: oc get po --all-namespaces", "- hosts: localhost gather_facts: false tasks: - name: Get pod k8s_info: kind: pods api: v1 namespace: openshift-migration name: \"{{ lookup( 'env', 'HOSTNAME') }}\" register: pods - name: Print pod name debug: msg: \"{{ pods.resources[0].metadata.name }}\"", "- hosts: localhost gather_facts: false tasks: - name: Set a boolean set_fact: do_fail: true - name: \"fail\" fail: msg: \"Cause a failure\" when: do_fail", "- hosts: localhost gather_facts: false tasks: - set_fact: namespaces: \"{{ (lookup( 'env', 'MIGRATION_NAMESPACES')).split(',') }}\" - debug: msg: \"{{ item }}\" with_items: \"{{ namespaces }}\" - debug: msg: \"{{ lookup( 'env', 'MIGRATION_PLAN_NAME') }}\"", "oc edit migrationcontroller <migration_controller> -n openshift-migration", "apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: disable_image_migration: true 1 disable_pv_migration: true 2 excluded_resources: 3 - imagetags - templateinstances - clusterserviceversions - packagemanifests - subscriptions - servicebrokers - servicebindings - serviceclasses - serviceinstances - serviceplans - operatorgroups - events - events.events.k8s.io", "oc get deployment -n openshift-migration migration-controller -o yaml | grep EXCLUDED_RESOURCES -A1", "- name: EXCLUDED_RESOURCES value: imagetags,templateinstances,clusterserviceversions,packagemanifests,subscriptions,servicebrokers,servicebindings,serviceclasses,serviceinstances,serviceplans,imagestreams,persistentvolumes,persistentvolumeclaims", "spec: namespaces: - namespace_2 - namespace_1:namespace_2", "spec: namespaces: - namespace_1:namespace_1", "spec: namespaces: - namespace_1", "apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: persistentVolumes: - capacity: 10Gi name: <pv_name> pvc: selection: action: skip", "apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: persistentVolumes: - capacity: 10Gi name: <pv_name> pvc: name: <source_pvc>:<destination_pvc> 1", "apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: persistentVolumes: - capacity: 10Gi name: pvc-095a6559-b27f-11eb-b27f-021bddcaf6e4 proposedCapacity: 10Gi pvc: accessModes: - ReadWriteMany hasReference: true name: mysql namespace: mysql-persistent selection: action: <copy> 1 copyMethod: <filesystem> 2 verify: true 3 storageClass: <gp2> 4 accessMode: <ReadWriteMany> 5 storageClass: cephfs", "apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: includedResources: - kind: <kind> 1 group: \"\" - kind: <kind> group: \"\"", "apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: name: <migplan> namespace: openshift-migration spec: includedResources: - kind: <kind> 1 group: \"\" - kind: <kind> group: \"\" labelSelector: matchLabels: <label> 2", "apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: generateName: <migplan> namespace: openshift-migration spec: migPlanRef: name: <migplan> namespace: openshift-migration stage: false", "oc edit migrationcontroller -n openshift-migration", "mig_controller_limits_cpu: \"1\" 1 
mig_controller_limits_memory: \"10Gi\" 2 mig_controller_requests_cpu: \"100m\" 3 mig_controller_requests_memory: \"350Mi\" 4 mig_pv_limit: 100 5 mig_pod_limit: 100 6 mig_namespace_limit: 10 7", "oc patch migrationcontroller migration-controller -p '{\"spec\":{\"enable_dvm_pv_resizing\":true}}' \\ 1 --type='merge' -n openshift-migration", "oc patch migrationcontroller migration-controller -p '{\"spec\":{\"pv_resizing_threshold\":41}}' \\ 1 --type='merge' -n openshift-migration", "status: conditions: - category: Warn durable: true lastTransitionTime: \"2021-06-17T08:57:01Z\" message: 'Capacity of the following volumes will be automatically adjusted to avoid disk capacity issues in the target cluster: [pvc-b800eb7b-cf3b-11eb-a3f7-0eae3e0555f3]' reason: Done status: \"False\" type: PvCapacityAdjustmentRequired", "oc -n openshift-migration patch migrationcontroller migration-controller --type=json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/mig_controller_enable_cache\", \"value\": true}]'", "oc -n openshift-migration patch migrationcontroller migration-controller --type=json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/mig_controller_limits_memory\", \"value\": <10Gi>}]'", "oc -n openshift-migration patch migrationcontroller migration-controller --type=json --patch '[{ \"op\": \"replace\", \"path\": \"/spec/mig_controller_requests_memory\", \"value\": <350Mi>}]'", "apiVersion: migration.openshift.io/v1alpha1 kind: DirectImageMigration metadata: labels: controller-tools.k8s.io: \"1.0\" name: <direct_image_migration> spec: srcMigClusterRef: name: <source_cluster> namespace: openshift-migration destMigClusterRef: name: <destination_cluster> namespace: openshift-migration namespaces: 1 - <source_namespace_1> - <source_namespace_2>:<destination_namespace_3> 2", "apiVersion: migration.openshift.io/v1alpha1 kind: DirectImageStreamMigration metadata: labels: controller-tools.k8s.io: \"1.0\" name: <direct_image_stream_migration> spec: srcMigClusterRef: name: <source_cluster> namespace: openshift-migration destMigClusterRef: name: <destination_cluster> namespace: openshift-migration imageStreamRef: name: <image_stream> namespace: <source_image_stream_namespace> destNamespace: <destination_image_stream_namespace>", "apiVersion: migration.openshift.io/v1alpha1 kind: DirectVolumeMigration metadata: name: <direct_volume_migration> namespace: openshift-migration spec: createDestinationNamespaces: false 1 deleteProgressReportingCRs: false 2 destMigClusterRef: name: <host_cluster> 3 namespace: openshift-migration persistentVolumeClaims: - name: <pvc> 4 namespace: <pvc_namespace> srcMigClusterRef: name: <source_cluster> namespace: openshift-migration", "apiVersion: migration.openshift.io/v1alpha1 kind: DirectVolumeMigrationProgress metadata: labels: controller-tools.k8s.io: \"1.0\" name: <direct_volume_migration_progress> spec: clusterRef: name: <source_cluster> namespace: openshift-migration podRef: name: <rsync_pod> namespace: openshift-migration", "apiVersion: migration.openshift.io/v1alpha1 kind: MigAnalytic metadata: annotations: migplan: <migplan> name: <miganalytic> namespace: openshift-migration labels: migplan: <migplan> spec: analyzeImageCount: true <.> analyzeK8SResources: true <.> analyzePVCapacity: true <.> listImages: false <.> listImagesLimit: 50 <.> migPlanRef: name: <migplan> namespace: openshift-migration", "apiVersion: migration.openshift.io/v1alpha1 kind: MigCluster metadata: labels: controller-tools.k8s.io: \"1.0\" name: <host_cluster> 1 namespace: openshift-migration spec: 
isHostCluster: true 2 The 'azureResourceGroup' parameter is relevant only for Microsoft Azure. azureResourceGroup: <azure_resource_group> 3 caBundle: <ca_bundle_base64> 4 insecure: false 5 refresh: false 6 The 'restartRestic' parameter is relevant for a source cluster. restartRestic: true 7 The following parameters are relevant for a remote cluster. exposedRegistryPath: <registry_route> 8 url: <destination_cluster_url> 9 serviceAccountSecretRef: name: <source_secret> 10 namespace: openshift-config", "apiVersion: migration.openshift.io/v1alpha1 kind: MigHook metadata: generateName: <hook_name_prefix> 1 name: <mighook> 2 namespace: openshift-migration spec: activeDeadlineSeconds: 1800 3 custom: false 4 image: <hook_image> 5 playbook: <ansible_playbook_base64> 6 targetCluster: source 7", "apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: labels: controller-tools.k8s.io: \"1.0\" name: <migmigration> namespace: openshift-migration spec: canceled: false 1 rollback: false 2 stage: false 3 quiescePods: true 4 keepAnnotations: true 5 verify: false 6 migPlanRef: name: <migplan> namespace: openshift-migration", "apiVersion: migration.openshift.io/v1alpha1 kind: MigPlan metadata: labels: controller-tools.k8s.io: \"1.0\" name: <migplan> namespace: openshift-migration spec: closed: false 1 srcMigClusterRef: name: <source_cluster> namespace: openshift-migration destMigClusterRef: name: <destination_cluster> namespace: openshift-migration hooks: 2 - executionNamespace: <namespace> 3 phase: <migration_phase> 4 reference: name: <hook> 5 namespace: <hook_namespace> 6 serviceAccount: <service_account> 7 indirectImageMigration: true 8 indirectVolumeMigration: false 9 migStorageRef: name: <migstorage> namespace: openshift-migration namespaces: - <source_namespace_1> 10 - <source_namespace_2> - <source_namespace_3>:<destination_namespace_4> 11 refresh: false 12", "apiVersion: migration.openshift.io/v1alpha1 kind: MigStorage metadata: labels: controller-tools.k8s.io: \"1.0\" name: <migstorage> namespace: openshift-migration spec: backupStorageProvider: <backup_storage_provider> 1 volumeSnapshotProvider: <snapshot_storage_provider> 2 backupStorageConfig: awsBucketName: <bucket> 3 awsRegion: <region> 4 credsSecretRef: namespace: openshift-config name: <storage_secret> 5 awsKmsKeyId: <key_id> 6 awsPublicUrl: <public_url> 7 awsSignatureVersion: <signature_version> 8 volumeSnapshotConfig: awsRegion: <region> 9 credsSecretRef: namespace: openshift-config name: <storage_secret> 10 refresh: false 11", "oc -n openshift-migration get pods | grep log", "oc -n openshift-migration logs -f <mig-log-reader-pod> -c color 1", "oc adm must-gather --image=registry.redhat.io/rhmtc/openshift-migration-must-gather-rhel8:v1.7", "oc adm must-gather --image=registry.redhat.io/rhmtc/openshift-migration-must-gather-rhel8:v1.7 -- /usr/bin/gather_metrics_dump", "tar -xvzf must-gather/metrics/prom_data.tar.gz", "make prometheus-run", "Started Prometheus on http://localhost:9090", "make prometheus-cleanup", "oc -n openshift-migration exec deployment/velero -c velero -- ./velero <backup_restore_cr> <command> <cr_name>", "oc -n openshift-migration exec deployment/velero -c velero -- ./velero backup describe 0e44ae00-5dc3-11eb-9ca8-df7e5254778b-2d8ql", "oc -n openshift-migration exec deployment/velero -c velero -- ./velero --help", "oc -n openshift-migration exec deployment/velero -c velero -- ./velero <backup_restore_cr> describe <cr_name>", "oc -n openshift-migration exec deployment/velero -c velero -- ./velero backup 
describe 0e44ae00-5dc3-11eb-9ca8-df7e5254778b-2d8ql", "oc -n openshift-migration exec deployment/velero -c velero -- ./velero <backup_restore_cr> logs <cr_name>", "oc -n openshift-migration exec deployment/velero -c velero -- ./velero restore logs ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf", "oc get migmigration <migmigration> -o yaml", "status: conditions: - category: Warn durable: true lastTransitionTime: \"2021-01-26T20:48:40Z\" message: 'Final Restore openshift-migration/ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf: partially failed on destination cluster' status: \"True\" type: VeleroFinalRestorePartiallyFailed - category: Advisory durable: true lastTransitionTime: \"2021-01-26T20:48:42Z\" message: The migration has completed with warnings, please look at `Warn` conditions. reason: Completed status: \"True\" type: SucceededWithWarnings", "oc -n {namespace} exec deployment/velero -c velero -- ./velero restore describe <restore>", "Phase: PartiallyFailed (run 'velero restore logs ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf' for more information) Errors: Velero: <none> Cluster: <none> Namespaces: migration-example: error restoring example.com/migration-example/migration-example: the server could not find the requested resource", "oc -n {namespace} exec deployment/velero -c velero -- ./velero restore logs <restore>", "time=\"2021-01-26T20:48:37Z\" level=info msg=\"Attempting to restore migration-example: migration-example\" logSource=\"pkg/restore/restore.go:1107\" restore=openshift-migration/ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf time=\"2021-01-26T20:48:37Z\" level=info msg=\"error restoring migration-example: the server could not find the requested resource\" logSource=\"pkg/restore/restore.go:1170\" restore=openshift-migration/ccc7c2d0-6017-11eb-afab-85d0007f5a19-x4lbf", "labels: migrationcontroller: ebe13bee-c803-47d0-a9e9-83f380328b93", "labels: migrationcontroller: ebe13bee-c803-47d0-a9e9-83f380328b93", "oc get migmigration -n openshift-migration", "NAME AGE 88435fe0-c9f8-11e9-85e6-5d593ce65e10 6m42s", "oc describe migmigration 88435fe0-c9f8-11e9-85e6-5d593ce65e10 -n openshift-migration", "name: 88435fe0-c9f8-11e9-85e6-5d593ce65e10 namespace: openshift-migration labels: <none> annotations: touch: 3b48b543-b53e-4e44-9d34-33563f0f8147 apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: creationTimestamp: 2019-08-29T01:01:29Z generation: 20 resourceVersion: 88179 selfLink: /apis/migration.openshift.io/v1alpha1/namespaces/openshift-migration/migmigrations/88435fe0-c9f8-11e9-85e6-5d593ce65e10 uid: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 spec: migPlanRef: name: socks-shop-mig-plan namespace: openshift-migration quiescePods: true stage: false status: conditions: category: Advisory durable: True lastTransitionTime: 2019-08-29T01:03:40Z message: The migration has completed successfully. 
reason: Completed status: True type: Succeeded phase: Completed startTimestamp: 2019-08-29T01:01:29Z events: <none>", "apiVersion: velero.io/v1 kind: Backup metadata: annotations: openshift.io/migrate-copy-phase: final openshift.io/migrate-quiesce-pods: \"true\" openshift.io/migration-registry: 172.30.105.179:5000 openshift.io/migration-registry-dir: /socks-shop-mig-plan-registry-44dd3bd5-c9f8-11e9-95ad-0205fe66cbb6 openshift.io/orig-reclaim-policy: delete creationTimestamp: \"2019-08-29T01:03:15Z\" generateName: 88435fe0-c9f8-11e9-85e6-5d593ce65e10- generation: 1 labels: app.kubernetes.io/part-of: migration migmigration: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 migration-stage-backup: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 velero.io/storage-location: myrepo-vpzq9 name: 88435fe0-c9f8-11e9-85e6-5d593ce65e10-59gb7 namespace: openshift-migration resourceVersion: \"87313\" selfLink: /apis/velero.io/v1/namespaces/openshift-migration/backups/88435fe0-c9f8-11e9-85e6-5d593ce65e10-59gb7 uid: c80dbbc0-c9f8-11e9-95ad-0205fe66cbb6 spec: excludedNamespaces: [] excludedResources: [] hooks: resources: [] includeClusterResources: null includedNamespaces: - sock-shop includedResources: - persistentvolumes - persistentvolumeclaims - namespaces - imagestreams - imagestreamtags - secrets - configmaps - pods labelSelector: matchLabels: migration-included-stage-backup: 8886de4c-c9f8-11e9-95ad-0205fe66cbb6 storageLocation: myrepo-vpzq9 ttl: 720h0m0s volumeSnapshotLocations: - myrepo-wv6fx status: completionTimestamp: \"2019-08-29T01:02:36Z\" errors: 0 expiration: \"2019-09-28T01:02:35Z\" phase: Completed startTimestamp: \"2019-08-29T01:02:35Z\" validationErrors: null version: 1 volumeSnapshotsAttempted: 0 volumeSnapshotsCompleted: 0 warnings: 0", "apiVersion: velero.io/v1 kind: Restore metadata: annotations: openshift.io/migrate-copy-phase: final openshift.io/migrate-quiesce-pods: \"true\" openshift.io/migration-registry: 172.30.90.187:5000 openshift.io/migration-registry-dir: /socks-shop-mig-plan-registry-36f54ca7-c925-11e9-825a-06fa9fb68c88 creationTimestamp: \"2019-08-28T00:09:49Z\" generateName: e13a1b60-c927-11e9-9555-d129df7f3b96- generation: 3 labels: app.kubernetes.io/part-of: migration migmigration: e18252c9-c927-11e9-825a-06fa9fb68c88 migration-final-restore: e18252c9-c927-11e9-825a-06fa9fb68c88 name: e13a1b60-c927-11e9-9555-d129df7f3b96-gb8nx namespace: openshift-migration resourceVersion: \"82329\" selfLink: /apis/velero.io/v1/namespaces/openshift-migration/restores/e13a1b60-c927-11e9-9555-d129df7f3b96-gb8nx uid: 26983ec0-c928-11e9-825a-06fa9fb68c88 spec: backupName: e13a1b60-c927-11e9-9555-d129df7f3b96-sz24f excludedNamespaces: null excludedResources: - nodes - events - events.events.k8s.io - backups.velero.io - restores.velero.io - resticrepositories.velero.io includedNamespaces: null includedResources: null namespaceMapping: null restorePVs: true status: errors: 0 failureReason: \"\" phase: Completed validationErrors: null warnings: 15", "podman login -u USD(oc whoami) -p USD(oc whoami -t) --tls-verify=false <registry_url>:<port>", "podman login -u USD(oc whoami) -p USD(oc whoami -t) --tls-verify=false <registry_url>:<port>", "podman pull <registry_url>:<port>/openshift/<image>", "podman tag <registry_url>:<port>/openshift/<image> \\ 1 <registry_url>:<port>/openshift/<image> 2", "podman push <registry_url>:<port>/openshift/<image> 1", "oc get imagestream -n openshift | grep <image>", "NAME IMAGE REPOSITORY TAGS UPDATED my_image image-registry.openshift-image-registry.svc:5000/openshift/my_image latest 
32 seconds ago", "oc describe migmigration <pod> -n openshift-migration", "Some or all transfer pods are not running for more than 10 mins on destination cluster", "oc get namespace <namespace> -o yaml 1", "oc edit namespace <namespace>", "apiVersion: v1 kind: Namespace metadata: annotations: openshift.io/node-selector: \"region=east\"", "echo -n | openssl s_client -connect <host_FQDN>:<port> \\ 1 | sed -ne '/-BEGIN CERTIFICATE-/,/-END CERTIFICATE-/p' > <ca_bundle.cert> 2", "Error checking repository for stale locks Error getting backup storage location: backupstoragelocation.velero.io \\\"my-bsl\\\" not found", "level=error msg=\"Error backing up item\" backup=velero/monitoring error=\"timed out waiting for all PodVolumeBackups to complete\" error.file=\"/go/src/github.com/ heptio/velero/pkg/restic/backupper.go:165\" error.function=\"github.com/heptio/ velero/pkg/restic.(*backupper).BackupPodVolumes\" group=v1", "spec: restic_timeout: 1h 1", "status: conditions: - category: Warn durable: true lastTransitionTime: 2020-04-16T20:35:16Z message: There were verify errors found in 1 Restic volume restores. See restore `<registry-example-migration-rvwcm>` for details 1 status: \"True\" type: ResticVerifyErrors 2", "oc describe <registry-example-migration-rvwcm> -n openshift-migration", "status: phase: Completed podVolumeRestoreErrors: - kind: PodVolumeRestore name: <registry-example-migration-rvwcm-98t49> namespace: openshift-migration podVolumeRestoreResticErrors: - kind: PodVolumeRestore name: <registry-example-migration-rvwcm-98t49> namespace: openshift-migration", "oc describe <migration-example-rvwcm-98t49>", "completionTimestamp: 2020-05-01T20:49:12Z errors: 1 resticErrors: 1 resticPod: <restic-nr2v5>", "oc logs -f <restic-nr2v5>", "backup=openshift-migration/<backup_id> controller=pod-volume-backup error=\"fork/exec /usr/bin/restic: permission denied\" error.file=\"/go/src/github.com/vmware-tanzu/ velero/pkg/controller/pod_volume_backup_controller.go:280\" error.function= \"github.com/vmware-tanzu/velero/pkg/controller.(*podVolumeBackupController).processBackup\" logSource=\"pkg/controller/pod_volume_backup_controller.go:280\" name=<backup_id> namespace=openshift-migration", "spec: restic_supplemental_groups: <group_id> 1", "spec: restic_supplemental_groups: - 5555 - 6666", "cat << EOF | oc apply -f - apiVersion: migration.openshift.io/v1alpha1 kind: MigMigration metadata: labels: controller-tools.k8s.io: \"1.0\" name: <migmigration> namespace: openshift-migration spec: rollback: true migPlanRef: name: <migplan> 1 namespace: openshift-migration EOF", "oc delete USD(oc get pods -l migration.openshift.io/is-stage-pod -n <namespace>) 1", "oc scale deployment <deployment> --replicas=<premigration_replicas>", "apiVersion: extensions/v1beta1 kind: Deployment metadata: annotations: deployment.kubernetes.io/revision: \"1\" migration.openshift.io/preQuiesceReplicas: \"1\"", "oc get pod -n <namespace>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html-single/migrating_from_version_3_to_4/index
Chapter 7. Migration Toolkit for Virtualization 2.0
Chapter 7. Migration Toolkit for Virtualization 2.0 You can migrate virtual machines (VMs) from VMware vSphere with the Migration Toolkit for Virtualization (MTV). The release notes describe new features and enhancements, known issues, and technical changes. 7.1. New features and enhancements This release adds the following features and improvements. Warm migration Warm migration reduces downtime by copying most of the VM data during a precopy stage while the VMs are running. During the cutover stage, the VMs are stopped and the rest of the data is copied. Cancel migration You can cancel an entire migration plan or individual VMs while a migration is in progress. A canceled migration plan can be restarted in order to migrate the remaining VMs. Migration network You can select a migration network for the source and target providers for improved performance. By default, data is copied using the VMware administration network and the Red Hat OpenShift pod network. Validation service The validation service checks source VMs for issues that might affect migration and flags the VMs with concerns in the migration plan. Important The validation service is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview/ . 7.2. Known issues This section describes known issues and mitigations. QEMU guest agent is not installed on migrated VMs The QEMU guest agent is not installed on migrated VMs. Workaround: Install the QEMU guest agent with a post-migration hook. ( BZ#2018062 ) Network map displays a "Destination network not found" error If the network map remains in a NotReady state and the NetworkMap manifest displays a Destination network not found error, the cause is a missing network attachment definition. You must create a network attachment definition for each additional destination network before you create the network map. ( BZ#1971259 ) Warm migration gets stuck during third precopy Warm migration uses changed block tracking snapshots to copy data during the precopy stage. The snapshots are created at one-hour intervals by default. When a snapshot is created, its contents are copied to the destination cluster. However, when the third snapshot is created, the first snapshot is deleted and the block tracking is lost. ( BZ#1969894 ) You can do one of the following to mitigate this issue: Start the cutover stage no more than one hour after the precopy stage begins so that only one internal snapshot is created. Increase the snapshot interval in the vm-import-controller-config config map to 720 minutes: USD oc patch configmap/vm-import-controller-config \ -n openshift-cnv -p '{"data": \ {"warmImport.intervalMinutes": "720"}}'
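For the QEMU guest agent known issue above, the hook itself is not shown in the release note. As a hedged sketch, a post-migration hook that runs an Ansible playbook against the migrated VMs could install and start the agent as follows, assuming a RHEL-family guest where both the package and the service are named qemu-guest-agent; adapt the module and names for other guest operating systems:

- hosts: all
  become: true
  tasks:
    - name: Install the QEMU guest agent
      package:
        name: qemu-guest-agent
        state: present
    - name: Enable and start the QEMU guest agent
      service:
        name: qemu-guest-agent
        state: started
        enabled: true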
[ "oc patch configmap/vm-import-controller-config -n openshift-cnv -p '{\"data\": {\"warmImport.intervalMinutes\": \"720\"}}'" ]
https://docs.redhat.com/en/documentation/migration_toolkit_for_virtualization/2.6/html/release_notes/rn-20_release-notes
Chapter 11. Updating the overcloud for director Operator
Chapter 11. Updating the overcloud for director Operator After you update the openstackclient pod, update the overcloud by running the overcloud and container image preparation deployments, updating your nodes, and running the overcloud update converge deployment. During a minor update, the control plane API is available. 11.1. Preparing director Operator for a minor update The Red Hat OpenStack Platform (RHOSP) minor update process workflow: Prepare your environment for the RHOSP minor update. Update the openstackclient pod image to the latest OpenStack 16.2.z version. Update the overcloud to the latest OpenStack 16.2.z version. Update all Red Hat Ceph Storage services. Run the convergence deployment to refresh your overcloud stack. 11.1.1. Locking the environment to a Red Hat Enterprise Linux release Red Hat OpenStack Platform (RHOSP) 16.2 is supported on Red Hat Enterprise Linux (RHEL) 8.4. Before you perform the update, lock the overcloud repositories to the RHEL 8.4 release to avoid upgrading the operating system to a newer minor release. Procedure Copy the rhsm.yaml file to openstackclient : Open a remote shell on the openstackclient pod: Open the rhsm.yaml file and check if your subscription management configuration includes the rhsm_release parameter. If the rhsm_release parameter is not present, add it and set it to 8.4 : Save the overcloud subscription management environment file. Create a playbook that contains a task to lock the operating system version to RHEL 8.4 on all nodes: Run the ansible playbook on the openstackclient pod: Use the --limit option to apply the content to all RHOSP nodes. Do not run this playbook against Red Hat Ceph Storage nodes because you are probably using a different subscription for these nodes. Note To manually lock a node to a version, log in to the node and run the subscription-manager release command: 11.1.2. Changing to Extended Update Support (EUS) repositories Your Red Hat OpenStack Platform (RHOSP) subscription includes repositories for Red Hat Enterprise Linux (RHEL) 8.4 Extended Update Support (EUS). The EUS repositories include the latest security patches and bug fixes for RHEL 8.4. Switch to the following repositories before you perform an update. Table 11.1. EUS repositories for RHEL 8.4 Standard repository EUS repository rhel-8-for-x86_64-baseos-rpms rhel-8-for-x86_64-baseos-eus-rpms rhel-8-for-x86_64-appstream-rpms rhel-8-for-x86_64-appstream-eus-rpms rhel-8-for-x86_64-highavailability-rpms rhel-8-for-x86_64-highavailability-eus-rpms Important You must use EUS repositories to retain compatibility with a specific version of Podman. Later versions of Podman are untested with RHOSP 16.2 and can cause unexpected results. Prerequisites Copy the rhsm.yaml file for the openstackclient pod to the /home/cloud-admin directory. Procedure Open a remote shell on the openstackclient pod: Open the rhsm.yaml file and check the rhsm_repos parameter in your subscription management configuration. If this parameter does not include the EUS repositories, change the relevant repositories to the EUS versions: parameter_defaults: RhsmVars: rhsm_repos: - rhel-8-for-x86_64-baseos-eus-rpms - rhel-8-for-x86_64-appstream-eus-rpms - rhel-8-for-x86_64-highavailability-eus-rpms - ansible-2.9-for-rhel-8-x86_64-rpms - openstack-16.2-for-rhel-8-x86_64-rpms - rhceph-4-tools-for-rhel-8-x86_64-rpms - fast-datapath-for-rhel-8-x86_64-rpms Save the overcloud subscription management environment file. 
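The next step creates and runs a playbook named change_eus.yaml , whose body is not reproduced inline here. As a hedged sketch modeled on the update_rhosp_repos.yaml example later in this chapter, such a playbook could look like the following; verify the repository IDs against your own subscription before using it:

USD cat > ~/change_eus.yaml <<'EOF'
- hosts: all
  gather_facts: false
  tasks:
    - name: change to eus repositories
      command: subscription-manager repos --disable=rhel-8-for-x86_64-baseos-rpms --enable=rhel-8-for-x86_64-baseos-eus-rpms --disable=rhel-8-for-x86_64-appstream-rpms --enable=rhel-8-for-x86_64-appstream-eus-rpms --disable=rhel-8-for-x86_64-highavailability-rpms --enable=rhel-8-for-x86_64-highavailability-eus-rpms
      become: true
EOF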
Create a playbook that contains a task to set the repositories to RHEL 8.4 EUS on all nodes: Run the change_eus.yaml playbook: USD ansible-playbook -i /home/cloud-admin/ctlplane-ansible-inventory /home/cloud-admin/change_eus.yaml --limit Controller,Compute Use the --limit option to apply the content to all RHOSP nodes. Do not run this playbook against Red Hat Ceph Storage nodes because they use a different subscription. 11.1.3. Updating Red Hat OpenStack Platform and Ansible repositories Update your repositories to use Red Hat OpenStack Platform (RHOSP) 16.2 and Ansible 2.9 packages. For more information, see Overcloud repositories . Prerequisites You have copied the rhsm.yaml file for the openstackclient pod to the /home/cloud-admin directory. Procedure Open a remote shell on the openstackclient pod: Open the rhsm.yaml file and check the rhsm_repos parameter in your subscription management configuration. If the rhsm_repos parameter is using the RHOSP 16.1 and Ansible 2.8 repositories, change the repositories to the correct versions: parameter_defaults: RhsmVars: rhsm_repos: - rhel-8-for-x86_64-baseos-eus-rpms - rhel-8-for-x86_64-appstream-eus-rpms - rhel-8-for-x86_64-highavailability-eus-rpms - ansible-2.9-for-rhel-8-x86_64-rpms - openstack-16.2-for-rhel-8-x86_64-rpms - fast-datapath-for-rhel-8-x86_64-rpms Save the overcloud subscription management environment file. Create a playbook that contains a task to set the repositories to RHOSP 16.2 on all RHOSP nodes: USD cat > ~/update_rhosp_repos.yaml <<'EOF' - hosts: all gather_facts: false tasks: - name: change osp repos command: subscription-manager repos --disable=openstack-16.1-for-rhel-8-x86_64-rpms --enable=openstack-16.2-for-rhel-8-x86_64-rpms --disable=ansible-2.8-for-rhel-8-x86_64-rpms --enable=ansible-2.9-for-rhel-8-x86_64-rpms become: true EOF Run the update_rhosp_repos.yaml playbook: USD ansible-playbook -i /home/cloud-admin/ctlplane-ansible-inventory /home/cloud-admin/update_rhosp_repos.yaml --limit Controller,Compute Use the --limit option to apply the content to all RHOSP nodes. Do not run this playbook against Red Hat Ceph Storage nodes because they use a different subscription. Create a playbook that contains a task to set the repositories to RHOSP 16.2 on all Red Hat Ceph Storage nodes: USD cat > ~/update_ceph_repos.yaml <<'EOF' - hosts: all gather_facts: false tasks: - name: change ceph repos command: subscription-manager repos --disable=openstack-16-deployment-tools-for-rhel-8-x86_64-rpms --enable=openstack-16.2-deployment-tools-for-rhel-8-x86_64-rpms --disable=ansible-2.8-for-rhel-8-x86_64-rpms --enable=ansible-2.9-for-rhel-8-x86_64-rpms become: true EOF Run the update_ceph_repos.yaml playbook: USD ansible-playbook -i /home/cloud-admin/ctlplane-ansible-inventory /home/cloud-admin/update_ceph_repos.yaml --limit CephStorage Use the --limit option to apply the content to Red Hat Ceph Storage nodes. 11.1.4. Setting the container-tools version Set the container-tools module to version 3.0 to ensure you use the correct package versions on all nodes. 
Procedure Open a remote shell on the openstackclient pod: Create a playbook that contains a task to set the container-tools module to version 3.0 on all nodes: USD cat > ~/container-tools.yaml <<'EOF' - hosts: all gather_facts: false tasks: - name: disable default dnf module for container-tools command: dnf module reset container-tools become: true - name: set dnf module for container-tools:3.0 command: dnf module enable -y container-tools:3.0 become: true - name: disable dnf module for virt:8.2 command: dnf module disable -y virt:8.2 become: true - name: set dnf module for virt:rhel command: dnf module enable -y virt:rhel become: true EOF Run the container-tools.yaml playbook against all nodes: USD ansible-playbook -i /home/cloud-admin/ctlplane-ansible-inventory ~/container-tools.yaml 11.1.5. Updating the container image preparation file The container preparation file is the file that contains the ContainerImagePrepare parameter. You use this file to define the rules for obtaining container images for the overcloud. Before you update your environment, check the file to ensure that you obtain the correct image versions. Procedure Edit the container preparation file. The default name for this file is usually containers-prepare-parameter.yaml . Check the tag parameter is set to 16.2 for each rule set: parameter_defaults: ContainerImagePrepare: - push_destination: true set: ... tag: '16.2' tag_from_label: '{version}-{release}' Note If you do not want to use a specific tag for the update, such as 16.2 or 16.2.2 , remove the tag key-value pair and specify tag_from_label only. This uses the installed Red Hat OpenStack Platform version to determine the value for the tag to use as part of the update process. Save this file. 11.1.6. Disabling fencing in the overcloud Before you update the overcloud, ensure that fencing is disabled. If fencing is deployed in your environment during the Controller nodes update process, the overcloud might detect certain nodes as disabled and attempt fencing operations, which can cause unintended results. If you have enabled fencing in the overcloud, you must temporarily disable fencing for the duration of the update to avoid any unintended results. Procedure Open a remote shell on the openstackclient pod: Log in to a Controller node and run the Pacemaker command to disable fencing: Replace <controller-0.ctlplane> with the name of your Controller node. In the fencing.yaml environment file, set the EnableFencing parameter to false to ensure that fencing stays disabled during the update process. Additional Resources Fencing Controller nodes with STONITH 11.2. Running the overcloud update preparation for director Operator To prepare the overcloud for the update process, generate an update prepare configuration, which creates updated ansible playbooks and prepares the nodes for the update. Procedure Modify the heat parameter ConfigMap called fencing.yaml , to disable fencing for the duration of the update: Create an OpenStackConfigGenerator resource called osconfiggenerator-update-prepare.yaml : Apply the configuration: Wait until the update preparation process completes. 11.3. Running the container image preparation for director Operator Before you can update the overcloud, you must prepare all container image configurations that are required for your environment. To complete the container image preparation, you must run the overcloud deployment against tasks that have the container_image_prepare tag. 
Procedure Create an osdeploy job called osdeploy-container-image-prepare.yaml : Apply the configuration: 11.4. Optional: Updating the ovn-controller container on all overcloud servers If you deployed your overcloud with the Modular Layer 2 Open Virtual Network mechanism driver (ML2/OVN), update the ovn-controller container to the latest RHOSP 16.2 version. The update occurs on every overcloud server that runs the ovn-controller container. Important The following procedure updates the ovn-controller containers on servers that are assigned the Compute role before it updates the ovn-northd service on servers that are assigned the Controller role. If you accidentally updated the ovn-northd service before following this procedure, you might not be able to reach your virtual machines or create new virtual machines or virtual networks. The following procedure restores connectivity. Procedure Create an osdeploy job called osdeploy-ovn-update.yaml : Apply the configuration: Wait until the ovn-controller container update completes. 11.5. Updating all Controller nodes on director Operator Update all the Controller nodes to the latest Red Hat OpenStack Platform (RHOSP) 16.2 version. Important Until BZ#1872404 is resolved, for nodes based on composable roles, you must update the Database role first, before you can update Controller , Messaging , Compute , Ceph , and other roles. Procedure Create an osdeploy job called osdeploy-controller-update.yaml : Apply the configuration: Wait until the Controller node update completes. 11.6. Updating all Compute nodes on director Operator Update all Compute nodes to the latest Red Hat OpenStack Platform (RHOSP) 16.2 version. To update Compute nodes, run a deployment with the limit: Compute option to restrict operations to the Compute nodes only. Procedure Create an osdeploy job called osdeploy-compute-update.yaml : Apply the configuration: Wait until the Compute node update completes. 11.7. Updating all HCI Compute nodes on director Operator Update the Hyperconverged Infrastructure (HCI) Compute nodes to the latest Red Hat OpenStack Platform (RHOSP) 16.2 version. To update the HCI Compute nodes, run a deployment with the limit: ComputeHCI option to restrict operations to only the HCI nodes. You must also run a deployment with the mode: external-update and tags: ["ceph"] options to perform an update to a containerized Red Hat Ceph Storage 4 cluster. Procedure Create an osdeploy job called osdeploy-computehci-update.yaml : Apply the configuration: Wait until the ComputeHCI node update completes. Create an osdeploy job called osdeploy-ceph-update.yaml : Apply the configuration: Wait until the Red Hat Ceph Storage node update completes. 11.8. Updating all Red Hat Ceph Storage nodes on director Operator Update the Red Hat Ceph Storage nodes to the latest Red Hat OpenStack Platform (RHOSP) 16.2 version. Important RHOSP 16.2 is supported on RHEL 8.4. However, hosts that are mapped to the CephStorage role update to the latest major RHEL release. For more information, see Red Hat Ceph Storage: Supported configurations . Procedure Create an osdeploy job called osdeploy-cephstorage-update.yaml : Apply the configuration: Wait until the Red Hat Ceph Storage node update completes. Create an osdeploy job called osdeploy-ceph-update.yaml : Apply the configuration: Wait until the Red Hat Ceph Storage node update completes. 11.9. Performing online database updates on director Operator Some overcloud components require an online update or migration of their databases tables. 
Online database updates apply to the following components: OpenStack Block Storage (cinder) OpenStack Compute (nova) Procedure Create an osdeploy job called osdeploy-online-migration.yaml : Apply the configuration: 11.10. Finalizing the update To finalize the update to the latest Red Hat OpenStack Platform 16.2 version, you must update the overcloud generated configuration. This ensures that the stack resource structure aligns with a regular deployment of OSP 16.2 and you can perform standard overcloud deployments in the future. Procedure Re-enable fencing in the fencing.yaml environment file: Regenerate the default configuration, ensuring that lifecycle/update-prepare.yaml is not included in the heatEnvs. For more information, see Configuring overcloud software with the director Operator . Delete OpenStackConfigGenerator, ConfigVersion, and configuration deployment resources. Replace <type> with the type of resource to delete. Replace <name> with the name of the resource to delete. Wait until the update finalization completes.
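For illustration only, the cleanup step above might look like the following, assuming the resource names used in the earlier examples of this chapter, the openstack namespace, and the osp-director resource kinds; these are example values, so substitute the types and names that actually exist in your environment:
# hypothetical names taken from the examples above, not a definitive list
oc delete openstackconfiggenerator update -n openstack
oc delete openstackconfigversion <config_version> -n openstack
oc delete openstackdeploy controller-update compute-update -n openstack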
[ "oc cp rhsm.yaml openstackclient:/home/cloud-admin/rhsm.yaml", "oc rsh openstackclient", "parameter_defaults: RhsmVars: ... rhsm_username: \"myusername\" rhsm_password: \"p@55w0rd!\" rhsm_org_id: \"1234567\" rhsm_pool_ids: \"1a85f9223e3d5e43013e3d6e8ff506fd\" rhsm_method: \"portal\" rhsm_release: \"8.4\"", "cat > ~/set_release.yaml <<'EOF' - hosts: all gather_facts: false tasks: - name: set release to 8.4 command: subscription-manager release --set=8.4 become: true EOF", "ansible-playbook -i /home/cloud-admin/ctlplane-ansible-inventory /home/cloud-admin/set_release.yaml --limit Controller,Compute", "sudo subscription-manager release --set=8.4", "oc rsh openstackclient", "parameter_defaults: RhsmVars: rhsm_repos: - rhel-8-for-x86_64-baseos-eus-rpms - rhel-8-for-x86_64-appstream-eus-rpms - rhel-8-for-x86_64-highavailability-eus-rpms - ansible-2.9-for-rhel-8-x86_64-rpms - openstack-16.2-for-rhel-8-x86_64-rpms - rhceph-4-tools-for-rhel-8-x86_64-rpms - fast-datapath-for-rhel-8-x86_64-rpms", "cat > ~/change_eus.yaml <<'EOF' - hosts: all gather_facts: false tasks: - name: change to eus repos command: subscription-manager repos --disable=rhel-8-for-x86_64-baseos-rpms --disable=rhel-8-for-x86_64-appstream-rpms --disable=rhel-8-for-x86_64-highavailability-rpms --enable=rhel-8-for-x86_64-baseos-eus-rpms --enable=rhel-8-for-x86_64-appstream-eus-rpms --enable=rhel-8-for-x86_64-highavailability-eus-rpms become: true EOF", "ansible-playbook -i /home/cloud-admin/ctlplane-ansible-inventory /home/cloud-admin/change_eus.yaml --limit Controller,Compute", "oc rsh openstackclient", "parameter_defaults: RhsmVars: rhsm_repos: - rhel-8-for-x86_64-baseos-eus-rpms - rhel-8-for-x86_64-appstream-eus-rpms - rhel-8-for-x86_64-highavailability-eus-rpms - ansible-2.9-for-rhel-8-x86_64-rpms - openstack-16.2-for-rhel-8-x86_64-rpms - fast-datapath-for-rhel-8-x86_64-rpms", "cat > ~/update_rhosp_repos.yaml <<'EOF' - hosts: all gather_facts: false tasks: - name: change osp repos command: subscription-manager repos --disable=openstack-16.1-for-rhel-8-x86_64-rpms --enable=openstack-16.2-for-rhel-8-x86_64-rpms --disable=ansible-2.8-for-rhel-8-x86_64-rpms --enable=ansible-2.9-for-rhel-8-x86_64-rpms become: true EOF", "ansible-playbook -i /home/cloud-admin/ctlplane-ansible-inventory /home/cloud-admin/update_rhosp_repos.yaml --limit Controller,Compute", "cat > ~/update_ceph_repos.yaml <<'EOF' - hosts: all gather_facts: false tasks: - name: change ceph repos command: subscription-manager repos --disable=openstack-16-deployment-tools-for-rhel-8-x86_64-rpms --enable=openstack-16.2-deployment-tools-for-rhel-8-x86_64-rpms --disable=ansible-2.8-for-rhel-8-x86_64-rpms --enable=ansible-2.9-for-rhel-8-x86_64-rpms become: true EOF", "ansible-playbook -i /home/cloud-admin/ctlplane-ansible-inventory /home/cloud-admin/update_ceph_repos.yaml --limit CephStorage", "oc rsh openstackclient", "cat > ~/container-tools.yaml <<'EOF' - hosts: all gather_facts: false tasks: - name: disable default dnf module for container-tools command: dnf module reset container-tools become: true - name: set dnf module for container-tools:3.0 command: dnf module enable -y container-tools:3.0 become: true - name: disable dnf module for virt:8.2 command: dnf module disable -y virt:8.2 become: true - name: set dnf module for virt:rhel command: dnf module enable -y virt:rhel become: true EOF", "ansible-playbook -i /home/cloud-admin/ctlplane-ansible-inventory ~/container-tools.yaml", "parameter_defaults: ContainerImagePrepare: - push_destination: true set: tag: '16.2' 
tag_from_label: '{version}-{release}'", "oc rsh openstackclient", "ssh <controller-0.ctlplane> \"sudo pcs property set stonith-enabled=false\"", "parameter_defaults: EnableFencing: false", "cat <<EOF > osconfiggenerator-update-prepare.yaml apiVersion: osp-director.openstack.org/v1beta1 kind: OpenStackConfigGenerator metadata: name: \"update\" namespace: openstack spec: gitSecret: git-secret heatEnvs: - lifecycle/update-prepare.yaml heatEnvConfigMap: heat-env-config-update tarballConfigMap: tripleo-tarball-config-update EOF", "oc apply -f osconfiggenerator-update-prepare.yaml", "cat <<EOF > osdeploy-container-image-prepare.yaml apiVersion: osp-director.openstack.org/v1beta1 kind: OpenStackDeploy metadata: name: container_image_prepare spec: configVersion: <config_version> configGenerator: update mode: external-update advancedSettings: tags: - container_image_prepare EOF", "oc apply -f osdeploy-container-image-prepare.yaml", "cat <<EOF > osdeploy-ovn-update.yaml apiVersion: osp-director.openstack.org/v1beta1 kind: OpenStackDeploy metadata: name: ovn-update spec: configVersion: <config_version> configGenerator: update mode: update advancedSettings: tags: - ovn EOF", "oc apply -f osdeploy-ovn-update.yaml", "cat <<EOF > osdeploy-controller-update.yaml apiVersion: osp-director.openstack.org/v1beta1 kind: OpenStackDeploy metadata: name: controller-update spec: configVersion: <config_version> configGenerator: update mode: update advancedSettings: limit: Controller EOF", "oc apply -f osdeploy-controller-update.yaml", "cat <<EOF > osdeploy-compute-update.yaml apiVersion: osp-director.openstack.org/v1beta1 kind: OpenStackDeploy metadata: name: compute-update spec: configVersion: <config_version> configGenerator: update mode: update advancedSettings: limit: Compute EOF", "oc apply -f osdeploy-compute-update.yaml", "cat <<EOF > osdeploy-computehci-update.yaml apiVersion: osp-director.openstack.org/v1beta1 kind: OpenStackDeploy metadata: name: computehci-update spec: configVersion: <config_version> configGenerator: update mode: update advancedSettings: limit: ComputeHCI EOF", "oc apply -f osdeploy-computehci-update.yaml", "cat <<EOF > osdeploy-ceph-update.yaml apiVersion: osp-director.openstack.org/v1beta1 kind: OpenStackDeploy metadata: name: ceph-update spec: configVersion: <config_version> configGenerator: update mode: external-update advancedSettings: tags: - ceph EOF", "oc apply -f osdeploy-ceph-update.yaml", "cat <<EOF > osdeploy-cephstorage-update.yaml apiVersion: osp-director.openstack.org/v1beta1 kind: OpenStackDeploy metadata: name: cephstorage-update spec: configVersion: <config_version> configGenerator: update mode: update advancedSettings: limit: CephStorage EOF", "oc apply -f osdeploy-cephstorage-update.yaml", "cat <<EOF > osdeploy-ceph-update.yaml apiVersion: osp-director.openstack.org/v1beta1 kind: OpenStackDeploy metadata: name: ceph-update spec: configVersion: <config_version> configGenerator: update mode: external-update advancedSettings: tags: - ceph EOF", "oc apply -f osdeploy-ceph-update.yaml", "cat <<EOF > osdeploy-online-migration.yaml apiVersion: osp-director.openstack.org/v1beta1 kind: OpenStackDeploy metadata: name: online-migration spec: configVersion: <config_version> configGenerator: update mode: external-update advancedSettings: tags: - online_upgrade EOF", "oc apply -f osdeploy-online-migration.yaml", "parameter_defaults: EnableFencing: true", "oc delete <type> <name>" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/rhosp_director_operator_for_openshift_container_platform/assembly_updating-the-overcloud-for-director-operator_rhosp-director-operator
B.45. libvirt
B.45. libvirt B.45.1. RHSA-2011:0391 - Important: libvirt security update Updated libvirt packages that fix one security issue are now available for Red Hat Enterprise Linux 5 and 6. The Red Hat Security Response Team has rated this update as having important security impact. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available from the CVE link(s) associated with each description below. The libvirt library is a C API for managing and interacting with the virtualization capabilities of Linux and other operating systems. In addition, libvirt provides tools for remotely managing virtualized systems. CVE-2011-1146 It was found that several libvirt API calls did not honor the read-only permission for connections. A local attacker able to establish a read-only connection to libvirtd on a server could use this flaw to execute commands that should be restricted to read-write connections, possibly leading to a denial of service or privilege escalation. Note Previously, using rpmbuild without the '--define "rhel 5"' option to build the libvirt source RPM on Red Hat Enterprise Linux 5 failed with a "Failed build dependencies" error for the device-mapper-devel package, as this -devel sub-package is not available on Red Hat Enterprise Linux 5. With this update, the -devel sub-package is no longer checked by default as a dependency when building on Red Hat Enterprise Linux 5, allowing the libvirt source RPM to build as expected. All libvirt users are advised to upgrade to these updated packages, which contain a backported patch to resolve this issue. After installing the updated packages, libvirtd must be restarted ("service libvirtd restart") for this update to take effect.
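As a hedged illustration of applying this erratum on an affected host (the package name and service restart follow the advisory text above; your update workflow may differ):
# refresh the libvirt packages from the updated channel, then restart the daemon so the backported patch takes effect
yum update libvirt
service libvirtd restart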
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_technical_notes/libvirt
Chapter 7. Configuring screen rotation
Chapter 7. Configuring screen rotation 7.1. Configuring screen rotation for a single user This procedure sets screen rotation for the current user. Procedure Go to the system menu , which is accessible from the top-right screen corner, and click the Settings icon. In the Settings Devices section, choose Displays . Configure the rotation using the Orientation field. Confirm your choice by clicking Apply . If you are satisfied with the new setup preview, click Keep changes . The setting persists to your next login. Additional resources For information about rotating the screen for all users on a system, see Configuring screen rotation for all users . 7.2. Configuring screen rotation for all users This procedure sets a default screen rotation for all users on a system and is suitable for mass deployment of homogenized display configuration. Procedure Prepare the preferred setup for a single user as in Configuring screen rotation for a single user . Copy the transform section of the ~/.config/monitors.xml configuration file, which configures the screen rotation. An example portrait orientation: <?xml version="1.0" encoding="UTF-8"?> <transform> <rotation>left</rotation> <flipped>no</flipped> </transform> Paste the content into the /etc/xdg/monitors.xml file, which stores the system-wide configuration. Save the changes. The new setup takes effect for all users the next time they log in to the system. Additional resources Configuring screen rotation for a single user 7.3. Configuring screen rotation for multiple monitors In a multi-monitor setup, you can configure individual monitors with different screen rotations so that you can adjust the monitor layout to your display needs. Procedure In the Settings application, go to Displays . Identify the monitor that you want to rotate from the visual representation of your connected monitors. Select the monitor whose orientation you want to configure. Select orientation: Landscape: Default orientation. Portrait Right: Rotates the screen by 90 degrees to the right. Portrait Left: Rotates the screen by 90 degrees to the left. Landscape (flipped): Rotates the screen by 180 degrees upside down. Click Apply to display a preview. If you are satisfied with the preview, click Keep Changes . Alternatively, go back to the original orientation by clicking Revert Changes .
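A small hedged helper for the all-users procedure above, assuming the per-user file contains the default four-line transform block shown in the example; it only prints the block so that you can paste it into /etc/xdg/monitors.xml:
# show the <transform> element and the three lines that follow it
grep -A 3 '<transform>' ~/.config/monitors.xml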
[ "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <transform> <rotation>left</rotation> <flipped>no</flipped> </transform>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/customizing_the_gnome_desktop_environment/configuring-screen-rotation_customizing-the-gnome-desktop-environment
Using Eclipse 4.18
Using Eclipse 4.18 Red Hat Developer Tools 1 Installing Eclipse 4.18 and the first steps with the application Eva-Lotte Gebhardt [email protected] Olga Tikhomirova [email protected] Peter Macko Kevin Owen Yana Hontyk Red Hat Developer Group Documentation Team [email protected]
null
https://docs.redhat.com/en/documentation/red_hat_developer_tools/1/html/using_eclipse_4.18/index
8.16. clustermon
8.16. clustermon 8.16.1. RHBA-2013:1602 - clustermon bug fix update Updated clustermon packages that fix several bugs are now available for Red Hat Enterprise Linux 6. The clustermon packages are used for remote cluster management. The modclusterd service provides an abstraction of cluster status used by conga and by the Simple Network Management Protocol (SNMP) and Common Information Model (CIM) modules of clustermon. Bug Fixes BZ# 951470 Prior to this update, the modclusterd service made an improper CMAN API call when attempting to associate the local machine's address with a particular cluster node entry, but with no success. Consequently, modclusterd returned log messages every five seconds. In addition, when logging for CMAN was enabled (membership messages included), messages arising from the CMAN API misuse were emitted. Now, the CMAN API call is used properly, which corrects the aforementioned consequences. BZ# 908728 Previously, the modclusterd service terminated unexpectedly in IPv4-only environments when stopped due to accessing uninitialized memory only used when IPv6 was available. With this update, modclusterd no longer crashes in IPv4-only environments. BZ# 888543 Previously, the SNMP (Simple Network Management Protocol) agent exposing the cluster status and shipped as cluster-snmp caused the SNMP server (snmpd) to terminate unexpectedly with a segmentation fault when this module was loaded, and the containing server was instructed to reload. This was caused by an improper disposal of the resources facilitated by this server, alarms in particular. Now, the module properly cleans up such resources when being unloaded, preventing the crash on reload. Users of clustermon are advised to upgrade to these updated packages, which fix these bugs.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.5_technical_notes/clustermon
Chapter 42. Data objects
Chapter 42. Data objects Data objects are the building blocks for the rule assets that you create. Data objects are custom data types implemented as Java objects in specified packages of your project. For example, you might create a Person object with data fields Name , Address , and DateOfBirth to specify personal details for loan application rules. These custom data types determine what data your assets and your decision services are based on. 42.1. Creating data objects The following procedure is a generic overview of creating data objects. It is not specific to a particular business asset. Procedure In Business Central, go to Menu Design Projects and click the project name. Click Add Asset Data Object . Enter a unique Data Object name and select the Package where you want the data object to be available for other rule assets. Data objects with the same name cannot exist in the same package. In the specified DRL file, you can import a data object from any package. Importing data objects from other packages You can import an existing data object from another package directly into the asset designers like guided rules or guided decision table designers. Select the relevant rule asset within the project and in the asset designer, go to Data Objects New item to select the object to be imported. To make your data object persistable, select the Persistable checkbox. Persistable data objects are able to be stored in a database according to the JPA specification. The default JPA is Hibernate. Click Ok . In the data object designer, click add field to add a field to the object with the attributes Id , Label , and Type . Required attributes are marked with an asterisk (*). Id: Enter the unique ID of the field. Label: (Optional) Enter a label for the field. Type: Enter the data type of the field. List: (Optional) Select this check box to enable the field to hold multiple items for the specified type. Figure 42.1. Add data fields to a data object Click Create to add the new field, or click Create and continue to add the new field and continue adding other fields. Note To edit a field, select the field row and use the general properties on the right side of the screen.
null
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_decision_services_in_red_hat_process_automation_manager/data-objects-con_decision-tables
5.270. qt
5.270. qt 5.270.1. RHBA-2012:1246 - qt bug fix update Updated qt packages that fix two bugs are now available for Red Hat Enterprise Linux 6. The qt packages contain a software toolkit that simplifies the task of writing and maintaining GUI (Graphical User Interface) applications for the X Window System. Bug Fixes BZ# 678604 Prior to this update, the mouse pointer could, under certain circumstances, disappear when using the IRC client Konversation. This update modifies the underlying code to reset the cursor on the parent and set the cursor on the new window handle. Now, the mouse pointer no longer disappears. BZ# 847866 Prior to this update, the high precision coordinates of the QTabletEvent class failed to handle multiple Wacom devices. As a consequence, only the device that was loaded first worked correctly. This update modifies the underlying code so that multiple Wacom devices are handled as expected. All users of qt are advised to upgrade to these updated packages, which fix these bugs. 5.270.2. RHSA-2012:0880 - Moderate: qt security and bug fix update Updated qt packages that fix two security issues and three bugs are now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having moderate security impact. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. Qt is a software toolkit that simplifies the task of writing and maintaining GUI (Graphical User Interface) applications for the X Window System. HarfBuzz is an OpenType text shaping engine. Security Fixes CVE-2011-3922 A buffer overflow flaw was found in the harfbuzz module in Qt. If a user loaded a specially-crafted font file with an application linked against Qt, it could cause the application to crash or, possibly, execute arbitrary code with the privileges of the user running the application. CVE-2010-5076 A flaw was found in the way Qt handled X.509 certificates with IP address wildcards. An attacker able to obtain a certificate with a Common Name containing an IP wildcard could possibly use this flaw to impersonate an SSL server to client applications that are using Qt. This update also introduces more strict handling for hostname wildcard certificates by disallowing the wildcard character to match more than one hostname component. Bug Fixes BZ# 694684 The Phonon API allowed premature freeing of the media object. Consequently, GStreamer could terminate unexpectedly as it failed to access the released media object. This update modifies the underlying Phonon API code and the problem no longer occurs. BZ# 757793 Previously, Qt could output the "Unrecognized OpenGL version" error and fall back to OpenGL-version-1 compatibility mode. This happened because Qt failed to recognize the version of OpenGL installed on the system if the system was using a version of OpenGL released later than the Qt version in use. This update adds the code for recognition of OpenGL versions to Qt, and if the OpenGL version is unknown, Qt assumes that the last-known version of OpenGL is available. BZ# 734444 Previously, Qt included a compiled-in list of trusted CA (Certificate Authority) certificates that could have been used if Qt failed to open a system's ca-bundle.crt file. With this update, Qt no longer includes compiled-in CA certificates and only uses the system bundle. 
Users of Qt should upgrade to these updated packages, which contain backported patches to correct these issues. All running applications linked against Qt libraries must be restarted for this update to take effect.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/qt
Part VIII. Kernel, Module and Driver Configuration
Part VIII. Kernel, Module and Driver Configuration This part covers various tools that assist administrators with kernel customization.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/part-kernel_module_and_driver_configuration
Chapter 11. Configuring polyinstantiated directories
Chapter 11. Configuring polyinstantiated directories By default, all programs, services, and users use the /tmp , /var/tmp , and home directories for temporary storage. This makes these directories vulnerable to race condition attacks and information leaks based on file names. You can make /tmp/ , /var/tmp/ , and the home directory instantiated so that they are no longer shared between all users, and each user's /tmp-inst and /var/tmp/tmp-inst is separately mounted to the /tmp and /var/tmp directory. Procedure Enable polyinstantiation in SELinux: You can verify that polyinstantiation is enabled in SELinux by entering the getsebool allow_polyinstantiation command. Create the directory structure for data persistence over reboot with the necessary permissions: Restore the entire security context including the SELinux user part: If your system uses the fapolicyd application control framework, allow fapolicyd to monitor file access events on the underlying file system when they are bind mounted by enabling the allow_filesystem_mark option in the /etc/fapolicyd/fapolicyd.conf configuration file. Enable instantiation of the /tmp , /var/tmp/ , and users' home directories: Important Use /etc/security/namespace.conf instead of a separate file in the /etc/security/namespace.d/ directory because the pam_namespace_helper program does not read additional files in /etc/security/namespace.d . On a system with multi-level security (MLS), uncomment the last three lines in the /etc/security/namespace.conf file: On a system without multi-level security (MLS), add the following lines in the /etc/security/namespace.conf file: Verify that the pam_namespace.so module is configured for the session: Optional: Enable cloud users to access the system with SSH keys: Install the openssh-keycat package. Create a file in the /etc/ssh/sshd_config.d/ directory with the following content: Verify that public key authentication is enabled by checking that the PubkeyAuthentication variable in sshd_config is set to yes. By default, PubkeyAuthentication is set to yes, even though the line in sshd_config is commented out. Add the session required pam_namespace.so unmnt_remnt entry into the PAM configuration file of each service for which polyinstantiation should apply, after the session include system-auth line. For example, in /etc/pam.d/su , /etc/pam.d/sudo , /etc/pam.d/ssh , and /etc/pam.d/sshd : Verification Log in as a non-root user. Users that were logged in before polyinstantiation was configured must log out and log in before the changes take effect for them. Check that the /tmp/ directory is mounted under /tmp-inst/ : The SOURCE output differs based on your environment. On virtual systems, it shows /dev/vda<number> . On bare-metal systems, it shows /dev/sda<number> or /dev/nvme* Additional resources /usr/share/doc/pam-docs/txts/README.pam_namespace readme file installed with the pam-docs package.
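One hedged way to script the PAM edit described above; this sketch assumes each listed file contains a single session include system-auth line and no existing pam_namespace entry, so back up the files and review the result before relying on it:
# append the pam_namespace session entry directly after the system-auth include in /etc/pam.d/sshd
sed -i '/^session.*include.*system-auth/a session     required      pam_namespace.so unmnt_remnt' /etc/pam.d/sshd
# repeat the same edit for /etc/pam.d/su, /etc/pam.d/sudo, and /etc/pam.d/ssh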
[ "setsebool -P allow_polyinstantiation 1", "mkdir /tmp-inst /var/tmp/tmp-inst --mode 000", "restorecon -Fv /tmp-inst /var/tmp/tmp-inst Relabeled /tmp-inst from unconfined_u:object_r:default_t:s0 to system_u:object_r:tmp_t:s0 Relabeled /var/tmp/tmp-inst from unconfined_u:object_r:tmp_t:s0 to system_u:object_r:tmp_t:s0", "allow_filesystem_mark = 1", "/tmp /tmp-inst/ level root,adm /var/tmp /var/tmp/tmp-inst/ level root,adm USDHOME USDHOME/USDUSER.inst/ level", "/tmp /tmp-inst/ user root,adm /var/tmp /var/tmp/tmp-inst/ user root,adm USDHOME USDHOME/USDUSER.inst/ user", "grep namespace /etc/pam.d/login session required pam_namespace.so", "AuthorizedKeysCommand /usr/libexec/openssh/ssh-keycat AuthorizedKeysCommandRunAs root", "grep -r PubkeyAuthentication /etc/ssh/ /etc/ssh/sshd_config:#PubkeyAuthentication yes", "[...] session include system-auth session required pam_namespace.so unmnt_remnt [...]", "findmnt --mountpoint /tmp/ TARGET SOURCE FSTYPE OPTIONS /tmp /dev/vda1[/tmp-inst/ <user> ] xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/using_selinux/configuring-polyinstantiated-directories_using-selinux
Chapter 25. Virtualization
Chapter 25. Virtualization The following chapters contain the most notable changes to virtualization between RHEL 8 and RHEL 9. 25.1. Notable changes to KVM KVM virtualization is no longer supported on IBM POWER Red Hat Kernel-based Virtual Machine (KVM) for RHEL 9.0 and later is not supported on IBM POWER hardware. KVM virtualization fully supported on 64-bit ARM architecture In RHEL 9.4 and later, creating KVM virtual machines on systems that use 64-bit ARM (also known as AArch64) CPUs is fully supported. Note, however, that certain virtualization features and functionalities that are available on AMD64 and Intel 64 systems might work differently or be unsupported on 64-bit ARM systems. For details, see How virtualization on ARM 64 differs from AMD 64 and Intel 64 . VM machine types based on RHEL 7.5 and earlier are unsupported In RHEL 9, virtual machines (VMs) no longer support machine types based on RHEL 7.5 and earlier. These also include pc-i440fx-rhel7.5.0 and earlier machine types, which were default in earlier major versions of RHEL. As a consequence, attempting to start a VM with such machine types on a RHEL 9 host fails with an unsupported configuration error. If you encounter this problem after upgrading your host to RHEL 9, see the Red Hat Knowledgebase solution Invalid virtual machines that used to work with RHEL 9 and newer hypervisors . RHEL 9 still supports the pc-i440fx-rhel7.6.0 machine type. However, RHEL will remove support for all i440x machine types in a future major update. 25.2. Notable changes to libvirt Modular libvirt daemons In RHEL 9, the libvirt library uses modular daemons that handle individual virtualization driver sets on your host. For example, the virtqemud daemon handles QEMU drivers. This makes it possible to fine-grain a variety of tasks that involve virtualization drivers, such as resource load optimization and monitoring. In addition, the monolithic libvirt daemon, libvirtd , has become deprecated. However, if you upgrade from RHEL 8 to RHEL 9, your host will still use libvirtd , which you can continue using in RHEL 9. Nevertheless, Red Hat recommends enabling modular libvirt daemons instead. For instructions, see the Enabling modular libvirt daemons document. Note, however, that if you switch to using modular libvirt daemons, pre-configured tasks that use libvirtd will stop working. External snapshots for virtual machines RHEL 9.4 and later supports the external snapshot mechanism for virtual machines (VMs), which replaces the previously deprecated internal snapshot mechanism. As a result, you can create, delete, and revert to VM snapshots that are fully supported. External snapshots work more reliably both on the command line and in the RHEL web console. This also applies to snapshots of running VMs, known as live snapshots. Note, however, that some commands and utilities might still create internal snapshots. To verify that your snapshot is fully supported, ensure that it is configured as external . For example: virsh iface-* commands are now unsupported The virsh iface-* commands, such as virsh iface-start and virsh iface-destroy , are no longer supported in RHEL 9. Due to the removal of the netcf package, the majority of them do not work. To create and modify network interfaces, use NetworkManager utilities, such as nmcli . 25.3. Notable changes to QEMU QEMU no longer includes the SGA option ROM In RHEL 9, the Serial Graphics Adapter (SGA) option ROM has been replaced by an equivalent functionality in SeaBIOS. 
However, if your virtual machine (VM) configuration uses the following XML fragment, this change will not affect your VM functionality. TPM passthrough has been removed It is no longer possible to assign a physical Trusted Platform Module (TPM) device using the passthrough back end to a VM on RHEL 9. Note that this was an unsupported feature in RHEL 8. Instead, use the vTPM functionality, which uses the emulator back end, and is fully supported. Other unsupported devices QEMU no longer supports the following virtual devices: The Cirrus graphics device. The default graphics devices are now set to stdvga on BIOS-based machines and bochs-display on UEFI-based machines. The ac97 audio device. In RHEL 9, libvirt uses the ich9 device instead. Intel vGPU removed The packages required for the Intel vGPU feature were removed in RHEL 9.3. Previously, as a Technology Preview, it was possible to divide a physical Intel GPU device into multiple virtual devices referred to as mediated devices . These mediated devices could then be assigned to multiple virtual machines (VMs) as virtual GPUs. Since RHEL 9.3, you cannot use this feature. 25.4. Notable changes to SPICE SPICE has become unsupported In RHEL 9, the SPICE remote display protocol is no longer supported. QXL, the graphics device used by SPICE, has also become unsupported. On a RHEL 9 host, VMs configured to use SPICE or QXL fail to start and instead display an unsupported configuration error. Instead of SPICE, Red Hat recommends using alternate solutions for remote display streaming: For remote console access, use the VNC protocol. However, note that certain features available on SPICE are currently unsupported or do not work well on VNC. This includes: Smart card sharing from the host to the VM (It is now supported only by third party remote visualization solutions.) Audio playback from the VM to the host Automated VM screen resizing USB redirection from the host to the VM Drag & drop file transfer from the host to the VM Clipboard sharing between the host and the VM Uninterrupted connection to VM during live migration Dynamic resizing of the VM screen with the client window In addition, VNC cannot be used by the GNOME Boxes application. As a consequence, Boxes is currently not available in RHEL 9. For advanced remote display functions, use third party tools such as RDP, HP ZCentral Remote Boost, or Mechdyne TGX. For graphical VMs hosted on RHEL 9, Red Hat recommends using the virtio-vga and virtio-gpu virtual graphics cards. For more information on how to switch a VM from the SPICE protocol to VNC , see the Red Hat Knowledgebase solution Unable to define, create or start a Virtual Machine using spice or qxl in RHEL 9 KVM .
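If you decide to move from the monolithic libvirtd daemon to the modular daemons after an upgrade, the following is one hedged sketch of the switch for the QEMU driver only; the full supported procedure, including the remaining interface, network, nodedev, nwfilter, secret, and storage drivers, is in the Enabling modular libvirt daemons document referenced in Section 25.2 above:
# stop and disable the monolithic daemon and its sockets
systemctl stop libvirtd.service libvirtd.socket libvirtd-ro.socket libvirtd-admin.socket
systemctl disable libvirtd.service libvirtd.socket libvirtd-ro.socket libvirtd-admin.socket
# enable and start the modular QEMU driver daemon and its sockets
systemctl enable virtqemud.service virtqemud.socket virtqemud-ro.socket virtqemud-admin.socket
systemctl start virtqemud.socket virtqemud-ro.socket virtqemud-admin.socket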
[ "virsh snapshot-dumpxml VM-name snapshot-name | grep external <disk name='vda' snapshot='external' type='file'>", "<bios useserial='yes'/>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/considerations_in_adopting_rhel_9/assembly_virtualization_considerations-in-adopting-rhel-9
Chapter 4. Getting Started with Virtualization Command-line Interface
Chapter 4. Getting Started with Virtualization Command-line Interface The standard method of operating virtualization on Red Hat Enterprise Linux 7 is using the command-line user interface (CLI). Entering CLI commands activates system utilities that create or interact with virtual machines on the host system. This method offers more detailed control than using graphical applications such as virt-manager and provides opportunities for scripting and automation. 4.1. Primary Command-line Utilities for Virtualization The following subsections list the main command-line utilities you can use to set up and manage virtualization on Red Hat Enterprise Linux 7. These commands, as well as numerous other virtualization utilities, are included in packages provided by the Red Hat Enterprise Linux repositories and can be installed using the Yum package manager . For more information about installing virtualization packages, see the Virtualization Deployment and Administration Guide . 4.1.1. virsh virsh is a CLI utility for managing hypervisors and guest virtual machines. It is the primary means of controlling virtualization on Red Hat Enterprise Linux 7. Its capabilities include: Creating, configuring, pausing, listing, and shutting down virtual machines Managing virtual networks Loading virtual machine disk images The virsh utility is ideal for creating virtualization administration scripts. Users without root privileges can use virsh as well, but in read-only mode. Using virsh The virsh utility can be used in a standard command-line input, but also as an interactive shell. In shell mode, the virsh command prefix is not needed, and the user is always registered as root. The following example uses the virsh hostname command to display the hypervisor's host name - first in standard mode, then in interactive mode. Important When using virsh as a non-root user, you enter an unprivileged libvirt session , which means you cannot see or interact with guests or any other virtualized elements created by the root. To gain read-only access to the elements, use virsh with the -c qemu:///system option. Getting help with virsh Like with all Linux bash commands, you can obtain help with virsh by using the man virsh command or the --help option. In addition, the virsh help command can be used to view the help text of a specific virsh command, or, by using a keyword, to list all virsh commands that belong to a certain group. The virsh command groups and their respective keywords are as follows: Guest management - keyword domain Guest monitoring - keyword monitor Host and hypervisor monitoring and management- keyword host Host system network interface management - keyword interface Virtual network management - keyword network Network filter management - keyword filter Node device management - keyword nodedev Management of secrets, such as passphrases or encryption keys - keyword secret Snapshot management - keyword snapshot Storage pool management - keyword pool Storage volume management - keyword volume General virsh usage - keyword virsh In the following example, you need to learn how to rename a guest virtual machine. By using virsh help , you first find the proper command to use and then learn its syntax. Finally, you use the command to rename a guest called Fontaine to Atlas . Example 4.1. How to list help for all commands with a keyword Note For more information about managing virtual machines using virsh , see the Red Hat Enterprise Linux 7 Virtualization Deployment and Administration Guide . 4.1.2. 
virt-install virt-install is a CLI utility for creating new virtual machines. It supports both text-based and graphical installations, using serial console, SPICE, or VNC client-server pair graphics. Installation media can be local, or exist remotely on an NFS, HTTP, or FTP server. The tool can also be configured to run unattended and use the kickstart method to prepare the guest, allowing for easy automation of installation. This tool is included in the virt-install package. Important When using virt-install as a non-root user, you enter an unprivileged libvirt session . This means that the created guest will only be visible to you, and it will not have access to certain capabilities that guests created by the root have. Note For more information about using virt-install , see the Red Hat Enterprise Linux 7 Virtualization Deployment and Administration Guide . 4.1.3. virt-xml virt-xml is a command-line utility for editing domain XML files. For the XML configuration to be modified successfully, the name of the guest, the XML action, and the change to make must be included with the command. For example, the following lists the suboptions that relate to guest boot configuration, and then turns on the boot menu on the example_domain guest: Note that each invocation of the command can perform one action on one domain XML file. Note This tool is included in the virt-install package. For more information about using virt-xml , see the virt-xml man pages. 4.1.4. guestfish guestfish is a command-line utility for examining and modifying virtual machine disk images. It uses the libguestfs library and exposes all functionalities provided by the libguestfs API. Using guestfish The guestfish utility can be used in a standard command-line input mode, but also as an interactive shell. In shell mode, the guestfish command prefix is not needed, and the user is always registered as root. The following example uses the guestfish to display the file systems on the testguest virtual machine - first in standard mode, then in interactive mode. In addition, guestfish can be used in bash scripts for automation purposes. Important When using guestfish as a non-root user, you enter an unprivileged libvirt session . This means you cannot see or interact with disk images on guests created by the root. To gain read-only access to these disk images, use guestfish with the -ro -c qemu:///system options. In addition, you must have read privileges for the disk image files. Getting help with guestfish Like with all Linux bash commands, you can obtain help with guestfish by using the man guestfish command or the --help option. In addition, the guestfish help command can be used to view detailed information about a specific guestfish command. The following example displays information about the guestfish add command: Note For more information about guestfish , see the Red Hat Enterprise Linux 7 Virtualization Deployment and Administration Guide .
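As a hedged illustration of the virt-install workflow described above, the following creates a small guest from a local ISO in a single command; the guest name, ISO path, sizing, and OS variant are placeholder values rather than recommendations:
virt-install \
  --name example-guest \
  --memory 2048 \
  --vcpus 2 \
  --disk size=20 \
  --cdrom /var/lib/libvirt/images/rhel-7-example.iso \
  --os-variant rhel7.6 \
  --graphics vnc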
[ "virsh hostname localhost.localdomain USD virsh Welcome to virsh, the virtualization interactive terminal. Type: 'help' for help with commands 'quit' to quit virsh # hostname localhost.localdomain", "virsh help domain Domain Management (help keyword 'domain'): attach-device attach device from an XML file attach-disk attach disk device [...] domname convert a domain id or UUID to domain name domrename rename a domain [...] virsh help domrename NAME domrename - rename a domain SYNOPSIS domrename <domain> <new-name> DESCRIPTION Rename an inactive domain. OPTIONS [--domain] <string> domain name, id or uuid [--new-name] <string> new domain name virsh domrename --domain Fontaine --new-name Atlas Domain successfully renamed", "virt-xml boot=? --boot options: arch cdrom [...] menu network nvram nvram_template os_type smbios_mode uefi useserial virt-xml example_domain --edit --boot menu=on Domain 'example_domain' defined successfully.", "guestfish domain testguest : run : list-filesystems /dev/sda1: xfs /dev/rhel/root: xfs /dev/rhel/swap: swap guestfish Welcome to guestfish, the guest filesystem shell for editing virtual machine filesystems and disk images. Type: 'help' for help on commands 'man' to read the manual 'quit' to quit the shell ><fs> domain testguest ><fs> run ><fs> list-filesystems /dev/sda1: xfs /dev/rhel/root: xfs /dev/rhel/swap: swap", "guestfish help add NAME add-drive - add an image to examine or modify SYNOPSIS add-drive filename [readonly:true|false] [format:..] [iface:..] [name:..] [label:..] [protocol:..] [server:..] [username:..] [secret:..] [cachemode:..] [discard:..] [copyonread:true|false] DESCRIPTION This function adds a disk image called filename to the handle. filename may be a regular host file or a host device. [...]" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_getting_started_guide/chap-cli-intro
4.13. Hardening TLS Configuration
4.13. Hardening TLS Configuration TLS ( Transport Layer Security ) is a cryptographic protocol used to secure network communications. When hardening system security settings by configuring preferred key-exchange protocols , authentication methods , and encryption algorithms , it is necessary to bear in mind that the broader the range of supported clients, the lower the resulting security. Conversely, strict security settings lead to limited compatibility with clients, which can result in some users being locked out of the system. Be sure to target the strictest available configuration and only relax it when it is required for compatibility reasons. Note that the default settings provided by libraries included in Red Hat Enterprise Linux 7 are secure enough for most deployments. The TLS implementations use secure algorithms where possible while not preventing connections from or to legacy clients or servers. Apply the hardened settings described in this section in environments with strict security requirements where legacy clients or servers that do not support secure algorithms or protocols are not expected or allowed to connect. 4.13.1. Choosing Algorithms to Enable There are several components that need to be selected and configured. Each of the following directly influences the robustness of the resulting configuration (and, consequently, the level of support in clients) or the computational demands that the solution has on the system. Protocol Versions The latest version of TLS provides the best security mechanism. Unless you have a compelling reason to include support for older versions of TLS (or even SSL ), allow your systems to negotiate connections using only the latest version of TLS . Do not allow negotiation using SSL version 2 or 3. Both of those versions have serious security vulnerabilities. Only allow negotiation using TLS version 1.0 or higher. The current version of TLS , 1.2, should always be preferred. Note Please note that currently, the security of all versions of TLS depends on the use of TLS extensions, specific ciphers (see below), and other workarounds. All TLS connection peers need to implement secure renegotiation indication ( RFC 5746 ), must not support compression, and must implement mitigating measures for timing attacks against CBC -mode ciphers (the Lucky Thirteen attack). TLS 1.0 clients need to additionally implement record splitting (a workaround against the BEAST attack). TLS 1.2 supports Authenticated Encryption with Associated Data ( AEAD ) mode ciphers like AES-GCM , AES-CCM , or Camellia-GCM , which have no known issues. All the mentioned mitigations are implemented in cryptographic libraries included in Red Hat Enterprise Linux. See Table 4.6, "Protocol Versions" for a quick overview of protocol versions and recommended usage. Table 4.6. Protocol Versions Protocol Version Usage Recommendation SSL v2 Do not use. Has serious security vulnerabilities. SSL v3 Do not use. Has serious security vulnerabilities. TLS 1.0 Use for interoperability purposes where needed. Has known issues that cannot be mitigated in a way that guarantees interoperability, and thus mitigations are not enabled by default. Does not support modern cipher suites. TLS 1.1 Use for interoperability purposes where needed. Has no known issues but relies on protocol fixes that are included in all the TLS implementations in Red Hat Enterprise Linux. Does not support modern cipher suites. TLS 1.2 Recommended version. Supports the modern AEAD cipher suites. 
Some components in Red Hat Enterprise Linux are configured to use TLS 1.0 even though they provide support for TLS 1.1 or even 1.2 . This is motivated by an attempt to achieve the highest level of interoperability with external services that may not support the latest versions of TLS . Depending on your interoperability requirements, enable the highest available version of TLS . Important SSL v3 is not recommended for use. However, if, despite the fact that it is considered insecure and unsuitable for general use, you absolutely must leave SSL v3 enabled, see Section 4.8, "Using stunnel" for instructions on how to use stunnel to securely encrypt communications even when using services that do not support encryption or are only capable of using obsolete and insecure modes of encryption. Cipher Suites Modern, more secure cipher suites should be preferred to old, insecure ones. Always disable the use of eNULL and aNULL cipher suites, which do not offer any encryption or authentication at all. If at all possible, ciphers suites based on RC4 or HMAC-MD5 , which have serious shortcomings, should also be disabled. The same applies to the so-called export cipher suites, which have been intentionally made weaker, and thus are easy to break. While not immediately insecure, cipher suites that offer less than 128 bits of security should not be considered for their short useful life. Algorithms that use 128 bit of security or more can be expected to be unbreakable for at least several years, and are thus strongly recommended. Note that while 3DES ciphers advertise the use of 168 bits, they actually offer 112 bits of security. Always give preference to cipher suites that support (perfect) forward secrecy ( PFS ), which ensures the confidentiality of encrypted data even in case the server key is compromised. This rules out the fast RSA key exchange, but allows for the use of ECDHE and DHE . Of the two, ECDHE is the faster and therefore the preferred choice. You should also give preference to AEAD ciphers, such as AES-GCM , before CBC -mode ciphers as they are not vulnerable to padding oracle attacks. Additionally, in many cases, AES-GCM is faster than AES in CBC mode, especially when the hardware has cryptographic accelerators for AES . Note also that when using the ECDHE key exchange with ECDSA certificates, the transaction is even faster than pure RSA key exchange. To provide support for legacy clients, you can install two pairs of certificates and keys on a server: one with ECDSA keys (for new clients) and one with RSA keys (for legacy ones). Public Key Length When using RSA keys, always prefer key lengths of at least 3072 bits signed by at least SHA-256, which is sufficiently large for true 128 bits of security. Warning Keep in mind that the security of your system is only as strong as the weakest link in the chain. For example, a strong cipher alone does not guarantee good security. The keys and the certificates are just as important, as well as the hash functions and keys used by the Certification Authority ( CA ) to sign your keys. 4.13.2. Using Implementations of TLS Red Hat Enterprise Linux 7 is distributed with several full-featured implementations of TLS . In this section, the configuration of OpenSSL and GnuTLS is described. See Section 4.13.3, "Configuring Specific Applications" for instructions on how to configure TLS support in individual applications. 
The available TLS implementations offer support for various cipher suites that define all the elements that come together when establishing and using TLS -secured communications. Use the tools included with the different implementations to list and specify cipher suites that provide the best possible security for your use case while considering the recommendations outlined in Section 4.13.1, "Choosing Algorithms to Enable" . The resulting cipher suites can then be used to configure the way individual applications negotiate and secure connections. Important Be sure to check your settings following every update or upgrade of the TLS implementation you use or the applications that utilize that implementation. New versions may introduce new cipher suites that you do not want to have enabled and that your current configuration does not disable. 4.13.2.1. Working with Cipher Suites in OpenSSL OpenSSL is a toolkit and a cryptography library that support the SSL and TLS protocols. On Red Hat Enterprise Linux 7, a configuration file is provided at /etc/pki/tls/openssl.cnf . The format of this configuration file is described in config (1) . See also Section 4.7.9, "Configuring OpenSSL" . To get a list of all cipher suites supported by your installation of OpenSSL , use the openssl command with the ciphers subcommand as follows: Pass other parameters (referred to as cipher strings and keywords in OpenSSL documentation) to the ciphers subcommand to narrow the output. Special keywords can be used to only list suites that satisfy a certain condition. For example, to only list suites that are defined as belonging to the HIGH group, use the following command: See the ciphers (1) manual page for a list of available keywords and cipher strings. To obtain a list of cipher suites that satisfy the recommendations outlined in Section 4.13.1, "Choosing Algorithms to Enable" , use a command similar to the following: The above command omits all insecure ciphers, gives preference to ephemeral elliptic curve Diffie-Hellman key exchange and ECDSA ciphers, and omits RSA key exchange (thus ensuring perfect forward secrecy ). Note that this is a rather strict configuration, and it might be necessary to relax the conditions in real-world scenarios to allow for a compatibility with a broader range of clients. 4.13.2.2. Working with Cipher Suites in GnuTLS GnuTLS is a communications library that implements the SSL and TLS protocols and related technologies. Note The GnuTLS installation on Red Hat Enterprise Linux 7 offers optimal default configuration values that provide sufficient security for the majority of use cases. Unless you need to satisfy special security requirements, it is recommended to use the supplied defaults. Use the gnutls-cli command with the -l (or --list ) option to list all supported cipher suites: To narrow the list of cipher suites displayed by the -l option, pass one or more parameters (referred to as priority strings and keywords in GnuTLS documentation) to the --priority option. See the GnuTLS documentation at http://www.gnutls.org/manual/gnutls.html#Priority-Strings for a list of all available priority strings. 
For example, issue the following command to get a list of cipher suites that offer at least 128 bits of security: To obtain a list of cipher suites that satisfy the recommendations outlined in Section 4.13.1, "Choosing Algorithms to Enable" , use a command similar to the following: The above command limits the output to ciphers with at least 128 bits of security while giving preference to the stronger ones. It also forbids RSA key exchange and DSS authentication. Note that this is a rather strict configuration, and it might be necessary to relax the conditions in real-world scenarios to allow for a compatibility with a broader range of clients. 4.13.3. Configuring Specific Applications Different applications provide their own configuration mechanisms for TLS . This section describes the TLS -related configuration files employed by the most commonly used server applications and offers examples of typical configurations. Regardless of the configuration you choose to use, always make sure to mandate that your server application enforces server-side cipher order , so that the cipher suite to be used is determined by the order you configure. 4.13.3.1. Configuring the Apache HTTP Server The Apache HTTP Server can use both OpenSSL and NSS libraries for its TLS needs. Depending on your choice of the TLS library, you need to install either the mod_ssl or the mod_nss module (provided by eponymous packages). For example, to install the package that provides the OpenSSL mod_ssl module, issue the following command as root: The mod_ssl package installs the /etc/httpd/conf.d/ssl.conf configuration file, which can be used to modify the TLS -related settings of the Apache HTTP Server . Similarly, the mod_nss package installs the /etc/httpd/conf.d/nss.conf configuration file. Install the httpd-manual package to obtain complete documentation for the Apache HTTP Server , including TLS configuration. The directives available in the /etc/httpd/conf.d/ssl.conf configuration file are described in detail in /usr/share/httpd/manual/mod/mod_ssl.html . Examples of various settings are in /usr/share/httpd/manual/ssl/ssl_howto.html . When modifying the settings in the /etc/httpd/conf.d/ssl.conf configuration file, be sure to consider the following three directives at the minimum: SSLProtocol Use this directive to specify the version of TLS (or SSL ) you want to allow. SSLCipherSuite Use this directive to specify your preferred cipher suite or disable the ones you want to disallow. SSLHonorCipherOrder Uncomment and set this directive to on to ensure that the connecting clients adhere to the order of ciphers you specified. For example: Note that the above configuration is the bare minimum, and it can be hardened significantly by following the recommendations outlined in Section 4.13.1, "Choosing Algorithms to Enable" . To configure and use the mod_nss module, modify the /etc/httpd/conf.d/nss.conf configuration file. The mod_nss module is derived from mod_ssl , and as such it shares many features with it, not least the structure of the configuration file, and the directives that are available. Note that the mod_nss directives have a prefix of NSS instead of SSL . See https://git.fedorahosted.org/cgit/mod_nss.git/plain/docs/mod_nss.html for an overview of information about mod_nss , including a list of mod_ssl configuration directives that are not applicable to mod_nss . 4.13.3.2. 
Configuring the Dovecot Mail Server To configure your installation of the Dovecot mail server to use TLS , modify the /etc/dovecot/conf.d/10-ssl.conf configuration file. You can find an explanation of some of the basic configuration directives available in that file in /usr/share/doc/dovecot-2.2.10/wiki/SSL.DovecotConfiguration.txt (this help file is installed along with the standard installation of Dovecot ). When modifying the settings in the /etc/dovecot/conf.d/10-ssl.conf configuration file, be sure to consider the following three directives at the minimum: ssl_protocols Use this directive to specify the version of TLS (or SSL ) you want to allow. ssl_cipher_list Use this directive to specify your preferred cipher suites or disable the ones you want to disallow. ssl_prefer_server_ciphers Uncomment and set this directive to yes to ensure that the connecting clients adhere to the order of ciphers you specified. For example: Note that the above configuration is the bare minimum, and it can be hardened significantly by following the recommendations outlined in Section 4.13.1, "Choosing Algorithms to Enable" . 4.13.4. Additional Information For more information about TLS configuration and related topics, see the resources listed below. Installed Documentation config (1) - Describes the format of the /etc/ssl/openssl.conf configuration file. ciphers (1) - Includes a list of available OpenSSL keywords and cipher strings. /usr/share/httpd/manual/mod/mod_ssl.html - Contains detailed descriptions of the directives available in the /etc/httpd/conf.d/ssl.conf configuration file used by the mod_ssl module for the Apache HTTP Server . /usr/share/httpd/manual/ssl/ssl_howto.html - Contains practical examples of real-world settings in the /etc/httpd/conf.d/ssl.conf configuration file used by the mod_ssl module for the Apache HTTP Server . /usr/share/doc/dovecot-2.2.10/wiki/SSL.DovecotConfiguration.txt - Explains some of the basic configuration directives available in the /etc/dovecot/conf.d/10-ssl.conf configuration file used by the Dovecot mail server. Online Documentation Red Hat Enterprise Linux 7 SELinux User's and Administrator's Guide - The SELinux User's and Administrator's Guide for Red Hat Enterprise Linux 7 describes the basic principles of SELinux and documents in detail how to configure and use SELinux with various services, such as the Apache HTTP Server . http://tools.ietf.org/html/draft-ietf-uta-tls-bcp-00 - Recommendations for secure use of TLS and DTLS . See Also Section A.2.4, "SSL/TLS" provides a concise description of the SSL and TLS protocols. Section 4.7, "Using OpenSSL" describes, among other things, how to use OpenSSL to create and manage keys, generate certificates, and encrypt and decrypt files.
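As a quick client-side check of a hardened mail server configuration such as the Dovecot example above, you can ask the OpenSSL client to negotiate a connection and report the protocol and cipher suite that were agreed on (the host name is a placeholder):
openssl s_client -connect mail.example.com:993
openssl s_client -connect mail.example.com:143 -starttls imap
The first form tests a dedicated IMAPS port; the second uses STARTTLS on the plain IMAP port.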
[ "~]USD openssl ciphers -v 'ALL:COMPLEMENTOFALL'", "~]USD openssl ciphers -v 'HIGH'", "~]USD openssl ciphers -v 'kEECDH+aECDSA+AES:kEECDH+AES+aRSA:kEDH+aRSA+AES' | column -t ECDHE-ECDSA-AES256-GCM-SHA384 TLSv1.2 Kx=ECDH Au=ECDSA Enc=AESGCM(256) Mac=AEAD ECDHE-ECDSA-AES256-SHA384 TLSv1.2 Kx=ECDH Au=ECDSA Enc=AES(256) Mac=SHA384 ECDHE-ECDSA-AES256-SHA SSLv3 Kx=ECDH Au=ECDSA Enc=AES(256) Mac=SHA1 ECDHE-ECDSA-AES128-GCM-SHA256 TLSv1.2 Kx=ECDH Au=ECDSA Enc=AESGCM(128) Mac=AEAD ECDHE-ECDSA-AES128-SHA256 TLSv1.2 Kx=ECDH Au=ECDSA Enc=AES(128) Mac=SHA256 ECDHE-ECDSA-AES128-SHA SSLv3 Kx=ECDH Au=ECDSA Enc=AES(128) Mac=SHA1 ECDHE-RSA-AES256-GCM-SHA384 TLSv1.2 Kx=ECDH Au=RSA Enc=AESGCM(256) Mac=AEAD ECDHE-RSA-AES256-SHA384 TLSv1.2 Kx=ECDH Au=RSA Enc=AES(256) Mac=SHA384 ECDHE-RSA-AES256-SHA SSLv3 Kx=ECDH Au=RSA Enc=AES(256) Mac=SHA1 ECDHE-RSA-AES128-GCM-SHA256 TLSv1.2 Kx=ECDH Au=RSA Enc=AESGCM(128) Mac=AEAD ECDHE-RSA-AES128-SHA256 TLSv1.2 Kx=ECDH Au=RSA Enc=AES(128) Mac=SHA256 ECDHE-RSA-AES128-SHA SSLv3 Kx=ECDH Au=RSA Enc=AES(128) Mac=SHA1 DHE-RSA-AES256-GCM-SHA384 TLSv1.2 Kx=DH Au=RSA Enc=AESGCM(256) Mac=AEAD DHE-RSA-AES256-SHA256 TLSv1.2 Kx=DH Au=RSA Enc=AES(256) Mac=SHA256 DHE-RSA-AES256-SHA SSLv3 Kx=DH Au=RSA Enc=AES(256) Mac=SHA1 DHE-RSA-AES128-GCM-SHA256 TLSv1.2 Kx=DH Au=RSA Enc=AESGCM(128) Mac=AEAD DHE-RSA-AES128-SHA256 TLSv1.2 Kx=DH Au=RSA Enc=AES(128) Mac=SHA256 DHE-RSA-AES128-SHA SSLv3 Kx=DH Au=RSA Enc=AES(128) Mac=SHA1", "~]USD gnutls-cli -l", "~]USD gnutls-cli --priority SECURE128 -l", "~]USD gnutls-cli --priority SECURE256:+SECURE128:-VERS-TLS-ALL:+VERS-TLS1.2:-RSA:-DHE-DSS:-CAMELLIA-128-CBC:-CAMELLIA-256-CBC -l Cipher suites for SECURE256:+SECURE128:-VERS-TLS-ALL:+VERS-TLS1.2:-RSA:-DHE-DSS:-CAMELLIA-128-CBC:-CAMELLIA-256-CBC TLS_ECDHE_ECDSA_AES_256_GCM_SHA384 0xc0, 0x2c TLS1.2 TLS_ECDHE_ECDSA_AES_256_CBC_SHA384 0xc0, 0x24 TLS1.2 TLS_ECDHE_ECDSA_AES_256_CBC_SHA1 0xc0, 0x0a SSL3.0 TLS_ECDHE_ECDSA_AES_128_GCM_SHA256 0xc0, 0x2b TLS1.2 TLS_ECDHE_ECDSA_AES_128_CBC_SHA256 0xc0, 0x23 TLS1.2 TLS_ECDHE_ECDSA_AES_128_CBC_SHA1 0xc0, 0x09 SSL3.0 TLS_ECDHE_RSA_AES_256_GCM_SHA384 0xc0, 0x30 TLS1.2 TLS_ECDHE_RSA_AES_256_CBC_SHA1 0xc0, 0x14 SSL3.0 TLS_ECDHE_RSA_AES_128_GCM_SHA256 0xc0, 0x2f TLS1.2 TLS_ECDHE_RSA_AES_128_CBC_SHA256 0xc0, 0x27 TLS1.2 TLS_ECDHE_RSA_AES_128_CBC_SHA1 0xc0, 0x13 SSL3.0 TLS_DHE_RSA_AES_256_CBC_SHA256 0x00, 0x6b TLS1.2 TLS_DHE_RSA_AES_256_CBC_SHA1 0x00, 0x39 SSL3.0 TLS_DHE_RSA_AES_128_GCM_SHA256 0x00, 0x9e TLS1.2 TLS_DHE_RSA_AES_128_CBC_SHA256 0x00, 0x67 TLS1.2 TLS_DHE_RSA_AES_128_CBC_SHA1 0x00, 0x33 SSL3.0 Certificate types: CTYPE-X.509 Protocols: VERS-TLS1.2 Compression: COMP-NULL Elliptic curves: CURVE-SECP384R1, CURVE-SECP521R1, CURVE-SECP256R1 PK-signatures: SIGN-RSA-SHA384, SIGN-ECDSA-SHA384, SIGN-RSA-SHA512, SIGN-ECDSA-SHA512, SIGN-RSA-SHA256, SIGN-DSA-SHA256, SIGN-ECDSA-SHA256", "~]# yum install mod_ssl", "SSLProtocol all -SSLv2 -SSLv3 SSLCipherSuite HIGH:!aNULL:!MD5 SSLHonorCipherOrder on", "ssl_protocols = !SSLv2 !SSLv3 ssl_cipher_list = HIGH:!aNULL:!MD5 ssl_prefer_server_ciphers = yes" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/security_guide/sec-hardening_tls_configuration
Chapter 7. Preparing to update a cluster with manually maintained credentials
Chapter 7. Preparing to update a cluster with manually maintained credentials The Cloud Credential Operator (CCO) Upgradable status for a cluster with manually maintained credentials is False by default. For minor releases, for example, from 4.10 to 4.11, this status prevents you from updating until you have addressed any updated permissions and annotated the CloudCredential resource to indicate that the permissions are updated as needed for the version. This annotation changes the Upgradable status to True . For z-stream releases, for example, from 4.10.0 to 4.10.1, no permissions are added or changed, so the update is not blocked. Before updating a cluster with manually maintained credentials, you must accommodate any new or changed credentials in the release image for the version of OpenShift Container Platform you are updating to. 7.1. Update requirements for clusters with manually maintained credentials Before you update a cluster that uses manually maintained credentials with the Cloud Credential Operator (CCO), you must update the cloud provider resources for the new release. If the cloud credential management for your cluster was configured using the CCO utility ( ccoctl ), use the ccoctl utility to update the resources. Clusters that were configured to use manual mode without the ccoctl utility require manual updates for the resources. After updating the cloud provider resources, you must update the upgradeable-to annotation for the cluster to indicate that it is ready to update. Note The process to update the cloud provider resources and the upgradeable-to annotation can only be completed by using command line tools. 7.1.1. Cloud credential configuration options and update requirements by platform type Some platforms only support using the CCO in one mode. For clusters that are installed on those platforms, the platform type determines the credentials update requirements. For platforms that support using the CCO in multiple modes, you must determine which mode the cluster is configured to use and take the required actions for that configuration. Figure 7.1. Credentials update requirements by platform type Red Hat OpenStack Platform (RHOSP), Red Hat Virtualization (RHV), and VMware vSphere These platforms do not support using the CCO in manual mode. Clusters on these platforms handle changes in cloud provider resources automatically and do not require an update to the upgradeable-to annotation. Administrators of clusters on these platforms should skip the manually maintained credentials section of the update process. Alibaba Cloud and IBM Cloud Clusters installed on these platforms are configured using the ccoctl utility. Administrators of clusters on these platforms must take the following actions: Configure the ccoctl utility for the new release. Use the ccoctl utility to update the cloud provider resources. Indicate that the cluster is ready to update with the upgradeable-to annotation. Microsoft Azure Stack Hub These clusters use manual mode with long-lived credentials and do not use the ccoctl utility. Administrators of clusters on these platforms must take the following actions: Manually update the cloud provider resources for the new release. Indicate that the cluster is ready to update with the upgradeable-to annotation. Amazon Web Services (AWS), global Microsoft Azure, and Google Cloud Platform (GCP) Clusters installed on these platforms support multiple CCO modes. The required update process depends on the mode that the cluster is configured to use. 
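If you are unsure which of the platform cases above applies to your cluster, you can read the platform type directly from the cluster's Infrastructure resource. For example, as a user with cluster-admin permissions:
oc get infrastructure cluster -o jsonpath='{.status.platformStatus.type}'
The command returns a value such as AWS, Azure, or GCP, which you can then match against the requirements listed above.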
If you are not sure what mode the CCO is configured to use on your cluster, you can use the web console or the CLI to determine this information. Additional resources Determining the Cloud Credential Operator mode by using the web console Determining the Cloud Credential Operator mode by using the CLI Configuring the Cloud Credential Operator utility for a cluster update Updating cloud provider resources with manually maintained credentials About the Cloud Credential Operator 7.1.2. Determining the Cloud Credential Operator mode by using the web console You can determine what mode the Cloud Credential Operator (CCO) is configured to use by using the web console. Note Only Amazon Web Services (AWS), global Microsoft Azure, and Google Cloud Platform (GCP) clusters support multiple CCO modes. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator permissions. Procedure Log in to the OpenShift Container Platform web console as a user with the cluster-admin role. Navigate to Administration Cluster Settings . On the Cluster Settings page, select the Configuration tab. Under Configuration resource , select CloudCredential . On the CloudCredential details page, select the YAML tab. In the YAML block, check the value of spec.credentialsMode . The following values are possible, though not all are supported on all platforms: '' : The CCO is operating in the default mode. In this configuration, the CCO operates in mint or passthrough mode, depending on the credentials provided during installation. Mint : The CCO is operating in mint mode. Passthrough : The CCO is operating in passthrough mode. Manual : The CCO is operating in manual mode. Important To determine the specific configuration of an AWS or GCP cluster that has a spec.credentialsMode of '' , Mint , or Manual , you must investigate further. AWS and GCP clusters support using mint mode with the root secret deleted. If the cluster is specifically configured to use mint mode or uses mint mode by default, you must determine if the root secret is present on the cluster before updating. An AWS or GCP cluster that uses manual mode might be configured to create and manage cloud credentials from outside of the cluster using the AWS Security Token Service (STS) or GCP Workload Identity. You can determine whether your cluster uses this strategy by examining the cluster Authentication object. AWS or GCP clusters that use mint mode only: To determine whether the cluster is operating without the root secret, navigate to Workloads Secrets and look for the root secret for your cloud provider. Note Ensure that the Project dropdown is set to All Projects . Platform Secret name AWS aws-creds GCP gcp-credentials If you see one of these values, your cluster is using mint or passthrough mode with the root secret present. If you do not see these values, your cluster is using the CCO in mint mode with the root secret removed. AWS or GCP clusters that use manual mode only: To determine whether the cluster is configured to create and manage cloud credentials from outside of the cluster, you must check the cluster Authentication object YAML values. Navigate to Administration Cluster Settings . On the Cluster Settings page, select the Configuration tab. Under Configuration resource , select Authentication . On the Authentication details page, select the YAML tab. In the YAML block, check the value of the .spec.serviceAccountIssuer parameter. 
A value that contains a URL that is associated with your cloud provider indicates that the CCO is using manual mode with AWS STS or GCP Workload Identity to create and manage cloud credentials from outside of the cluster. These clusters are configured using the ccoctl utility. An empty value ( '' ) indicates that the cluster is using the CCO in manual mode but was not configured using the ccoctl utility. steps If you are updating a cluster that has the CCO operating in mint or passthrough mode and the root secret is present, you do not need to update any cloud provider resources and can continue to the part of the update process. If your cluster is using the CCO in mint mode with the root secret removed, you must reinstate the credential secret with the administrator-level credential before continuing to the part of the update process. If your cluster was configured using the CCO utility ( ccoctl ), you must take the following actions: Configure the ccoctl utility for the new release and use it to update the cloud provider resources. Update the upgradeable-to annotation to indicate that the cluster is ready to update. If your cluster is using the CCO in manual mode but was not configured using the ccoctl utility, you must take the following actions: Manually update the cloud provider resources for the new release. Update the upgradeable-to annotation to indicate that the cluster is ready to update. Additional resources Configuring the Cloud Credential Operator utility for a cluster update Updating cloud provider resources with manually maintained credentials 7.1.3. Determining the Cloud Credential Operator mode by using the CLI You can determine what mode the Cloud Credential Operator (CCO) is configured to use by using the CLI. Note Only Amazon Web Services (AWS), global Microsoft Azure, and Google Cloud Platform (GCP) clusters support multiple CCO modes. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator permissions. You have installed the OpenShift CLI ( oc ). Procedure Log in to oc on the cluster as a user with the cluster-admin role. To determine the mode that the CCO is configured to use, enter the following command: USD oc get cloudcredentials cluster \ -o=jsonpath={.spec.credentialsMode} The following output values are possible, though not all are supported on all platforms: '' : The CCO is operating in the default mode. In this configuration, the CCO operates in mint or passthrough mode, depending on the credentials provided during installation. Mint : The CCO is operating in mint mode. Passthrough : The CCO is operating in passthrough mode. Manual : The CCO is operating in manual mode. Important To determine the specific configuration of an AWS or GCP cluster that has a spec.credentialsMode of '' , Mint , or Manual , you must investigate further. AWS and GCP clusters support using mint mode with the root secret deleted. If the cluster is specifically configured to use mint mode or uses mint mode by default, you must determine if the root secret is present on the cluster before updating. An AWS or GCP cluster that uses manual mode might be configured to create and manage cloud credentials from outside of the cluster using the AWS Security Token Service (STS) or GCP Workload Identity. You can determine whether your cluster uses this strategy by examining the cluster Authentication object. 
AWS or GCP clusters that use mint mode only: To determine whether the cluster is operating without the root secret, run the following command: USD oc get secret <secret_name> \ -n=kube-system where <secret_name> is aws-creds for AWS or gcp-credentials for GCP. If the root secret is present, the output of this command returns information about the secret. An error indicates that the root secret is not present on the cluster. AWS or GCP clusters that use manual mode only: To determine whether the cluster is configured to create and manage cloud credentials from outside of the cluster, run the following command: USD oc get authentication cluster \ -o jsonpath \ --template='{ .spec.serviceAccountIssuer }' This command displays the value of the .spec.serviceAccountIssuer parameter in the cluster Authentication object. An output of a URL that is associated with your cloud provider indicates that the CCO is using manual mode with AWS STS or GCP Workload Identity to create and manage cloud credentials from outside of the cluster. These clusters are configured using the ccoctl utility. An empty output indicates that the cluster is using the CCO in manual mode but was not configured using the ccoctl utility. steps If you are updating a cluster that has the CCO operating in mint or passthrough mode and the root secret is present, you do not need to update any cloud provider resources and can continue to the part of the update process. If your cluster is using the CCO in mint mode with the root secret removed, you must reinstate the credential secret with the administrator-level credential before continuing to the part of the update process. If your cluster was configured using the CCO utility ( ccoctl ), you must take the following actions: Configure the ccoctl utility for the new release and use it to update the cloud provider resources. Update the upgradeable-to annotation to indicate that the cluster is ready to update. If your cluster is using the CCO in manual mode but was not configured using the ccoctl utility, you must take the following actions: Manually update the cloud provider resources for the new release. Update the upgradeable-to annotation to indicate that the cluster is ready to update. Additional resources Configuring the Cloud Credential Operator utility for a cluster update Updating cloud provider resources with manually maintained credentials 7.2. Configuring the Cloud Credential Operator utility for a cluster update To upgrade a cluster that uses the Cloud Credential Operator (CCO) in manual mode to create and manage cloud credentials from outside of the cluster, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). Your cluster was configured using the ccoctl utility to create and manage cloud credentials from outside of the cluster. Procedure Obtain the OpenShift Container Platform release image: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Get the CCO container image from the OpenShift Container Platform release image: USD CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE) Note Ensure that the architecture of the USDRELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. 
Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image: USD oc image extract USDCCO_IMAGE --file="/usr/bin/ccoctl" -a ~/.pull-secret Change the permissions to make ccoctl executable: USD chmod 775 ccoctl Verification To verify that ccoctl is ready to use, display the help file: USD ccoctl --help Output of ccoctl --help OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. 7.3. Updating cloud provider resources with the Cloud Credential Operator utility The process for upgrading an OpenShift Container Platform cluster that was configured using the CCO utility ( ccoctl ) is similar to creating the cloud provider resources during installation. Note By default, ccoctl creates objects in the directory in which the commands are run. To create the objects in a different directory, use the --output-dir flag. This procedure uses <path_to_ccoctl_output_dir> to refer to this directory. On AWS clusters, some ccoctl commands make AWS API calls to create or modify AWS resources. You can use the --dry-run flag to avoid making API calls. Using this flag creates JSON files on the local file system instead. You can review and modify the JSON files and then apply them with the AWS CLI tool using the --cli-input-json parameters. Prerequisites Obtain the OpenShift Container Platform release image for the version that you are upgrading to. Extract and prepare the ccoctl binary from the release image. Procedure Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract --credentials-requests \ --cloud=<provider_type> \ --to=<path_to_directory_with_list_of_credentials_requests>/credrequests \ quay.io/<path_to>/ocp-release:<version> where: <provider_type> is the value for your cloud provider. Valid values are alibabacloud , aws , gcp , and ibmcloud . credrequests is the directory where the list of CredentialsRequest objects is stored. This command creates the directory if it does not exist. For each CredentialsRequest CR in the release image, ensure that a namespace that matches the text in the spec.secretRef.namespace field exists in the cluster. This field is where the generated secrets that hold the credentials configuration are stored. Sample AWS CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: cloud-credential-operator-iam-ro namespace: openshift-cloud-credential-operator spec: secretRef: name: cloud-credential-operator-iam-ro-creds namespace: openshift-cloud-credential-operator 1 providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - iam:GetUser - iam:GetUserPolicy - iam:ListAccessKeys resource: "*" 1 This field indicates the namespace which needs to exist to hold the generated secret. The CredentialsRequest CRs for other platforms have a similar format with different platform-specific values. 
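Before creating any missing namespaces, it can help to list every namespace referenced by the extracted CredentialsRequest files. One simple way to do this, using the credrequests directory from the extraction step above, is:
grep -h 'namespace:' <path_to_directory_with_list_of_credentials_requests>/credrequests/*.yaml | sort -u
This prints both the metadata namespaces and the spec.secretRef.namespace values, so you can compare the list against the namespaces that already exist in the cluster.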
For any CredentialsRequest CR for which the cluster does not already have a namespace with the name specified in spec.secretRef.namespace , create the namespace: USD oc create namespace <component_namespace> Use the ccoctl tool to process all CredentialsRequest objects in the credrequests directory by running the command for your cloud provider. The following commands process CredentialsRequest objects: Alibaba Cloud: ccoctl alibabacloud create-ram-users Amazon Web Services (AWS): ccoctl aws create-iam-roles Google Cloud Platform (GCP): ccoctl gcp create-all IBM Cloud: ccoctl ibmcloud create-service-id Important Refer to the ccoctl utility instructions in the installation content for your cloud provider for important platform-specific details about the required arguments and special considerations. For each CredentialsRequest object, ccoctl creates the required provider resources and a permissions policy as defined in each CredentialsRequest object from the OpenShift Container Platform release image. Apply the secrets to your cluster: USD ls <path_to_ccoctl_output_dir>/manifests/*-credentials.yaml | xargs -I{} oc apply -f {} Verification You can verify that the required provider resources and permissions policies are created by querying the cloud provider. For more information, refer to your cloud provider documentation on listing roles or service accounts. steps Update the upgradeable-to annotation to indicate that the cluster is ready to upgrade. Additional resources Creating Alibaba Cloud credentials for OpenShift Container Platform components with the ccoctl tool Creating AWS resources with the Cloud Credential Operator utility Creating GCP resources with the Cloud Credential Operator utility Manually creating IAM for IBM Cloud VPC Indicating that the cluster is ready to upgrade 7.4. Updating cloud provider resources with manually maintained credentials Before upgrading a cluster with manually maintained credentials, you must create any new credentials for the release image that you are upgrading to. You must also review the required permissions for existing credentials and accommodate any new permissions requirements in the new release for those components. Procedure Extract and examine the CredentialsRequest custom resource for the new release. The "Manually creating IAM" section of the installation content for your cloud provider explains how to obtain and use the credentials required for your cloud. Update the manually maintained credentials on your cluster: Create new secrets for any CredentialsRequest custom resources that are added by the new release image. If the CredentialsRequest custom resources for any existing credentials that are stored in secrets have changed permissions requirements, update the permissions as required. steps Update the upgradeable-to annotation to indicate that the cluster is ready to upgrade. Additional resources Manually creating IAM for AWS Manually creating IAM for Azure Manually creating IAM for Azure Stack Hub Manually creating IAM for GCP Indicating that the cluster is ready to upgrade 7.5. Indicating that the cluster is ready to upgrade The Cloud Credential Operator (CCO) Upgradable status for a cluster with manually maintained credentials is False by default. Prerequisites For the release image that you are upgrading to, you have processed any new credentials manually or by using the Cloud Credential Operator utility ( ccoctl ). You have installed the OpenShift CLI ( oc ). 
Procedure Log in to oc on the cluster as a user with the cluster-admin role. Edit the CloudCredential resource to add an upgradeable-to annotation within the metadata field by running the following command: USD oc edit cloudcredential cluster Text to add ... metadata: annotations: cloudcredential.openshift.io/upgradeable-to: <version_number> ... Where <version_number> is the version that you are upgrading to, in the format x.y.z . For example, use 4.10.2 for OpenShift Container Platform 4.10.2. It may take several minutes after adding the annotation for the upgradeable status to change. Verification In the Administrator perspective of the web console, navigate to Administration Cluster Settings . To view the CCO status details, click cloud-credential in the Cluster Operators list. If the Upgradeable status in the Conditions section is False , verify that the upgradeable-to annotation is free of typographical errors. When the Upgradeable status in the Conditions section is True , begin the OpenShift Container Platform upgrade.
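If you prefer not to open an editor, the same annotation can be applied and verified entirely from the command line. For example, where 4.10.13 stands in for the version you are upgrading to:
oc annotate cloudcredential cluster cloudcredential.openshift.io/upgradeable-to=4.10.13 --overwrite
oc get clusteroperator cloud-credential -o jsonpath='{.status.conditions[?(@.type=="Upgradeable")].status}'
The second command should eventually print True; as noted above, it can take several minutes for the status to change after the annotation is added.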
[ "oc get cloudcredentials cluster -o=jsonpath={.spec.credentialsMode}", "oc get secret <secret_name> -n=kube-system", "oc get authentication cluster -o jsonpath --template='{ .spec.serviceAccountIssuer }'", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE)", "oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl\" -a ~/.pull-secret", "chmod 775 ccoctl", "ccoctl --help", "OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: alibabacloud Manage credentials objects for alibaba cloud aws Manage credentials objects for AWS cloud gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.", "oc adm release extract --credentials-requests --cloud=<provider_type> --to=<path_to_directory_with_list_of_credentials_requests>/credrequests quay.io/<path_to>/ocp-release:<version>", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: cloud-credential-operator-iam-ro namespace: openshift-cloud-credential-operator spec: secretRef: name: cloud-credential-operator-iam-ro-creds namespace: openshift-cloud-credential-operator 1 providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - iam:GetUser - iam:GetUserPolicy - iam:ListAccessKeys resource: \"*\"", "oc create namespace <component_namespace>", "ls <path_to_ccoctl_output_dir>/manifests/*-credentials.yaml | xargs -I{} oc apply -f {}", "oc edit cloudcredential cluster", "metadata: annotations: cloudcredential.openshift.io/upgradeable-to: <version_number>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/updating_clusters/preparing-manual-creds-update
Backup and restore
Backup and restore OpenShift Container Platform 4.11 Backing up and restoring your OpenShift Container Platform cluster Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/backup_and_restore/index
Appendix C. Using AMQ Broker with the examples
Appendix C. Using AMQ Broker with the examples The AMQ JMS Pool examples require a running message broker with a queue named queue . Use the procedures below to install and start the broker and define the queue. C.1. Installing the broker Follow the instructions in Getting Started with AMQ Broker to install the broker and create a broker instance . Enable anonymous access. The following procedures refer to the location of the broker instance as <broker-instance-dir> . C.2. Starting the broker Procedure Use the artemis run command to start the broker. USD <broker-instance-dir> /bin/artemis run Check the console output for any critical errors logged during startup. The broker logs Server is now live when it is ready. USD example-broker/bin/artemis run __ __ ____ ____ _ /\ | \/ |/ __ \ | _ \ | | / \ | \ / | | | | | |_) |_ __ ___ | | _____ _ __ / /\ \ | |\/| | | | | | _ <| '__/ _ \| |/ / _ \ '__| / ____ \| | | | |__| | | |_) | | | (_) | < __/ | /_/ \_\_| |_|\___\_\ |____/|_| \___/|_|\_\___|_| Red Hat AMQ <version> 2020-06-03 12:12:11,807 INFO [org.apache.activemq.artemis.integration.bootstrap] AMQ101000: Starting ActiveMQ Artemis Server ... 2020-06-03 12:12:12,336 INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live ... C.3. Creating a queue In a new terminal, use the artemis queue command to create a queue named queue . USD <broker-instance-dir> /bin/artemis queue create --name queue --address queue --auto-create-address --anycast You are prompted to answer a series of yes or no questions. Answer N for no to all of them. Once the queue is created, the broker is ready for use with the example programs. C.4. Stopping the broker When you are done running the examples, use the artemis stop command to stop the broker. USD <broker-instance-dir> /bin/artemis stop Revised on 2021-08-24 14:27:43 UTC
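If you want to confirm that the broker and the queue are working before running the examples, you can optionally send and receive a few test messages with the CLI clients that ship with the broker (this relies on the anonymous access enabled earlier):
<broker-instance-dir>/bin/artemis producer --destination queue --message-count 10
<broker-instance-dir>/bin/artemis consumer --destination queue --message-count 10
The producer reports the number of messages sent, and the consumer should receive the same number from the queue named queue.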
[ "<broker-instance-dir> /bin/artemis run", "example-broker/bin/artemis run __ __ ____ ____ _ /\\ | \\/ |/ __ \\ | _ \\ | | / \\ | \\ / | | | | | |_) |_ __ ___ | | _____ _ __ / /\\ \\ | |\\/| | | | | | _ <| '__/ _ \\| |/ / _ \\ '__| / ____ \\| | | | |__| | | |_) | | | (_) | < __/ | /_/ \\_\\_| |_|\\___\\_\\ |____/|_| \\___/|_|\\_\\___|_| Red Hat AMQ <version> 2020-06-03 12:12:11,807 INFO [org.apache.activemq.artemis.integration.bootstrap] AMQ101000: Starting ActiveMQ Artemis Server 2020-06-03 12:12:12,336 INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live", "<broker-instance-dir> /bin/artemis queue create --name queue --address queue --auto-create-address --anycast", "<broker-instance-dir> /bin/artemis stop" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/using_the_amq_jms_pool_library/using_the_broker_with_the_examples
function::sock_state_num2str
function::sock_state_num2str Name function::sock_state_num2str - Given a socket state number, return a string representation. Synopsis Arguments state The state number.
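A minimal way to try the function, assuming SystemTap is installed and able to build modules on the host, is a one-line script; state number 1 corresponds to an established TCP connection:
stap -e 'probe begin { println(sock_state_num2str(1)) exit() }'
The script prints the string name for state 1 and exits immediately.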
[ "function sock_state_num2str:string(state:long)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-sock-state-num2str
Chapter 4. Deprecated features
Chapter 4. Deprecated features This section describes features that are supported, but have been deprecated from AMQ Broker. upgrade attribute in Custom Resource Starting in 7.11, the upgrade attribute and the associated enabled and minor attributes are deprecated because they cannot work as originally designed. Use the image or version attributes to deploy specific broker container images. queues configuration element Starting in 7.10, the <queues> configuration element is deprecated. You can use the <addresses> configuration element to create addresses and associated queues. The <queues> configuration element will be removed in a future release. getAddressesSettings method Starting in 7.10, the getAddressesSettings method, which is included in the org.apache.activemq.artemis.core.config.Configuration interface, is deprecated. Use the getAddressSettings method to configure addresses and queues for the broker programmatically. OpenWire protocol Starting in 7.9, the OpenWire protocol is a deprecated feature. If you are creating a new AMQ Broker-based system, use one of the other supported protocols. In the 8.0 release, the OpenWire protocol will be removed from AMQ Broker. Adding users when broker instance is not running Starting in 7.8, you can no longer add users to the broker from the CLI interface while the broker instance is not running. Network pinger Starting in 7.5, network pinging is a deprecated feature. Network pinging cannot protect a broker cluster from network isolation issues that can lead to irrecoverable message loss. This feature will be removed in a future release. Red Hat continues to support existing AMQ Broker deployments that use network pinging. However, Red Hat no longer recommends use of network pinging in new deployments. For guidance on configuring a broker cluster for high availability and to avoid network isolation issues, see Implementing high availability in Configuring AMQ Broker . Hawtio dispatch console plugin Starting in 7.3, AMQ Broker no longer ships with the Hawtio dispatch console plugin, dispatch-hawtio-console.war . Previously, the dispatch console was used to manage AMQ Interconnect. However, AMQ Interconnect now uses its own, standalone web console.
null
https://docs.redhat.com/en/documentation/red_hat_amq_broker/7.11/html/release_notes_for_red_hat_amq_broker_7.11/deprecated_features
Chapter 91. Ehcache Component
Chapter 91. Ehcache Component Available as of Camel version 2.18 The ehcache component enables you to perform caching operations using Ehcache 3 as the Cache Implementation. This component supports producer and event based consumer endpoints. The Cache consumer is an event based consumer and can be used to listen and respond to specific cache activities. Maven users will need to add the following dependency to their pom.xml for this component: <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-ehcache</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency> 91.1. URI format ehcache://cacheName[?options] You can append query options to the URI in the following format, ?option=value&option=#beanRef&... 91.2. Options The Ehcache component supports 7 options, which are listed below. Name Description Default Type configuration (advanced) Sets the global component configuration EhcacheConfiguration cacheManager (common) The cache manager CacheManager cacheManager Configuration (common) The cache manager configuration Configuration cacheConfiguration (common) The default cache configuration to be used to create caches. CacheConfiguration cachesConfigurations (common) A map of caches configurations to be used to create caches. Map cacheConfigurationUri (common) URI pointing to the Ehcache XML configuration file's location String resolveProperty Placeholders (advanced) Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true boolean The Ehcache endpoint is configured using URI syntax: with the following path and query parameters: 91.2.1. Path Parameters (1 parameters): Name Description Default Type cacheName Required the cache name String 91.2.2. Query Parameters (17 parameters): Name Description Default Type cacheManager (common) The cache manager CacheManager cacheManagerConfiguration (common) The cache manager configuration Configuration configurationUri (common) URI pointing to the Ehcache XML configuration file's location String createCacheIfNotExist (common) Configure if a cache need to be created if it does exist or can't be pre-configured. true boolean bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean eventFiring (consumer) Set the delivery mode (synchronous, asynchronous) ASYNCHRONOUS EventFiring eventOrdering (consumer) Set the delivery mode (ordered, unordered) ORDERED EventOrdering eventTypes (consumer) Set the type of events to listen for EVICTED,EXPIRED,REMOVED,CREATED,UPDATED Set exceptionHandler (consumer) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer) Sets the exchange pattern when the consumer creates an exchange. ExchangePattern action (producer) To configure the default cache action. If an action is set in the message header, then the operation from the header takes precedence. 
String key (producer) To configure the default action key. If a key is set in the message header, then the key from the header takes precedence. Object configuration (advanced) The default cache configuration to be used to create caches. CacheConfiguration configurations (advanced) A map of cache configuration to be used to create caches. Map keyType (advanced) The cache key type, default java.lang.Object java.lang.Object String synchronous (advanced) Sets whether synchronous processing should be strictly used, or Camel is allowed to use asynchronous processing (if supported). false boolean valueType (advanced) The cache value type, default java.lang.Object java.lang.Object String 91.3. Spring Boot Auto-Configuration The component supports 25 options, which are listed below. Name Description Default Type camel.component.ehcache.cache-configuration The default cache configuration to be used to create caches. The option is a org.ehcache.config.CacheConfiguration<?,?> type. String camel.component.ehcache.cache-configuration-uri URI pointing to the Ehcache XML configuration file's location String camel.component.ehcache.cache-manager The cache manager. The option is a org.ehcache.CacheManager type. String camel.component.ehcache.cache-manager-configuration The cache manager configuration. The option is a org.ehcache.config.Configuration type. String camel.component.ehcache.caches-configurations A map of caches configurations to be used to create caches. Map camel.component.ehcache.configuration.action To configure the default cache action. If an action is set in the message header, then the operation from the header takes precedence. String camel.component.ehcache.configuration.cache-manager The cache manager CacheManager camel.component.ehcache.configuration.cache-manager-configuration The cache manager configuration Configuration camel.component.ehcache.configuration.configuration The default cache configuration to be used to create caches. CacheConfiguration camel.component.ehcache.configuration.configuration-uri URI pointing to the Ehcache XML configuration file's location String camel.component.ehcache.configuration.configurations A map of cache configuration to be used to create caches. Map camel.component.ehcache.configuration.create-cache-if-not-exist Configure if a cache need to be created if it does exist or can't be pre-configured. true Boolean camel.component.ehcache.configuration.event-firing Set the delivery mode (synchronous, asynchronous) EventFiring camel.component.ehcache.configuration.event-ordering Set the delivery mode (ordered, unordered) EventOrdering camel.component.ehcache.configuration.event-types Set the type of events to listen for Set camel.component.ehcache.configuration.key To configure the default action key. If a key is set in the message header, then the key from the header takes precedence. Object camel.component.ehcache.configuration.key-type The cache key type, default java.lang.Object java.lang.Object String camel.component.ehcache.configuration.value-type The cache value type, default java.lang.Object java.lang.Object String camel.component.ehcache.customizer.cache-configuration.enabled Enable or disable the cache-configuration customizer. true Boolean camel.component.ehcache.customizer.cache-configuration.mode Configure if the cache configurations have be added or they have to replace those already configured on the component. 
CacheConfiguration CustomizerConfigurationUSD Mode camel.component.ehcache.customizer.cache-manager.enabled Enable or disable the cache-manager customizer. true Boolean camel.component.ehcache.customizer.cache-manager.override Configure if the cache manager eventually set on the component should be overridden by the customizer. false Boolean camel.component.ehcache.enabled Enable ehcache component true Boolean camel.component.ehcache.resolve-property-placeholders Whether the component should resolve property placeholders on itself when starting. Only properties which are of String type can use property placeholders. true Boolean camel.component.ehcache.configuration.config-uri (DEPRECATED - Use setConfigurationUri(String) instead.) URI pointing to the Ehcache XML configuration file's location String 91.3.1. Message Headers Camel Header Type Description CamelEhcacheAction String The operation to be performed on the cache, valid options are: * CLEAR * PUT * PUT_ALL * PUT_IF_ABSENT * GET * GET_ALL * REMOVE * REMOVE_ALL * REPLACE CamelEhcacheActionHasResult Boolean Set to true if the action has a result CamelEhcacheActionSucceeded Boolean Set to true if the action succeeded CamelEhcacheKey Object The cache key used for an action CamelEhcacheKeys Set<Object> A list of keys, used in * PUT_ALL * GET_ALL * REMOVE_ALL CamelEhcacheValue Object The value to put in the cache or the result of an operation CamelEhcacheOldValue Object The old value associated with a key for actions like PUT_IF_ABSENT or the Object used for comparison for actions like REPLACE CamelEhcacheEventType EventType The type of event received 91.4. Ehcache based idempotent repository example: CacheManager manager = CacheManagerBuilder.newCacheManager(new XmlConfiguration("ehcache.xml")); EhcacheIdempotentRepository repo = new EhcacheIdempotentRepository(manager, "idempotent-cache"); from("direct:in") .idempotentConsumer(header("messageId"), repo) .to("mock:out"); 91.5. Ehcache based aggregation repository example: public class EhcacheAggregationRepositoryRoutesTest extends CamelTestSupport { private static final String ENDPOINT_MOCK = "mock:result"; private static final String ENDPOINT_DIRECT = "direct:one"; private static final int[] VALUES = generateRandomArrayOfInt(10, 0, 30); private static final int SUM = IntStream.of(VALUES).reduce(0, (a, b) -> a + b); private static final String CORRELATOR = "CORRELATOR"; @EndpointInject(uri = ENDPOINT_MOCK) private MockEndpoint mock; @Produce(uri = ENDPOINT_DIRECT) private ProducerTemplate producer; @Test public void checkAggregationFromOneRoute() throws Exception { mock.expectedMessageCount(VALUES.length); mock.expectedBodiesReceived(SUM); IntStream.of(VALUES).forEach( i -> producer.sendBodyAndHeader(i, CORRELATOR, CORRELATOR) ); mock.assertIsSatisfied(); } private Exchange aggregate(Exchange oldExchange, Exchange newExchange) { if (oldExchange == null) { return newExchange; } else { Integer n = newExchange.getIn().getBody(Integer.class); Integer o = oldExchange.getIn().getBody(Integer.class); Integer v = (o == null ? 0 : o) + (n == null ?
0 : n); oldExchange.getIn().setBody(v, Integer.class); return oldExchange; } } @Override protected RoutesBuilder createRouteBuilder() throws Exception { return new RouteBuilder() { @Override public void configure() throws Exception { from(ENDPOINT_DIRECT) .routeId("AggregatingRouteOne") .aggregate(header(CORRELATOR)) .aggregationRepository(createAggregateRepository()) .aggregationStrategy(EhcacheAggregationRepositoryRoutesTest.this::aggregate) .completionSize(VALUES.length) .to("log:org.apache.camel.component.ehcache.processor.aggregate.level=INFO&showAll=true&mulltiline=true") .to(ENDPOINT_MOCK); } }; } protected EhcacheAggregationRepository createAggregateRepository() throws Exception { CacheManager cacheManager = CacheManagerBuilder.newCacheManager(new XmlConfiguration("ehcache.xml")); cacheManager.init(); EhcacheAggregationRepository repository = new EhcacheAggregationRepository(); repository.setCacheManager(cacheManager); repository.setCacheName("aggregate"); return repository; } }
[ "<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-ehcache</artifactId> <version>x.x.x</version> <!-- use the same version as your Camel core version --> </dependency>", "ehcache://cacheName[?options]", "ehcache:cacheName", "CacheManager manager = CacheManagerBuilder.newCacheManager(new XmlConfiguration(\"ehcache.xml\")); EhcacheIdempotentRepository repo = new EhcacheIdempotentRepository(manager, \"idempotent-cache\"); from(\"direct:in\") .idempotentConsumer(header(\"messageId\"), idempotentRepo) .to(\"mock:out\");", "public class EhcacheAggregationRepositoryRoutesTest extends CamelTestSupport { private static final String ENDPOINT_MOCK = \"mock:result\"; private static final String ENDPOINT_DIRECT = \"direct:one\"; private static final int[] VALUES = generateRandomArrayOfInt(10, 0, 30); private static final int SUM = IntStream.of(VALUES).reduce(0, (a, b) -> a + b); private static final String CORRELATOR = \"CORRELATOR\"; @EndpointInject(uri = ENDPOINT_MOCK) private MockEndpoint mock; @Produce(uri = ENDPOINT_DIRECT) private ProducerTemplate producer; @Test public void checkAggregationFromOneRoute() throws Exception { mock.expectedMessageCount(VALUES.length); mock.expectedBodiesReceived(SUM); IntStream.of(VALUES).forEach( i -> producer.sendBodyAndHeader(i, CORRELATOR, CORRELATOR) ); mock.assertIsSatisfied(); } private Exchange aggregate(Exchange oldExchange, Exchange newExchange) { if (oldExchange == null) { return newExchange; } else { Integer n = newExchange.getIn().getBody(Integer.class); Integer o = oldExchange.getIn().getBody(Integer.class); Integer v = (o == null ? 0 : o) + (n == null ? 0 : n); oldExchange.getIn().setBody(v, Integer.class); return oldExchange; } } @Override protected RoutesBuilder createRouteBuilder() throws Exception { return new RouteBuilder() { @Override public void configure() throws Exception { from(ENDPOINT_DIRECT) .routeId(\"AggregatingRouteOne\") .aggregate(header(CORRELATOR)) .aggregationRepository(createAggregateRepository()) .aggregationStrategy(EhcacheAggregationRepositoryRoutesTest.this::aggregate) .completionSize(VALUES.length) .to(\"log:org.apache.camel.component.ehcache.processor.aggregate.level=INFO&showAll=true&mulltiline=true\") .to(ENDPOINT_MOCK); } }; } protected EhcacheAggregationRepository createAggregateRepository() throws Exception { CacheManager cacheManager = CacheManagerBuilder.newCacheManager(new XmlConfiguration(\"ehcache.xml\")); cacheManager.init(); EhcacheAggregationRepository repository = new EhcacheAggregationRepository(); repository.setCacheManager(cacheManager); repository.setCacheName(\"aggregate\"); return repository; } }" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/ehcache-component
Chapter 24. DHCP
Chapter 24. DHCP The dhcpd daemon is used in Red Hat Enterprise Linux to dynamically deliver and configure Layer 3 TCP/IP details for clients. The dhcp package provides the DHCP server and the dhcpd daemon. Enter the following command to see if the dhcp package is installed: If it is not installed, use the yum utility as root to install it: 24.1. DHCP and SELinux When dhcpd is enabled, it runs confined by default. Confined processes run in their own domains, and are separated from other confined processes. If a confined process is compromised by an attacker, the attacker's access to resources and the possible damage they can do are limited, depending on the SELinux policy configuration. The following example demonstrates dhcpd and related processes running in their own domain. This example assumes the dhcp package is installed and that the dhcpd service has been started: Run the getenforce command to confirm SELinux is running in enforcing mode: The command returns Enforcing when SELinux is running in enforcing mode. Enter the following command as the root user to start dhcpd : Confirm that the service is running. The output should include the information below (only the time stamp will differ): Run the following command to view the dhcpd processes: The SELinux context associated with the dhcpd process is system_u:system_r:dhcpd_t:s0 .
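To see more of the SELinux policy that applies to dhcpd, for example which ports the dhcpd_t domain may bind to and which booleans affect it, you can run commands such as the following (the semanage utility is provided by the policycoreutils-python package):
semanage port -l | grep dhcp
getsebool -a | grep dhcpd
The first command lists the relevant port types, such as dhcpd_port_t, and the second lists the tunable booleans related to dhcpd.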
[ "~]# rpm -q dhcp package dhcp is not installed", "~]# yum install dhcp", "~]USD getenforce Enforcing", "~]# systemctl start dhcpd.service", "~]# systemctl status dhcpd.service dhcpd.service - DHCPv4 Server Daemon Loaded: loaded (/usr/lib/systemd/system/dhcpd.service; disabled) Active: active (running) since Mon 2013-08-05 11:49:07 CEST; 3h 20min ago", "~]USD ps -eZ | grep dhcpd system_u:system_r:dhcpd_t:s0 5483 ? 00:00:00 dhcpd" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/selinux_users_and_administrators_guide/chap-managing_confined_services-dhcp
Chapter 2. Deployment configuration
Chapter 2. Deployment configuration This chapter describes how to configure different aspects of the supported deployments: Kafka clusters Kafka Connect clusters Kafka Connect clusters with Source2Image support Kafka Mirror Maker Kafka Bridge OAuth 2.0 token-based authentication OAuth 2.0 token-based authorization 2.1. Kafka cluster configuration The full schema of the Kafka resource is described in the Section B.2, " Kafka schema reference" . All labels that are applied to the desired Kafka resource will also be applied to the OpenShift resources making up the Kafka cluster. This provides a convenient mechanism for resources to be labeled as required. 2.1.1. Sample Kafka YAML configuration For help in understanding the configuration options available for your Kafka deployment, refer to sample YAML file provided here. The sample shows only some of the possible configuration options, but those that are particularly important include: Resource requests (CPU / Memory) JVM options for maximum and minimum memory allocation Listeners (and authentication) Authentication Storage Rack awareness Metrics apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka metadata: name: my-cluster spec: kafka: replicas: 3 1 version: 1.6 2 resources: 3 requests: memory: 64Gi cpu: "8" limits: 4 memory: 64Gi cpu: "12" jvmOptions: 5 -Xms: 8192m -Xmx: 8192m listeners: 6 - name: plain 7 port: 9092 8 type: internal 9 tls: false 10 configuration: useServiceDnsDomain: true 11 - name: tls port: 9093 type: internal tls: true authentication: 12 type: tls - name: external 13 port: 9094 type: route tls: true configuration: brokerCertChainAndKey: 14 secretName: my-secret certificate: my-certificate.crt key: my-key.key authorization: 15 type: simple config: 16 auto.create.topics.enable: "false" offsets.topic.replication.factor: 3 transaction.state.log.replication.factor: 3 transaction.state.log.min.isr: 2 ssl.cipher.suites: "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" 17 ssl.enabled.protocols: "TLSv1.2" ssl.protocol: "TLSv1.2" storage: 18 type: persistent-claim 19 size: 10000Gi 20 rack: 21 topologyKey: topology.kubernetes.io/zone metrics: 22 lowercaseOutputName: true rules: 23 # Special cases and very specific rules - pattern : kafka.server<type=(.+), name=(.+), clientId=(.+), topic=(.+), partition=(.*)><>Value name: kafka_server_USD1_USD2 type: GAUGE labels: clientId: "USD3" topic: "USD4" partition: "USD5" # ... zookeeper: 24 replicas: 3 resources: requests: memory: 8Gi cpu: "2" limits: memory: 8Gi cpu: "2" jvmOptions: -Xms: 4096m -Xmx: 4096m storage: type: persistent-claim size: 1000Gi metrics: # ... entityOperator: 25 topicOperator: resources: requests: memory: 512Mi cpu: "1" limits: memory: 512Mi cpu: "1" userOperator: resources: requests: memory: 512Mi cpu: "1" limits: memory: 512Mi cpu: "1" kafkaExporter: 26 # ... cruiseControl: 27 # ... 1 Replicas specifies the number of broker nodes . 2 Kafka version, which can be changed by following the upgrade procedure . 3 Resource requests specify the resources to reserve for a given container . 4 Resource limits specify the maximum resources that can be consumed by a container. 5 JVM options can specify the minimum ( -Xms ) and maximum ( -Xmx ) memory allocation for JVM . 6 Listeners configure how clients connect to the Kafka cluster via bootstrap addresses. Listeners are configured as internal or external listeners for connection inside or outside the OpenShift cluster . 7 Name to identify the listener. Must be unique within the Kafka cluster. 8 Port number used by the listener inside Kafka. 
The port number has to be unique within a given Kafka cluster. Allowed port numbers are 9092 and higher with the exception of ports 9404 and 9999, which are already used for Prometheus and JMX. Depending on the listener type, the port number might not be the same as the port number that connects Kafka clients. 9 Listener type specified as internal , or for external listeners, as route , loadbalancer , nodeport or ingress . 10 Enables TLS encryption for each listener. Default is false . TLS encryption is not required for route listeners. 11 Defines whether the fully-qualified DNS names including the cluster service suffix (usually .cluster.local ) are assigned. 12 Listener authentication mechanism specified as mutual TLS, SCRAM-SHA-512 or token-based OAuth 2.0 . 13 External listener configuration specifies how the Kafka cluster is exposed outside OpenShift, such as through a route , loadbalancer or nodeport . 14 Optional configuration for a Kafka listener certificate managed by an external Certificate Authority. The brokerCertChainAndKey property specifies a Secret that holds a server certificate and a private key. Kafka listener certificates can also be configured for TLS listeners. 15 Authorization enables simple, OAUTH 2.0 or OPA authorization on the Kafka broker. Simple authorization uses the AclAuthorizer Kafka plugin. 16 Config specifies the broker configuration. Standard Apache Kafka configuration may be provided, restricted to those properties not managed directly by AMQ Streams . 17 SSL properties for external listeners to run with a specific cipher suite for a TLS version . 18 Storage is configured as ephemeral , persistent-claim or jbod . 19 Storage size for persistent volumes may be increased and additional volumes may be added to JBOD storage . 20 Persistent storage has additional configuration options , such as a storage id and class for dynamic volume provisioning. 21 Rack awareness is configured to spread replicas across different racks . A topology key must match the label of a cluster node. 22 Kafka metrics configuration for use with Prometheus . 23 Kafka rules for exporting metrics to a Grafana dashboard through the JMX Exporter. A set of rules provided with AMQ Streams may be copied to your Kafka resource configuration. 24 ZooKeeper-specific configuration , which contains properties similar to the Kafka configuration. 25 Entity Operator configuration, which specifies the configuration for the Topic Operator and User Operator . 26 Kafka Exporter configuration, which is used to expose data as Prometheus metrics . 27 Cruise Control configuration, which is used to rebalance the Kafka cluster . 2.1.2. Data storage considerations An efficient data storage infrastructure is essential to the optimal performance of AMQ Streams. Block storage is required. File storage, such as NFS, does not work with Kafka. For your block storage, you can choose, for example: Cloud-based block storage solutions, such as Amazon Elastic Block Store (EBS) Local persistent volumes Storage Area Network (SAN) volumes accessed by a protocol such as Fibre Channel or iSCSI Note AMQ Streams does not require OpenShift raw block volumes. 2.1.2.1. File systems It is recommended that you configure your storage system to use the XFS file system. AMQ Streams is also compatible with the ext4 file system, but this might require additional configuration for best results. 2.1.2.2. Apache Kafka and ZooKeeper storage Use separate disks for Apache Kafka and ZooKeeper. 
Three types of data storage are supported: Ephemeral (Recommended for development only) Persistent JBOD (Just a Bunch of Disks, suitable for Kafka only) For more information, see Kafka and ZooKeeper storage . Solid-state drives (SSDs), though not essential, can improve the performance of Kafka in large clusters where data is sent to and received from multiple topics asynchronously. SSDs are particularly effective with ZooKeeper, which requires fast, low latency data access. Note You do not need to provision replicated storage because Kafka and ZooKeeper both have built-in data replication. 2.1.3. Kafka and ZooKeeper storage types As stateful applications, Kafka and ZooKeeper need to store data on disk. AMQ Streams supports three storage types for this data: Ephemeral Persistent JBOD storage Note JBOD storage is supported only for Kafka, not for ZooKeeper. When configuring a Kafka resource, you can specify the type of storage used by the Kafka broker and its corresponding ZooKeeper node. You configure the storage type using the storage property in the following resources: Kafka.spec.kafka Kafka.spec.zookeeper The storage type is configured in the type field. Warning The storage type cannot be changed after a Kafka cluster is deployed. Additional resources For more information about ephemeral storage, see ephemeral storage schema reference . For more information about persistent storage, see persistent storage schema reference . For more information about JBOD storage, see JBOD schema reference . For more information about the schema for Kafka , see Kafka schema reference . 2.1.3.1. Ephemeral storage Ephemeral storage uses the emptyDir volumes to store data. To use ephemeral storage, the type field should be set to ephemeral . Important emptyDir volumes are not persistent and the data stored in them will be lost when the Pod is restarted. After the new pod is started, it has to recover all data from other nodes of the cluster. Ephemeral storage is not suitable for use with single node ZooKeeper clusters and for Kafka topics with replication factor 1, because it will lead to data loss. An example of Ephemeral storage apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka metadata: name: my-cluster spec: kafka: # ... storage: type: ephemeral # ... zookeeper: # ... storage: type: ephemeral # ... 2.1.3.1.1. Log directories The ephemeral volume will be used by the Kafka brokers as log directories mounted into the following path: /var/lib/kafka/data/kafka-log_idx_ Where idx is the Kafka broker pod index. For example /var/lib/kafka/data/kafka-log0 . 2.1.3.2. Persistent storage Persistent storage uses Persistent Volume Claims to provision persistent volumes for storing data. Persistent Volume Claims can be used to provision volumes of many different types, depending on the Storage Class which will provision the volume. The data types which can be used with persistent volume claims include many types of SAN storage as well as Local persistent volumes . To use persistent storage, the type has to be set to persistent-claim . Persistent storage supports additional configuration options: id (optional) Storage identification number. This option is mandatory for storage volumes defined in a JBOD storage declaration. Default is 0 . size (required) Defines the size of the persistent volume claim, for example, "1000Gi". class (optional) The OpenShift Storage Class to use for dynamic volume provisioning. selector (optional) Allows selecting a specific persistent volume to use. 
It contains key:value pairs representing labels for selecting such a volume. deleteClaim (optional) Boolean value which specifies if the Persistent Volume Claim has to be deleted when the cluster is undeployed. Default is false . Warning Increasing the size of persistent volumes in an existing AMQ Streams cluster is only supported in OpenShift versions that support persistent volume resizing. The persistent volume to be resized must use a storage class that supports volume expansion. For other versions of OpenShift and storage classes which do not support volume expansion, you must decide the necessary storage size before deploying the cluster. Decreasing the size of existing persistent volumes is not possible. Example fragment of persistent storage configuration with 1000Gi size # ... storage: type: persistent-claim size: 1000Gi # ... The following example demonstrates the use of a storage class. Example fragment of persistent storage configuration with specific Storage Class # ... storage: type: persistent-claim size: 1Gi class: my-storage-class # ... Finally, a selector can be used to select a specific labeled persistent volume to provide needed features such as an SSD. Example fragment of persistent storage configuration with selector # ... storage: type: persistent-claim size: 1Gi selector: hdd-type: ssd deleteClaim: true # ... 2.1.3.2.1. Storage class overrides You can specify a different storage class for one or more Kafka brokers or ZooKeeper nodes, instead of using the default storage class. This is useful if, for example, storage classes are restricted to different availability zones or data centers. You can use the overrides field for this purpose. In this example, the default storage class is named my-storage-class : Example AMQ Streams cluster using storage class overrides apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka metadata: labels: app: my-cluster name: my-cluster namespace: myproject spec: # ... kafka: replicas: 3 storage: deleteClaim: true size: 100Gi type: persistent-claim class: my-storage-class overrides: - broker: 0 class: my-storage-class-zone-1a - broker: 1 class: my-storage-class-zone-1b - broker: 2 class: my-storage-class-zone-1c # ... zookeeper: replicas: 3 storage: deleteClaim: true size: 100Gi type: persistent-claim class: my-storage-class overrides: - broker: 0 class: my-storage-class-zone-1a - broker: 1 class: my-storage-class-zone-1b - broker: 2 class: my-storage-class-zone-1c # ... As a result of the configured overrides property, the volumes use the following storage classes: The persistent volumes of ZooKeeper node 0 will use my-storage-class-zone-1a . The persistent volumes of ZooKeeper node 1 will use my-storage-class-zone-1b . The persistent volumes of ZooKeeepr node 2 will use my-storage-class-zone-1c . The persistent volumes of Kafka broker 0 will use my-storage-class-zone-1a . The persistent volumes of Kafka broker 1 will use my-storage-class-zone-1b . The persistent volumes of Kafka broker 2 will use my-storage-class-zone-1c . The overrides property is currently used only to override storage class configurations. Overriding other storage configuration fields is not currently supported. Other fields from the storage configuration are currently not supported. 2.1.3.2.2. Persistent Volume Claim naming When persistent storage is used, it creates Persistent Volume Claims with the following names: data- cluster-name -kafka- idx Persistent Volume Claim for the volume used for storing data for the Kafka broker pod idx . 
data- cluster-name -zookeeper- idx Persistent Volume Claim for the volume used for storing data for the ZooKeeper node pod idx . 2.1.3.2.3. Log directories The persistent volume will be used by the Kafka brokers as log directories mounted into the following path: /var/lib/kafka/data/kafka-log_idx_ Where idx is the Kafka broker pod index. For example /var/lib/kafka/data/kafka-log0 . 2.1.3.3. Resizing persistent volumes You can provision increased storage capacity by increasing the size of the persistent volumes used by an existing AMQ Streams cluster. Resizing persistent volumes is supported in clusters that use either a single persistent volume or multiple persistent volumes in a JBOD storage configuration. Note You can increase but not decrease the size of persistent volumes. Decreasing the size of persistent volumes is not currently supported in OpenShift. Prerequisites An OpenShift cluster with support for volume resizing. The Cluster Operator is running. A Kafka cluster using persistent volumes created using a storage class that supports volume expansion. Procedure In a Kafka resource, increase the size of the persistent volume allocated to the Kafka cluster, the ZooKeeper cluster, or both. To increase the volume size allocated to the Kafka cluster, edit the spec.kafka.storage property. To increase the volume size allocated to the ZooKeeper cluster, edit the spec.zookeeper.storage property. For example, to increase the volume size from 1000Gi to 2000Gi : apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka metadata: name: my-cluster spec: kafka: # ... storage: type: persistent-claim size: 2000Gi class: my-storage-class # ... zookeeper: # ... Create or update the resource. Use oc apply : oc apply -f your-file OpenShift increases the capacity of the selected persistent volumes in response to a request from the Cluster Operator. When the resizing is complete, the Cluster Operator restarts all pods that use the resized persistent volumes. This happens automatically. Additional resources For more information about resizing persistent volumes in OpenShift, see Resizing Persistent Volumes using Kubernetes . 2.1.3.4. JBOD storage overview You can configure AMQ Streams to use JBOD, a data storage configuration of multiple disks or volumes. JBOD is one approach to providing increased data storage for Kafka brokers. It can also improve performance. A JBOD configuration is described by one or more volumes, each of which can be either ephemeral or persistent . The rules and constraints for JBOD volume declarations are the same as those for ephemeral and persistent storage. For example, you cannot change the size of a persistent storage volume after it has been provisioned. 2.1.3.4.1. JBOD configuration To use JBOD with AMQ Streams, the storage type must be set to jbod . The volumes property allows you to describe the disks that make up your JBOD storage array or configuration. The following fragment shows an example JBOD configuration: # ... storage: type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false - id: 1 type: persistent-claim size: 100Gi deleteClaim: false # ... The ids cannot be changed once the JBOD volumes are created. Users can add or remove volumes from the JBOD configuration. 2.1.3.4.2. 
JBOD and Persistent Volume Claims When persistent storage is used to declare JBOD volumes, the naming scheme of the resulting Persistent Volume Claims is as follows: data- id - cluster-name -kafka- idx Where id is the ID of the volume used for storing data for Kafka broker pod idx . 2.1.3.4.3. Log directories The JBOD volumes will be used by the Kafka brokers as log directories mounted into the following path: /var/lib/kafka/data- id /kafka-log_idx_ Where id is the ID of the volume used for storing data for Kafka broker pod idx . For example /var/lib/kafka/data-0/kafka-log0 . 2.1.3.5. Adding volumes to JBOD storage This procedure describes how to add volumes to a Kafka cluster configured to use JBOD storage. It cannot be applied to Kafka clusters configured to use any other storage type. Note When adding a new volume under an id which was already used in the past and removed, you have to make sure that the previously used PersistentVolumeClaims have been deleted. Prerequisites An OpenShift cluster A running Cluster Operator A Kafka cluster with JBOD storage Procedure Edit the spec.kafka.storage.volumes property in the Kafka resource. Add the new volumes to the volumes array. For example, add the new volume with id 2 : apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka metadata: name: my-cluster spec: kafka: # ... storage: type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false - id: 1 type: persistent-claim size: 100Gi deleteClaim: false - id: 2 type: persistent-claim size: 100Gi deleteClaim: false # ... zookeeper: # ... Create or update the resource. This can be done using oc apply : oc apply -f KAFKA-CONFIG-FILE Create new topics or reassign existing partitions to the new disks. Additional resources For more information about reassigning topics, see Section 2.1.24.2, "Partition reassignment" . 2.1.3.6. Removing volumes from JBOD storage This procedure describes how to remove volumes from a Kafka cluster configured to use JBOD storage. It cannot be applied to Kafka clusters configured to use any other storage type. The JBOD storage always has to contain at least one volume. Important To avoid data loss, you have to move all partitions before removing the volumes. Prerequisites An OpenShift cluster A running Cluster Operator A Kafka cluster with JBOD storage with two or more volumes Procedure Reassign all partitions from the disks that you are going to remove. Any data in partitions still assigned to the disks which are going to be removed might be lost. Edit the spec.kafka.storage.volumes property in the Kafka resource. Remove one or more volumes from the volumes array. For example, remove the volumes with ids 1 and 2 : apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka metadata: name: my-cluster spec: kafka: # ... storage: type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false # ... zookeeper: # ... Create or update the resource. This can be done using oc apply : oc apply -f your-file Additional resources For more information about reassigning topics, see Section 2.1.24.2, "Partition reassignment" . 2.1.4. Kafka broker replicas A Kafka cluster can run with many brokers. You can configure the number of brokers used for the Kafka cluster in Kafka.spec.kafka.replicas . The best number of brokers for your cluster has to be determined based on your specific use case. 2.1.4.1. Configuring the number of broker nodes This procedure describes how to configure the number of Kafka broker nodes in a new cluster.
It only applies to new clusters with no partitions. If your cluster already has topics defined, see Section 2.1.24, "Scaling clusters" . Prerequisites An OpenShift cluster A running Cluster Operator A Kafka cluster with no topics defined yet Procedure Edit the replicas property in the Kafka resource. For example: apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka metadata: name: my-cluster spec: kafka: # ... replicas: 3 # ... zookeeper: # ... Create or update the resource. This can be done using oc apply : oc apply -f your-file Additional resources If your cluster already has topics defined, see Section 2.1.24, "Scaling clusters" . 2.1.5. Kafka broker configuration AMQ Streams allows you to customize the configuration of the Kafka brokers in your Kafka cluster. You can specify and configure most of the options listed in the "Broker Configs" section of the Apache Kafka documentation . You cannot configure options that are related to the following areas: Security (Encryption, Authentication, and Authorization) Listener configuration Broker ID configuration Configuration of log data directories Inter-broker communication ZooKeeper connectivity These options are automatically configured by AMQ Streams. For more information on broker configuration, see the KafkaClusterSpec schema . Listener configuration You configure listeners for connecting to Kafka brokers. For more information on configuring listeners, see Listener configuration Authorizing access to Kafka You can configure your Kafka cluster to allow or decline actions executed by users. For more information on securing access to Kafka brokers, see Managing access to Kafka . 2.1.5.1. Configuring Kafka brokers You can configure an existing Kafka broker, or create a new Kafka broker with a specified configuration. Prerequisites An OpenShift cluster is available. The Cluster Operator is running. Procedure Open the YAML configuration file that contains the Kafka resource specifying the cluster deployment. In the spec.kafka.config property in the Kafka resource, enter one or more Kafka configuration settings. For example: apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka spec: kafka: # ... config: default.replication.factor: 3 offsets.topic.replication.factor: 3 transaction.state.log.replication.factor: 3 transaction.state.log.min.isr: 1 # ... zookeeper: # ... Apply the new configuration to create or update the resource. Use oc apply : oc apply -f kafka.yaml where kafka.yaml is the YAML configuration file for the resource that you want to configure; for example, kafka-persistent.yaml . 2.1.6. Listener configuration Listeners are used to connect to Kafka brokers. AMQ Streams provides a generic GenericKafkaListener schema with properties to configure listeners through the Kafka resource. The GenericKafkaListener provides a flexible approach to listener configuration. You can specify properties to configure internal listeners for connecting within the OpenShift cluster, or external listeners for connecting outside the OpenShift cluster. Generic listener configuration Each listener is defined as an array in the Kafka resource . For more information on listener configuration, see the GenericKafkaListener schema reference . Generic listener configuration replaces the approach to listener configuration using the KafkaListeners schema reference , which is deprecated . However, you can convert the old format into the new format with backwards compatibility. 
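To illustrate the generic format, a minimal listeners array might look like the following sketch; the listener names and port numbers here are arbitrary examples rather than required values, and each listener simply needs a unique name and port:

apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
      - name: external
        port: 9094
        type: route
        tls: true
    # ...
  zookeeper:
    # ...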
The KafkaListeners schema uses sub-properties for plain , tls and external listeners, with fixed ports for each. Because of the limits inherent in the architecture of the schema, it is only possible to configure three listeners, with configuration options limited to the type of listener. With the GenericKafkaListener schema, you can configure as many listeners as required, as long as their names and ports are unique. You might want to configure multiple external listeners, for example, to handle access from networks that require different authentication mechanisms. Or you might need to join your OpenShift network to an outside network. In which case, you can configure internal listeners (using the useServiceDnsDomain property) so that the OpenShift service DNS domain (typically .cluster.local ) is not used. Configuring listeners to secure access to Kafka brokers You can configure listeners for secure connection using authentication. For more information on securing access to Kafka brokers, see Managing access to Kafka . Configuring external listeners for client access outside OpenShift You can configure external listeners for client access outside an OpenShift environment using a specified connection mechanism, such as a loadbalancer. For more information on the configuration options for connecting an external client, see Configuring external listeners . Listener certificates You can provide your own server certificates, called Kafka listener certificates , for TLS listeners or external listeners which have TLS encryption enabled. For more information, see Kafka listener certificates . 2.1.7. ZooKeeper replicas ZooKeeper clusters or ensembles usually run with an odd number of nodes, typically three, five, or seven. The majority of nodes must be available in order to maintain an effective quorum. If the ZooKeeper cluster loses its quorum, it will stop responding to clients and the Kafka brokers will stop working. Having a stable and highly available ZooKeeper cluster is crucial for AMQ Streams. Three-node cluster A three-node ZooKeeper cluster requires at least two nodes to be up and running in order to maintain the quorum. It can tolerate only one node being unavailable. Five-node cluster A five-node ZooKeeper cluster requires at least three nodes to be up and running in order to maintain the quorum. It can tolerate two nodes being unavailable. Seven-node cluster A seven-node ZooKeeper cluster requires at least four nodes to be up and running in order to maintain the quorum. It can tolerate three nodes being unavailable. Note For development purposes, it is also possible to run ZooKeeper with a single node. Having more nodes does not necessarily mean better performance, as the costs to maintain the quorum will rise with the number of nodes in the cluster. Depending on your availability requirements, you can decide for the number of nodes to use. 2.1.7.1. Number of ZooKeeper nodes The number of ZooKeeper nodes can be configured using the replicas property in Kafka.spec.zookeeper . An example showing replicas configuration apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka metadata: name: my-cluster spec: kafka: # ... zookeeper: # ... replicas: 3 # ... 2.1.7.2. Changing the number of ZooKeeper replicas Prerequisites An OpenShift cluster is available. The Cluster Operator is running. Procedure Open the YAML configuration file that contains the Kafka resource specifying the cluster deployment. 
In the spec.zookeeper.replicas property in the Kafka resource, enter the number of replicated ZooKeeper servers. For example: apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka metadata: name: my-cluster spec: kafka: # ... zookeeper: # ... replicas: 3 # ... Apply the new configuration to create or update the resource. Use oc apply : oc apply -f kafka.yaml where kafka.yaml is the YAML configuration file for the resource that you want to configure; for example, kafka-persistent.yaml . 2.1.8. ZooKeeper configuration AMQ Streams allows you to customize the configuration of Apache ZooKeeper nodes. You can specify and configure most of the options listed in the ZooKeeper documentation . Options which cannot be configured are those related to the following areas: Security (Encryption, Authentication, and Authorization) Listener configuration Configuration of data directories ZooKeeper cluster composition These options are automatically configured by AMQ Streams. 2.1.8.1. ZooKeeper configuration ZooKeeper nodes are configured using the config property in Kafka.spec.zookeeper . This property contains the ZooKeeper configuration options as keys. The values can be described using one of the following JSON types: String Number Boolean Users can specify and configure the options listed in ZooKeeper documentation with the exception of those options which are managed directly by AMQ Streams. Specifically, all configuration options with keys equal to or starting with one of the following strings are forbidden: server. dataDir dataLogDir clientPort authProvider quorum.auth requireClientAuthScheme When one of the forbidden options is present in the config property, it is ignored and a warning message is printed to the Cluster Operator log file. All other options are passed to ZooKeeper. Important The Cluster Operator does not validate keys or values in the provided config object. When invalid configuration is provided, the ZooKeeper cluster might not start or might become unstable. In such cases, the configuration in the Kafka.spec.zookeeper.config object should be fixed and the Cluster Operator will roll out the new configuration to all ZooKeeper nodes. Selected options have default values: timeTick with default value 2000 initLimit with default value 5 syncLimit with default value 2 autopurge.purgeInterval with default value 1 These options will be automatically configured when they are not present in the Kafka.spec.zookeeper.config property. Use the three allowed ssl configuration options for client connection using a specific cipher suite for a TLS version. A cipher suite combines algorithms for secure connection and data transfer. Example ZooKeeper configuration apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka spec: kafka: # ... zookeeper: # ... config: autopurge.snapRetainCount: 3 autopurge.purgeInterval: 1 ssl.cipher.suites: "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" 1 ssl.enabled.protocols: "TLSv1.2" 2 ssl.protocol: "TLSv1.2" 3 # ... 1 The cipher suite for TLS using a combination of ECDHE key exchange mechanism, RSA authentication algorithm, AES bulk encyption algorithm and SHA384 MAC algorithm. 2 The SSl protocol TLSv1.2 is enabled. 3 Specifies the TLSv1.2 protocol to generate the SSL context. Allowed values are TLSv1.1 and TLSv1.2 . 2.1.8.2. Configuring ZooKeeper Prerequisites An OpenShift cluster is available. The Cluster Operator is running. Procedure Open the YAML configuration file that contains the Kafka resource specifying the cluster deployment. 
In the spec.zookeeper.config property in the Kafka resource, enter one or more ZooKeeper configuration settings. For example: apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka spec: kafka: # ... zookeeper: # ... config: autopurge.snapRetainCount: 3 autopurge.purgeInterval: 1 # ... Apply the new configuration to create or update the resource. Use oc apply : oc apply -f kafka.yaml where kafka.yaml is the YAML configuration file for the resource that you want to configure; for example, kafka-persistent.yaml . 2.1.9. ZooKeeper connection ZooKeeper services are secured with encryption and authentication and are not intended to be used by external applications that are not part of AMQ Streams. However, if you want to use Kafka CLI tools that require a connection to ZooKeeper, you can use a terminal inside a ZooKeeper container and connect to localhost:12181 as the ZooKeeper address. 2.1.9.1. Connecting to ZooKeeper from a terminal Most Kafka CLI tools can connect directly to Kafka, so under normal circumstances you should not need to connect to ZooKeeper. If it is needed, you can follow this procedure. Open a terminal inside a ZooKeeper container to use Kafka CLI tools that require a ZooKeeper connection. Prerequisites An OpenShift cluster is available. A Kafka cluster is running. The Cluster Operator is running. Procedure Open the terminal using the OpenShift console or run the exec command from your CLI. For example: oc exec -it my-cluster-zookeeper-0 -- bin/kafka-topics.sh --list --zookeeper localhost:12181 Be sure to use localhost:12181 . You can now run Kafka commands against ZooKeeper. 2.1.10. Entity Operator The Entity Operator is responsible for managing Kafka-related entities in a running Kafka cluster. The Entity Operator comprises the: Topic Operator to manage Kafka topics User Operator to manage Kafka users Through Kafka resource configuration, the Cluster Operator can deploy the Entity Operator, including one or both operators, when deploying a Kafka cluster. Note When deployed, the Entity Operator contains the operators according to the deployment configuration. The operators are automatically configured to manage the topics and users of the Kafka cluster. 2.1.10.1. Entity Operator configuration properties Use the entityOperator property in Kafka.spec to configure the Entity Operator. The entityOperator property supports several sub-properties: tlsSidecar topicOperator userOperator template The tlsSidecar property contains the configuration of the TLS sidecar container, which is used to communicate with ZooKeeper. For more information on configuring the TLS sidecar, see Section 2.1.19, "TLS sidecar" . The template property contains the configuration of the Entity Operator pod, such as labels, annotations, affinity, and tolerations. For more information on configuring templates, see Section 2.6, "Customizing OpenShift resources" . The topicOperator property contains the configuration of the Topic Operator. When this option is missing, the Entity Operator is deployed without the Topic Operator. The userOperator property contains the configuration of the User Operator. When this option is missing, the Entity Operator is deployed without the User Operator. For more information on the properties to configure the Entity Operator, see the EntityUserOperatorSpec schema reference . Example of basic configuration enabling both operators apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka metadata: name: my-cluster spec: kafka: # ... zookeeper: # ...
entityOperator: topicOperator: {} userOperator: {} If an empty object ( {} ) is used for the topicOperator and userOperator , all properties use their default values. When both topicOperator and userOperator properties are missing, the Entity Operator is not deployed. 2.1.10.2. Topic Operator configuration properties Topic Operator deployment can be configured using additional options inside the topicOperator object. The following properties are supported: watchedNamespace The OpenShift namespace in which the topic operator watches for KafkaTopics . Default is the namespace where the Kafka cluster is deployed. reconciliationIntervalSeconds The interval between periodic reconciliations in seconds. Default 90 . zookeeperSessionTimeoutSeconds The ZooKeeper session timeout in seconds. Default 20 . topicMetadataMaxAttempts The number of attempts at getting topic metadata from Kafka. The time between each attempt is defined as an exponential back-off. Consider increasing this value when topic creation could take more time due to the number of partitions or replicas. Default 6 . image The image property can be used to configure the container image which will be used. For more details about configuring custom container images, see Section 2.1.18, "Container images" . resources The resources property configures the amount of resources allocated to the Topic Operator. For more details about resource request and limit configuration, see Section 2.1.11, "CPU and memory resources" . logging The logging property configures the logging of the Topic Operator. For more details, see Section 2.1.10.4, "Operator loggers" . Example of Topic Operator configuration apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka metadata: name: my-cluster spec: kafka: # ... zookeeper: # ... entityOperator: # ... topicOperator: watchedNamespace: my-topic-namespace reconciliationIntervalSeconds: 60 # ... 2.1.10.3. User Operator configuration properties User Operator deployment can be configured using additional options inside the userOperator object. The following properties are supported: watchedNamespace The OpenShift namespace in which the user operator watches for KafkaUsers . Default is the namespace where the Kafka cluster is deployed. reconciliationIntervalSeconds The interval between periodic reconciliations in seconds. Default 120 . zookeeperSessionTimeoutSeconds The ZooKeeper session timeout in seconds. Default 6 . image The image property can be used to configure the container image which will be used. For more details about configuring custom container images, see Section 2.1.18, "Container images" . resources The resources property configures the amount of resources allocated to the User Operator. For more details about resource request and limit configuration, see Section 2.1.11, "CPU and memory resources" . logging The logging property configures the logging of the User Operator. For more details, see Section 2.1.10.4, "Operator loggers" . Example of User Operator configuration apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka metadata: name: my-cluster spec: kafka: # ... zookeeper: # ... entityOperator: # ... userOperator: watchedNamespace: my-user-namespace reconciliationIntervalSeconds: 60 # ... 2.1.10.4. Operator loggers The Topic Operator and User Operator have a configurable logger: rootLogger.level The operators use the Apache log4j2 logger implementation. Use the logging property in the Kafka resource to configure loggers and logger levels. 
You can set the log levels by specifying the logger and level directly (inline) or use a custom (external) ConfigMap. If a ConfigMap is used, you set logging.name property to the name of the ConfigMap containing the external logging configuration. Inside the ConfigMap, the logging configuration is described using log4j2.properties . Here we see examples of inline and external logging. Inline logging apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka metadata: name: my-cluster spec: kafka: # ... zookeeper: # ... entityOperator: # ... topicOperator: watchedNamespace: my-topic-namespace reconciliationIntervalSeconds: 60 logging: type: inline loggers: rootLogger.level: INFO # ... userOperator: watchedNamespace: my-topic-namespace reconciliationIntervalSeconds: 60 logging: type: inline loggers: rootLogger.level: INFO # ... External logging apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka metadata: name: my-cluster spec: kafka: # ... zookeeper: # ... entityOperator: # ... topicOperator: watchedNamespace: my-topic-namespace reconciliationIntervalSeconds: 60 logging: type: external name: customConfigMap # ... Additional resources Garbage collector (GC) logging can also be enabled (or disabled). For more information about GC logging, see Section 2.1.17.1, "JVM configuration" For more information about log levels, see Apache logging services . 2.1.10.5. Configuring the Entity Operator Prerequisites An OpenShift cluster A running Cluster Operator Procedure Edit the entityOperator property in the Kafka resource. For example: apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka metadata: name: my-cluster spec: kafka: # ... zookeeper: # ... entityOperator: topicOperator: watchedNamespace: my-topic-namespace reconciliationIntervalSeconds: 60 userOperator: watchedNamespace: my-user-namespace reconciliationIntervalSeconds: 60 Create or update the resource. This can be done using oc apply : oc apply -f your-file 2.1.11. CPU and memory resources For every deployed container, AMQ Streams allows you to request specific resources and define the maximum consumption of those resources. AMQ Streams supports two types of resources: CPU Memory AMQ Streams uses the OpenShift syntax for specifying CPU and memory resources. 2.1.11.1. Resource limits and requests Resource limits and requests are configured using the resources property in the following resources: Kafka.spec.kafka Kafka.spec.zookeeper Kafka.spec.entityOperator.topicOperator Kafka.spec.entityOperator.userOperator Kafka.spec.entityOperator.tlsSidecar Kafka.spec.kafkaExporter KafkaConnect.spec KafkaConnectS2I.spec KafkaBridge.spec Additional resources For more information about managing computing resources on OpenShift, see Managing Compute Resources for Containers . 2.1.11.1.1. Resource requests Requests specify the resources to reserve for a given container. Reserving the resources ensures that they are always available. Important If the resource request is for more than the available free resources in the OpenShift cluster, the pod is not scheduled. Resources requests are specified in the requests property. Resources requests currently supported by AMQ Streams: cpu memory A request may be configured for one or more supported resources. Example resource request configuration with all resources # ... resources: requests: cpu: 12 memory: 64Gi # ... 2.1.11.1.2. Resource limits Limits specify the maximum resources that can be consumed by a given container. The limit is not reserved and might not always be available. 
A container can use the resources up to the limit only when they are available. Resource limits should be always higher than the resource requests. Resource limits are specified in the limits property. Resource limits currently supported by AMQ Streams: cpu memory A resource may be configured for one or more supported limits. Example resource limits configuration # ... resources: limits: cpu: 12 memory: 64Gi # ... 2.1.11.1.3. Supported CPU formats CPU requests and limits are supported in the following formats: Number of CPU cores as integer ( 5 CPU core) or decimal ( 2.5 CPU core). Number or millicpus / millicores ( 100m ) where 1000 millicores is the same 1 CPU core. Example CPU units # ... resources: requests: cpu: 500m limits: cpu: 2.5 # ... Note The computing power of 1 CPU core may differ depending on the platform where OpenShift is deployed. Additional resources For more information on CPU specification, see the Meaning of CPU . 2.1.11.1.4. Supported memory formats Memory requests and limits are specified in megabytes, gigabytes, mebibytes, and gibibytes. To specify memory in megabytes, use the M suffix. For example 1000M . To specify memory in gigabytes, use the G suffix. For example 1G . To specify memory in mebibytes, use the Mi suffix. For example 1000Mi . To specify memory in gibibytes, use the Gi suffix. For example 1Gi . An example of using different memory units # ... resources: requests: memory: 512Mi limits: memory: 2Gi # ... Additional resources For more details about memory specification and additional supported units, see Meaning of memory . 2.1.11.2. Configuring resource requests and limits Prerequisites An OpenShift cluster A running Cluster Operator Procedure Edit the resources property in the resource specifying the cluster deployment. For example: apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka spec: kafka: # ... resources: requests: cpu: "8" memory: 64Gi limits: cpu: "12" memory: 128Gi # ... zookeeper: # ... Create or update the resource. This can be done using oc apply : oc apply -f your-file Additional resources For more information about the schema, see ResourceRequirements API reference . 2.1.12. Kafka loggers Kafka has its own configurable loggers: log4j.logger.org.I0Itec.zkclient.ZkClient log4j.logger.org.apache.zookeeper log4j.logger.kafka log4j.logger.org.apache.kafka log4j.logger.kafka.request.logger log4j.logger.kafka.network.Processor log4j.logger.kafka.server.KafkaApis log4j.logger.kafka.network.RequestChannelUSD log4j.logger.kafka.controller log4j.logger.kafka.log.LogCleaner log4j.logger.state.change.logger log4j.logger.kafka.authorizer.logger ZooKeeper also has a configurable logger: zookeeper.root.logger Kafka and ZooKeeper use the Apache log4j logger implementation. Operators use the Apache log4j2 logger implementation, so the logging configuration is described inside the ConfigMap using log4j2.properties . For more information, see Section 2.1.10.4, "Operator loggers" . Use the logging property to configure loggers and logger levels. You can set the log levels by specifying the logger and level directly (inline) or use a custom (external) ConfigMap. If a ConfigMap is used, you set logging.name property to the name of the ConfigMap containing the external logging configuration. Inside the ConfigMap, the logging configuration is described using log4j.properties . Here we see examples of inline and external logging. Inline logging apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka spec: # ... kafka: # ... 
logging: type: inline loggers: kafka.root.logger.level: "INFO" # ... zookeeper: # ... logging: type: inline loggers: zookeeper.root.logger: "INFO" # ... entityOperator: # ... topicOperator: # ... logging: type: inline loggers: rootLogger.level: INFO # ... userOperator: # ... logging: type: inline loggers: rootLogger.level: INFO # ... External logging apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka spec: # ... logging: type: external name: customConfigMap # ... Changes to both external and inline logging levels will be applied to Kafka brokers without a restart. Additional resources Garbage collector (GC) logging can also be enabled (or disabled). For more information on garbage collection, see Section 2.1.17.1, "JVM configuration" For more information about log levels, see Apache logging services . 2.1.13. Kafka rack awareness The rack awareness feature in AMQ Streams helps to spread the Kafka broker pods and Kafka topic replicas across different racks. Enabling rack awareness helps to improve availability of Kafka brokers and the topics they are hosting. Note "Rack" might represent an availability zone, data center, or an actual rack in your data center. 2.1.13.1. Configuring rack awareness in Kafka brokers Kafka rack awareness can be configured in the rack property of Kafka.spec.kafka . The rack object has one mandatory field named topologyKey . This key needs to match one of the labels assigned to the OpenShift cluster nodes. The label is used by OpenShift when scheduling the Kafka broker pods to nodes. If the OpenShift cluster is running on a cloud provider platform, that label should represent the availability zone where the node is running. Usually, the nodes are labeled with topology.kubernetes.io/zone label (or failure-domain.beta.kubernetes.io/zone on older OpenShift versions) that can be used as the topologyKey value. For more information about OpenShift node labels, see Well-Known Labels, Annotations and Taints . This has the effect of spreading the broker pods across zones, and also setting the brokers' broker.rack configuration parameter inside Kafka broker. Prerequisites An OpenShift cluster A running Cluster Operator Procedure Consult your OpenShift administrator regarding the node label that represents the zone / rack into which the node is deployed. Edit the rack property in the Kafka resource using the label as the topology key. apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka metadata: name: my-cluster spec: kafka: # ... rack: topologyKey: topology.kubernetes.io/zone # ... Create or update the resource. This can be done using oc apply : oc apply -f your-file Additional resources For information about Configuring init container image for Kafka rack awareness, see Section 2.1.18, "Container images" . 2.1.14. Healthchecks Healthchecks are periodical tests which verify the health of an application. When a Healthcheck probe fails, OpenShift assumes that the application is not healthy and attempts to fix it. OpenShift supports two types of Healthcheck probes: Liveness probes Readiness probes For more details about the probes, see Configure Liveness and Readiness Probes . Both types of probes are used in AMQ Streams components. Users can configure selected options for liveness and readiness probes. 2.1.14.1. 
Healthcheck configurations Liveness and readiness probes can be configured using the livenessProbe and readinessProbe properties in following resources: Kafka.spec.kafka Kafka.spec.zookeeper Kafka.spec.entityOperator.tlsSidecar Kafka.spec.entityOperator.topicOperator Kafka.spec.entityOperator.userOperator Kafka.spec.kafkaExporter KafkaConnect.spec KafkaConnectS2I.spec KafkaMirrorMaker.spec KafkaBridge.spec Both livenessProbe and readinessProbe support the following options: initialDelaySeconds timeoutSeconds periodSeconds successThreshold failureThreshold For more information about the livenessProbe and readinessProbe options, see Section B.45, " Probe schema reference" . An example of liveness and readiness probe configuration # ... readinessProbe: initialDelaySeconds: 15 timeoutSeconds: 5 livenessProbe: initialDelaySeconds: 15 timeoutSeconds: 5 # ... 2.1.14.2. Configuring healthchecks Prerequisites An OpenShift cluster A running Cluster Operator Procedure Edit the livenessProbe or readinessProbe property in the Kafka resource. For example: apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka metadata: name: my-cluster spec: kafka: # ... readinessProbe: initialDelaySeconds: 15 timeoutSeconds: 5 livenessProbe: initialDelaySeconds: 15 timeoutSeconds: 5 # ... zookeeper: # ... Create or update the resource. This can be done using oc apply : oc apply -f your-file 2.1.15. Prometheus metrics AMQ Streams supports Prometheus metrics using Prometheus JMX exporter to convert the JMX metrics supported by Apache Kafka and ZooKeeper to Prometheus metrics. When metrics are enabled, they are exposed on port 9404. For more information about setting up and deploying Prometheus and Grafana, see Introducing Metrics to Kafka in the Deploying and Upgrading AMQ Streams on OpenShift guide. 2.1.15.1. Metrics configuration Prometheus metrics are enabled by configuring the metrics property in following resources: Kafka.spec.kafka Kafka.spec.zookeeper KafkaConnect.spec KafkaConnectS2I.spec When the metrics property is not defined in the resource, the Prometheus metrics will be disabled. To enable Prometheus metrics export without any further configuration, you can set it to an empty object ( {} ). Example of enabling metrics without any further configuration apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka metadata: name: my-cluster spec: kafka: # ... metrics: {} # ... zookeeper: # ... The metrics property might contain additional configuration for the Prometheus JMX exporter . Example of enabling metrics with additional Prometheus JMX Exporter configuration apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka metadata: name: my-cluster spec: kafka: # ... metrics: lowercaseOutputName: true rules: - pattern: "kafka.server<type=(.+), name=(.+)PerSec\\w*><>Count" name: "kafka_server_USD1_USD2_total" - pattern: "kafka.server<type=(.+), name=(.+)PerSec\\w*, topic=(.+)><>Count" name: "kafka_server_USD1_USD2_total" labels: topic: "USD3" # ... zookeeper: # ... 2.1.15.2. Configuring Prometheus metrics Prerequisites An OpenShift cluster A running Cluster Operator Procedure Edit the metrics property in the Kafka resource. For example: apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka metadata: name: my-cluster spec: kafka: # ... zookeeper: # ... metrics: lowercaseOutputName: true # ... Create or update the resource. This can be done using oc apply : oc apply -f your-file 2.1.16. JMX Options AMQ Streams supports obtaining JMX metrics from the Kafka brokers by opening a JMX port on 9999. 
You can obtain various metrics about each Kafka broker, for example, usage data such as the BytesPerSecond value or the request rate of the network of the broker. AMQ Streams supports opening a password and username protected JMX port or a non-protected JMX port. 2.1.16.1. Configuring JMX options Prerequisites An OpenShift cluster A running Cluster Operator You can configure JMX options by using the jmxOptions property in the following resources: Kafka.spec.kafka You can configure username and password protection for the JMX port that is opened on the Kafka brokers. Securing the JMX Port You can secure the JMX port to prevent unauthorized pods from accessing the port. Currently the JMX port can only be secured using a username and password. To enable security for the JMX port, set the type parameter in the authentication field to password .: apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka metadata: name: my-cluster spec: kafka: # ... jmxOptions: authentication: type: "password" # ... zookeeper: # ... This allows you to deploy a pod internally into a cluster and obtain JMX metrics by using the headless service and specifying which broker you want to address. To get JMX metrics from broker 0 we address the headless service appending broker 0 in front of the headless service: " <cluster-name> -kafka-0- <cluster-name> - <headless-service-name> " If the JMX port is secured, you can get the username and password by referencing them from the JMX secret in the deployment of your pod. Using an open JMX port To disable security for the JMX port, do not fill in the authentication field apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka metadata: name: my-cluster spec: kafka: # ... jmxOptions: {} # ... zookeeper: # ... This will just open the JMX Port on the headless service and you can follow a similar approach as described above to deploy a pod into the cluster. The only difference is that any pod will be able to read from the JMX port. 2.1.17. JVM Options The following components of AMQ Streams run inside a Virtual Machine (VM): Apache Kafka Apache ZooKeeper Apache Kafka Connect Apache Kafka MirrorMaker AMQ Streams Kafka Bridge JVM configuration options optimize the performance for different platforms and architectures. AMQ Streams allows you to configure some of these options. 2.1.17.1. JVM configuration Use the jvmOptions property to configure supported options for the JVM on which the component is running. Supported JVM options help to optimize performance for different platforms and architectures. 2.1.17.2. Configuring JVM options Prerequisites An OpenShift cluster A running Cluster Operator Procedure Edit the jvmOptions property in the Kafka , KafkaConnect , KafkaConnectS2I , KafkaMirrorMaker , or KafkaBridge resource. For example: apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka metadata: name: my-cluster spec: kafka: # ... jvmOptions: "-Xmx": "8g" "-Xms": "8g" # ... zookeeper: # ... Create or update the resource. This can be done using oc apply : oc apply -f your-file 2.1.18. Container images AMQ Streams allows you to configure container images which will be used for its components. Overriding container images is recommended only in special situations, where you need to use a different container registry. For example, because your network does not allow access to the container repository used by AMQ Streams. In such a case, you should either copy the AMQ Streams images or build them from the source. If the configured image is not compatible with AMQ Streams images, it might not work properly. 
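If you do need to make the images available from your own registry, the usual approach is to pull, re-tag, and push each required image. The following is only a sketch: the image name and registry hosts are placeholders rather than real AMQ Streams image coordinates, so substitute the images listed in your installation files and your own registry address:

podman pull <source-registry>/<amq-streams-image>:<tag>
podman tag <source-registry>/<amq-streams-image>:<tag> my-registry.example.com/<amq-streams-image>:<tag>
podman push my-registry.example.com/<amq-streams-image>:<tag>

After the images are pushed, reference them through the image properties described in the following sections.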
2.1.18.1. Container image configurations Use the image property to specify which container image to use . Warning Overriding container images is recommended only in special situations. 2.1.18.2. Configuring container images Prerequisites An OpenShift cluster A running Cluster Operator Procedure Edit the image property in the Kafka resource. For example: apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka metadata: name: my-cluster spec: kafka: # ... image: my-org/my-image:latest # ... zookeeper: # ... Create or update the resource. This can be done using oc apply : oc apply -f your-file 2.1.19. TLS sidecar A sidecar is a container that runs in a pod but serves a supporting purpose. In AMQ Streams, the TLS sidecar uses TLS to encrypt and decrypt all communication between the various components and ZooKeeper. The TLS sidecar is used in: Entity Operator Cruise Control 2.1.19.1. TLS sidecar configuration The TLS sidecar can be configured using the tlsSidecar property in: Kafka.spec.kafka Kafka.spec.zookeeper Kafka.spec.entityOperator The TLS sidecar supports the following additional options: image resources logLevel readinessProbe livenessProbe The resources property can be used to specify the memory and CPU resources allocated for the TLS sidecar. The image property can be used to configure the container image which will be used. For more details about configuring custom container images, see Section 2.1.18, "Container images" . The logLevel property is used to specify the logging level. The following logging levels are supported: emerg alert crit err warning notice info debug The default value is notice . For more information about configuring the readinessProbe and livenessProbe properties for the healthchecks, see Section 2.1.14.1, "Healthcheck configurations" . Example of TLS sidecar configuration apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka metadata: name: my-cluster spec: kafka: # ... tlsSidecar: image: my-org/my-image:latest resources: requests: cpu: 200m memory: 64Mi limits: cpu: 500m memory: 128Mi logLevel: debug readinessProbe: initialDelaySeconds: 15 timeoutSeconds: 5 livenessProbe: initialDelaySeconds: 15 timeoutSeconds: 5 # ... zookeeper: # ... 2.1.19.2. Configuring TLS sidecar Prerequisites An OpenShift cluster A running Cluster Operator Procedure Edit the tlsSidecar property in the Kafka resource. For example: apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka metadata: name: my-cluster spec: kafka: # ... zookeeper: # ... entityOperator: # ... tlsSidecar: resources: requests: cpu: 200m memory: 64Mi limits: cpu: 500m memory: 128Mi # ... cruiseControl: # ... tlsSidecar: resources: requests: cpu: 200m memory: 64Mi limits: cpu: 500m memory: 128Mi # ... Create or update the resource. This can be done using oc apply : oc apply -f your-file 2.1.20. Configuring pod scheduling Important When two applications are scheduled to the same OpenShift node, both applications might use the same resources like disk I/O and impact performance. That can lead to performance degradation. Scheduling Kafka pods so that they do not share nodes with other critical workloads, using the right nodes, or dedicating a set of nodes only to Kafka are the best ways to avoid such problems. 2.1.20.1. Scheduling pods based on other applications 2.1.20.1.1. Avoid critical applications sharing the node Pod anti-affinity can be used to ensure that critical applications are never scheduled on the same node.
When running Kafka cluster, it is recommended to use pod anti-affinity to ensure that the Kafka brokers do not share the nodes with other workloads like databases. 2.1.20.1.2. Affinity Affinity can be configured using the affinity property in following resources: Kafka.spec.kafka.template.pod Kafka.spec.zookeeper.template.pod Kafka.spec.entityOperator.template.pod KafkaConnect.spec.template.pod KafkaConnectS2I.spec.template.pod KafkaBridge.spec.template.pod The affinity configuration can include different types of affinity: Pod affinity and anti-affinity Node affinity The format of the affinity property follows the OpenShift specification. For more details, see the Kubernetes node and pod affinity documentation . 2.1.20.1.3. Configuring pod anti-affinity in Kafka components Prerequisites An OpenShift cluster A running Cluster Operator Procedure Edit the affinity property in the resource specifying the cluster deployment. Use labels to specify the pods which should not be scheduled on the same nodes. The topologyKey should be set to kubernetes.io/hostname to specify that the selected pods should not be scheduled on nodes with the same hostname. For example: apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka spec: kafka: # ... template: pod: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: application operator: In values: - postgresql - mongodb topologyKey: "kubernetes.io/hostname" # ... zookeeper: # ... Create or update the resource. This can be done using oc apply : oc apply -f your-file 2.1.20.2. Scheduling pods to specific nodes 2.1.20.2.1. Node scheduling The OpenShift cluster usually consists of many different types of worker nodes. Some are optimized for CPU heavy workloads, some for memory, while other might be optimized for storage (fast local SSDs) or network. Using different nodes helps to optimize both costs and performance. To achieve the best possible performance, it is important to allow scheduling of AMQ Streams components to use the right nodes. OpenShift uses node affinity to schedule workloads onto specific nodes. Node affinity allows you to create a scheduling constraint for the node on which the pod will be scheduled. The constraint is specified as a label selector. You can specify the label using either the built-in node label like beta.kubernetes.io/instance-type or custom labels to select the right node. 2.1.20.2.2. Affinity Affinity can be configured using the affinity property in following resources: Kafka.spec.kafka.template.pod Kafka.spec.zookeeper.template.pod Kafka.spec.entityOperator.template.pod KafkaConnect.spec.template.pod KafkaConnectS2I.spec.template.pod KafkaBridge.spec.template.pod The affinity configuration can include different types of affinity: Pod affinity and anti-affinity Node affinity The format of the affinity property follows the OpenShift specification. For more details, see the Kubernetes node and pod affinity documentation . 2.1.20.2.3. Configuring node affinity in Kafka components Prerequisites An OpenShift cluster A running Cluster Operator Procedure Label the nodes where AMQ Streams components should be scheduled. This can be done using oc label : oc label node your-node node-type=fast-network Alternatively, some of the existing labels might be reused. Edit the affinity property in the resource specifying the cluster deployment. For example: apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka spec: kafka: # ... 
template: pod: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: node-type operator: In values: - fast-network # ... zookeeper: # ... Create or update the resource. This can be done using oc apply : oc apply -f your-file 2.1.20.3. Using dedicated nodes 2.1.20.3.1. Dedicated nodes Cluster administrators can mark selected OpenShift nodes as tainted. Nodes with taints are excluded from regular scheduling and normal pods will not be scheduled to run on them. Only services which can tolerate the taint set on the node can be scheduled on it. The only other services running on such nodes will be system services such as log collectors or software defined networks. Taints can be used to create dedicated nodes. Running Kafka and its components on dedicated nodes can have many advantages. There will be no other applications running on the same nodes which could cause disturbance or consume the resources needed for Kafka. That can lead to improved performance and stability. To schedule Kafka pods on the dedicated nodes, configure node affinity and tolerations . 2.1.20.3.2. Affinity Affinity can be configured using the affinity property in following resources: Kafka.spec.kafka.template.pod Kafka.spec.zookeeper.template.pod Kafka.spec.entityOperator.template.pod KafkaConnect.spec.template.pod KafkaConnectS2I.spec.template.pod KafkaBridge.spec.template.pod The affinity configuration can include different types of affinity: Pod affinity and anti-affinity Node affinity The format of the affinity property follows the OpenShift specification. For more details, see the Kubernetes node and pod affinity documentation . 2.1.20.3.3. Tolerations Tolerations can be configured using the tolerations property in following resources: Kafka.spec.kafka.template.pod Kafka.spec.zookeeper.template.pod Kafka.spec.entityOperator.template.pod KafkaConnect.spec.template.pod KafkaConnectS2I.spec.template.pod KafkaBridge.spec.template.pod The format of the tolerations property follows the OpenShift specification. For more details, see the Kubernetes taints and tolerations . 2.1.20.3.4. Setting up dedicated nodes and scheduling pods on them Prerequisites An OpenShift cluster A running Cluster Operator Procedure Select the nodes which should be used as dedicated. Make sure there are no workloads scheduled on these nodes. Set the taints on the selected nodes: This can be done using oc adm taint : oc adm taint node your-node dedicated=Kafka:NoSchedule Additionally, add a label to the selected nodes as well. This can be done using oc label : oc label node your-node dedicated=Kafka Edit the affinity and tolerations properties in the resource specifying the cluster deployment. For example: apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka spec: kafka: # ... template: pod: tolerations: - key: "dedicated" operator: "Equal" value: "Kafka" effect: "NoSchedule" affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: dedicated operator: In values: - Kafka # ... zookeeper: # ... Create or update the resource. This can be done using oc apply : oc apply -f your-file 2.1.21. Kafka Exporter You can configure the Kafka resource to automatically deploy Kafka Exporter in your cluster. Kafka Exporter extracts data for analysis as Prometheus metrics, primarily data relating to offsets, consumer groups, consumer lag and topics. 
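Deployment is enabled by adding a kafkaExporter property to the Kafka resource. As a minimal sketch (the regular expressions shown are examples only, not required values), the configuration might look like this:

apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
  zookeeper:
    # ...
  kafkaExporter:
    # Example patterns limiting which topics and consumer groups are included in the metrics
    topicRegex: ".*"
    groupRegex: ".*"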
For information on setting up Kafka Exporter and why it is important to monitor consumer lag for performance, see Kafka Exporter in the Deploying and Upgrading AMQ Streams on OpenShift guide. 2.1.22. Performing a rolling update of a Kafka cluster This procedure describes how to manually trigger a rolling update of an existing Kafka cluster by using an OpenShift annotation. Prerequisites See the Deploying and Upgrading AMQ Streams on OpenShift guide for instructions on running a: Cluster Operator Kafka cluster Procedure Find the name of the StatefulSet that controls the Kafka pods you want to manually update. For example, if your Kafka cluster is named my-cluster , the corresponding StatefulSet is named my-cluster-kafka . Annotate the StatefulSet resource in OpenShift. For example, using oc annotate : oc annotate statefulset cluster-name -kafka strimzi.io/manual-rolling-update=true Wait for the reconciliation to occur (every two minutes by default). A rolling update of all pods within the annotated StatefulSet is triggered, as long as the annotation was detected by the reconciliation process. When the rolling update of all the pods is complete, the annotation is removed from the StatefulSet . 2.1.23. Performing a rolling update of a ZooKeeper cluster This procedure describes how to manually trigger a rolling update of an existing ZooKeeper cluster by using an OpenShift annotation. Prerequisites See the Deploying and Upgrading AMQ Streams on OpenShift guide for instructions on running a: Cluster Operator Kafka cluster Procedure Find the name of the StatefulSet that controls the ZooKeeper pods you want to manually update. For example, if your Kafka cluster is named my-cluster , the corresponding StatefulSet is named my-cluster-zookeeper . Annotate the StatefulSet resource in OpenShift. For example, using oc annotate : oc annotate statefulset cluster-name -zookeeper strimzi.io/manual-rolling-update=true Wait for the reconciliation to occur (every two minutes by default). A rolling update of all pods within the annotated StatefulSet is triggered, as long as the annotation was detected by the reconciliation process. When the rolling update of all the pods is complete, the annotation is removed from the StatefulSet . 2.1.24. Scaling clusters 2.1.24.1. Scaling Kafka clusters 2.1.24.1.1. Adding brokers to a cluster The primary way of increasing throughput for a topic is to increase the number of partitions for that topic. That works because the extra partitions allow the load of the topic to be shared between the different brokers in the cluster. However, in situations where every broker is constrained by a particular resource (typically I/O) using more partitions will not result in increased throughput. Instead, you need to add brokers to the cluster. When you add an extra broker to the cluster, Kafka does not assign any partitions to it automatically. You must decide which partitions to move from the existing brokers to the new broker. Once the partitions have been redistributed between all the brokers, the resource utilization of each broker should be reduced. 2.1.24.1.2. Removing brokers from a cluster Because AMQ Streams uses StatefulSets to manage broker pods, you cannot remove any pod from the cluster. You can only remove one or more of the highest numbered pods from the cluster. For example, in a cluster of 12 brokers the pods are named cluster-name -kafka-0 up to cluster-name -kafka-11 . If you decide to scale down by one broker, the cluster-name -kafka-11 will be removed. 
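In both cases, the broker count is controlled by the replicas value of the Kafka resource. As a rough sketch (the cluster name and replica count are illustrative), the change might be applied as follows, although the procedures later in this section describe the partition reassignment that should accompany it:

oc patch kafka my-cluster --type=merge \
  -p '{"spec":{"kafka":{"replicas":4}}}'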
Before you remove a broker from a cluster, ensure that it is not assigned to any partitions. You should also decide which of the remaining brokers will be responsible for each of the partitions on the broker being decommissioned. Once the broker has no assigned partitions, you can scale the cluster down safely. 2.1.24.2. Partition reassignment The Topic Operator does not currently support reassigning replicas to different brokers, so it is necessary to connect directly to broker pods to reassign replicas to brokers. Within a broker pod, the kafka-reassign-partitions.sh utility allows you to reassign partitions to different brokers. It has three different modes: --generate Takes a set of topics and brokers and generates a reassignment JSON file which will result in the partitions of those topics being assigned to those brokers. Because this operates on whole topics, it cannot be used when you just need to reassign some of the partitions of some topics. --execute Takes a reassignment JSON file and applies it to the partitions and brokers in the cluster. Brokers that gain partitions as a result become followers of the partition leader. For a given partition, once the new broker has caught up and joined the ISR (in-sync replicas) the old broker will stop being a follower and will delete its replica. --verify Using the same reassignment JSON file as the --execute step, --verify checks whether all of the partitions in the file have been moved to their intended brokers. If the reassignment is complete, --verify also removes any throttles that are in effect. Unless removed, throttles will continue to affect the cluster even after the reassignment has finished. It is only possible to have one reassignment running in a cluster at any given time, and it is not possible to cancel a running reassignment. If you need to cancel a reassignment, wait for it to complete and then perform another reassignment to revert the effects of the first reassignment. The kafka-reassign-partitions.sh will print the reassignment JSON for this reversion as part of its output. Very large reassignments should be broken down into a number of smaller reassignments in case there is a need to stop in-progress reassignment. 2.1.24.2.1. Reassignment JSON file The reassignment JSON file has a specific structure: Where <PartitionObjects> is a comma-separated list of objects like: Note Although Kafka also supports a "log_dirs" property this should not be used in AMQ Streams. The following is an example reassignment JSON file that assigns topic topic-a , partition 4 to brokers 2 , 4 and 7 , and topic topic-b partition 2 to brokers 1 , 5 and 7 : { "version": 1, "partitions": [ { "topic": "topic-a", "partition": 4, "replicas": [2,4,7] }, { "topic": "topic-b", "partition": 2, "replicas": [1,5,7] } ] } Partitions not included in the JSON are not changed. 2.1.24.2.2. Reassigning partitions between JBOD volumes When using JBOD storage in your Kafka cluster, you can choose to reassign the partitions between specific volumes and their log directories (each volume has a single log directory). To reassign a partition to a specific volume, add the log_dirs option to <PartitionObjects> in the reassignment JSON file. The log_dirs object should contain the same number of log directories as the number of replicas specified in the replicas object. The value should be either an absolute path to the log directory, or the any keyword. For example: 2.1.24.3. 
Generating reassignment JSON files This procedure describes how to generate a reassignment JSON file that reassigns all the partitions for a given set of topics using the kafka-reassign-partitions.sh tool. Prerequisites A running Cluster Operator A Kafka resource A set of topics to reassign the partitions of Procedure Prepare a JSON file named topics.json that lists the topics to move. It must have the following structure: where <TopicObjects> is a comma-separated list of objects like: For example if you want to reassign all the partitions of topic-a and topic-b , you would need to prepare a topics.json file like this: { "version": 1, "topics": [ { "topic": "topic-a"}, { "topic": "topic-b"} ] } Copy the topics.json file to one of the broker pods: Use the kafka-reassign-partitions.sh command to generate the reassignment JSON. For example, to move all the partitions of topic-a and topic-b to brokers 4 and 7 oc exec <BrokerPod> -c kafka -it -- \ bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 \ --topics-to-move-json-file /tmp/topics.json \ --broker-list 4,7 \ --generate 2.1.24.4. Creating reassignment JSON files manually You can manually create the reassignment JSON file if you want to move specific partitions. 2.1.24.5. Reassignment throttles Partition reassignment can be a slow process because it involves transferring large amounts of data between brokers. To avoid a detrimental impact on clients, you can throttle the reassignment process. This might cause the reassignment to take longer to complete. If the throttle is too low then the newly assigned brokers will not be able to keep up with records being published and the reassignment will never complete. If the throttle is too high then clients will be impacted. For example, for producers, this could manifest as higher than normal latency waiting for acknowledgement. For consumers, this could manifest as a drop in throughput caused by higher latency between polls. 2.1.24.6. Scaling up a Kafka cluster This procedure describes how to increase the number of brokers in a Kafka cluster. Prerequisites An existing Kafka cluster. A reassignment JSON file named reassignment.json that describes how partitions should be reassigned to brokers in the enlarged cluster. Procedure Add as many new brokers as you need by increasing the Kafka.spec.kafka.replicas configuration option. Verify that the new broker pods have started. Copy the reassignment.json file to the broker pod on which you will later execute the commands: cat reassignment.json | \ oc exec broker-pod -c kafka -i -- /bin/bash -c \ 'cat > /tmp/reassignment.json' For example: cat reassignment.json | \ oc exec my-cluster-kafka-0 -c kafka -i -- /bin/bash -c \ 'cat > /tmp/reassignment.json' Execute the partition reassignment using the kafka-reassign-partitions.sh command line tool from the same broker pod. oc exec broker-pod -c kafka -it -- \ bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 \ --reassignment-json-file /tmp/reassignment.json \ --execute If you are going to throttle replication you can also pass the --throttle option with an inter-broker throttled rate in bytes per second. For example: oc exec my-cluster-kafka-0 -c kafka -it -- \ bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 \ --reassignment-json-file /tmp/reassignment.json \ --throttle 5000000 \ --execute This command will print out two reassignment JSON objects. The first records the current assignment for the partitions being moved. 
You should save this to a local file (not a file in the pod) in case you need to revert the reassignment later on. The second JSON object is the target reassignment you have passed in your reassignment JSON file. If you need to change the throttle during reassignment you can use the same command line with a different throttled rate. For example: oc exec my-cluster-kafka-0 -c kafka -it -- \ bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 \ --reassignment-json-file /tmp/reassignment.json \ --throttle 10000000 \ --execute Periodically verify whether the reassignment has completed using the kafka-reassign-partitions.sh command line tool from any of the broker pods. This is the same command as the step but with the --verify option instead of the --execute option. oc exec broker-pod -c kafka -it -- \ bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 \ --reassignment-json-file /tmp/reassignment.json \ --verify For example, oc exec my-cluster-kafka-0 -c kafka -it -- \ bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 \ --reassignment-json-file /tmp/reassignment.json \ --verify The reassignment has finished when the --verify command reports each of the partitions being moved as completed successfully. This final --verify will also have the effect of removing any reassignment throttles. You can now delete the revert file if you saved the JSON for reverting the assignment to their original brokers. 2.1.24.7. Scaling down a Kafka cluster Additional resources This procedure describes how to decrease the number of brokers in a Kafka cluster. Prerequisites An existing Kafka cluster. A reassignment JSON file named reassignment.json describing how partitions should be reassigned to brokers in the cluster once the broker(s) in the highest numbered Pod(s) have been removed. Procedure Copy the reassignment.json file to the broker pod on which you will later execute the commands: cat reassignment.json | \ oc exec broker-pod -c kafka -i -- /bin/bash -c \ 'cat > /tmp/reassignment.json' For example: cat reassignment.json | \ oc exec my-cluster-kafka-0 -c kafka -i -- /bin/bash -c \ 'cat > /tmp/reassignment.json' Execute the partition reassignment using the kafka-reassign-partitions.sh command line tool from the same broker pod. oc exec broker-pod -c kafka -it -- \ bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 \ --reassignment-json-file /tmp/reassignment.json \ --execute If you are going to throttle replication you can also pass the --throttle option with an inter-broker throttled rate in bytes per second. For example: oc exec my-cluster-kafka-0 -c kafka -it -- \ bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 \ --reassignment-json-file /tmp/reassignment.json \ --throttle 5000000 \ --execute This command will print out two reassignment JSON objects. The first records the current assignment for the partitions being moved. You should save this to a local file (not a file in the pod) in case you need to revert the reassignment later on. The second JSON object is the target reassignment you have passed in your reassignment JSON file. If you need to change the throttle during reassignment you can use the same command line with a different throttled rate. 
For example: oc exec my-cluster-kafka-0 -c kafka -it -- \ bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 \ --reassignment-json-file /tmp/reassignment.json \ --throttle 10000000 \ --execute Periodically verify whether the reassignment has completed using the kafka-reassign-partitions.sh command line tool from any of the broker pods. This is the same command as the previous step but with the --verify option instead of the --execute option. oc exec broker-pod -c kafka -it -- \ bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 \ --reassignment-json-file /tmp/reassignment.json \ --verify For example, oc exec my-cluster-kafka-0 -c kafka -it -- \ bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 \ --reassignment-json-file /tmp/reassignment.json \ --verify The reassignment has finished when the --verify command reports each of the partitions being moved as completed successfully. This final --verify will also have the effect of removing any reassignment throttles. You can now delete the revert file if you saved the JSON for reverting the assignment to their original brokers. Once all the partition reassignments have finished, the broker(s) being removed should not have responsibility for any of the partitions in the cluster. You can verify this by checking that the broker's data log directory does not contain any live partition logs. If the log directory on the broker contains a directory that does not match the extended regular expression \.[a-z0-9]+-delete$ then the broker still has live partitions and it should not be stopped. You can check this by executing the command: oc exec my-cluster-kafka-0 -c kafka -it -- \ /bin/bash -c \ "ls -l /var/lib/kafka/kafka-log_<N>_ | grep -E '^d' | grep -vE '[a-zA-Z0-9.-]+\.[a-z0-9]+-delete$'" where N is the number of the Pod(s) being deleted. If the above command prints any output then the broker still has live partitions. In this case, either the reassignment has not finished, or the reassignment JSON file was incorrect. Once you have confirmed that the broker has no live partitions you can edit the Kafka.spec.kafka.replicas of your Kafka resource, which will scale down the StatefulSet , deleting the highest numbered broker Pod(s) . 2.1.25. Deleting Kafka nodes manually This procedure describes how to delete an existing Kafka node by using an OpenShift annotation. Deleting a Kafka node consists of deleting both the Pod on which the Kafka broker is running and the related PersistentVolumeClaim (if the cluster was deployed with persistent storage). After deletion, the Pod and its related PersistentVolumeClaim are recreated automatically. Warning Deleting a PersistentVolumeClaim can cause permanent data loss. The following procedure should only be performed if you have encountered storage issues. Prerequisites See the Deploying and Upgrading AMQ Streams on OpenShift guide for instructions on running a: Cluster Operator Kafka cluster Procedure Find the name of the Pod that you want to delete. For example, if the cluster is named cluster-name , the pods are named cluster-name -kafka- index , where index starts at zero and ends at the total number of replicas. Annotate the Pod resource in OpenShift. Use oc annotate : oc annotate pod cluster-name -kafka- index strimzi.io/delete-pod-and-pvc=true Wait for the reconciliation, when the annotated pod with the underlying persistent volume claim will be deleted and then recreated. 2.1.26. 
Deleting ZooKeeper nodes manually This procedure describes how to delete an existing ZooKeeper node by using an OpenShift annotation. Deleting a ZooKeeper node consists of deleting both the Pod on which ZooKeeper is running and the related PersistentVolumeClaim (if the cluster was deployed with persistent storage). After deletion, the Pod and its related PersistentVolumeClaim are recreated automatically. Warning Deleting a PersistentVolumeClaim can cause permanent data loss. The following procedure should only be performed if you have encountered storage issues. Prerequisites See the Deploying and Upgrading AMQ Streams on OpenShift guide for instructions on running a: Cluster Operator Kafka cluster Procedure Find the name of the Pod that you want to delete. For example, if the cluster is named cluster-name , the pods are named cluster-name -zookeeper- index , where index starts at zero and ends at the total number of replicas. Annotate the Pod resource in OpenShift. Use oc annotate : oc annotate pod cluster-name -zookeeper- index strimzi.io/delete-pod-and-pvc=true Wait for the reconciliation, when the annotated pod with the underlying persistent volume claim will be deleted and then recreated. 2.1.27. Maintenance time windows for rolling updates Maintenance time windows allow you to schedule certain rolling updates of your Kafka and ZooKeeper clusters to start at a convenient time. 2.1.27.1. Maintenance time windows overview In most cases, the Cluster Operator only updates your Kafka or ZooKeeper clusters in response to changes to the corresponding Kafka resource. This enables you to plan when to apply changes to a Kafka resource to minimize the impact on Kafka client applications. However, some updates to your Kafka and ZooKeeper clusters can happen without any corresponding change to the Kafka resource. For example, the Cluster Operator will need to perform a rolling restart if a CA (Certificate Authority) certificate that it manages is close to expiry. While a rolling restart of the pods should not affect availability of the service (assuming correct broker and topic configurations), it could affect performance of the Kafka client applications. Maintenance time windows allow you to schedule such spontaneous rolling updates of your Kafka and ZooKeeper clusters to start at a convenient time. If maintenance time windows are not configured for a cluster then it is possible that such spontaneous rolling updates will happen at an inconvenient time, such as during a predictable period of high load. 2.1.27.2. Maintenance time window definition You configure maintenance time windows by entering an array of strings in the Kafka.spec.maintenanceTimeWindows property. Each string is a cron expression interpreted as being in UTC (Coordinated Universal Time, which for practical purposes is the same as Greenwich Mean Time). The following example configures a single maintenance time window that starts at midnight and ends at 01:59am (UTC), on Sundays, Mondays, Tuesdays, Wednesdays, and Thursdays: # ... maintenanceTimeWindows: - "* * 0-1 ? * SUN,MON,TUE,WED,THU *" # ... In practice, maintenance windows should be set in conjunction with the Kafka.spec.clusterCa.renewalDays and Kafka.spec.clientsCa.renewalDays properties of the Kafka resource, to ensure that the necessary CA certificate renewal can be completed in the configured maintenance time windows. Note AMQ Streams does not schedule maintenance operations exactly according to the given windows. 
Instead, for each reconciliation, it checks whether a maintenance window is currently "open". This means that the start of maintenance operations within a given time window can be delayed by up to the Cluster Operator reconciliation interval. Maintenance time windows must therefore be at least this long. Additional resources For more information about the Cluster Operator configuration, see Section 5.1.1, "Cluster Operator configuration" . 2.1.27.3. Configuring a maintenance time window You can configure a maintenance time window for rolling updates triggered by supported processes. Prerequisites An OpenShift cluster. The Cluster Operator is running. Procedure Add or edit the maintenanceTimeWindows property in the Kafka resource. For example to allow maintenance between 0800 and 1059 and between 1400 and 1559 you would set the maintenanceTimeWindows as shown below: apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka metadata: name: my-cluster spec: kafka: # ... zookeeper: # ... maintenanceTimeWindows: - "* * 8-10 * * ?" - "* * 14-15 * * ?" Create or update the resource. This can be done using oc apply : oc apply -f your-file Additional resources Performing a rolling update of a Kafka cluster, see Section 2.1.22, "Performing a rolling update of a Kafka cluster" Performing a rolling update of a ZooKeeper cluster, see Section 2.1.23, "Performing a rolling update of a ZooKeeper cluster" 2.1.28. Renewing CA certificates manually Cluster and clients CA certificates auto-renew at the start of their respective certificate renewal periods. If Kafka.spec.clusterCa.generateCertificateAuthority and Kafka.spec.clientsCa.generateCertificateAuthority are set to false , the CA certificates do not auto-renew. You can manually renew one or both of these certificates before the certificate renewal period starts. You might do this for security reasons, or if you have changed the renewal or validity periods for the certificates . A renewed certificate uses the same private key as the old certificate. Prerequisites The Cluster Operator is running. A Kafka cluster in which CA certificates and private keys are installed. Procedure Apply the strimzi.io/force-renew annotation to the Secret that contains the CA certificate that you want to renew. Table 2.1. Annotation for the Secret that forces renewal of certificates Certificate Secret Annotate command Cluster CA KAFKA-CLUSTER-NAME -cluster-ca-cert oc annotate secret KAFKA-CLUSTER-NAME -cluster-ca-cert strimzi.io/force-renew=true Clients CA KAFKA-CLUSTER-NAME -clients-ca-cert oc annotate secret KAFKA-CLUSTER-NAME -clients-ca-cert strimzi.io/force-renew=true At the reconciliation the Cluster Operator will generate a new CA certificate for the Secret that you annotated. If maintenance time windows are configured, the Cluster Operator will generate the new CA certificate at the first reconciliation within the maintenance time window. Client applications must reload the cluster and clients CA certificates that were renewed by the Cluster Operator. Check the period the CA certificate is valid: For example, using an openssl command: oc get secret CA-CERTIFICATE-SECRET -o 'jsonpath={.data. CA-CERTIFICATE }' | base64 -d | openssl x509 -subject -issuer -startdate -enddate -noout CA-CERTIFICATE-SECRET is the name of the Secret , which is KAFKA-CLUSTER-NAME -cluster-ca-cert for the cluster CA certificate and KAFKA-CLUSTER-NAME -clients-ca-cert for the clients CA certificate. CA-CERTIFICATE is the name of the CA certificate, such as jsonpath={.data.ca\.crt} . 
The command returns a notBefore and notAfter date, which is the validity period for the CA certificate. For example, for a cluster CA certificate: subject=O = io.strimzi, CN = cluster-ca v0 issuer=O = io.strimzi, CN = cluster-ca v0 notBefore=Jun 30 09:43:54 2020 GMT notAfter=Jun 30 09:43:54 2021 GMT Delete old certificates from the Secret. When components are using the new certificates, older certificates might still be active. Delete the old certificates to remove any potential security risk. Additional resources Section 11.2, "Secrets" Section 2.1.27, "Maintenance time windows for rolling updates" Section B.69, " CertificateAuthority schema reference" 2.1.29. Replacing private keys You can replace the private keys used by the cluster CA and clients CA certificates. When a private key is replaced, the Cluster Operator generates a new CA certificate for the new private key. Prerequisites The Cluster Operator is running. A Kafka cluster in which CA certificates and private keys are installed. Procedure Apply the strimzi.io/force-replace annotation to the Secret that contains the private key that you want to renew. Table 2.2. Commands for replacing private keys Private key for Secret Annotate command Cluster CA <cluster-name> -cluster-ca oc annotate secret <cluster-name> -cluster-ca strimzi.io/force-replace=true Clients CA <cluster-name> -clients-ca oc annotate secret <cluster-name> -clients-ca strimzi.io/force-replace=true At the reconciliation the Cluster Operator will: Generate a new private key for the Secret that you annotated Generate a new CA certificate If maintenance time windows are configured, the Cluster Operator will generate the new private key and CA certificate at the first reconciliation within the maintenance time window. Client applications must reload the cluster and clients CA certificates that were renewed by the Cluster Operator. Additional resources Section 11.2, "Secrets" Section 2.1.27, "Maintenance time windows for rolling updates" 2.1.30. List of resources created as part of Kafka cluster The following resources are created by the Cluster Operator in the OpenShift cluster: Shared resources cluster-name -cluster-ca Secret with the Cluster CA used to encrypt the cluster communication. cluster-name -cluster-ca-cert Secret with the Cluster CA public key. This key can be used to verify the identity of the Kafka brokers. cluster-name -clients-ca Secret with the Clients CA private key used to sign user certiticates cluster-name -clients-ca-cert Secret with the Clients CA public key. This key can be used to verify the identity of the Kafka users. cluster-name -cluster-operator-certs Secret with Cluster operators keys for communication with Kafka and ZooKeeper. Zookeeper nodes cluster-name -zookeeper StatefulSet which is in charge of managing the ZooKeeper node pods. cluster-name -zookeeper- idx Pods created by the Zookeeper StatefulSet. cluster-name -zookeeper-nodes Headless Service needed to have DNS resolve the ZooKeeper pods IP addresses directly. cluster-name -zookeeper-client Service used by Kafka brokers to connect to ZooKeeper nodes as clients. cluster-name -zookeeper-config ConfigMap that contains the ZooKeeper ancillary configuration, and is mounted as a volume by the ZooKeeper node pods. cluster-name -zookeeper-nodes Secret with ZooKeeper node keys. cluster-name -zookeeper Service account used by the Zookeeper nodes. cluster-name -zookeeper Pod Disruption Budget configured for the ZooKeeper nodes. 
cluster-name -network-policy-zookeeper Network policy managing access to the ZooKeeper services. data- cluster-name -zookeeper- idx Persistent Volume Claim for the volume used for storing data for the ZooKeeper node pod idx . This resource will be created only if persistent storage is selected for provisioning persistent volumes to store data. Kafka brokers cluster-name -kafka StatefulSet which is in charge of managing the Kafka broker pods. cluster-name -kafka- idx Pods created by the Kafka StatefulSet. cluster-name -kafka-brokers Service needed to have DNS resolve the Kafka broker pods IP addresses directly. cluster-name -kafka-bootstrap Service can be used as bootstrap servers for Kafka clients. cluster-name -kafka-external-bootstrap Bootstrap service for clients connecting from outside of the OpenShift cluster. This resource will be created only when external listener is enabled. cluster-name -kafka- pod-id Service used to route traffic from outside of the OpenShift cluster to individual pods. This resource will be created only when external listener is enabled. cluster-name -kafka-external-bootstrap Bootstrap route for clients connecting from outside of the OpenShift cluster. This resource will be created only when external listener is enabled and set to type route . cluster-name -kafka- pod-id Route for traffic from outside of the OpenShift cluster to individual pods. This resource will be created only when external listener is enabled and set to type route . cluster-name -kafka-config ConfigMap which contains the Kafka ancillary configuration and is mounted as a volume by the Kafka broker pods. cluster-name -kafka-brokers Secret with Kafka broker keys. cluster-name -kafka Service account used by the Kafka brokers. cluster-name -kafka Pod Disruption Budget configured for the Kafka brokers. cluster-name -network-policy-kafka Network policy managing access to the Kafka services. strimzi- namespace-name - cluster-name -kafka-init Cluster role binding used by the Kafka brokers. cluster-name -jmx Secret with JMX username and password used to secure the Kafka broker port. This resource will be created only when JMX is enabled in Kafka. data- cluster-name -kafka- idx Persistent Volume Claim for the volume used for storing data for the Kafka broker pod idx . This resource will be created only if persistent storage is selected for provisioning persistent volumes to store data. data- id - cluster-name -kafka- idx Persistent Volume Claim for the volume id used for storing data for the Kafka broker pod idx . This resource is only created if persistent storage is selected for JBOD volumes when provisioning persistent volumes to store data. Entity Operator These resources are only created if the Entity Operator is deployed using the Cluster Operator. cluster-name -entity-operator Deployment with Topic and User Operators. cluster-name -entity-operator- random-string Pod created by the Entity Operator deployment. cluster-name -entity-topic-operator-config ConfigMap with ancillary configuration for Topic Operators. cluster-name -entity-user-operator-config ConfigMap with ancillary configuration for User Operators. cluster-name -entity-operator-certs Secret with Entity Operator keys for communication with Kafka and ZooKeeper. cluster-name -entity-operator Service account used by the Entity Operator. strimzi- cluster-name -topic-operator Role binding used by the Entity Operator. strimzi- cluster-name -user-operator Role binding used by the Entity Operator. 
Kafka Exporter These resources are only created if the Kafka Exporter is deployed using the Cluster Operator. cluster-name -kafka-exporter Deployment with Kafka Exporter. cluster-name -kafka-exporter- random-string Pod created by the Kafka Exporter deployment. cluster-name -kafka-exporter Service used to collect consumer lag metrics. cluster-name -kafka-exporter Service account used by the Kafka Exporter. Cruise Control These resources are only created only if Cruise Control was deployed using the Cluster Operator. cluster-name -cruise-control Deployment with Cruise Control. cluster-name -cruise-control- random-string Pod created by the Cruise Control deployment. cluster-name -cruise-control-config ConfigMap that contains the Cruise Control ancillary configuration, and is mounted as a volume by the Cruise Control pods. cluster-name -cruise-control-certs Secret with Cruise Control keys for communication with Kafka and ZooKeeper. cluster-name -cruise-control Service used to communicate with Cruise Control. cluster-name -cruise-control Service account used by Cruise Control. cluster-name -network-policy-cruise-control Network policy managing access to the Cruise Control service. JMXTrans These resources are only created if JMXTrans is deployed using the Cluster Operator. cluster-name -jmxtrans Deployment with JMXTrans. cluster-name -jmxtrans- random-string Pod created by the JMXTrans deployment. cluster-name -jmxtrans-config ConfigMap that contains the JMXTrans ancillary configuration, and is mounted as a volume by the JMXTrans pods. cluster-name -jmxtrans Service account used by JMXTrans. 2.2. Kafka Connect/S2I cluster configuration This section describes how to configure a Kafka Connect or Kafka Connect with Source-to-Image (S2I) deployment in your AMQ Streams cluster. Kafka Connect is an integration toolkit for streaming data between Kafka brokers and other systems using Connector plugins. Kafka Connect provides a framework for integrating Kafka with an external data source or target, such as a database, for import or export of data using connectors. Connectors are plugins that provide the connection configuration needed. If you are using Kafka Connect, you configure either the KafkaConnect or the KafkaConnectS2I resource. Use the KafkaConnectS2I resource if you are using the Source-to-Image (S2I) framework to deploy Kafka Connect. The full schema of the KafkaConnect resource is described in Section B.79, " KafkaConnect schema reference" . The full schema of the KafkaConnectS2I resource is described in Section B.95, " KafkaConnectS2I schema reference" . Additional resources Creating and managing connectors Deploying a KafkaConnector resource to Kafka Connect 2.2.1. Configuring Kafka Connect Use Kafka Connect to set up external data connections to your Kafka cluster. Use the properties of the KafkaConnect or KafkaConnectS2I resource to configure your Kafka Connect deployment. The example shown in this procedure is for the KafkaConnect resource, but the properties are the same for the KafkaConnectS2I resource. Kafka connector configuration KafkaConnector resources allow you to create and manage connector instances for Kafka Connect in an OpenShift-native way. In the configuration, you enable KafkaConnectors for a Kafka Connect cluster by adding the strimzi.io/use-connector-resources annotation. You can also specify external configuration for Kafka Connect connectors through the externalConfiguration property. 
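As a sketch only, a KafkaConnector resource might look like the following; the connector class is the example FileStreamSourceConnector that ships with Apache Kafka, the file path and topic are illustrative, and the apiVersion can differ between AMQ Streams versions:

apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaConnector
metadata:
  name: my-source-connector
  labels:
    # Must match the name of the KafkaConnect cluster with use-connector-resources enabled
    strimzi.io/cluster: my-connect-cluster
spec:
  class: org.apache.kafka.connect.file.FileStreamSourceConnector
  tasksMax: 2
  config:
    file: "/opt/kafka/LICENSE"
    topic: my-topic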
Connectors are created, reconfigured, and deleted using the Kafka Connect HTTP REST interface, or by using KafkaConnectors . For more information on these methods, see Creating and managing connectors in the Deploying and Upgrading AMQ Streams on OpenShift guide. The connector configuration is passed to Kafka Connect as part of an HTTP request and stored within Kafka itself. ConfigMaps and Secrets are standard OpenShift resources used for storing configurations and confidential data. You can use ConfigMaps and Secrets to configure certain elements of a connector. You can then reference the configuration values in HTTP REST commands (this keeps the configuration separate and more secure, if needed). This method applies especially to confidential data, such as usernames, passwords, or certificates. Prerequisites An OpenShift cluster A running Cluster Operator See the Deploying and Upgrading AMQ Streams on OpenShift guide for instructions on running a: Cluster Operator Kafka cluster Procedure Edit the spec properties for the KafkaConnect or KafkaConnectS2I resource. The properties you can configure are shown in this example configuration: apiVersion: kafka.strimzi.io/v1beta1 kind: KafkaConnect 1 metadata: name: my-connect-cluster annotations: strimzi.io/use-connector-resources: "true" 2 spec: replicas: 3 3 authentication: 4 type: tls certificateAndKey: certificate: source.crt key: source.key secretName: my-user-source bootstrapServers: my-cluster-kafka-bootstrap:9092 5 tls: 6 trustedCertificates: - secretName: my-cluster-cluster-cert certificate: ca.crt - secretName: my-cluster-cluster-cert certificate: ca2.crt config: 7 group.id: my-connect-cluster offset.storage.topic: my-connect-cluster-offsets config.storage.topic: my-connect-cluster-configs status.storage.topic: my-connect-cluster-status key.converter: org.apache.kafka.connect.json.JsonConverter value.converter: org.apache.kafka.connect.json.JsonConverter key.converter.schemas.enable: true value.converter.schemas.enable: true config.storage.replication.factor: 3 offset.storage.replication.factor: 3 status.storage.replication.factor: 3 externalConfiguration: 8 env: - name: AWS_ACCESS_KEY_ID valueFrom: secretKeyRef: name: aws-creds key: awsAccessKey - name: AWS_SECRET_ACCESS_KEY valueFrom: secretKeyRef: name: aws-creds key: awsSecretAccessKey resources: 9 requests: cpu: "1" memory: 2Gi limits: cpu: "2" memory: 2Gi logging: 10 type: inline loggers: log4j.rootLogger: "INFO" readinessProbe: 11 initialDelaySeconds: 15 timeoutSeconds: 5 livenessProbe: initialDelaySeconds: 15 timeoutSeconds: 5 metrics: 12 lowercaseOutputName: true lowercaseOutputLabelNames: true rules: - pattern: kafka.connect<type=connect-worker-metrics><>([a-z-]+) name: kafka_connect_worker_USD1 help: "Kafka Connect JMX metric worker" type: GAUGE - pattern: kafka.connect<type=connect-worker-rebalance-metrics><>([a-z-]+) name: kafka_connect_worker_rebalance_USD1 help: "Kafka Connect JMX metric rebalance information" type: GAUGE jvmOptions: 13 "-Xmx": "1g" "-Xms": "1g" image: my-org/my-image:latest 14 template: 15 pod: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: application operator: In values: - postgresql - mongodb topologyKey: "kubernetes.io/hostname" connectContainer: 16 env: - name: JAEGER_SERVICE_NAME value: my-jaeger-service - name: JAEGER_AGENT_HOST value: jaeger-agent-name - name: JAEGER_AGENT_PORT value: "6831" 1 Use KafkaConnect or KafkaConnectS2I , as required. 
2 Enables KafkaConnectors for the Kafka Connect cluster. 3 The number of replica nodes . 4 Authentication for the Kafka Connect cluster, using the TLS mechanism , as shown here, using OAuth bearer tokens , or a SASL-based SCRAM-SHA-512 or PLAIN mechanism. By default, Kafka Connect connects to Kafka brokers using a plain text connection. 5 Bootstrap server for connection to the Kafka Connect cluster. 6 TLS encryption with key names under which TLS certificates are stored in X.509 format for the cluster. If certificates are stored in the same secret, it can be listed multiple times. 7 Kafka Connect configuration of workers (not connectors). Standard Apache Kafka configuration may be provided, restricted to those properties not managed directly by AMQ Streams. 8 External configuration for Kafka connectors using environment variables, as shown here, or volumes. 9 Requests for reservation of supported resources , currently cpu and memory , and limits to specify the maximum resources that can be consumed. 10 Specified Kafka Connect loggers and log levels added directly ( inline ) or indirectly ( external ) through a ConfigMap. A custom ConfigMap must be placed under the log4j.properties or log4j2.properties key. For the Kafka Connect log4j.rootLogger logger, you can set the log level to INFO, ERROR, WARN, TRACE, DEBUG, FATAL or OFF. 11 Healthchecks to know when to restart a container (liveness) and when a container can accept traffic (readiness). 12 Prometheus metrics , which are enabled with configuration for the Prometheus JMX exporter in this example. You can enable metrics without further configuration using metrics: {} . 13 JVM configuration options to optimize performance for the Virtual Machine (VM) running Kafka Connect. 14 ADVANCED OPTION: Container image configuration , which is recommended only in special situations. 15 Template customization . Here a pod is scheduled with anti-affinity, so the pod is not scheduled on nodes with the same hostname. 16 Environment variables are also set for distributed tracing using Jaeger . Create or update the resource: oc apply -f KAFKA-CONNECT-CONFIG-FILE If authorization is enabled for Kafka Connect, configure Kafka Connect users to enable access to the Kafka Connect consumer group and topics . 2.2.2. Kafka Connect configuration for multiple instances If you are running multiple instances of Kafka Connect, you have to change the default configuration of the following config properties: apiVersion: kafka.strimzi.io/v1beta1 kind: KafkaConnect metadata: name: my-connect spec: # ... config: group.id: connect-cluster 1 offset.storage.topic: connect-cluster-offsets 2 config.storage.topic: connect-cluster-configs 3 status.storage.topic: connect-cluster-status 4 # ... # ... 1 Kafka Connect cluster group that the instance belongs to. 2 Kafka topic that stores connector offsets. 3 Kafka topic that stores connector and task status configurations. 4 Kafka topic that stores connector and task status updates. Note Values for the three topics must be the same for all Kafka Connect instances with the same group.id . Unless you change the default settings, each Kafka Connect instance connecting to the same Kafka cluster is deployed with the same values. What happens, in effect, is all instances are coupled to run in a cluster and use the same topics. If multiple Kafka Connect clusters try to use the same topics, Kafka Connect will not work as expected and generate errors. 
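For example, a second Kafka Connect cluster that connects to the same Kafka cluster might be given its own group ID and topic names, as in this sketch (the values are illustrative):

apiVersion: kafka.strimzi.io/v1beta1
kind: KafkaConnect
metadata:
  name: my-second-connect
spec:
  # ...
  config:
    group.id: second-connect-cluster
    offset.storage.topic: second-connect-cluster-offsets
    config.storage.topic: second-connect-cluster-configs
    status.storage.topic: second-connect-cluster-status
  # ...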
If you wish to run multiple Kafka Connect instances, change the values of these properties for each instance. 2.2.3. Configuring Kafka Connect user authorization This procedure describes how to authorize user access to Kafka Connect. When any type of authorization is being used in Kafka, a Kafka Connect user requires read/write access rights to the consumer group and the internal topics of Kafka Connect. The properties for the consumer group and internal topics are automatically configured by AMQ Streams, or they can be specified explicitly in the spec of the KafkaConnect or KafkaConnectS2I resource. Example configuration properties in the KafkaConnect resource apiVersion: kafka.strimzi.io/v1beta1 kind: KafkaConnect metadata: name: my-connect spec: # ... config: group.id: my-connect-cluster 1 offset.storage.topic: my-connect-cluster-offsets 2 config.storage.topic: my-connect-cluster-configs 3 status.storage.topic: my-connect-cluster-status 4 # ... # ... 1 Kafka Connect cluster group that the instance belongs to. 2 Kafka topic that stores connector offsets. 3 Kafka topic that stores connector and task status configurations. 4 Kafka topic that stores connector and task status updates. This procedure shows how access is provided when simple authorization is being used. Simple authorization uses ACL rules, handled by the Kafka AclAuthorizer plugin, to provide the right level of access. For more information on configuring a KafkaUser resource to use simple authorization, see the AclRule schema reference . Note The default values for the consumer group and topics will differ when running multiple instances . Prerequisites An OpenShift cluster A running Cluster Operator Procedure Edit the authorization property in the KafkaUser resource to provide access rights to the user. In the following example, access rights are configured for the Kafka Connect topics and consumer group using literal name values: Property Name offset.storage.topic connect-cluster-offsets status.storage.topic connect-cluster-status config.storage.topic connect-cluster-configs group connect-cluster apiVersion: kafka.strimzi.io/v1beta1 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster spec: # ... 
authorization: type: simple acls: # access to offset.storage.topic - resource: type: topic name: connect-cluster-offsets patternType: literal operation: Write host: "*" - resource: type: topic name: connect-cluster-offsets patternType: literal operation: Create host: "*" - resource: type: topic name: connect-cluster-offsets patternType: literal operation: Describe host: "*" - resource: type: topic name: connect-cluster-offsets patternType: literal operation: Read host: "*" # access to status.storage.topic - resource: type: topic name: connect-cluster-status patternType: literal operation: Write host: "*" - resource: type: topic name: connect-cluster-status patternType: literal operation: Create host: "*" - resource: type: topic name: connect-cluster-status patternType: literal operation: Describe host: "*" - resource: type: topic name: connect-cluster-status patternType: literal operation: Read host: "*" # access to config.storage.topic - resource: type: topic name: connect-cluster-configs patternType: literal operation: Write host: "*" - resource: type: topic name: connect-cluster-configs patternType: literal operation: Create host: "*" - resource: type: topic name: connect-cluster-configs patternType: literal operation: Describe host: "*" - resource: type: topic name: connect-cluster-configs patternType: literal operation: Read host: "*" # consumer group - resource: type: group name: connect-cluster patternType: literal operation: Read host: "*" Create or update the resource. oc apply -f KAFKA-USER-CONFIG-FILE 2.2.4. List of Kafka Connect cluster resources The following resources are created by the Cluster Operator in the OpenShift cluster: connect-cluster-name -connect Deployment which is in charge to create the Kafka Connect worker node pods. connect-cluster-name -connect-api Service which exposes the REST interface for managing the Kafka Connect cluster. connect-cluster-name -config ConfigMap which contains the Kafka Connect ancillary configuration and is mounted as a volume by the Kafka broker pods. connect-cluster-name -connect Pod Disruption Budget configured for the Kafka Connect worker nodes. 2.2.5. List of Kafka Connect (S2I) cluster resources The following resources are created by the Cluster Operator in the OpenShift cluster: connect-cluster-name -connect-source ImageStream which is used as the base image for the newly-built Docker images. connect-cluster-name -connect BuildConfig which is responsible for building the new Kafka Connect Docker images. connect-cluster-name -connect ImageStream where the newly built Docker images will be pushed. connect-cluster-name -connect DeploymentConfig which is in charge of creating the Kafka Connect worker node pods. connect-cluster-name -connect-api Service which exposes the REST interface for managing the Kafka Connect cluster. connect-cluster-name -config ConfigMap which contains the Kafka Connect ancillary configuration and is mounted as a volume by the Kafka broker pods. connect-cluster-name -connect Pod Disruption Budget configured for the Kafka Connect worker nodes. 2.2.6. Integrating with Debezium for change data capture Red Hat Debezium is a distributed change data capture platform. It captures row-level changes in databases, creates change event records, and streams the records to Kafka topics. Debezium is built on Apache Kafka. You can deploy and integrate Debezium with AMQ Streams. Following a deployment of AMQ Streams, you deploy Debezium as a connector configuration through Kafka Connect. 
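As an illustration, a Debezium MySQL connector defined through a KafkaConnector resource might be sketched as follows. The class name is the Debezium MySQL connector class, but the connection details are placeholders and the exact set of configuration options depends on the Debezium version, so refer to the Debezium documentation for the authoritative configuration:

apiVersion: kafka.strimzi.io/v1alpha1
kind: KafkaConnector
metadata:
  name: inventory-connector
  labels:
    strimzi.io/cluster: my-connect-cluster
spec:
  class: io.debezium.connector.mysql.MySqlConnector
  tasksMax: 1
  config:
    # Placeholder connection details for the source database
    database.hostname: mysql
    database.port: "3306"
    database.user: debezium
    database.password: dbz
    database.server.id: "184054"
    database.server.name: dbserver1
    # Kafka topic used by Debezium to record the database schema history
    database.history.kafka.bootstrap.servers: my-cluster-kafka-bootstrap:9092
    database.history.kafka.topic: schema-changes.inventory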
Debezium passes change event records to AMQ Streams on OpenShift. Applications can read these change event streams and access the change events in the order in which they occurred. Debezium has multiple uses, including: Data replication Updating caches and search indexes Simplifying monolithic applications Data integration Enabling streaming queries To capture database changes, deploy Kafka Connect with a Debezium database connector . You configure a KafkaConnector resource to define the connector instance. For more information on deploying Debezium with AMQ Streams, refer to the product documentation . The Debezium documentation includes a Getting Started with Debezium guide that guides you through the process of setting up the services and connector required to view change event records for database updates. 2.3. Kafka MirrorMaker cluster configuration This chapter describes how to configure a Kafka MirrorMaker deployment in your AMQ Streams cluster to replicate data between Kafka clusters. You can use AMQ Streams with MirrorMaker or MirrorMaker 2.0 . MirrorMaker 2.0 is the latest version, and offers a more efficient way to mirror data between Kafka clusters. If you are using MirrorMaker, you configure the KafkaMirrorMaker resource. The following procedure shows how the resource is configured: Configuring Kafka MirrorMaker The full schema of the KafkaMirrorMaker resource is described in the KafkaMirrorMaker schema reference . 2.3.1. Configuring Kafka MirrorMaker Use the properties of the KafkaMirrorMaker resource to configure your Kafka MirrorMaker deployment. You can configure access control for producers and consumers using TLS or SASL authentication. This procedure shows a configuration that uses TLS encryption and authentication on the consumer and producer side. Prerequisites See the Deploying and Upgrading AMQ Streams on OpenShift guide for instructions on running a: Cluster Operator Kafka cluster Source and target Kafka clusters must be available Procedure Edit the spec properties for the KafkaMirrorMaker resource. 
The properties you can configure are shown in this example configuration: apiVersion: kafka.strimzi.io/v1beta1 kind: KafkaMirrorMaker metadata: name: my-mirror-maker spec: replicas: 3 1 consumer: bootstrapServers: my-source-cluster-kafka-bootstrap:9092 2 groupId: "my-group" 3 numStreams: 2 4 offsetCommitInterval: 120000 5 tls: 6 trustedCertificates: - secretName: my-source-cluster-ca-cert certificate: ca.crt authentication: 7 type: tls certificateAndKey: secretName: my-source-secret certificate: public.crt key: private.key config: 8 max.poll.records: 100 receive.buffer.bytes: 32768 ssl.cipher.suites: "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" 9 ssl.enabled.protocols: "TLSv1.2" ssl.protocol: "TLSv1.2" ssl.endpoint.identification.algorithm: HTTPS 10 producer: bootstrapServers: my-target-cluster-kafka-bootstrap:9092 abortOnSendFailure: false 11 tls: trustedCertificates: - secretName: my-target-cluster-ca-cert certificate: ca.crt authentication: type: tls certificateAndKey: secretName: my-target-secret certificate: public.crt key: private.key config: compression.type: gzip batch.size: 8192 ssl.cipher.suites: "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" 12 ssl.enabled.protocols: "TLSv1.2" ssl.protocol: "TLSv1.2" ssl.endpoint.identification.algorithm: HTTPS 13 whitelist: "my-topic|other-topic" 14 resources: 15 requests: cpu: "1" memory: 2Gi limits: cpu: "2" memory: 2Gi logging: 16 type: inline loggers: mirrormaker.root.logger: "INFO" readinessProbe: 17 initialDelaySeconds: 15 timeoutSeconds: 5 livenessProbe: initialDelaySeconds: 15 timeoutSeconds: 5 metrics: 18 lowercaseOutputName: true rules: - pattern: "kafka.server<type=(.+), name=(.+)PerSec\\w*><>Count" name: "kafka_server_USD1_USD2_total" - pattern: "kafka.server<type=(.+), name=(.+)PerSec\\w*, topic=(.+)><>Count" name: "kafka_server_USD1_USD2_total" labels: topic: "USD3" jvmOptions: 19 "-Xmx": "1g" "-Xms": "1g" image: my-org/my-image:latest 20 template: 21 pod: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: application operator: In values: - postgresql - mongodb topologyKey: "kubernetes.io/hostname" connectContainer: 22 env: - name: JAEGER_SERVICE_NAME value: my-jaeger-service - name: JAEGER_AGENT_HOST value: jaeger-agent-name - name: JAEGER_AGENT_PORT value: "6831" tracing: 23 type: jaeger 1 The number of replica nodes . 2 Bootstrap servers for consumer and producer. 3 Group ID for the consumer . 4 The number of consumer streams . 5 The offset auto-commit interval in milliseconds . 6 TLS encryption with key names under which TLS certificates are stored in X.509 format for consumer or producer. If certificates are stored in the same secret, it can be listed multiple times. 7 Authentication for consumer or producer, using the TLS mechanism , as shown here, using OAuth bearer tokens , or a SASL-based SCRAM-SHA-512 or PLAIN mechanism. 8 Kafka configuration options for consumer and producer . 9 SSL properties for external listeners to run with a specific cipher suite for a TLS version. 10 Hostname verification is enabled by setting to HTTPS . An empty string disables the verification. 11 If the abortOnSendFailure property is set to true , Kafka MirrorMaker will exit and the container will restart following a send failure for a message. 12 SSL properties for external listeners to run with a specific cipher suite for a TLS version. 13 Hostname verification is enabled by setting to HTTPS . An empty string disables the verification. 
14 A whitelist of topics mirrored from source to target Kafka cluster. 15 Requests for reservation of supported resources , currently cpu and memory , and limits to specify the maximum resources that can be consumed. 16 Specified loggers and log levels added directly ( inline ) or indirectly ( external ) through a ConfigMap. A custom ConfigMap must be placed under the log4j.properties or log4j2.properties key. MirrorMaker has a single logger called mirrormaker.root.logger . You can set the log level to INFO, ERROR, WARN, TRACE, DEBUG, FATAL or OFF. 17 Healthchecks to know when to restart a container (liveness) and when a container can accept traffic (readiness). 18 Prometheus metrics , which are enabled with configuration for the Prometheus JMX exporter in this example. You can enable metrics without further configuration using metrics: {} . 19 JVM configuration options to optimize performance for the Virtual Machine (VM) running Kafka MirrorMaker. 20 ADVANCED OPTION: Container image configuration , which is recommended only in special situations. 21 Template customization . Here a pod is scheduled with anti-affinity, so the pod is not scheduled on nodes with the same hostname. 22 Environment variables are also set for distributed tracing using Jaeger . 23 Distributed tracing is enabled for Jaeger . Warning With the abortOnSendFailure property set to false , the producer attempts to send the next message in a topic. The original message might be lost, as there is no attempt to resend a failed message. Create or update the resource: oc apply -f <your-file> 2.3.2. List of Kafka MirrorMaker cluster resources The following resources are created by the Cluster Operator in the OpenShift cluster: <mirror-maker-name> -mirror-maker Deployment which is responsible for creating the Kafka MirrorMaker pods. <mirror-maker-name> -config ConfigMap which contains ancillary configuration for the Kafka MirrorMaker, and is mounted as a volume by the Kafka MirrorMaker pods. <mirror-maker-name> -mirror-maker Pod Disruption Budget configured for the Kafka MirrorMaker worker nodes. 2.4. Kafka MirrorMaker 2.0 cluster configuration This section describes how to configure a Kafka MirrorMaker 2.0 deployment in your AMQ Streams cluster. MirrorMaker 2.0 is used to replicate data between two or more active Kafka clusters, within or across data centers. Data replication across clusters supports scenarios that require: Recovery of data in the event of a system failure Aggregation of data for analysis Restriction of data access to a specific cluster Provision of data at a specific location to improve latency If you are using MirrorMaker 2.0, you configure the KafkaMirrorMaker2 resource. MirrorMaker 2.0 introduces an entirely new way of replicating data between clusters. As a result, the resource configuration differs from the previous version of MirrorMaker. If you choose to use MirrorMaker 2.0, there is currently no legacy support, so any resources must be manually converted into the new format. How MirrorMaker 2.0 replicates data is described here: MirrorMaker 2.0 data replication The following procedure shows how the resource is configured for MirrorMaker 2.0: Synchronizing data between Kafka clusters The full schema of the KafkaMirrorMaker2 resource is described in the KafkaMirrorMaker2 schema reference . 2.4.1. MirrorMaker 2.0 data replication MirrorMaker 2.0 consumes messages from a source Kafka cluster and writes them to a target Kafka cluster.
MirrorMaker 2.0 uses: Source cluster configuration to consume data from the source cluster Target cluster configuration to output data to the target cluster MirrorMaker 2.0 is based on the Kafka Connect framework, connectors managing the transfer of data between clusters. A MirrorMaker 2.0 MirrorSourceConnector replicates topics from a source cluster to a target cluster. The process of mirroring data from one cluster to another cluster is asynchronous. The recommended pattern is for messages to be produced locally alongside the source Kafka cluster, then consumed remotely close to the target Kafka cluster. MirrorMaker 2.0 can be used with more than one source cluster. Figure 2.1. Replication across two clusters 2.4.2. Cluster configuration You can use MirrorMaker 2.0 in active/passive or active/active cluster configurations. In an active/active configuration, both clusters are active and provide the same data simultaneously, which is useful if you want to make the same data available locally in different geographical locations. In an active/passive configuration, the data from an active cluster is replicated in a passive cluster, which remains on standby, for example, for data recovery in the event of system failure. The expectation is that producers and consumers connect to active clusters only. A MirrorMaker 2.0 cluster is required at each target destination. 2.4.2.1. Bidirectional replication (active/active) The MirrorMaker 2.0 architecture supports bidirectional replication in an active/active cluster configuration. Each cluster replicates the data of the other cluster using the concept of source and remote topics. As the same topics are stored in each cluster, remote topics are automatically renamed by MirrorMaker 2.0 to represent the source cluster. The name of the originating cluster is prepended to the name of the topic. Figure 2.2. Topic renaming By flagging the originating cluster, topics are not replicated back to that cluster. The concept of replication through remote topics is useful when configuring an architecture that requires data aggregation. Consumers can subscribe to source and remote topics within the same cluster, without the need for a separate aggregation cluster. 2.4.2.2. Unidirectional replication (active/passive) The MirrorMaker 2.0 architecture supports unidirectional replication in an active/passive cluster configuration. You can use an active/passive cluster configuration to make backups or migrate data to another cluster. In this situation, you might not want automatic renaming of remote topics. You can override automatic renaming by adding IdentityReplicationPolicy to the source connector configuration of the KafkaMirrorMaker2 resource. With this configuration applied, topics retain their original names. 2.4.2.3. Topic configuration synchronization Topic configuration is automatically synchronized between source and target clusters. By synchronizing configuration properties, the need for rebalancing is reduced. 2.4.2.4. Data integrity MirrorMaker 2.0 monitors source topics and propagates any configuration changes to remote topics, checking for and creating missing partitions. Only MirrorMaker 2.0 can write to remote topics. 2.4.2.5. Offset tracking MirrorMaker 2.0 tracks offsets for consumer groups using internal topics . 
The offset sync topic maps the source and target offsets for replicated topic partitions from record metadata. The checkpoint topic maps the last committed offset in the source and target cluster for replicated topic partitions in each consumer group. Offsets for the checkpoint topic are tracked at predetermined intervals through configuration. Both topics enable replication to be fully restored from the correct offset position on failover. MirrorMaker 2.0 uses its MirrorCheckpointConnector to emit checkpoints for offset tracking. 2.4.2.6. Connectivity checks A heartbeat internal topic checks connectivity between clusters. The heartbeat topic is replicated from the source cluster. Target clusters use the topic to check: The connector managing connectivity between clusters is running The source cluster is available MirrorMaker 2.0 uses its MirrorHeartbeatConnector to emit heartbeats that perform these checks. 2.4.3. ACL rules synchronization ACL access to remote topics is possible if you are not using the User Operator. If AclAuthorizer is being used, without the User Operator, ACL rules that manage access to brokers also apply to remote topics. Users that can read a source topic can read its remote equivalent. Note OAuth 2.0 authorization does not support access to remote topics in this way. 2.4.4. Synchronizing data between Kafka clusters using MirrorMaker 2.0 Use MirrorMaker 2.0 to synchronize data between Kafka clusters through configuration. The configuration must specify: Each Kafka cluster Connection information for each cluster, including TLS authentication The replication flow and direction Cluster to cluster Topic to topic Use the properties of the KafkaMirrorMaker2 resource to configure your Kafka MirrorMaker 2.0 deployment. Note The previous version of MirrorMaker continues to be supported. If you wish to use the resources configured for the previous version, they must be updated to the format supported by MirrorMaker 2.0. MirrorMaker 2.0 provides default configuration values for properties such as replication factors. A minimal configuration, with defaults left unchanged, would be something like this example: apiVersion: kafka.strimzi.io/v1alpha1 kind: KafkaMirrorMaker2 metadata: name: my-mirror-maker2 spec: version: 2.6.0 connectCluster: "my-cluster-target" clusters: - alias: "my-cluster-source" bootstrapServers: my-cluster-source-kafka-bootstrap:9092 - alias: "my-cluster-target" bootstrapServers: my-cluster-target-kafka-bootstrap:9092 mirrors: - sourceCluster: "my-cluster-source" targetCluster: "my-cluster-target" sourceConnector: {} You can configure access control for source and target clusters using TLS or SASL authentication. This procedure shows a configuration that uses TLS encryption and authentication for the source and target cluster. Prerequisites See the Deploying and Upgrading AMQ Streams on OpenShift guide for instructions on running a: Cluster Operator Kafka cluster Source and target Kafka clusters must be available Procedure Edit the spec properties for the KafkaMirrorMaker2 resource.
The properties you can configure are shown in this example configuration: apiVersion: kafka.strimzi.io/v1alpha1 kind: KafkaMirrorMaker2 metadata: name: my-mirror-maker2 spec: version: 2.6.0 1 replicas: 3 2 connectCluster: "my-cluster-target" 3 clusters: 4 - alias: "my-cluster-source" 5 authentication: 6 certificateAndKey: certificate: source.crt key: source.key secretName: my-user-source type: tls bootstrapServers: my-cluster-source-kafka-bootstrap:9092 7 tls: 8 trustedCertificates: - certificate: ca.crt secretName: my-cluster-source-cluster-ca-cert - alias: "my-cluster-target" 9 authentication: 10 certificateAndKey: certificate: target.crt key: target.key secretName: my-user-target type: tls bootstrapServers: my-cluster-target-kafka-bootstrap:9092 11 config: 12 config.storage.replication.factor: 1 offset.storage.replication.factor: 1 status.storage.replication.factor: 1 ssl.cipher.suites: "TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384" 13 ssl.enabled.protocols: "TLSv1.2" ssl.protocol: "TLSv1.2" ssl.endpoint.identification.algorithm: HTTPS 14 tls: 15 trustedCertificates: - certificate: ca.crt secretName: my-cluster-target-cluster-ca-cert mirrors: 16 - sourceCluster: "my-cluster-source" 17 targetCluster: "my-cluster-target" 18 sourceConnector: 19 config: replication.factor: 1 20 offset-syncs.topic.replication.factor: 1 21 sync.topic.acls.enabled: "false" 22 replication.policy.separator: "" 23 replication.policy.class: "io.strimzi.kafka.connect.mirror.IdentityReplicationPolicy" 24 heartbeatConnector: 25 config: heartbeats.topic.replication.factor: 1 26 checkpointConnector: 27 config: checkpoints.topic.replication.factor: 1 28 topicsPattern: ".*" 29 groupsPattern: "group1|group2|group3" 30 resources: 31 requests: cpu: "1" memory: 2Gi limits: cpu: "2" memory: 2Gi logging: 32 type: inline loggers: connect.root.logger.level: "INFO" readinessProbe: 33 initialDelaySeconds: 15 timeoutSeconds: 5 livenessProbe: initialDelaySeconds: 15 timeoutSeconds: 5 jvmOptions: 34 "-Xmx": "1g" "-Xms": "1g" image: my-org/my-image:latest 35 template: 36 pod: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: application operator: In values: - postgresql - mongodb topologyKey: "kubernetes.io/hostname" connectContainer: 37 env: - name: JAEGER_SERVICE_NAME value: my-jaeger-service - name: JAEGER_AGENT_HOST value: jaeger-agent-name - name: JAEGER_AGENT_PORT value: "6831" tracing: type: jaeger 38 externalConfiguration: 39 env: - name: AWS_ACCESS_KEY_ID valueFrom: secretKeyRef: name: aws-creds key: awsAccessKey - name: AWS_SECRET_ACCESS_KEY valueFrom: secretKeyRef: name: aws-creds key: awsSecretAccessKey 1 The Kafka Connect version . 2 The number of replica nodes . 3 Cluster alias for Kafka Connect. 4 Specification for the Kafka clusters being synchronized. 5 Cluster alias for the source Kafka cluster. 6 Authentication for the source cluster, using the TLS mechanism , as shown here, using OAuth bearer tokens , or a SASL-based SCRAM-SHA-512 or PLAIN mechanism. 7 Bootstrap server for connection to the source Kafka cluster. 8 TLS encryption with key names under which TLS certificates are stored in X.509 format for the source Kafka cluster. If certificates are stored in the same secret, it can be listed multiple times. 9 Cluster alias for the target Kafka cluster. 10 Authentication for the target Kafka cluster is configured in the same way as for the source Kafka cluster. 11 Bootstrap server for connection to the target Kafka cluster. 12 Kafka Connect configuration . 
Standard Apache Kafka configuration may be provided, restricted to those properties not managed directly by AMQ Streams. 13 SSL properties for external listeners to run with a specific cipher suite for a TLS version. 14 Hostname verification is enabled by setting to HTTPS . An empty string disables the verification. 15 TLS encryption for the target Kafka cluster is configured in the same way as for the source Kafka cluster. 16 MirrorMaker 2.0 connectors . 17 Cluster alias for the source cluster used by the MirrorMaker 2.0 connectors. 18 Cluster alias for the target cluster used by the MirrorMaker 2.0 connectors. 19 Configuration for the MirrorSourceConnector that creates remote topics. The config overrides the default configuration options. 20 Replication factor for mirrored topics created at the target cluster. 21 Replication factor for the MirrorSourceConnector offset-syncs internal topic that maps the offsets of the source and target clusters. 22 When ACL rules synchronization is enabled, ACLs are applied to synchronized topics. The default is true . 23 Defines the separator used for the renaming of remote topics. 24 Adds a policy that overrides the automatic renaming of remote topics. Instead of prepending the name with the name of the source cluster, the topic retains its original name. This optional setting is useful for active/passive backups and data migration. 25 Configuration for the MirrorHeartbeatConnector that performs connectivity checks. The config overrides the default configuration options. 26 Replication factor for the heartbeat topic created at the target cluster. 27 Configuration for the MirrorCheckpointConnector that tracks offsets. The config overrides the default configuration options. 28 Replication factor for the checkpoints topic created at the target cluster. 29 Topic replication from the source cluster defined as regular expression patterns . Here we request all topics. 30 Consumer group replication from the source cluster defined as regular expression patterns . Here we request three consumer groups by name. You can use comma-separated lists. 31 Requests for reservation of supported resources , currently cpu and memory , and limits to specify the maximum resources that can be consumed. 32 Specified Kafka Connect loggers and log levels added directly ( inline ) or indirectly ( external ) through a ConfigMap. A custom ConfigMap must be placed under the log4j.properties or log4j2.properties key. For the Kafka Connect log4j.rootLogger logger, you can set the log level to INFO, ERROR, WARN, TRACE, DEBUG, FATAL or OFF. 33 Healthchecks to know when to restart a container (liveness) and when a container can accept traffic (readiness). 34 JVM configuration options to optimize performance for the Virtual Machine (VM) running Kafka MirrorMaker. 35 ADVANCED OPTION: Container image configuration , which is recommended only in special situations. 36 Template customization . Here a pod is scheduled with anti-affinity, so the pod is not scheduled on nodes with the same hostname. 37 Environment variables are also set for distributed tracing using Jaeger . 38 Distributed tracing is enabled for Jaeger . 39 External configuration for an OpenShift Secret mounted to Kafka MirrorMaker as an environment variable. Create or update the resource: oc apply -f <your-file> 2.5. Kafka Bridge cluster configuration This section describes how to configure a Kafka Bridge deployment in your AMQ Streams cluster. Kafka Bridge provides an API for integrating HTTP-based clients with a Kafka cluster. 
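To illustrate the kind of HTTP-based access the Kafka Bridge provides, the following is a minimal sketch of producing a record through the bridge REST interface with curl. It assumes a bridge named my-bridge exposing HTTP on port 8080, as in the example configuration that follows, a topic named my-topic, and that the request is made from inside the OpenShift cluster where the bridge service name resolves; the topic name and record payload are placeholders rather than values taken from this documentation.

# Send a JSON-encoded record to the topic "my-topic" through the bridge service
curl -X POST http://my-bridge-bridge-service:8080/topics/my-topic \
  -H 'Content-Type: application/vnd.kafka.json.v2+json' \
  -d '{"records":[{"key":"order-1","value":{"status":"created"}}]}'

The bridge responds with the partition and offset assigned to each record, which is a quick way to confirm that the bridge can reach the Kafka cluster before wiring up a real HTTP client.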
If you are using the Kafka Bridge, you configure the KafkaBridge resource. The full schema of the KafkaBridge resource is described in Section B.121, " KafkaBridge schema reference" . 2.5.1. Configuring the Kafka Bridge Use the Kafka Bridge to make HTTP-based requests to the Kafka cluster. Use the properties of the KafkaBridge resource to configure your Kafka Bridge deployment. To prevent issues arising when client consumer requests are processed by different Kafka Bridge instances, address-based routing must be employed to ensure that requests are routed to the right Kafka Bridge instance. Additionally, each independent Kafka Bridge instance must have a replica. A Kafka Bridge instance has its own state which is not shared with other instances. Prerequisites An OpenShift cluster A running Cluster Operator See the Deploying and Upgrading AMQ Streams on OpenShift guide for instructions on running a: Cluster Operator Kafka cluster Procedure Edit the spec properties for the KafkaBridge resource. The properties you can configure are shown in this example configuration: apiVersion: kafka.strimzi.io/v1alpha1 kind: KafkaBridge metadata: name: my-bridge spec: replicas: 3 1 bootstrapServers: my-cluster-kafka-bootstrap:9092 2 tls: 3 trustedCertificates: - secretName: my-cluster-cluster-cert certificate: ca.crt - secretName: my-cluster-cluster-cert certificate: ca2.crt authentication: 4 type: tls certificateAndKey: secretName: my-secret certificate: public.crt key: private.key http: 5 port: 8080 cors: 6 allowedOrigins: "https://strimzi.io" allowedMethods: "GET,POST,PUT,DELETE,OPTIONS,PATCH" consumer: 7 config: auto.offset.reset: earliest producer: 8 config: delivery.timeout.ms: 300000 resources: 9 requests: cpu: "1" memory: 2Gi limits: cpu: "2" memory: 2Gi logging: 10 type: inline loggers: logger.bridge.level: "INFO" # enabling DEBUG just for send operation logger.send.name: "http.openapi.operation.send" logger.send.level: "DEBUG" jvmOptions: 11 "-Xmx": "1g" "-Xms": "1g" readinessProbe: 12 initialDelaySeconds: 15 timeoutSeconds: 5 livenessProbe: initialDelaySeconds: 15 timeoutSeconds: 5 image: my-org/my-image:latest 13 template: 14 pod: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: application operator: In values: - postgresql - mongodb topologyKey: "kubernetes.io/hostname" bridgeContainer: 15 env: - name: JAEGER_SERVICE_NAME value: my-jaeger-service - name: JAEGER_AGENT_HOST value: jaeger-agent-name - name: JAEGER_AGENT_PORT value: "6831" 1 The number of replica nodes . 2 Bootstrap server for connection to the Kafka cluster. 3 TLS encryption with key names under which TLS certificates are stored in X.509 format for the Kafka cluster. If certificates are stored in the same secret, it can be listed multiple times. 4 Authentication for the Kafka Bridge cluster, using the TLS mechanism , as shown here, using OAuth bearer tokens , or a SASL-based SCRAM-SHA-512 or PLAIN mechanism. By default, the Kafka Bridge connects to Kafka brokers without authentication. 5 HTTP access to Kafka brokers. 6 CORS access specifying selected resources and access methods. Additional HTTP headers in requests describe the origins that are permitted access to the Kafka cluster . 7 Consumer configuration options. 8 Producer configuration options. 9 Requests for reservation of supported resources , currently cpu and memory , and limits to specify the maximum resources that can be consumed.
10 Specified Kafka Bridge loggers and log levels added directly ( inline ) or indirectly ( external ) through a ConfigMap. A custom ConfigMap must be placed under the log4j.properties or log4j2.properties key. For the Kafka Bridge loggers, you can set the log level to INFO, ERROR, WARN, TRACE, DEBUG, FATAL or OFF. 11 JVM configuration options to optimize performance for the Virtual Machine (VM) running the Kafka Bridge. 12 Healthchecks to know when to restart a container (liveness) and when a container can accept traffic (readiness). 13 ADVANCED OPTION: Container image configuration , which is recommended only in special situations. 14 Template customization . Here a pod is scheduled with anti-affinity, so the pod is not scheduled on nodes with the same hostname. 15 Environment variables are also set for distributed tracing using Jaeger . Create or update the resource: oc apply -f KAFKA-BRIDGE-CONFIG-FILE 2.5.2. List of Kafka Bridge cluster resources The following resources are created by the Cluster Operator in the OpenShift cluster: bridge-cluster-name -bridge Deployment which is responsible for creating the Kafka Bridge worker node pods. bridge-cluster-name -bridge-service Service which exposes the REST interface of the Kafka Bridge cluster. bridge-cluster-name -bridge-config ConfigMap which contains the Kafka Bridge ancillary configuration and is mounted as a volume by the Kafka Bridge pods. bridge-cluster-name -bridge Pod Disruption Budget configured for the Kafka Bridge worker nodes. 2.6. Customizing OpenShift resources AMQ Streams creates several OpenShift resources, such as Deployments , StatefulSets , Pods , and Services , which are managed by AMQ Streams operators. Only the operator that is responsible for managing a particular OpenShift resource can change that resource. If you try to manually change an operator-managed OpenShift resource, the operator will revert your changes. However, changing an operator-managed OpenShift resource can be useful if you want to perform certain tasks, such as: Adding custom labels or annotations that control how Pods are treated by Istio or other services Managing how LoadBalancer -type Services are created by the cluster You can make such changes using the template property in the AMQ Streams custom resources. The template property is supported in the following resources. The API reference provides more details about the customizable fields. Kafka.spec.kafka See Section B.53, " KafkaClusterTemplate schema reference" Kafka.spec.zookeeper See Section B.63, " ZookeeperClusterTemplate schema reference" Kafka.spec.entityOperator See Section B.68, " EntityOperatorTemplate schema reference" Kafka.spec.kafkaExporter See Section B.74, " KafkaExporterTemplate schema reference" Kafka.spec.cruiseControl See Section B.71, " CruiseControlTemplate schema reference" KafkaConnect.spec See Section B.88, " KafkaConnectTemplate schema reference" KafkaConnectS2I.spec See Section B.88, " KafkaConnectTemplate schema reference" KafkaMirrorMaker.spec See Section B.119, " KafkaMirrorMakerTemplate schema reference" KafkaMirrorMaker2.spec See Section B.88, " KafkaConnectTemplate schema reference" KafkaBridge.spec See Section B.128, " KafkaBridgeTemplate schema reference" KafkaUser.spec See Section B.112, " KafkaUserTemplate schema reference" In the following example, the template property is used to modify the labels in a Kafka broker's StatefulSet : apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka metadata: name: my-cluster labels: app: my-cluster spec: kafka: # ...
template: statefulset: metadata: labels: mylabel: myvalue # ... 2.6.1. Customizing the image pull policy AMQ Streams allows you to customize the image pull policy for containers in all pods deployed by the Cluster Operator. The image pull policy is configured using the environment variable STRIMZI_IMAGE_PULL_POLICY in the Cluster Operator deployment. The STRIMZI_IMAGE_PULL_POLICY environment variable can be set to three different values: Always Container images are pulled from the registry every time the pod is started or restarted. IfNotPresent Container images are pulled from the registry only when they were not pulled before. Never Container images are never pulled from the registry. The image pull policy can currently be customized only for all Kafka, Kafka Connect, and Kafka MirrorMaker clusters at once. Changing the policy will result in a rolling update of all your Kafka, Kafka Connect, and Kafka MirrorMaker clusters. Additional resources For more information about Cluster Operator configuration, see Section 5.1, "Using the Cluster Operator" . For more information about Image Pull Policies, see Disruptions . 2.7. External logging When setting the logging levels for a resource, you can specify them inline directly in the spec.logging property of the resource YAML: spec: # ... logging: type: inline loggers: kafka.root.logger.level: "INFO" Or you can specify external logging: spec: # ... logging: type: external name: customConfigMap With external logging, logging properties are defined in a ConfigMap. The name of the ConfigMap is referenced in the spec.logging.name property. The advantages of using a ConfigMap are that the logging properties are maintained in one place and are accessible to more than one resource. 2.7.1. Creating a ConfigMap for logging To use a ConfigMap to define logging properties, you create the ConfigMap and then reference it as part of the logging definition in the spec of a resource. The ConfigMap must contain the appropriate logging configuration. log4j.properties for Kafka components, ZooKeeper, and the Kafka Bridge log4j2.properties for the Topic Operator and User Operator The configuration must be placed under these properties. Here we demonstrate how a ConfigMap defines a root logger for a Kafka resource. Procedure Create the ConfigMap. You can create the ConfigMap as a YAML file or from a properties file using oc at the command line. ConfigMap example with a root logger definition for Kafka: kind: ConfigMap apiVersion: v1 metadata: name: logging-configmap data: log4j.properties: kafka.root.logger.level="INFO" From the command line, using a properties file: oc create configmap logging-configmap --from-file=log4j.properties The properties file defines the logging configuration: # Define the logger kafka.root.logger.level="INFO" # ... Define external logging in the spec of the resource, setting the logging.name to the name of the ConfigMap. spec: # ... logging: type: external name: logging-configmap Create or update the resource. oc apply -f kafka.yaml
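The procedure above shows a root logger defined under the log4j.properties key for Kafka. For the Topic Operator and User Operator, which read the log4j2.properties key, a ConfigMap could look like the following minimal sketch; the ConfigMap name is a placeholder, the rootLogger.level setting mirrors the inline logging examples elsewhere in this chapter, and a complete log4j2 configuration would normally also define appenders.

kind: ConfigMap
apiVersion: v1
metadata:
  name: operator-logging-configmap
data:
  log4j2.properties: |
    rootLogger.level = INFO

The ConfigMap would then be referenced from the operator's logging definition in the same way as shown above, for example:

spec:
  # ...
  entityOperator:
    topicOperator:
      logging:
        type: external
        name: operator-logging-configmap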
[ "apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka metadata: name: my-cluster spec: kafka: replicas: 3 1 version: 1.6 2 resources: 3 requests: memory: 64Gi cpu: \"8\" limits: 4 memory: 64Gi cpu: \"12\" jvmOptions: 5 -Xms: 8192m -Xmx: 8192m listeners: 6 - name: plain 7 port: 9092 8 type: internal 9 tls: false 10 configuration: useServiceDnsDomain: true 11 - name: tls port: 9093 type: internal tls: true authentication: 12 type: tls - name: external 13 port: 9094 type: route tls: true configuration: brokerCertChainAndKey: 14 secretName: my-secret certificate: my-certificate.crt key: my-key.key authorization: 15 type: simple config: 16 auto.create.topics.enable: \"false\" offsets.topic.replication.factor: 3 transaction.state.log.replication.factor: 3 transaction.state.log.min.isr: 2 ssl.cipher.suites: \"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\" 17 ssl.enabled.protocols: \"TLSv1.2\" ssl.protocol: \"TLSv1.2\" storage: 18 type: persistent-claim 19 size: 10000Gi 20 rack: 21 topologyKey: topology.kubernetes.io/zone metrics: 22 lowercaseOutputName: true rules: 23 # Special cases and very specific rules - pattern : kafka.server<type=(.+), name=(.+), clientId=(.+), topic=(.+), partition=(.*)><>Value name: kafka_server_USD1_USD2 type: GAUGE labels: clientId: \"USD3\" topic: \"USD4\" partition: \"USD5\" # zookeeper: 24 replicas: 3 resources: requests: memory: 8Gi cpu: \"2\" limits: memory: 8Gi cpu: \"2\" jvmOptions: -Xms: 4096m -Xmx: 4096m storage: type: persistent-claim size: 1000Gi metrics: # entityOperator: 25 topicOperator: resources: requests: memory: 512Mi cpu: \"1\" limits: memory: 512Mi cpu: \"1\" userOperator: resources: requests: memory: 512Mi cpu: \"1\" limits: memory: 512Mi cpu: \"1\" kafkaExporter: 26 # cruiseControl: 27 #", "apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka metadata: name: my-cluster spec: kafka: # storage: type: ephemeral # zookeeper: # storage: type: ephemeral #", "storage: type: persistent-claim size: 1000Gi", "storage: type: persistent-claim size: 1Gi class: my-storage-class", "storage: type: persistent-claim size: 1Gi selector: hdd-type: ssd deleteClaim: true", "apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka metadata: labels: app: my-cluster name: my-cluster namespace: myproject spec: # kafka: replicas: 3 storage: deleteClaim: true size: 100Gi type: persistent-claim class: my-storage-class overrides: - broker: 0 class: my-storage-class-zone-1a - broker: 1 class: my-storage-class-zone-1b - broker: 2 class: my-storage-class-zone-1c # zookeeper: replicas: 3 storage: deleteClaim: true size: 100Gi type: persistent-claim class: my-storage-class overrides: - broker: 0 class: my-storage-class-zone-1a - broker: 1 class: my-storage-class-zone-1b - broker: 2 class: my-storage-class-zone-1c #", "apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka metadata: name: my-cluster spec: kafka: # storage: type: persistent-claim size: 2000Gi class: my-storage-class # zookeeper: #", "apply -f your-file", "storage: type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false - id: 1 type: persistent-claim size: 100Gi deleteClaim: false", "apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka metadata: name: my-cluster spec: kafka: # storage: type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false - id: 1 type: persistent-claim size: 100Gi deleteClaim: false - id: 2 type: persistent-claim size: 100Gi deleteClaim: false # zookeeper: #", "apply -f KAFKA-CONFIG-FILE", "apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka metadata: name: my-cluster spec: kafka: 
# storage: type: jbod volumes: - id: 0 type: persistent-claim size: 100Gi deleteClaim: false # zookeeper: #", "apply -f your-file", "apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka metadata: name: my-cluster spec: kafka: # replicas: 3 # zookeeper: #", "apply -f your-file", "apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka spec: kafka: # config: default.replication.factor: 3 offsets.topic.replication.factor: 3 transaction.state.log.replication.factor: 3 transaction.state.log.min.isr: 1 # zookeeper: #", "apply -f kafka.yaml", "apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka metadata: name: my-cluster spec: kafka: # zookeeper: # replicas: 3 #", "apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka metadata: name: my-cluster spec: kafka: # zookeeper: # replicas: 3 #", "apply -f kafka.yaml", "apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka spec: kafka: # zookeeper: # config: autopurge.snapRetainCount: 3 autopurge.purgeInterval: 1 ssl.cipher.suites: \"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\" 1 ssl.enabled.protocols: \"TLSv1.2\" 2 ssl.protocol: \"TLSv1.2\" 3 #", "apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka spec: kafka: # zookeeper: # config: autopurge.snapRetainCount: 3 autopurge.purgeInterval: 1 #", "apply -f kafka.yaml", "exec -it my-cluster -zookeeper-0 -- bin/kafka-topics.sh --list --zookeeper localhost:12181", "apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka metadata: name: my-cluster spec: kafka: # zookeeper: # entityOperator: topicOperator: {} userOperator: {}", "apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka metadata: name: my-cluster spec: kafka: # zookeeper: # entityOperator: # topicOperator: watchedNamespace: my-topic-namespace reconciliationIntervalSeconds: 60 #", "apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka metadata: name: my-cluster spec: kafka: # zookeeper: # entityOperator: # userOperator: watchedNamespace: my-user-namespace reconciliationIntervalSeconds: 60 #", "apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka metadata: name: my-cluster spec: kafka: # zookeeper: # entityOperator: # topicOperator: watchedNamespace: my-topic-namespace reconciliationIntervalSeconds: 60 logging: type: inline loggers: rootLogger.level: INFO # userOperator: watchedNamespace: my-topic-namespace reconciliationIntervalSeconds: 60 logging: type: inline loggers: rootLogger.level: INFO", "apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka metadata: name: my-cluster spec: kafka: # zookeeper: # entityOperator: # topicOperator: watchedNamespace: my-topic-namespace reconciliationIntervalSeconds: 60 logging: type: external name: customConfigMap", "apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka metadata: name: my-cluster spec: kafka: # zookeeper: # entityOperator: topicOperator: watchedNamespace: my-topic-namespace reconciliationIntervalSeconds: 60 userOperator: watchedNamespace: my-user-namespace reconciliationIntervalSeconds: 60", "apply -f your-file", "resources: requests: cpu: 12 memory: 64Gi", "resources: limits: cpu: 12 memory: 64Gi", "resources: requests: cpu: 500m limits: cpu: 2.5", "resources: requests: memory: 512Mi limits: memory: 2Gi", "apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka spec: kafka: # resources: requests: cpu: \"8\" memory: 64Gi limits: cpu: \"12\" memory: 128Gi # zookeeper: #", "apply -f your-file", "apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka spec: # kafka: # logging: type: inline loggers: kafka.root.logger.level: \"INFO\" # zookeeper: # logging: type: inline loggers: zookeeper.root.logger: \"INFO\" # entityOperator: # topicOperator: # logging: type: inline loggers: 
rootLogger.level: INFO # userOperator: # logging: type: inline loggers: rootLogger.level: INFO #", "apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka spec: # logging: type: external name: customConfigMap #", "apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka metadata: name: my-cluster spec: kafka: # rack: topologyKey: topology.kubernetes.io/zone #", "apply -f your-file", "readinessProbe: initialDelaySeconds: 15 timeoutSeconds: 5 livenessProbe: initialDelaySeconds: 15 timeoutSeconds: 5", "apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka metadata: name: my-cluster spec: kafka: # readinessProbe: initialDelaySeconds: 15 timeoutSeconds: 5 livenessProbe: initialDelaySeconds: 15 timeoutSeconds: 5 # zookeeper: #", "apply -f your-file", "apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka metadata: name: my-cluster spec: kafka: # metrics: {} # zookeeper: #", "apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka metadata: name: my-cluster spec: kafka: # metrics: lowercaseOutputName: true rules: - pattern: \"kafka.server<type=(.+), name=(.+)PerSec\\\\w*><>Count\" name: \"kafka_server_USD1_USD2_total\" - pattern: \"kafka.server<type=(.+), name=(.+)PerSec\\\\w*, topic=(.+)><>Count\" name: \"kafka_server_USD1_USD2_total\" labels: topic: \"USD3\" # zookeeper: #", "apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka metadata: name: my-cluster spec: kafka: # zookeeper: # metrics: lowercaseOutputName: true #", "apply -f your-file", "apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka metadata: name: my-cluster spec: kafka: # jmxOptions: authentication: type: \"password\" # zookeeper: #", "\" <cluster-name> -kafka-0- <cluster-name> - <headless-service-name> \"", "apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka metadata: name: my-cluster spec: kafka: # jmxOptions: {} # zookeeper: #", "apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka metadata: name: my-cluster spec: kafka: # jvmOptions: \"-Xmx\": \"8g\" \"-Xms\": \"8g\" # zookeeper: #", "apply -f your-file", "apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka metadata: name: my-cluster spec: kafka: # image: my-org/my-image:latest # zookeeper: #", "apply -f your-file", "apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka metadata: name: my-cluster spec: kafka: # tlsSidecar: image: my-org/my-image:latest resources: requests: cpu: 200m memory: 64Mi limits: cpu: 500m memory: 128Mi logLevel: debug readinessProbe: initialDelaySeconds: 15 timeoutSeconds: 5 livenessProbe: initialDelaySeconds: 15 timeoutSeconds: 5 # zookeeper: #", "apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka metadata: name: my-cluster spec: kafka: # zookeeper: # entityOperator: # tlsSidecar: resources: requests: cpu: 200m memory: 64Mi limits: cpu: 500m memory: 128Mi # cruiseControl: # tlsSidecar: resources: requests: cpu: 200m memory: 64Mi limits: cpu: 500m memory: 128Mi #", "apply -f your-file", "apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka spec: kafka: # template: pod: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: application operator: In values: - postgresql - mongodb topologyKey: \"kubernetes.io/hostname\" # zookeeper: #", "apply -f your-file", "label node your-node node-type=fast-network", "apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka spec: kafka: # template: pod: affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: node-type operator: In values: - fast-network # zookeeper: #", "apply -f your-file", "adm taint node your-node dedicated=Kafka:NoSchedule", "label node 
your-node dedicated=Kafka", "apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka spec: kafka: # template: pod: tolerations: - key: \"dedicated\" operator: \"Equal\" value: \"Kafka\" effect: \"NoSchedule\" affinity: nodeAffinity: requiredDuringSchedulingIgnoredDuringExecution: nodeSelectorTerms: - matchExpressions: - key: dedicated operator: In values: - Kafka # zookeeper: #", "apply -f your-file", "annotate statefulset cluster-name -kafka strimzi.io/manual-rolling-update=true", "annotate statefulset cluster-name -zookeeper strimzi.io/manual-rolling-update=true", "{ \"version\": 1, \"partitions\": [ <PartitionObjects> ] }", "{ \"topic\": <TopicName> , \"partition\": <Partition> , \"replicas\": [ <AssignedBrokerIds> ] }", "{ \"version\": 1, \"partitions\": [ { \"topic\": \"topic-a\", \"partition\": 4, \"replicas\": [2,4,7] }, { \"topic\": \"topic-b\", \"partition\": 2, \"replicas\": [1,5,7] } ] }", "{ \"topic\": <TopicName> , \"partition\": <Partition> , \"replicas\": [ <AssignedBrokerIds> ], \"log_dirs\": [ <AssignedLogDirs> ] }", "{ \"topic\": \"topic-a\", \"partition\": 4, \"replicas\": [2,4,7]. \"log_dirs\": [ \"/var/lib/kafka/data-0/kafka-log2\", \"/var/lib/kafka/data-0/kafka-log4\", \"/var/lib/kafka/data-0/kafka-log7\" ] }", "{ \"version\": 1, \"topics\": [ <TopicObjects> ] }", "{ \"topic\": <TopicName> }", "{ \"version\": 1, \"topics\": [ { \"topic\": \"topic-a\"}, { \"topic\": \"topic-b\"} ] }", "cat topics.json | oc exec -c kafka <BrokerPod> -i -- /bin/bash -c 'cat > /tmp/topics.json'", "exec <BrokerPod> -c kafka -it -- bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 --topics-to-move-json-file /tmp/topics.json --broker-list <BrokerList> --generate", "exec <BrokerPod> -c kafka -it -- bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 --topics-to-move-json-file /tmp/topics.json --broker-list 4,7 --generate", "cat reassignment.json | oc exec broker-pod -c kafka -i -- /bin/bash -c 'cat > /tmp/reassignment.json'", "cat reassignment.json | oc exec my-cluster-kafka-0 -c kafka -i -- /bin/bash -c 'cat > /tmp/reassignment.json'", "exec broker-pod -c kafka -it -- bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 --reassignment-json-file /tmp/reassignment.json --execute", "exec my-cluster-kafka-0 -c kafka -it -- bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 --reassignment-json-file /tmp/reassignment.json --throttle 5000000 --execute", "exec my-cluster-kafka-0 -c kafka -it -- bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 --reassignment-json-file /tmp/reassignment.json --throttle 10000000 --execute", "exec broker-pod -c kafka -it -- bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 --reassignment-json-file /tmp/reassignment.json --verify", "exec my-cluster-kafka-0 -c kafka -it -- bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 --reassignment-json-file /tmp/reassignment.json --verify", "cat reassignment.json | oc exec broker-pod -c kafka -i -- /bin/bash -c 'cat > /tmp/reassignment.json'", "cat reassignment.json | oc exec my-cluster-kafka-0 -c kafka -i -- /bin/bash -c 'cat > /tmp/reassignment.json'", "exec broker-pod -c kafka -it -- bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 --reassignment-json-file /tmp/reassignment.json --execute", "exec my-cluster-kafka-0 -c kafka -it -- bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 --reassignment-json-file /tmp/reassignment.json --throttle 5000000 --execute", "exec my-cluster-kafka-0 -c kafka 
-it -- bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 --reassignment-json-file /tmp/reassignment.json --throttle 10000000 --execute", "exec broker-pod -c kafka -it -- bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 --reassignment-json-file /tmp/reassignment.json --verify", "exec my-cluster-kafka-0 -c kafka -it -- bin/kafka-reassign-partitions.sh --bootstrap-server localhost:9092 --reassignment-json-file /tmp/reassignment.json --verify", "exec my-cluster-kafka-0 -c kafka -it -- /bin/bash -c \"ls -l /var/lib/kafka/kafka-log_<N>_ | grep -E '^d' | grep -vE '[a-zA-Z0-9.-]+\\.[a-z0-9]+-deleteUSD'\"", "annotate pod cluster-name -kafka- index strimzi.io/delete-pod-and-pvc=true", "annotate pod cluster-name -zookeeper- index strimzi.io/delete-pod-and-pvc=true", "maintenanceTimeWindows: - \"* * 0-1 ? * SUN,MON,TUE,WED,THU *\"", "apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka metadata: name: my-cluster spec: kafka: # zookeeper: # maintenanceTimeWindows: - \"* * 8-10 * * ?\" - \"* * 14-15 * * ?\"", "apply -f your-file", "get secret CA-CERTIFICATE-SECRET -o 'jsonpath={.data. CA-CERTIFICATE }' | base64 -d | openssl x509 -subject -issuer -startdate -enddate -noout", "subject=O = io.strimzi, CN = cluster-ca v0 issuer=O = io.strimzi, CN = cluster-ca v0 notBefore=Jun 30 09:43:54 2020 GMT notAfter=Jun 30 09:43:54 2021 GMT", "apiVersion: kafka.strimzi.io/v1beta1 kind: KafkaConnect 1 metadata: name: my-connect-cluster annotations: strimzi.io/use-connector-resources: \"true\" 2 spec: replicas: 3 3 authentication: 4 type: tls certificateAndKey: certificate: source.crt key: source.key secretName: my-user-source bootstrapServers: my-cluster-kafka-bootstrap:9092 5 tls: 6 trustedCertificates: - secretName: my-cluster-cluster-cert certificate: ca.crt - secretName: my-cluster-cluster-cert certificate: ca2.crt config: 7 group.id: my-connect-cluster offset.storage.topic: my-connect-cluster-offsets config.storage.topic: my-connect-cluster-configs status.storage.topic: my-connect-cluster-status key.converter: org.apache.kafka.connect.json.JsonConverter value.converter: org.apache.kafka.connect.json.JsonConverter key.converter.schemas.enable: true value.converter.schemas.enable: true config.storage.replication.factor: 3 offset.storage.replication.factor: 3 status.storage.replication.factor: 3 externalConfiguration: 8 env: - name: AWS_ACCESS_KEY_ID valueFrom: secretKeyRef: name: aws-creds key: awsAccessKey - name: AWS_SECRET_ACCESS_KEY valueFrom: secretKeyRef: name: aws-creds key: awsSecretAccessKey resources: 9 requests: cpu: \"1\" memory: 2Gi limits: cpu: \"2\" memory: 2Gi logging: 10 type: inline loggers: log4j.rootLogger: \"INFO\" readinessProbe: 11 initialDelaySeconds: 15 timeoutSeconds: 5 livenessProbe: initialDelaySeconds: 15 timeoutSeconds: 5 metrics: 12 lowercaseOutputName: true lowercaseOutputLabelNames: true rules: - pattern: kafka.connect<type=connect-worker-metrics><>([a-z-]+) name: kafka_connect_worker_USD1 help: \"Kafka Connect JMX metric worker\" type: GAUGE - pattern: kafka.connect<type=connect-worker-rebalance-metrics><>([a-z-]+) name: kafka_connect_worker_rebalance_USD1 help: \"Kafka Connect JMX metric rebalance information\" type: GAUGE jvmOptions: 13 \"-Xmx\": \"1g\" \"-Xms\": \"1g\" image: my-org/my-image:latest 14 template: 15 pod: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: application operator: In values: - postgresql - mongodb topologyKey: \"kubernetes.io/hostname\" connectContainer: 16 
env: - name: JAEGER_SERVICE_NAME value: my-jaeger-service - name: JAEGER_AGENT_HOST value: jaeger-agent-name - name: JAEGER_AGENT_PORT value: \"6831\"", "apply -f KAFKA-CONNECT-CONFIG-FILE", "apiVersion: kafka.strimzi.io/v1beta1 kind: KafkaConnect metadata: name: my-connect spec: # config: group.id: connect-cluster 1 offset.storage.topic: connect-cluster-offsets 2 config.storage.topic: connect-cluster-configs 3 status.storage.topic: connect-cluster-status 4 #", "apiVersion: kafka.strimzi.io/v1beta1 kind: KafkaConnect metadata: name: my-connect spec: # config: group.id: my-connect-cluster 1 offset.storage.topic: my-connect-cluster-offsets 2 config.storage.topic: my-connect-cluster-configs 3 status.storage.topic: my-connect-cluster-status 4 # #", "apiVersion: kafka.strimzi.io/v1beta1 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster spec: # authorization: type: simple acls: # access to offset.storage.topic - resource: type: topic name: connect-cluster-offsets patternType: literal operation: Write host: \"*\" - resource: type: topic name: connect-cluster-offsets patternType: literal operation: Create host: \"*\" - resource: type: topic name: connect-cluster-offsets patternType: literal operation: Describe host: \"*\" - resource: type: topic name: connect-cluster-offsets patternType: literal operation: Read host: \"*\" # access to status.storage.topic - resource: type: topic name: connect-cluster-status patternType: literal operation: Write host: \"*\" - resource: type: topic name: connect-cluster-status patternType: literal operation: Create host: \"*\" - resource: type: topic name: connect-cluster-status patternType: literal operation: Describe host: \"*\" - resource: type: topic name: connect-cluster-status patternType: literal operation: Read host: \"*\" # access to config.storage.topic - resource: type: topic name: connect-cluster-configs patternType: literal operation: Write host: \"*\" - resource: type: topic name: connect-cluster-configs patternType: literal operation: Create host: \"*\" - resource: type: topic name: connect-cluster-configs patternType: literal operation: Describe host: \"*\" - resource: type: topic name: connect-cluster-configs patternType: literal operation: Read host: \"*\" # consumer group - resource: type: group name: connect-cluster patternType: literal operation: Read host: \"*\"", "apply -f KAFKA-USER-CONFIG-FILE", "apiVersion: kafka.strimzi.io/v1beta1 kind: KafkaMirrorMaker metadata: name: my-mirror-maker spec: replicas: 3 1 consumer: bootstrapServers: my-source-cluster-kafka-bootstrap:9092 2 groupId: \"my-group\" 3 numStreams: 2 4 offsetCommitInterval: 120000 5 tls: 6 trustedCertificates: - secretName: my-source-cluster-ca-cert certificate: ca.crt authentication: 7 type: tls certificateAndKey: secretName: my-source-secret certificate: public.crt key: private.key config: 8 max.poll.records: 100 receive.buffer.bytes: 32768 ssl.cipher.suites: \"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\" 9 ssl.enabled.protocols: \"TLSv1.2\" ssl.protocol: \"TLSv1.2\" ssl.endpoint.identification.algorithm: HTTPS 10 producer: bootstrapServers: my-target-cluster-kafka-bootstrap:9092 abortOnSendFailure: false 11 tls: trustedCertificates: - secretName: my-target-cluster-ca-cert certificate: ca.crt authentication: type: tls certificateAndKey: secretName: my-target-secret certificate: public.crt key: private.key config: compression.type: gzip batch.size: 8192 ssl.cipher.suites: \"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\" 12 ssl.enabled.protocols: \"TLSv1.2\" 
ssl.protocol: \"TLSv1.2\" ssl.endpoint.identification.algorithm: HTTPS 13 whitelist: \"my-topic|other-topic\" 14 resources: 15 requests: cpu: \"1\" memory: 2Gi limits: cpu: \"2\" memory: 2Gi logging: 16 type: inline loggers: mirrormaker.root.logger: \"INFO\" readinessProbe: 17 initialDelaySeconds: 15 timeoutSeconds: 5 livenessProbe: initialDelaySeconds: 15 timeoutSeconds: 5 metrics: 18 lowercaseOutputName: true rules: - pattern: \"kafka.server<type=(.+), name=(.+)PerSec\\\\w*><>Count\" name: \"kafka_server_USD1_USD2_total\" - pattern: \"kafka.server<type=(.+), name=(.+)PerSec\\\\w*, topic=(.+)><>Count\" name: \"kafka_server_USD1_USD2_total\" labels: topic: \"USD3\" jvmOptions: 19 \"-Xmx\": \"1g\" \"-Xms\": \"1g\" image: my-org/my-image:latest 20 template: 21 pod: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: application operator: In values: - postgresql - mongodb topologyKey: \"kubernetes.io/hostname\" connectContainer: 22 env: - name: JAEGER_SERVICE_NAME value: my-jaeger-service - name: JAEGER_AGENT_HOST value: jaeger-agent-name - name: JAEGER_AGENT_PORT value: \"6831\" tracing: 23 type: jaeger", "apply -f <your-file>", "apiVersion: kafka.strimzi.io/v1alpha1 kind: KafkaMirrorMaker2 metadata: name: my-mirror-maker2 spec: version: 2.6.0 connectCluster: \"my-cluster-target\" clusters: - alias: \"my-cluster-source\" bootstrapServers: my-cluster-source-kafka-bootstrap:9092 - alias: \"my-cluster-target\" bootstrapServers: my-cluster-target-kafka-bootstrap:9092 mirrors: - sourceCluster: \"my-cluster-source\" targetCluster: \"my-cluster-target\" sourceConnector: {}", "apiVersion: kafka.strimzi.io/v1alpha1 kind: KafkaMirrorMaker2 metadata: name: my-mirror-maker2 spec: version: 2.6.0 1 replicas: 3 2 connectCluster: \"my-cluster-target\" 3 clusters: 4 - alias: \"my-cluster-source\" 5 authentication: 6 certificateAndKey: certificate: source.crt key: source.key secretName: my-user-source type: tls bootstrapServers: my-cluster-source-kafka-bootstrap:9092 7 tls: 8 trustedCertificates: - certificate: ca.crt secretName: my-cluster-source-cluster-ca-cert - alias: \"my-cluster-target\" 9 authentication: 10 certificateAndKey: certificate: target.crt key: target.key secretName: my-user-target type: tls bootstrapServers: my-cluster-target-kafka-bootstrap:9092 11 config: 12 config.storage.replication.factor: 1 offset.storage.replication.factor: 1 status.storage.replication.factor: 1 ssl.cipher.suites: \"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384\" 13 ssl.enabled.protocols: \"TLSv1.2\" ssl.protocol: \"TLSv1.2\" ssl.endpoint.identification.algorithm: HTTPS 14 tls: 15 trustedCertificates: - certificate: ca.crt secretName: my-cluster-target-cluster-ca-cert mirrors: 16 - sourceCluster: \"my-cluster-source\" 17 targetCluster: \"my-cluster-target\" 18 sourceConnector: 19 config: replication.factor: 1 20 offset-syncs.topic.replication.factor: 1 21 sync.topic.acls.enabled: \"false\" 22 replication.policy.separator: \"\" 23 replication.policy.class: \"io.strimzi.kafka.connect.mirror.IdentityReplicationPolicy\" 24 heartbeatConnector: 25 config: heartbeats.topic.replication.factor: 1 26 checkpointConnector: 27 config: checkpoints.topic.replication.factor: 1 28 topicsPattern: \".*\" 29 groupsPattern: \"group1|group2|group3\" 30 resources: 31 requests: cpu: \"1\" memory: 2Gi limits: cpu: \"2\" memory: 2Gi logging: 32 type: inline loggers: connect.root.logger.level: \"INFO\" readinessProbe: 33 initialDelaySeconds: 15 timeoutSeconds: 5 livenessProbe: 
initialDelaySeconds: 15 timeoutSeconds: 5 jvmOptions: 34 \"-Xmx\": \"1g\" \"-Xms\": \"1g\" image: my-org/my-image:latest 35 template: 36 pod: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: application operator: In values: - postgresql - mongodb topologyKey: \"kubernetes.io/hostname\" connectContainer: 37 env: - name: JAEGER_SERVICE_NAME value: my-jaeger-service - name: JAEGER_AGENT_HOST value: jaeger-agent-name - name: JAEGER_AGENT_PORT value: \"6831\" tracing: type: jaeger 38 externalConfiguration: 39 env: - name: AWS_ACCESS_KEY_ID valueFrom: secretKeyRef: name: aws-creds key: awsAccessKey - name: AWS_SECRET_ACCESS_KEY valueFrom: secretKeyRef: name: aws-creds key: awsSecretAccessKey", "apply -f <your-file>", "apiVersion: kafka.strimzi.io/v1alpha1 kind: KafkaBridge metadata: name: my-bridge spec: replicas: 3 1 bootstrapServers: my-cluster-kafka-bootstrap:9092 2 tls: 3 trustedCertificates: - secretName: my-cluster-cluster-cert certificate: ca.crt - secretName: my-cluster-cluster-cert certificate: ca2.crt authentication: 4 type: tls certificateAndKey: secretName: my-secret certificate: public.crt key: private.key http: 5 port: 8080 cors: 6 allowedOrigins: \"https://strimzi.io\" allowedMethods: \"GET,POST,PUT,DELETE,OPTIONS,PATCH\" consumer: 7 config: auto.offset.reset: earliest producer: 8 config: delivery.timeout.ms: 300000 resources: 9 requests: cpu: \"1\" memory: 2Gi limits: cpu: \"2\" memory: 2Gi logging: 10 type: inline loggers: logger.bridge.level: \"INFO\" # enabling DEBUG just for send operation logger.send.name: \"http.openapi.operation.send\" logger.send.level: \"DEBUG\" jvmOptions: 11 \"-Xmx\": \"1g\" \"-Xms\": \"1g\" readinessProbe: 12 initialDelaySeconds: 15 timeoutSeconds: 5 livenessProbe: initialDelaySeconds: 15 timeoutSeconds: 5 image: my-org/my-image:latest 13 template: 14 pod: affinity: podAntiAffinity: requiredDuringSchedulingIgnoredDuringExecution: - labelSelector: matchExpressions: - key: application operator: In values: - postgresql - mongodb topologyKey: \"kubernetes.io/hostname\" bridgeContainer: 15 env: - name: JAEGER_SERVICE_NAME value: my-jaeger-service - name: JAEGER_AGENT_HOST value: jaeger-agent-name - name: JAEGER_AGENT_PORT value: \"6831\"", "apply -f KAFKA-BRIDGE-CONFIG-FILE", "apiVersion: kafka.strimzi.io/v1beta1 kind: Kafka metadata: name: my-cluster labels: app: my-cluster spec: kafka: # template: statefulset: metadata: labels: mylabel: myvalue #", "spec: # logging: type: inline loggers: kafka.root.logger.level: \"INFO\"", "spec: # logging: type: external name: customConfigMap", "kind: ConfigMap apiVersion: kafka.strimzi.io/v1beta1 metadata: name: logging-configmap data: log4j.properties: kafka.root.logger.level=\"INFO\"", "create configmap logging-configmap --from-file=log4j.properties", "Define the logger kafka.root.logger.level=\"INFO\"", "spec: # logging: type: external name: logging-configmap", "apply -f kafka.yaml" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/using_amq_streams_on_openshift/assembly-deployment-configuration-str
1.3. Load Balancer Add-On Scheduling Overview
1.3. Load Balancer Add-On Scheduling Overview One of the advantages of using Load Balancer Add-On is its ability to perform flexible, IP-level load balancing on the real server pool. This flexibility is due to the variety of scheduling algorithms an administrator can choose from when configuring Load Balancer Add-On. Load Balancer Add-On load balancing is superior to less flexible methods, such as Round-Robin DNS where the hierarchical nature of DNS and the caching by client machines can lead to load imbalances. Additionally, the low-level filtering employed by the LVS router has advantages over application-level request forwarding because balancing loads at the network packet level causes minimal computational overhead and allows for greater scalability. Using scheduling, the active router can take into account the real servers' activity and, optionally, an administrator-assigned weight factor when routing service requests. Using assigned weights gives arbitrary priorities to individual machines. Using this form of scheduling, it is possible to create a group of real servers using a variety of hardware and software combinations and the active router can evenly load each real server. The scheduling mechanism for Load Balancer Add-On is provided by a collection of kernel patches called IP Virtual Server or IPVS modules. These modules enable layer 4 ( L4 ) transport layer switching, which is designed to work well with multiple servers on a single IP address. To track and route packets to the real servers efficiently, IPVS builds an IPVS table in the kernel. This table is used by the active LVS router to redirect requests from a virtual server address to and returning from real servers in the pool. The IPVS table is constantly updated by a utility called ipvsadm - adding and removing cluster members depending on their availability. 1.3.1. Scheduling Algorithms The structure that the IPVS table takes depends on the scheduling algorithm that the administrator chooses for any given virtual server. To allow for maximum flexibility in the types of services you can cluster and how these services are scheduled, Red Hat Enterprise Linux provides the following scheduling algorithms listed below. For instructions on how to assign scheduling algorithms see Section 4.6.1, "The VIRTUAL SERVER Subsection" . Round-Robin Scheduling Distributes each request sequentially around the pool of real servers. Using this algorithm, all the real servers are treated as equals without regard to capacity or load. This scheduling model resembles round-robin DNS but is more granular due to the fact that it is network-connection based and not host-based. Load Balancer Add-On round-robin scheduling also does not suffer the imbalances caused by cached DNS queries. Weighted Round-Robin Scheduling Distributes each request sequentially around the pool of real servers but gives more jobs to servers with greater capacity. Capacity is indicated by a user-assigned weight factor, which is then adjusted upward or downward by dynamic load information. Refer to Section 1.3.2, "Server Weight and Scheduling" for more on weighting real servers. Weighted round-robin scheduling is a preferred choice if there are significant differences in the capacity of real servers in the pool. However, if the request load varies dramatically, the more heavily weighted server may answer more than its share of requests. Least-Connection Distributes more requests to real servers with fewer active connections. 
Because it keeps track of live connections to the real servers through the IPVS table, least-connection is a type of dynamic scheduling algorithm, making it a better choice if there is a high degree of variation in the request load. It is best suited for a real server pool where each member node has roughly the same capacity. If a group of servers have different capabilities, weighted least-connection scheduling is a better choice. Weighted Least-Connections (default) Distributes more requests to servers with fewer active connections relative to their capacities. Capacity is indicated by a user-assigned weight, which is then adjusted upward or downward by dynamic load information. The addition of weighting makes this algorithm ideal when the real server pool contains hardware of varying capacity. Refer to Section 1.3.2, "Server Weight and Scheduling" for more on weighting real servers. Locality-Based Least-Connection Scheduling Distributes more requests to servers with fewer active connections relative to their destination IPs. This algorithm is designed for use in a proxy-cache server cluster. It routes the packets for an IP address to the server for that address unless that server is above its capacity and has a server in its half load, in which case it assigns the IP address to the least loaded real server. Locality-Based Least-Connection Scheduling with Replication Scheduling Distributes more requests to servers with fewer active connections relative to their destination IPs. This algorithm is also designed for use in a proxy-cache server cluster. It differs from Locality-Based Least-Connection Scheduling by mapping the target IP address to a subset of real server nodes. Requests are then routed to the server in this subset with the lowest number of connections. If all the nodes for the destination IP are above capacity, it replicates a new server for that destination IP address by adding the real server with the least connections from the overall pool of real servers to the subset of real servers for that destination IP. The most loaded node is then dropped from the real server subset to prevent over-replication. Destination Hash Scheduling Distributes requests to the pool of real servers by looking up the destination IP in a static hash table. This algorithm is designed for use in a proxy-cache server cluster. Source Hash Scheduling Distributes requests to the pool of real servers by looking up the source IP in a static hash table. This algorithm is designed for LVS routers with multiple firewalls.
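The chosen algorithm and any server weights are ultimately expressed as entries in the IPVS table. As an illustration only (production configuration should be done through the VIRTUAL SERVER subsection described in Section 4.6.1, and the virtual and real server addresses below are placeholders), a weighted least-connection service could be inspected or built by hand with ipvsadm:
ipvsadm -A -t 192.168.0.10:80 -s wlc
ipvsadm -a -t 192.168.0.10:80 -r 10.0.0.2:80 -m -w 2
ipvsadm -a -t 192.168.0.10:80 -r 10.0.0.3:80 -m -w 1
ipvsadm -L -n
Here -s wlc selects the weighted least-connection scheduler, -w assigns each real server's weight, -m uses NAT forwarding for this illustration, and ipvsadm -L -n lists the resulting IPVS table.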
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/load_balancer_administration/s1-lvs-scheduling-VSA
Chapter 13. Scoping tokens
Chapter 13. Scoping tokens 13.1. About scoping tokens You can create scoped tokens to delegate some of your permissions to another user or service account. For example, a project administrator might want to delegate the power to create pods. A scoped token is a token that identifies as a given user but is limited to certain actions by its scope. Only a user with the cluster-admin role can create scoped tokens. Scopes are evaluated by converting the set of scopes for a token into a set of PolicyRules . Then, the request is matched against those rules. The request attributes must match at least one of the scope rules to be passed to the "normal" authorizer for further authorization checks. 13.1.1. User scopes User scopes are focused on getting information about a given user. They are intent-based, so the rules are automatically created for you: user:full - Allows full read/write access to the API with all of the user's permissions. user:info - Allows read-only access to information about the user, such as name and groups. user:check-access - Allows access to self-localsubjectaccessreviews and self-subjectaccessreviews . These are the variables where you pass an empty user and groups in your request object. user:list-projects - Allows read-only access to list the projects the user has access to. 13.1.2. Role scope The role scope allows you to have the same level of access as a given role filtered by namespace. role:<cluster-role name>:<namespace or * for all> - Limits the scope to the rules specified by the cluster-role, but only in the specified namespace . Note Caveat: This prevents escalating access. Even if the role allows access to resources like secrets, rolebindings, and roles, this scope will deny access to those resources. This helps prevent unexpected escalations. Many people do not think of a role like edit as being an escalating role, but with access to a secret it is. role:<cluster-role name>:<namespace or * for all>:! - This is similar to the example above, except that including the bang causes this scope to allow escalating access.
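To make the scope syntax concrete, the following values are illustrative only (the my-project namespace is a placeholder, and view is assumed to be the standard read-only cluster role):
user:info - limits the token to read-only information about its user.
role:view:my-project - limits the token to what the view cluster role allows, and only within the my-project namespace.
role:view:* - applies the same role restriction across all namespaces.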
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.15/html/authentication_and_authorization/tokens-scoping
Chapter 11. Enabling and disabling features
Chapter 11. Enabling and disabling features Red Hat build of Keycloak has packed some functionality in features, including some disabled features, such as Technology Preview and deprecated features. Other features are enabled by default, but you can disable them if they do not apply to your use of Red Hat build of Keycloak. 11.1. Enabling features Some supported features, and all preview features, are disabled by default. To enable a feature, enter this command: bin/kc.[sh|bat] build --features="<name>[,<name>]" For example, to enable docker and token-exchange , enter this command: bin/kc.[sh|bat] build --features="docker,token-exchange" To enable all preview features, enter this command: bin/kc.[sh|bat] build --features="preview" 11.2. Disabling features To disable a feature that is enabled by default, enter this command: bin/kc.[sh|bat] build --features-disabled="<name>[,<name>]" For example to disable impersonation , enter this command: bin/kc.[sh|bat] build --features-disabled="impersonation" You can disable all default features by entering this command: bin/kc.[sh|bat] build --features-disabled="default" This command can be used in combination with features to explicitly set what features should be available. If a feature is added both to the features-disabled list and the features list, it will be enabled. 11.3. Supported features The following list contains supported features that are enabled by default, and can be disabled if not needed. account-api Account Management REST API account2 Account Management Console version 2 admin-api Admin API admin2 New Admin Console authorization Authorization Service ciba OpenID Connect Client Initiated Backchannel Authentication (CIBA) client-policies Client configuration policies impersonation Ability for admins to impersonate users js-adapter Host keycloak.js and keycloak-authz.js through the Keycloak server kerberos Kerberos par OAuth 2.0 Pushed Authorization Requests (PAR) step-up-authentication Step-up Authentication web-authn W3C Web Authentication (WebAuthn) 11.3.1. Disabled by default The following list contains supported features that are disabled by default, and can be enabled if needed. docker Docker Registry protocol fips FIPS 140-2 mode 11.4. Preview features Preview features are disabled by default and are not recommended for use in production. These features may change or be removed at a future release. account3 Account Management Console version 3 admin-fine-grained-authz Fine-Grained Admin Permissions client-secret-rotation Client Secret Rotation declarative-user-profile Configure user profiles using a declarative style multi-site Multi-site support recovery-codes Recovery codes scripts Write custom authenticators using JavaScript token-exchange Token Exchange Service update-email Update Email Action 11.5. Deprecated features The following list contains deprecated features that will be removed in a future release. These features are disabled by default. linkedin-oauth LinkedIn Social Identity Provider based on OAuth 11.6. Relevant options Value features 🛠 Enables a set of one or more features. 
CLI: --features Env: KC_FEATURES account-api , account2 , account3 , admin-api , admin-fine-grained-authz , admin2 , authorization , ciba , client-policies , client-secret-rotation , declarative-user-profile , docker , dynamic-scopes , fips , impersonation , js-adapter , kerberos , linkedin-oauth , map-storage , multi-site , par , preview , recovery-codes , scripts , step-up-authentication , token-exchange , update-email , web-authn features-disabled 🛠 Disables a set of one or more features. CLI: --features-disabled Env: KC_FEATURES_DISABLED account-api , account2 , account3 , admin-api , admin-fine-grained-authz , admin2 , authorization , ciba , client-policies , client-secret-rotation , declarative-user-profile , docker , dynamic-scopes , fips , impersonation , js-adapter , kerberos , linkedin-oauth , map-storage , multi-site , par , preview , recovery-codes , scripts , step-up-authentication , token-exchange , update-email , web-authn
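As noted in Section 11.2, the features-disabled and features options can be combined to state the enabled feature set explicitly. For example, the following command (an illustration; substitute the features appropriate to your deployment) disables the default feature set and then enables only impersonation and docker :
bin/kc.[sh|bat] build --features-disabled="default" --features="impersonation,docker"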
[ "bin/kc.[sh|bat] build --features=\"<name>[,<name>]\"", "bin/kc.[sh|bat] build --features=\"docker,token-exchange\"", "bin/kc.[sh|bat] build --features=\"preview\"", "bin/kc.[sh|bat] build --features-disabled=\"<name>[,<name>]\"", "bin/kc.[sh|bat] build --features-disabled=\"impersonation\"", "bin/kc.[sh|bat] build --features-disabled=\"default\"" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/22.0/html/server_guide/features-
Server Administration Guide
Server Administration Guide Red Hat build of Keycloak 26.0 Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/26.0/html/server_administration_guide/index
9.2. Booting a Guest Using PXE
9.2. Booting a Guest Using PXE This section demonstrates how to boot a guest virtual machine with PXE. 9.2.1. Using bridged networking Procedure 9.2. Booting a guest using PXE and bridged networking Ensure bridging is enabled such that the PXE boot server is available on the network. Boot a guest virtual machine with PXE booting enabled. You can use the virt-install command to create a new virtual machine with PXE booting enabled, as shown in the following example command: Alternatively, ensure that the guest network is configured to use your bridged network, and that the XML guest configuration file has a <boot order='1'/> element inside the network's <interface> element, as shown in the following example: 9.2.2. Using a Private libvirt Network Procedure 9.3. Using a private libvirt network Configure PXE booting on libvirt as shown in Section 9.1.1, "Setting up a PXE Boot Server on a Private libvirt Network" . Boot a guest virtual machine using libvirt with PXE booting enabled. You can use the virt-install command to create/install a new virtual machine using PXE: Alternatively, ensure that the guest network is configured to use your private libvirt network, and that the XML guest configuration file has a <boot order='1'/> element inside the network's <interface> element. In addition, ensure that the guest virtual machine is connected to the private network:
[ "virt-install --pxe --network bridge=breth0 --prompt", "<interface type='bridge'> <mac address='52:54:00:5a:ad:cb'/> <source bridge='breth0'/> <target dev='vnet0'/> <alias name='net0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/> <boot order='1'/> </interface>", "virt-install --pxe --network network=default --prompt", "<interface type='network'> <mac address='52:54:00:66:79:14'/> <source network='default'/> <target dev='vnet0'/> <alias name='net0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/> <boot order='1'/> </interface>" ]
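To confirm that the boot order element is present in a guest created this way, the guest XML can be dumped and inspected; for example (the guest name rhel7-guest is a placeholder):
virsh dumpxml rhel7-guest | grep -B 2 -A 2 "boot order"
The interface intended for PXE booting should contain the <boot order='1'/> element shown above.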
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-network_booting_with_libvirt-booting_a_guest_using_pxe
Appendix A. Versioning information
Appendix A. Versioning information Documentation last updated on Thursday, March 14th, 2024.
null
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/managing_red_hat_process_automation_manager_and_kie_server_settings/versioning-information
5.5. Batching Network Packets
5.5. Batching Network Packets In configurations with a long transmission path, batching packets before submitting them to the kernel may improve cache utilization. To configure the maximum number of packets that can be batched, run the following ethtool command, where N is the maximum number of packets to batch: To provide support for tun / tap rx batching for type='bridge' or type='network' interfaces, add a snippet similar to the following to the domain XML file.
[ "ethtool -C USDtap rx-frames N", "<devices> <interface type='network'> <source network='default'/> <target dev='vnet0'/> <coalesce> <rx> <frames max='7'/> </rx> </coalesce> </interface> </devices>" ]
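After the guest starts, the effective coalesce setting can be checked from the host with ethtool; for example (the target device name vnet0 is a placeholder for whatever device libvirt assigned):
ethtool -c vnet0
The rx-frames value reported should match the frames max value configured in the domain XML.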
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_tuning_and_optimization_guide/rx_batching
Chapter 9. The Certificate System Configuration Files
Chapter 9. The Certificate System Configuration Files The primary configuration file for every subsystem is its CS.cfg file. This chapter covers basic information about and rules for editing the CS.cfg file. This chapter also describes some other useful configuration files used by the subsystems, such as password and web services files. 9.1. File and directory locations for Certificate System subsystems Certificate System servers consist of an Apache Tomcat instance, which contains one or more subsystems. Each subsystem consists of a web application, which handles requests for a specific type of PKI function. The available subsystems are: CA, KRA, OCSP, TKS, and TPS. Each instance can contain only one of each type of a PKI subsystem. A subsystem can be installed within a particular instance using the pkispawn command. 9.1.1. Instance-specific information For instance information for the default instance ( pki-tomcat , if you have not specified pki_instance_name when running pkispawn), see Table 2.2, "Tomcat instance information" Table 9.1. Certificate server port assignments (default) Port Type Port Number Notes Secure port 8443 Main port used to access PKI services by end-users, agents, and admins over HTTPS. Insecure port 8080 Used to access the server insecurely for some end-entity functions over HTTP. Used for instance to provide CRLs, which are already signed and therefore need not be encrypted. AJP port 8009 Used to access the server from a front end Apache proxy server through an AJP connection. Redirects to the HTTPS port. Tomcat port 8005 Used by the web server. 9.1.2. CA subsystem information This section contains details about the CA subsystem, which is one of the possible subsystems that can be installed as a web application in a Certificate Server instance. Table 9.2. CA subsystem information for the default instance (pki-tomcat) Setting Value Main directory /var/lib/pki/pki-tomcat/ca/ Configuration directory /var/lib/pki/pki-tomcat/ca/conf/ [a] Configuration file /var/lib/pki/pki-tomcat/ca/conf/CS.cfg Subsystem certificates CA signing certificate OCSP signing certificate (for the CA's internal OCSP service) TLS server certificate Audit log signing certificate Subsystem certificate [b] Security databases /var/lib/pki/pki-tomcat/alias/ [c] Log files /var/lib/pki/pki-tomcat/ca/logs/ [d] Install log /var/log/pki/pki-ca-spawn. date .log Uninstall log /var/log/pki/pki-ca-destroy. date .log Audit logs /var/log/pki/pki-tomcat/ca/logs/signedAudit/ Profile files /var/lib/pki/pki-tomcat/ca/profiles/ca/ Email notification templates /var/lib/pki/pki-tomcat/ca/emails/ Web services files [e] Agent services: /usr/share/pki/ca/webapps/ca/agent/ Admin services: /usr/share/pki/ca/webapps/ca/admin/ End user services: /usr/share/pki/ca/webapps/ca/ee/ [a] Aliased to /etc/pki/pki-tomcat/ca/ [b] The subsystem certificate is always issued by the security domain so that domain-level operations that require client authentication are based on this subsystem certificate. [c] Note that all subsystem certificates are stored in the instance security database or the associated HSM, if so dictated at instance creation. [d] Aliased to /var/lib/pki/pki-tomcat/ca [e] Instead of instance-specific webapp files as with in the past, there are now webapp descriptors that point to the webapp location. E.g. in /var/lib/pki/rhcs10-RSA-SubCA/conf/Catalina/localhost/ca.xml: <Context docBase="/usr/share/pki/ca/webapps/ca" crossContext="true"> 9.1.3. 
KRA subsystem information This section contains details about the KRA subsystem, which is one of the possible subsystems that can be installed as a web application in a Certificate Server instance. Table 9.3. KRA subsystem information for the default instance (pki-tomcat) Setting Value Main directory /var/lib/pki/pki-tomcat/kra/ Configuration directory /var/lib/pki/pki-tomcat/kra/conf/ [a] Configuration file /var/lib/pki/pki-tomcat/kra/conf/CS.cfg Subsystem certificates Transport certificate Storage certificate TLS server certificate Audit log signing certificate Subsystem certificate [b] Security databases /var/lib/pki/pki-tomcat/alias/ [c] Log files /var/lib/pki/pki-tomcat/kra/logs/ Install log /var/log/pki/pki-kra-spawn- date .log Uninstall log /var/log/pki/pki-kra-destroy- date .log Audit logs /var/log/pki/pki-tomcat/kra/logs/signedAudit/ Web services files [d] Agent services: /usr/share/pki/kra/webapps/kra/agent/ Admin services: /usr/share/pki/kra/webapps/kra/admin/ [a] Linked to /etc/pki/pki-tomcat/kra/ [b] The subsystem certificate is always issued by the security domain so that domain-level operations that require client authentication are based on this subsystem certificate. [c] Note that all subsystem certificates are stored in the instance security database or the associated HSM, if so dictated at instance creation. [d] Instead of instance-specific webapp files as with in the past, there are now webapp descriptors that point to the webapp location. E.g. in /var/lib/pki/rhcs10-RSA-SubCA/conf/Catalina/localhost/ca.xml: <Context docBase="/usr/share/pki/ca/webapps/ca" crossContext="true"> 9.1.4. OCSP subsystem information This section contains details about the OCSP subsystem, which is one of the possible subsystems that can be installed as a web application in a Certificate Server instance. Table 9.4. OCSP subsystem information for the default instance (pki-tomcat) Setting Value Main directory /var/lib/pki/pki-tomcat/ocsp/ Configuration directory /var/lib/pki/pki-tomcat/ocsp/conf/ [a] Configuration file /var/lib/pki/pki-tomcat/ocsp/conf/CS.cfg Subsystem certificates Transport certificate Storage certificate TLS server certificate Audit log signing certificate Subsystem certificate [b] Security databases /var/lib/pki/pki-tomcat/alias/ [c] Log files /var/lib/pki/pki-tomcat/ocsp/logs/ Install log /var/log/pki/pki-ocsp-spawn- date .log Uninstall log /var/log/pki/pki-ocsp-destroy- date .log Audit logs /var/log/pki/pki-tomcat/ocsp/logs/signedAudit/ Web services files [d] Agent services: /var/lib/pki/pki-tomcat/ocsp/webapps/ocsp/agent/ Admin services: /var/lib/pki/pki-tomcat/ocsp/webapps/ocsp/admin/ [a] Linked to /etc/pki/pki-tomcat/ocsp/ [b] The subsystem certificate is always issued by the security domain so that domain-level operations that require client authentication are based on this subsystem certificate. [c] Note that all subsystem certificates are stored in the instance security database or the associated HSM, if so dictated at instance creation. [d] Instead of instance-specific webapp files as with in the past, there are now webapp descriptors that point to the webapp location. E.g. in /var/lib/pki/rhcs10-RSA-SubCA/conf/Catalina/localhost/ca.xml: <Context docBase="/usr/share/pki/ca/webapps/ca" crossContext="true"> 9.1.5. TKS subsystem information This section contains details about the TKS subsystem, which is one of the possible subsystems that can be installed as a web application in a Certificate Server instance. Table 9.5. 
TKS subsystem information for the default instance (pki-tomcat) Setting Value Main directory /var/lib/pki/pki-tomcat/tks/ Configuration directory /var/lib/pki/pki-tomcat/tks/conf/ [a] Configuration file /var/lib/pki/pki-tomcat/tks/conf/CS.cfg Subsystem certificates Transport certificate Storage certificate TLS server certificate Audit log signing certificate Subsystem certificate [b] Security databases /var/lib/pki/pki-tomcat/alias/ [c] Log files /var/lib/pki/pki-tomcat/tks/logs/ Install log /var/log/pki/pki-tks-spawn- date .log Uninstall log /var/log/pki/pki-tks-destroy- date .log Audit logs /var/log/pki/pki-tomcat/tks/logs/signedAudit/ Web services files [d] Agent services: /usr/share/pki/tks/webapps/tks/agent/ Admin services: /usr/share/pki/tks/webapps/tks/admin/ [a] Linked to /etc/pki/pki-tomcat/tks/ [b] The subsystem certificate is always issued by the security domain so that domain-level operations that require client authentication are based on this subsystem certificate. [c] Note that all subsystem certificates are stored in the instance security database or the associated HSM, if so dictated at instance creation. [d] Instead of instance-specific webapp files as with in the past, there are now webapp descriptors that point to the webapp location. E.g. in /var/lib/pki/rhcs10-RSA-SubCA/conf/Catalina/localhost/ca.xml: <Context docBase="/usr/share/pki/ca/webapps/ca" crossContext="true"> 9.1.6. TPS subsystem information This section contains details about the TPS subsystem, which is one of the possible subsystems that can be installed as a web application in a Certificate Server instance. Table 9.6. TPS subsystem information for the default instance (pki-tomcat) Setting Value Main directory /var/lib/pki/pki-tomcat/tps Configuration directory /var/lib/pki/pki-tomcat/tps/conf/ [a] Configuration file /var/lib/pki/pki-tomcat/tps/conf/CS.cfg Subsystem certificates Transport certificate Storage certificate TLS server certificate Audit log signing certificate Subsystem certificate [b] Security databases /var/lib/pki/pki-tomcat/alias/ [c] Log files /var/lib/pki/pki-tomcat/tps/logs/ Install log /var/log/pki/pki-tps-spawn- date .log Uninstall log /var/log/pki/pki-tps-destroy- date .log Audit logs /var/log/pki/pki-tomcat/tps/logs/signedAudit/ Web services files [d] Agent services: /usr/share/pki/tps/webapps/tps/agent/ Admin services: /usr/share/pki/tps/webapps/tps/admin/ [a] Linked to /etc/pki/pki-tomcat/tps/ [b] The subsystem certificate is always issued by the security domain so that domain-level operations that require client authentication are based on this subsystem certificate. [c] Note that all subsystem certificates are stored in the instance security database or the associated HSM, if so dictated at instance creation. [d] Instead of instance-specific webapp files as with in the past, there are now webapp descriptors that point to the webapp location. E.g. in /var/lib/pki/rhcs10-RSA-SubCA/conf/Catalina/localhost/ca.xml: <Context docBase="/usr/share/pki/ca/webapps/ca" crossContext="true"> 9.1.7. Shared Certificate System subsystem file locations There are some directories used by or common to all Certificate System subsystem instances for general server operations, listed in Table 9.7, "Subsystem file locations" . Table 9.7.
Subsystem file locations Directory Location Contents /var/lib/pki/ instance_name Contains the main instance directory, which is the location for instance-specific directory locations and customized configuration files, profiles, certificate databases, and other files for the subsystem instance. /usr/share/java/pki Contains Java archive files shared by the Certificate System subsystems. /usr/share/pki Contains common files and templates used to create Certificate System instances. Along with shared files for all subsystems, there are subsystem-specific files in subfolders: pki/ca/ (CA) pki/kra/ (KRA) pki/ocsp/ (OCSP) pki/tks/ (TKS) pki/tps (TPS) /usr/bin /usr/sbin Contains the pki command line scripts and tools (Java, native, and security) shared by the Certificate System subsystems. 9.2. CS.cfg files The runtime properties of a Certificate System subsystem are governed by a set of configuration parameters. These parameters are stored in a file that is read by the server during startup, CS.cfg . The CS.cfg , an ASCII file, is created and populated with the appropriate configuration parameters when a subsystem is first installed. The way the instance functions are modified is by making changes through the subsystem console, which is the recommended method. The changes made in the administrative console are reflected in the configuration file. It is also possible to edit the CS.cfg configuration file directly, and in some cases this is the easiest way to manage the subsystem. 9.2.1. Locating the CS.cfg file Each instance of a Certificate System subsystem has its own configuration file, CS.cfg . The contents of the file for each subsystem instance is different depending on the way the subsystem was configured, additional settings and configuration (like adding new profiles or enabling self-tests), and the type of subsystem. The CS.cfg file is located in the configuration directory for the instance. For example: 9.2.2. Editing the configuration file WARNING Do not edit the configuration file directly without being familiar with the configuration parameters or without being sure that the changes are acceptable to the server. The Certificate System fails to start if the configuration file is modified incorrectly. Incorrect configuration can also result in data loss. Therefore, it is highly recommended to save a backup of the configuration file before making any changes. To modify the CS.cfg file: Stop the subsystem instance. OR if using the Nuxwdog watchdog: The configuration file is stored in the cache when the instance is started. Any changes made to the instance through the Console are changed in the cached version of the file. When the server is stopped or restarted, the configuration file stored in the cache is written to disk. Stop the server before editing the configuration file or the changes will be overwritten by the cached version when the server is stopped. Open the /var/lib/pki/instance_name/subsystem_type/conf directory. Open the CS.cfg file in a text editor. Edit the parameters in the file, and save the changes. Start the subsystem instance. OR if using the Nuxwdog watchdog: 9.2.3. Overview of the CS.cfg configuration file Each subsystem instances has its own main configuration file, CS.cfg , which contains all of the settings for the instance, such as plugins and Java classes for configuration. 
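As an illustration of how a plugin is wired into CS.cfg , an implementation class is registered and then referenced by one or more named instances. The two entries below show the pattern only and are not a complete or required configuration:
authz.impl.DirAclAuthz.class=com.netscape.cms.authorization.DirAclAuthz
authz.instance.DirAclAuthz.pluginName=DirAclAuthz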
The parameters and specific settings are different depending on the type of subsystem, but, in a general sense, the CS.cfg file defines these parts of the subsystem instance: Basic subsystem instance information, like its name, port assignments, instance directory, and hostname Logging Plug-ins and methods to authenticate to the instance's user directory (authorization) The security domain to which the instance belongs Subsystem certificates Other subsystems used by the subsystem instance Database types and instances used by the subsystem Settings for PKI-related tasks, like the key profiles in the TKS, the certificate profiles in the CA, and the required agents for key recovery in the KRA Many of the configuration parameters (aside from the ones for PKI tasks) are very much the same between the CA, OCSP, KRA, and TKS because they all use a Java-based console, so configuration settings which can be managed in the console have similar parameters. The CS.cfg file a basic parameter=value format. In the CS.cfg file, many of the parameter blocks have descriptive comments, commented out with a pound (#) character. Comments, blank lines, unknown parameters, or misspelled parameters are ignored by the server. Parameters that configure the same area of the instance tend to be grouped together into the same block. Example 9.1. Logging settings in the CS.cfg file Some areas of functionality are implemented through plugins, such as self-tests, jobs, and authorization to access the subsystem. For those parameters, the plugin instance has a unique identifier (since there can be multiple instances of even the same plugin called for a subsystem), the implementation plugin name, and the Java class. Example 9.2. Subsystem authorization settings NOTE The values for configuration parameters must be properly formatted, so they must obey two rules: The values that need to be localized must be in UTF8 characters. The CS.cfg file supports forward slashes (/) in parameter values. If a back slash (\) is required in a value, it must be escaped with a back slash, meaning that two back slashes in a row must be used. The following sections are snapshots of CS.cfg file settings and parameters. These are not exhaustive references or examples of CS.cfg file parameters. Also, the parameters available and used in each subsystem configuration file is very different, although there are similarities. 9.2.3.1. Basic subsystem settings Basic settings are specific to the instance itself, without directly relating to the functionality or behavior of the subsystem. This includes settings for the instance name, root directory, the user ID for the process, and port numbers. Many of the settings assigned when the instance is first installed or configured are prefaced with pkispawn . Example 9.3. Basic Instance Parameters for the CA: pkispawn file ca.cfg Important While information like the port settings is included in the CS.cfg file, it is not set in the CS.cfg . The server configuration is set in the server.xml file. The ports in CS.cfg and server.xml must match for a working RHCS instance. 9.2.3.2. Logging settings There are several different types of logs that can be configured, depending on the type of subsystem. Each type of log has its own configuration entry in the CS.cfg file. For example, the CA has this entry for signed audit logs, which allows log rotation, buffered logging, log signing, and log levels, among other settings: For more information about these parameters and their values, see Section 13.1, "Log settings" . 
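The signed audit entry referred to above typically contains parameters such as the following. This is an illustrative fragment only; the exact log file path and audit signing certificate nickname depend on the instance:
log.instance.SignedAudit.enable=true
log.instance.SignedAudit.logSigning=true
log.instance.SignedAudit.signedAuditCertNickname=auditSigningCert cert-pki-tomcat CA
log.instance.SignedAudit.bufferSize=512
log.instance.SignedAudit.flushInterval=5
log.instance.SignedAudit.level=1
log.instance.SignedAudit.maxFileSize=2000
log.instance.SignedAudit.rolloverInterval=2592000
log.instance.SignedAudit.fileName=/var/log/pki/pki-tomcat/ca/logs/signedAudit/ca_audit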
As long as audit logging is enabled, these values do not affect compliance. 9.2.3.3. Authentication and authorization settings The CS.cfg file sets how users are identified to access a subsystem instance (authentication) and what actions are approved (authorization) for each authenticated user. A CS subsystem uses authentication plugins to define the method for logging into the subsystem. The following example shows an authentication instance named SharedToken that instantiates a JAVA plugin named SharedSecret . For some authorization settings, it is possible to select an authorization method that uses an LDAP database to store user entries, in which case the database settings are configured along with the plugin as shown below. For more information on securely configuring LDAP and an explanation of parameters, refer to Section 7.13.13, "Enabling TLS mutual authentication from CS to DS" . The parameters paths differ than what is shown there, but the same names and values are allowed in both places. The CA also has to have a mechanism for approving user requests. As with configuring authorization, this is done by identifying the appropriate authentication plugin and configuring an instance for it: 9.2.3.4. Subsystem certificate settings Several of the subsystems have entries for each subsystem certificate in the configuration file. 9.2.3.5. Settings for required subsystems At a minimum, each subsystem depends on a CA, which means that the CA (and any other required subsystem) has to be configured in the subsystem's settings. Any connection to another subsystem is prefaced by conn. and then the subsystem type and number. The following is an example "conn" section from a TPS instance's CS.cfg : 9.2.3.6. Database settings All of the subsystems use an LDAP directory to store their information. This internal database is configured in the internaldb parameters. Here is an example of the internaldb section from a CA's CS.cfg : In addition to the internaldb parameter, TPS introduces the tokendb parameters to contain more configuration settings relating to the smartcard tokens. For further information on securely configuring LDAP and an explanation of parameters, refer to Section 7.13.13, "Enabling TLS mutual authentication from CS to DS" . No additional configuration is necessary outside of what is done as part Section 7.13.13, "Enabling TLS mutual authentication from CS to DS" . 9.2.3.7. Enabling and configuring a publishing queue Part of the enrollment process includes publishing the issued certificate to any directories or files. This, essentially, closes out the initial certificate request. However, publishing a certificate to an external network can significantly slow down the issuance process -which leaves the request open. To avoid this situation, administrators can enable a publishing queue . The publishing queue separates the publishing operation (which may involve an external LDAP directory) from the request and enrollment operations, which uses a separate request queue. The request queue is updated immediately to show that the enrollment process is complete, while the publishing queue sends the information at the pace of the network traffic. The publishing queue sets a defined, limited number of threads that publish generated certificates, rather than opening a new thread for each approved certificate. 
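A representative CS.cfg fragment with the queue enabled is shown below; the values are the defaults described in the procedure that follows:
ca.publish.queue.enable=true
ca.publish.queue.maxNumberOfThreads=3
ca.publish.queue.priorityLevel=0
ca.publish.queue.pageSize=40
ca.publish.queue.saveStatus=200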
Procedure Enabling the publishing queue by editing the CS.cfg file allows administrators to set other options for publishing, like the number of threads to use for publishing operations and the queue page size. Stop the CA server, so that you can edit the configuration files. Open the CA's CS.cfg file. Set the ca.publish.queue.enable to true. If the parameter is not present, then add a line as follows: Set other related publishing queue parameters: ca.publish.queue.maxNumberOfThreads sets the maximum number of threads that can be opened for publishing operations. The default is 3. ca.publish.queue.priorityLevel sets the priority for publishing operations. The priority value ranges from -2 (lowest priority) to 2 (highest priority). Zero (0) is normal priority and is also the default. ca.publish.queue.pageSize sets the maximum number of requests that can be stored in the publishing queue page. The default is 40. ca.publish.queue.saveStatus sets the interval to save its status every specified number of publishing operations. This allows the publishing queue to be recovered if the CA is restarted or crashes. The default is 200, but any non-zero number will recover the queue when the CA restarts. Setting this parameter to 0 disables queue recovery. TIP Setting ca.publish.queue.enable to false and ca.publish.queue.maxNumberOfThreads to 0 disables both the publishing queue and using separate threads for publishing issued certificates. Restart the CA server. 9.2.3.8. Settings for PKI tasks The CS.cfg file is used to configure the PKI tasks for every subsystem. The parameters are different for every single subsystem, without any overlap. For example, the KRA has settings for a required number of agents to recover a key. Review the CS.cfg file for each subsystem to become familiar with its PKI task settings; the comments in the file are a decent guide for learning what the different parameters are. The CA configuration file lists all of the certificate profiles and policy settings, as well as rules for generating CRLs. The TPS configures different token operations. The TKS lists profiles for deriving keys from different key types. The OCSP sets key information for different key sets. 9.2.3.9. Changing DN attributes in CA-issued certificates In certificates issued by the Certificate System, DNs identify the entity that owns the certificate. In all cases, if the Certificate System is connected with a Directory Server, the format of the DNs in the certificates should match the format of the DNs in the directory. It is not necessary that the names match exactly; certificate mapping allows the subject DN in a certificate to be different from the one in the directory. In the Certificate System, the DN is based on the components, or attributes, defined in the X.509 standard. Table 9.8, "Allowed characters for value types" lists the attributes supported by default. The set of attributes is extensible. Table 9.8. 
Allowed characters for value types Attribute Value Type Object Identifier cn DirectoryString 2.5.4.3 ou DirectoryString 2.5.4.11 o DirectoryString 2.5.4.10 c PrintableString , two-character 2.5.4.6 l DirectoryString 2.5.4.7 st DirectoryString 2.5.4.8 street DirectoryString 2.5.4.9 title DirectoryString 2.5.4.12 uid DirectoryString 0.9.2342.19200300.100.1.1 mail IA5String 1.2.840.113549.1.9.1 dc IA5String 0.9.2342.19200300.100.1.2.25 serialnumber PrintableString 2.5.4.5 unstructuredname IA5String 1.2.840.113549.1.9.2 unstructuredaddress PrintableString 1.2.840.113549.1.9.8 By default, the Certificate System supports the attributes identified in Table 9.8, "Allowed characters for value types" . This list of supported attributes can be extended by creating or adding new attributes. The syntax for adding additional X.500Name attributes, or components, is as follows: The value converter class converts a string to an ASN.1 value; this class must implement the netscape.security.x509.AVAValueConverter interface. The string-to-value converter class can be one of the following: netscape.security.x509.PrintableConverter converts a string to a PrintableString value. The string must have only printable characters. netscape.security.x509.IA5StringConverter converts a string to an IA5String value. The string must have only IA5String characters. netscape.security.x509.DirStrConverter converts a string to a DirectoryString . The string is expected to be in DirectoryString format according to RFC 2253. netscape.security.x509.GenericValueConverter converts a string character by character in the following order, from the smallest characterset to the largest: PrintableString IA5String BMPString Universal String An attribute entry looks like the following: 9.2.3.9.1. Adding new or custom attributes To add a new or proprietary attribute to the Certificate System schema, do the following: Stop the Certificate Manager. Open the /var/lib/pki/cs_instance/conf/ directory. Open the configuration file, CS.cfg . Add the new attributes to the configuration file. For example, to add three proprietary attributes, MYATTR1 that is a DirectoryString , MYATTR2 that is an IA5String , and MYATTR3 that is a PrintableString , add the following lines at the end of the configuration file: Save the changes, and close the file. Restart the Certificate Manager. Reload the enrollment page and verify the changes; the new attributes should show up in the form. To verify that the new attributes are in effect, request a certificate using the manual enrollment form. Enter values for the new attributes so that it can be verified that they appear in the certificate subject names. For example, enter the following values for the new attributes and look for them in the subject name: Open the agent services page, and approve the request. When the certificate is issued, check the subject name. The certificate should show the new attribute values in the subject name. 9.2.3.9.2. Changing the DER-encoding order It is possible to change the DER-encoding order of a DirectoryString , so that the string is configurable since different clients support different encodings. The syntax for changing the DER-encoding order of a DirectoryString is as follows: The possible encoding values are as follows: PrintableString IA5String UniversalString BMPString UTF8String For example, the DER-encoding ordered can be listed as follows: To change the DirectoryString encoding, do the following: Stop the Certificate Manager. 
Open the /var/lib/pki/cs_instance/conf/ directory. Open the CS.cfg configuration file. Add the encoding order to the configuration file. For example, to specify two encoding values, PrintableString and UniversalString , and the encoding order is PrintableString first and UniversalString , add the following line at the end of the configuration file: Save the changes, and close the file. Start the Certificate Manager. To verify that the encoding orders are in effect, enroll for a certificate using the manual enrollment form. Use John_Doe for the cn . Open the agent services page, and approve the request. When the certificate is issued, use the dumpasn1 tool to examine the encoding of the certificate. The cn component of the subject name should be encoded as a UniversalString . Create and submit a new request using John Smith for the cn . The cn component of the subject name should be encoded as a PrintableString . 9.2.3.10. Setting a CA to use a different certificate to sign CRLs A Certificate Manager uses the key pair corresponding to its OCSP signing certificate for signing certificates and certificate revocation lists (CRLs). To use a different key pair to sign the CRLs that the Certificate Manager generates, then a CRL signing certificate can be created. The Certificate Manager's CRL signing certificate must be signed or issued by itself. To enable a Certificate Manager to sign CRLs with a different key pair, do the following: Request and install a CRL signing certificate for the Certificate Manager using CMC. For details about requesting a system certificate, see Section 5.3.2.1 Obtaining system and server certificates in the Administration Guide (Common Criteria Edition) . Note that the profile used to obtain the certificate must use the keyUsageExtDefaultImpl class id and the corresponding keyUsageCrlSign parameter set to true : After you have generated the CRL signing certificate, install the certificate in the Certificate Manager's crypto module database. If using a HSM, follow Section 10.4, "Hardware Security Module" If you are not using a HSM, follow Section 10.5, "Importing a certificate into an NSS database" But instead of PKICertImport use the certutil command as described in Section 10.1.3, " certutil common commands" . Stop the Certificate Manager. Update the Certificate Manager's configuration to recognize the new key pair and certificate. Change to the Certificate Manager instance configuration directory. Open the CS.cfg file and add the following lines: nickname is the name assigned to the CRL signing certificate. instance_ID is the name of the Certificate Manager instance. If the installed CA is a RSA-based CA, signing_algorithm can be SHA256withRSA , SHA384withRSA , or SHA512withRSA . If the installed CA is an EC-based CA, signing_algorithm can be SHA256withEC , SHA384withEC , SHA512withEC . token_name is the name of the token used for generating the key pair and the certificate. If the internal/software token is used, use Internal Key Storage Token as the value. For example, the entries might look like this: Save the changes, and close the file. Restart the Certificate Manager. Now the Certificate Manager is ready to use the CRL signing certificate to sign the CRLs it generates. 9.2.3.11. Configuring CRL generation from cache in CS.cfg The CRL cache is a simple mechanism that allows cert revocation information to be taken from a collection of revocation information maintained in memory. 
For best performance, it is recommended that this feature be enabled, which already represents the default behavior. The following configuration information (which is the default) is presented for information purposes or if changes are desired. Stop the CA server. Open the CA configuration directory. Edit the CS.cfg file, setting the enableCRLCache and enableCacheRecovery parameters to true: Start the CA server. 9.2.3.12. Configuring update intervals for CRLs in CS.cfg The following describes how to configure the CRL system flexibly to reflect desired behavior. The goal is to configure CRL updates according to some schedule of two types. One type allows for a list of explicit times and the other consists of a length of time interval between updates. There is also a hybrid scenario where both are enabled to account for drift. The Note entry just below actually represents the default out of the box scenario. The default scenario is listed as follows: Deviate from this only when a more detailed and specific update schedule is desired. The rest of the section will talk about how that is accomplished. Configuring the settings for full and delta CRLs in the CS.cfg file involves editing parameters. Table 9.9. CRL extended interval parameters Parameter Description Accepted Values updateSchema Sets the ratio for how many delta CRLs are generated per full CRL An integer value enableDailyUpdates Enables and disables setting CRL updates based on set times true or false enableUpdateInterval Enables and disables setting CRL updates based on set intervals true or false dailyUpdates Sets the times the CRLs should be updated A comma-delimited list of times autoUpdateInterval Sets the interval in minutes to update the CRLs An integer value autoUpdateInterval.effectiveAtStart Allows the system to attempt to use the new value of auto update immediately instead of waiting for the currently scheduled nextUpdate time true or false nextUpdateGracePeriod Adds the time in minutes to the CRL validity period to ensure that CRLs remain valid throughout the publishing or replication period An integer value refreshInSec Sets the periodicity in seconds of the thread on the clone OCSP to check LDAP for any updates of the CRL An integer value Important The autoUpdateInterval.effectiveAtStart parameter requires a system restart in order for a new value to apply. The default value of this parameter is false, it should only be changed by users who are sure of what they are doing. Procedure: How to configure CRL update intervals in CS.cfg Stop the CA server. Change to the CA configuration directory. Edit the CS.cfg file, and add the following line to set the update interval: The default interval is 1, meaning a full CRL is generated every time a CRL is generated. The updateSchema interval can be set to any integer. Set the update frequency, either by specifying a cyclical interval or set times for the updates to occur: Specify set times by enabling the enableDailyUpdates parameter, and add the desired times to the dailyUpdates parameter: This field sets a daily time when the CRL should be updated. To specify multiple times, enter a comma-separated list of times, such as 01:50,04:55,06:55 . To enter a schedule for multiple days, enter a comma-separated list to set the times within the same day, and then a semicolon separated list to identify times for different days. 
For example, set 01:50,04:55,06:55;02:00,05:00,17:00 to configure revocation on Day 1 of the cycle at 1:50am, 4:55am, and 6:55am and then Day 2 at 2am, 5am, and 5pm. Specify intervals by enabling the enableUpdateInterval parameter, and add the required interval in minutes to the autoUpdateInterval parameter: Set the following parameters depending on your environment: If you run a CA without an OCSP subsystem, set: If you run a CA with an OCSP subsystem, set: The ca.crl.MasterCRL.nextUpdateGracePeriod parameter defines the time in minutes, and the value must be big enough to enable the CA to propagate the new CRL to the OCSP. You must set the parameter to a non-zero value. If you additionally have OCSP clones in your environment, also set: The ocsp.store.defStore.refreshInSec parameter sets the frequency in seconds with which the clone OCSP instances are informed of CRL updates through LDAP replication updates from the master OCSP instance. See Table 9.9, "CRL extended interval parameters" for details on the parameters. Restart the CA server. NOTE Schedule drift can occur when updating CRLs by interval. Typically, drift occurs as a result of manual updates and CA restarts. To prevent schedule drift, set both enableDailyUpdates and enableUpdateInterval parameters to true, and add the required values to autoUpdateInterval and dailyUpdates : Only one dailyUpdates value will be accepted when updating CRLs by interval. The interval updates will resynchronize with the dailyUpdates value every 24 hours preventing schedule drift. 9.2.3.13. Changing the access control settings for the subsystem By default, access control rules are applied by evaluating deny rules first and then by evaluating allow rules. To change the order, change the authz.evaluateOrder parameter in the CS.cfg . Additionally, access control rules can be evaluated from the local web.xml file (basic ACLs) or more complex ACLs can be accessed by checking the LDAP database. The authz.sourceType parameter identifies what type of authorization to use. Note Always restart the subsystem after editing the CS.cfg file to load the updated settings. 9.2.3.14. Configuring ranges for requests and serial numbers When random serial numbers are not used, in case of cloned systems, administrators could specify the ranges Certificate System will use for requests and serial numbers in the /etc/pki/instance_name/subsystem/CS.cfg file: Note Certificate System supports BigInteger values for the ranges. 9.2.3.15. Setting requirement for pkiconsole to use TLS client certificate authentication Note pkiconsole is being deprecated and will be replaced by a new browser-based UI in a future major release. Although pkiconsole will continue to be available until the replacement UI is released, we encourage using the command line equivalent of pkiconsole at this time, as the pki CLI will continue to be supported and improved upon even when the new browser-based UI becomes available in the future. Edit the CS.cfg file of each subsystem, search for the authType parameter and set it as follows: 9.2.3.16. Changing the signing algorithms The signing algorithms for various PKI objects (certificates, CRLs, and OCSP responses) are first set at the time of installation via the pkispawn configuration file. 
It is then possible to change these settings post-installation, by editing the CS.cfg file of the instance involved, as follows: CA: default signing algorithm for signing certificates Open the CA's CS.cfg , and edit ca.signing.defaultSigningAlgorithm to assign the desired signing algorithm. For example: ca.signing.defaultSigningAlgorithm=SHA256withRSA CA: default signing algorithm for signing CRLs Open the CA's CS.cfg , and edit ca.crl.MasterCRL.signingAlgorithm to assign the desired signing algorithm. For example: ca.crl.MasterCRL.signingAlgorithm=SHA256withRSA CA: default signing algorithm for signing OCSP responses Open the CA's CS.cfg , and edit ca.ocsp_signing.defaultSigningAlgorithm to assign the desired signing algorithm. For example: ca.ocsp_signing.defaultSigningAlgorithm=SHA256withRSA OCSP: default signing algorithm for signing OCSP responses Open the OCSP's CS.cfg , and edit ocsp.signing.defaultSigningAlgorithm to assign the desired signing algorithm. For example: ocsp.signing.defaultSigningAlgorithm=SHA256withRSA Make sure to stop the CS instance before editing its CS.cfg file, and to restart it once you are done with the changes. Note Please see Section 3.3, "Allowed hash functions" . 9.2.3.17. Disabling the direct CA-OCSP CRL publishing When configuring the OCSP manager to use an LDAP directory, you need to disable the direct CA->OCSP CRL publishing method : Stop the SubCA: Edit the CA's CS.cfg configuration file (e.g. /var/lib/pki/rhcs10-RSA-SubCA/ca/conf/CS.cfg ) and set the following to false : For example: Start the CA for the configuration change to take effect: 9.2.3.18. Enabling client certificate verification using latest CRL within OCSP Note This is an alternative method for enabling revocation checks in an OCSP subsystem. The preferred method is detailed in Section 7.13.10.2, "Enabling OCSP for the CA / KRA / TKS / TPS" . When set up correctly, the OCSP system has the advantage of having the latest CRL internally to verify its own clients. To do so, you need to enable both the ocsp.store.ldapStore.validateConnCertWithCRL and auths.revocationChecking.enabled parameters. Edit the OCSP's CS.cfg configuration file (e.g. /var/lib/pki/rhcs10-OCSP-subca/ca/conf/CS.cfg ) and set the following: In addition to enabling these two parameters in the CS.cfg , the enableOCSP parameter should remain set to false in /var/lib/pki/<ocsp instance directory>/conf/server.xml . 9.2.3.19. Enabling client certificate and CRL publishing for the CA Red Hat Certificate System enables certificate authorities to publish certificates and, certificate revocation lists (CRLs). To configure the Certificate Authority (CA) settings for publishing certificates or Certificate Revocation Lists (CRLs) in the CS.cfg configuration file, follow this example: Example of CA's CS.cfg file with certificate and CRL publishing enabled: Example of CA's CS.cfg file with Certificate publishing disabled: Example of CA's CS.cfg file with CRL publishing disabled: 9.3. Managing system passwords As explained in Section 2.3.10, "Passwords and watchdog (nuxwdog)" , Certificate System uses passwords bind to servers or to unlock tokens when the server starts. The password.conf file stores system passwords in plain text. However, some administrators prefer to remove the password file entirely to allow nuxwdog to prompt for manual entry of each password initially and store for auto-restart in case of an unplanned shutdown. When a Certificate System instance starts, the subsystem automatically checks for the password.conf file. 
If the file exists, then it uses those passwords to connect to other services, such as the internal LDAP database. If that file does not exist, then the watchdog daemon prompts for all of the passwords required by the PKI server to start. Note If the password.conf file is present, the subsystem assumes that all the required passwords are present and properly formatted in clear text. If any passwords are missing or wrongly formatted, then the system fails to start correctly. The required passwords are listed in the cms.passwordlist parameter in the CS.cfg file: Note The cms.password.ignore.publishing.failure parameter allows a CA subsystem to start up successfully even if it has a failed connection to one of its LDAP publishing directories. For the CA, KRA, OCSP, and TKS subsystems, the default expected passwords are: internal for the NSS database internaldb for the internal LDAP database replicationdb for the replication password Any passwords to access external LDAP databases for publishing (CA only) Note If a publisher is configured after the password.conf file is removed, nothing is written to the password.conf file. Unless nuxwdog is configured, the server will not have access to the prompts for the new publishing password the time that the instance restarts. Any external hardware token passwords For the TPS, this prompts for three passwords: internal for the NSS database tokendbpass for the internal LDAP database Any external hardware token passwords This section describes the two mechanisms provided for Certificate System to retrieve these passwords: password.conf file (the default) nuxwdog (watchdog) 9.3.1. Configuring the password.conf file Note This section is here for reference only. Correct and secure operation must involve using the nuxwdog watchdog. Please refer to Section 9.3.2, "Using the Certificate System watchdog service" to enable nuxwdog , as it is required for full compliance. By default, passwords are stored in a plain text file, password.conf , in the subsystem conf/ directory. Therefore, it is possible to modify them simply through a text editor. The list of passwords stored in this file includes the following: The bind password used by the Certificate System instance to access and update the internal database. The password to the HSM The bind password used by the Certificate System instance to access the authentication directory, in case of CMC Shared Token. The bind password used by the subsystem to access and update the LDAP publishing directory; this is required only if the Certificate System instance is configured for publishing certificates and CRLs to an LDAP-compliant directory. the bind password used by the subsystem to access its replication database. For a TPS instance, the bind password used to access and update the token database. The password.conf file also contains the token passwords needed to open the private keys of the subsystem. The name and location password file to use for the subsystem is configured in the CS.cfg file: The internal password store and replication database have randomly-generated PINs which were set when the subsystem was installed and configured; the internal LDAP database password was defined by the administrator when the instance was configured. The password entries in the password.conf file are in the following format: For example: In cases where an HSM token is required, use the following format: For example: Example content of a password.conf file: 9.3.2. 
Using the Certificate System watchdog service In Certificate System, the watchdog service is used to start services which require passwords to access the security database in order to start. In case there is a requirement not to store the unencrypted passwords on the system, the watchdog service: prompts for the relevant passwords during server startup and caches them. uses cached passwords in case of a failure when the server is automatically restarted due to a crash. 9.3.2.1. Enabling the watchdog service To enable the watchdog service: If you also want to use the Shared Secret feature on this host, enable the Shared Secret feature as described in Section 9.6.3, "Enabling the CMC Shared Secret feature" . Backup the server.xml and password.conf files from the /var/lib/pki/instance_name/conf/ directory. For example: Stop and disable the Certificate System instance's service: If you use a Hardware Security Module (HSM), enable the watchdog service to prompt for the password of the hardware token: Display the name of the hardware token: The highlighted string in the example is the hardware token name. Add the cms.tokenList parameter to the /var/lib/pki/instance_name/conf/ca/CS.cfg file and set it to the name of the hardware token. For example: Enable the watchdog configuration for the instance: Alternatively, enable the watchdog for all instances: For further details, see the pki-server-nuxwdog(8) man page. By default, nuxwdog starts the server as the user configured in the TOMCAT_USER variable in the /etc/sysconfig/pki-tomcat file. Optionally, to modify the user and group: Copy the watchdog systemd unit file of the instance to the /etc/systemd/system/ directory: Note Unit files in the /etc/systemd/system/ directory have a higher priority and are not replaced during updates. Add the following entries to the [Service] section in the /etc/pki/instance_name/nuxwdog.conf file: Reload the systemd configuration: Enable the Certificate System service that uses the watchdog: Optionally: See Section 9.3.2.3, "Verifying that the Certificate System watchdog service is enabled" . To start the Certificate System instance, run the following command and enter the prompted passwords: 9.3.2.2. Starting and stopping Certificate System with the watchdog enabled For information how to manage a Certificate System instance refer to Section 2.2.3, "Execution management (systemctl)" . 9.3.2.3. Verifying that the Certificate System watchdog service is enabled To verify that the watchdog service is enabled: Verify that the pki-tomcatd-nuxwdog service is enabled: Verify that the pki-tomcatd service is disabled: In the /etc/pki/instance_name/server.xml file: verify that the passwordFile parameter refers to the CS.cfg file. For example: verify that the passwordClass parameter is set to com.netscape.cms.tomcat.NuxwdogPasswordStore : 9.3.2.4. Disabling the watchdog service To disable the watchdog service: Stop and disable the Certificate System instance's service that uses the watchdog: Enable the regular service without watch dog for the instance: Disable the watchdog configuration for the instance: For further details, see the pki-server-nuxwdog(8) man page. Restore the password.conf file to its original location. For example: Start the Certificate System instance: 9.4. Configuration files for the tomcat engine and web services All of the user and administrative (administrators, agents, and auditors) services for the subsystems are accessed over web protocols. 
This section discusses the two major sets of configuration files that apply to all Red Hat Certificate System subsystems (CA, KRA, OCSP, TKS, and TPS): /var/lib/pki/instance_name/conf/server.xml provides the configuration for the Tomcat engine. /usr/share/pki/subsystem_type/webapps/WEB-INF/web.xml provides the configuration for the web services offered by this instance. 9.4.1. Tomcatjss Note The later subsections include important configuration information on required changes to parameter values. Ensure they are followed for strict compliance. The following configuration in the server.xml file found in the example pki-tomcat/conf directory can be used to explain how Tomcatjss fits into the entire Certificate System ecosystem. Portions of the Connector entry for the secure port and its corresponding SSLHostConfig parameters are shown below. In the server.xml configuration file for the Tomcat engine, there is this Connector configuration element that contains the pointer to the tomcatjss implementation, which can be plugged into the sslImplementation property of this Connector object. Each key parameter element is explained in the subsections below. 9.4.1.1. TLS cipher configuration Red Hat Certificate System supports TLS 1.2 cipher suites. These are defined in the instance's server.xml when the CS instance acts as a server, and in CS.cfg when the CS instance acts as a client. If you need to configure the ciphers, refer to the corresponding post-installation section in Section 7.13.11, "Update the ciphers list" . For information on the supported ciphers, refer to Section 3.1.1, "Supported cipher suites" 9.4.1.2. Enabling automatic revocation checking on the CA Revocation checks are supported/enabled in the CA in the same way as all other RHCS subsystems (see Section 9.4.1.3, "Enabling certificate revocation checking for RHCS subsystems" ). In general, RHCS recommends OCSP for more efficient certificate revocation checks. However, one thing that sets the CA apart from the other CS subsystems is that the CAs are the creators of the CRLs for the certificates they issue, and therefore are the sources of the CRLs that are needed by the OCSP subsystems. For this reason, they need to be able to start up independently before the OCSP or other RHCS subsystems. In the case when the CA's own internal LDAP server-cert is issued by the CA itself, it is very important that the CRL Distribution Point extension is used instead of the AIA (OCSP) so as not to fall victim to the chicken and egg issue during startup of the CA. 9.4.1.2.1. Configure support for CRL Distribution Point For the purpose of mitigating propagation and reducing storage of large CRLs, it is important to note that RHCS recommends partitioning of the CRL to allow the certificates that utilize the CRL Distribution Point to be grouped into a smaller subset. See Section 7.3.8, "Configure support for CRL Distribution Point" for information on how to set up the CA to support partitioned CRL for server-certs that are issued using the CRL Distribution Point certificate enrollment profile. 9.4.1.3. Enabling certificate revocation checking for RHCS subsystems Certificate revocation check is a vital part of certificate validation in a PKI environment. RHCS provides two types of certificate revocation validation methods: OCSP and CRL by means of detecting/processing either the AIA (OCSP) or the CRL Distribution Point extension of its peer certificate. RHCS recommends OCSP for more efficient certificate revocation checks. 
Note The usage of CRL Distribution Point is unavoidable in cases when the use of OCSP is not plausible. Such a case can be exemplified in Section 9.4.1.2, "Enabling automatic revocation checking on the CA" above. Applications (in this case, PKI subsystems) adopting OCSP need to know how to contact the OCSP system in order to verify the certificates in question. The PKI subsystems do not have OCSP checking enabled by default. You can enable OCSP checking for a PKI subsystem by editing its server.xml file. NOTE If you have configured the subsystem to use an SSL/TLS connection with its internal database, then the SSL/TLS server certificate of the LDAP internal database must be recognized by the OCSP responder. If the OCSP responder does not recognize the LDAP server certificate, then the subsystem will not start properly. This configuration is covered in the Red Hat Certificate System Planning, Installation and Deployment Guide , since subsystem-LDAP SSL/TLS server connections are configured as part of the subsystem setup. The following procedure aims to configure your PKI subsystem instances to verify its peer certificates according to their AIA extensions using OCSP. This method of OCSP certificate verification is more flexible than the default static OCSP responder URL. Procedure Stop the instance: Edit the /var/lib/pki/<instance_name>/conf/server.xml file to configure the Connector name="Secure" section: Set the enableOCSP parameter to true Make sure you remove these two parameters and their assigned values: ocspResponderURL ocspResponderCertNickname For example: Start the instance: Note By default, all PKI system certificates created during installation are generated with an AIA (Authority Information Access) extension pointing to its issuing CA's internal OCSP service. If you follow the steps in Section 9.4.1.4, "Adding an AIA extension to a certificate" , to point to the external OCSP prior to installing PKI subsystems, then all their certificates (and all other certificates issued by its CA thereon) should bear the correct AIA pointing to the external OCSP instead. 9.4.1.3.1. Setting trust of the OCSP signing certificate Each OCSP signing certificate must chain up to a trusted root in the Certificate System's NSS database. In RHCS, during validation, each OCSP response includes the OCSP signer certificate chain. Therefore, the following consideration is required. If the OCSP responder being used has been configured to provide the entire certificate chain of the OCSP signing certificate with each OCSP response, as is the default case for RHCS OCSP services, then no further action is required. NSS knows how to validate this chain from the given information. If on the other hand the OCSP is known to not return the full chain, you then need to import the chain manually during installation setup. For details, see Section 10.5.3, "Importing an OCSP responder" . The Certificate System OCSP responder already includes the chain with every response. This includes the Certificate System external OCSP responder and the internal OCSP service that comes with each CA. This behavior is by default and cannot be changed. 9.4.1.3.2. OCSP parameters for server.xml The following table provides information on each parameter relevant to certificate revocation checks (that is, OCSP and CRL) in the server.xml file. Table 9.10. OCSP parameters for server.xml Parameter Description enableRevocationCheck (also known as enableOCSP) Enables (or disables) revocation checking for the subsystem. 
ocspResponderURL Sets the URL where the OCSP requests are sent. For an OCSP Manager, this can be another OCSP service in another OCSP or in a CA. For a TKS or KRA, this always points to an external OCSP service in an OCSP or a CA. ocspResponderCertNickname Sets the nickname of the signing certificate for the responder, either the OCSP signing certificate or the CA's OCSP signing certificate. The certificate must be imported into the subsystem's NSS database and have the appropriate trust settings set. ocspCacheSize Sets the maximum number of cache entries. ocspMinCacheEntryDuration Sets minimum seconds before another fetch attempt can be made. For example, if this is set to 120, then the validity of a certificate cannot be checked again until at least 2 minutes after the last validity check. ocspMaxCacheEntryDuration Sets the maximum number of seconds to wait before making the fetch attempt. This prevents having too large a window between validity checks. ocspTimeout Sets the timeout period, in seconds, for the OCSP request. NOTE If a nextUpdate field is sent with the OCSP response, it can affect the fetch time with ocspMinCacheEntryDuration and ocspMaxCacheEntryDuration as follows: If the value in nextUpdate has been reached before the value set in ocspMinCacheEntryDuration , the fetch will not be started until the value set in ocspMinCacheEntryDuration has been reached. If ocspMinCacheEntryDuration has been reached, the server checks if the value in nextUpdate has been reached. If the value has been reached, the fetch will happen. Regardless of the value in nextUpdate , if the setting in ocspMaxCacheEntryDuration has been reached, the fetch will happen. NOTE Due to the underlying SSL/TLS session caches kept by NSS, which follows the industry standard to prevent very expensive full handshakes as well as provide stronger privacy, OCSP requests are only made when NSS determines that a validation is required. The SSL/TLS session caches are independent of the OCSP status cache. Once NSS determines that an OCSP request is to be made, the request will be made and the response received will be kept in the OCSP certificate status cache. Due to the SSL/TLS session caches, these OCSP cache parameters only come into play when allowed by NSS. 9.4.1.4. Adding an AIA extension to a certificate By default, unless explicitly specified, the CA issues certificates with an AIA (Authority Information Access) extension pointing to the CA's own internal OCSP. Once you have set up an OCSP instance, you can configure the CA to start issuing certificates with an AIA extension that points to the OCSP instance instead. Prerequisite You are logged in as root user. Procedure Stop the CA: Edit the CA's CS.cfg and set the ca.defaultOcspUri variable to point to the OCSP. For example: Start the CA: Note The OCSP URL of each subsystem (e.g. KRA) is set in its server.xml file by default. When enabled, this directs the RHCS instance to use the static URL when looking up a certificate status, instead of the AIA extension embedded in the peer certificate. For more information on using the AIA extension, refer to Section 9.4.1.3, "Enabling certificate revocation checking for RHCS subsystems" . 9.4.1.5. Adding a CRL Distribution Point extension to a certificate To add the CRL Distribution Point extension to a certificate, you need to use a certificate enrollment profile equipped with the CRL Distribution Point extension. 
To enable a certificate enrollment profile, see Section 7.3.8.3, "CA's enrollment profile configuration with CRL Distribution Points" . Note that the CA needs to be configured to handle CRL Distribution Points for such profiles to work. See Section 7.3.8, "Configure support for CRL Distribution Point" . 9.4.2. Session timeout When a user connects to PKI server through a client application, the server will create a session to keep track of the user. As long as the user remains active, the user can execute multiple operations over the same session without having to re-authenticate. Session timeout determines how long the server will wait since the last operation before terminating the session due to inactivity. Once the session is terminated, the user will be required to re-authenticate to continue accessing the server, and the server will create a new session. There are two types of timeouts: TLS session timeout HTTP session timeout Due to differences in the way clients work, the clients will be affected differently by these timeouts. Note Certain clients have their own timeout configuration. For example, Firefox has a keep-alive timeout setting. For details, see http://kb.mozillazine.org/Network.http.keep-alive.timeout . If the value is different from the server's setting for TLS Session Timeout or HTTP Session Timeout, different behavior can be observed. 9.4.2.1. TLS session timeout A TLS session is a secure communication channel over a TLS connection established through TLS handshake protocol. PKI server generates audit events for TLS session activities. The server generates an ACCESS_SESSION_ESTABLISH audit event with Outcome=Success when the connection is created. If the connection fails to be created, the server will generate an ACCESS_SESSION_ESTABLISH audit event with Outcome=Failure . When the connection is closed, the server will generate an ACCESS_SESSION_TERMINATED audit event. TLS session timeout (that is TLS connection timeout) is configured in the keepAliveTimeout parameter in the Secure Connector element in the /etc/pki/instance/server.xml file: By default the timeout value is set to 300000 milliseconds (that is 5 minutes). To change this value, edit the /etc/pki/instance/server.xml file and then restart the server. Note Note that this value will affect all TLS connections to the server. A large value may improve the efficiency of the clients since they can reuse existing connections that have not expired. However, it may also increase the number of connections that the server has to support simultaneously since it takes longer for abandoned connections to expire. 9.4.2.2. HTTP session timeout An HTTP session is a mechanism to track a user across multiple HTTP requests using HTTP cookies. PKI server does not generate audit events for the HTTP sessions. Note For the purpose of auditing consistency, set the session-timeout value in this section to match the keepAliveTimeout value in Section 9.4.2.1, "TLS session timeout" . For example if keepAliveTimeout was set to 300000 (5 minutes), then set session-timeout to 30 . The HTTP session timeout can be configured in the session-timeout element in the /etc/pki/instance/web.xml file: By default the timeout value is set to 30 minutes. To change the value, edit the /etc/pki/instance/web.xml file and then restart the server. Note Note that this value affects all sessions in all web applications on the server. 
A large value may improve the experience of the users since they will not be required to re-authenticate or view the access banner again so often. However, it may also increase the security risk since it takes longer for abandoned HTTP sessions to expire. 9.4.2.3. Session timeout for PKI Web UI PKI Web UI is an interactive web-based client that runs in a browser. Currently it only supports client certificate authentication. When the Web UI is opened, the browser may create multiple TLS connections to a server. These connections are associated to a single HTTP session. To configure a timeout for the Web UI, see Section 9.4.2.2, "HTTP session timeout" . The TLS session timeout is normally irrelevant since the browser caches the client certificate so it can recreate the TLS session automatically. When the HTTP session expires, the Web UI does not provide any immediate indication. However, the Web UI will display an access banner (if enabled) before a user executes an operation. 9.4.2.4. Session timeout for PKI Console PKI Console is an interactive standalone graphical UI client. It supports username/password and client certificate authentication. When the console is started, it will create a single TLS connection to the server. The console will display an access banner (if enabled) before opening the graphical interface. Unlike the Web UI, the console does not maintain an HTTP session with the server. To configure a timeout for the console, see Section 9.4.2.1, "TLS session timeout" . The HTTP session timeout is irrelevant since the console does not use HTTP session. When the TLS session expires, the TLS connection will close, and the console will exit immediately to the system. If the user wants to continue, the user will need to restart the console. 9.4.2.5. Session timeout for PKI CLI PKI CLI is a command-line client that executes a series of operations. It supports username/password and client certificate authentication. When the CLI is started, it will create a single TLS connection to the server and an HTTP session. The CLI will display an access banner (if enabled) before executing operations. Both timeouts are generally irrelevant to PKI CLI since the operations are executed in sequence without delay and the CLI exits immediately upon completion. However, if the CLI waits for user inputs, is slow, or becomes unresponsive, the TLS session or the HTTP session may expire and the remaining operations fail. If such delay is expected, see Section 9.4.2.1, "TLS session timeout" and Section 9.4.2.2, "HTTP session timeout" to accommodate the expected delay. 9.4.3. Removing unused interfaces from web.xml (CA only) Several legacy interfaces (for features like bulk issuance or the policy framework) are still included in the CA's web.xml file. However, since these features are deprecated and no longer in use, then they can be removed from the CA configuration to increase security. Procedure Stop the CA. OR if using the Nuxwdog watchdog: Open the web files directory for the CA. For example: Back up the current web.xml file. Edit the web.xml file and remove the entire <servlet> entries for each of the following deprecated servlets: caadminEnroll cabulkissuance cacertbasedenrollment caenrollment caProxyBulkIssuance For example, remove the caadminEnroll servlet entry: After removing the servlet entries, remove the corresponding <servlet-mapping> entries. Remove three <filter-mapping> entries for an end-entity request interface. Start the CA again. OR if using the Nuxwdog watchdog: 9.4.4. 
Customizing Web Services All of the subsystems (with the exception of the TKS) have some kind of a web-based services page for agents and some for other roles, like administrators or end entities. These web-based services pages use basic HTML and JavaScript, which can be customized to use different colors, logos, and other design elements to fit in with an existing site or intranet. 9.4.4.1. Customizing subsystem web applications Each PKI subsystem has a corresponding web application, which contains: HTML pages containing texts, JavaScript codes, page layout, CSS formatting, and so on A web.xml file, which defines servlets, paths, security constraints, and other Links to PKI libraries. The subsystem web applications are deployed using context files located in the /var/lib/pki/pki-tomcat/conf/Catalina/localhost/ directory, for example, the ca.xml file: The docBase points to the location of the default web application directory, /usr/share/pki/ . To customize the web application, copy the web application directory into the instance's webapps directory: Then change the docBase to point to the custom web application directory relative from the webapps directory: The change will be effective immediately without the need to restart the server. To remove the custom web application, simply revert the docBase and delete the custom web application directory: 9.4.4.2. Customizing the Web UI theme The subsystem web applications in the same instance share the same theme, which contains: CSS files, which determine the global appearance Image files including logo, icons, and other Branding properties, which determine the page title, logo link, title color, and other. The Web UI theme is deployed using the pki.xml context file in the /var/lib/pki/pki-tomcat/conf/Catalina/localhost/ directory: The docBase points to the location of the default theme directory, /usr/share/pki/ . To customize the theme, copy the default theme directory into the pki directory in the instance's webapps directory: Then change the docBase to point to the custom theme directory relative from the webapps directory: The change will be effective immediately without the need to restart the server. To remove the custom theme, simply revert the docBase and delete the custom theme directory: 9.4.4.3. Customizing TPS token state labels The default token state labels are stored in the /usr/share/pki/tps/conf/token-states.properties file and described in Section 2.5.2.4.1.4, "Token state and transition labels" . To customize the labels, copy the file into the instance directory: The change will be effective immediately without the need to restart the server. To remove the customized labels, simply delete the customized file: 9.5. Using an access banner In Certificate System, Administrators can configure a banner with customizable text. The banner will be displayed in the following situations: Application When the banner is displayed PKI Console Before the console is displayed. After the session has expired. Web interface When you connect to the web interface. After the session expired. pki command-line utility Before the actual operation proceeds. You can use the banner to display important information to the users before they can use Certificate System. The user must agree to the displayed text to continue. Example 9.4. When the access banner is displayed The following example shows when the access banner is displayed if you are using the pki utility: 9.5.1. 
Enabling an access banner To enable the access banner, create the /etc/pki/instance_name/banner.txt file and enter the text to displayed. Important The text in the /etc/pki/instance_name/banner.txt file must use the UTF-8 format. To validate, see Section 9.5.4, "Validating the banner" . 9.5.2. Disabling an access banner To disable the access banner, either delete or rename the /etc/pki/instance_name/banner.txt file. For example: 9.5.3. Displaying the banner To display the currently configured banner: 9.5.4. Validating the banner To validate that the banner does not contain invalid characters: 9.6. Configuration for CMC This section describes how to configure Certificate System for Certificate Management over CMS (CMC). 9.6.1. Understanding how CMC works Before configuring CMC, read the following documentation to learn more about the subject: Section 2.4.1.1.2.2, "Enrolling with CMC" 5.3 Requesting and receiving certificates using CMC in the Administration Guide (Common Criteria Edition) . Chapter 3 Making Rules for Issuing Certificates (Certificate Profiles) in the Administration Guide (Common Criteria Edition) . 9.6.2. Enabling the PopLinkWitnessV2 feature For a high-level security on the Certificate Authority (CA), enable the following option in the /var/lib/pki/instance_name/ca/conf/CS.cfg file: 9.6.3. Enabling the CMC Shared Secret feature To enable the shared token feature in a Certificate Authority (CA): If the watchdog service is enabled on the host, temporarily disable it. See Section 9.3.2.4, "Disabling the watchdog service" . Add the shrTok attribute to Directory Server's schema: If the system keys are stored on a Hardware Security Module (HSM), set the cmc.token parameter in the /var/lib/pki/instance_name/ca/conf/CS.cfg file. For example: Enable the shared token authentication plugin by adding the following settings into the /var/lib/pki/instance_name/ca/conf/CS.cfg file: Set the nickname of an RSA issuance protection certificate in the ca.cert.issuance_protection.nickname parameter in the /var/lib/pki/instance_name/ca/conf/CS.cfg file. For example: This step is: Optional if you use an RSA certificate in the ca.cert.subsystem.nickname parameter. Required if you use an ECC certificate in the ca.cert.subsystem.nickname parameter. Important If the ca.cert.issuance_protection.nickname parameter is not set, Certificate System automatically uses the certificate of the subsystem specified in the ca.cert.subsystem.nickname . However, the issuance protection certificate must be an RSA certificate. Restart Certificate System: When the CA starts, Certificate System prompts for the LDAP password used by the Shared Token plugin. If you temporarily disabled the watchdog service at the beginning of this procedure, re-enable it. See Section 9.3.2.1, "Enabling the watchdog service" . Note For information on how to use the CMC Shared Token, see 8.4 "CMC SharedSecret authentication" in the Administration Guide (Common Criteria Edition) . 9.6.4. Enabling CMCRevoke for the Web User Interface As described in 6.2.1 "Performing a CMC Revocation" in the Administration Guide (Common Criteria Edition) , there are two ways to submit CMC revocation requests. In cases when you use the CMCRevoke utility to create revocation requests to be submitted through the web UI, add the following setting to the /var/lib/pki/instance_name/ca/conf/CS.cfg file:
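The setting in question is the CMC client authentication bypass flag, which also appears in the command listing for this chapter:

cmc.bypassClientAuth=true

As with the other CS.cfg changes described in this chapter, stop the CA instance before editing the file and restart it afterwards so that the new value takes effect.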
[ "/var/lib/pki/instance_name/subsystem_type/conf", "/var/lib/pki/instance_name/ca/conf", "pki-server stop instance_name", "systemctl stop pki-tomcatd-nuxwdog@instance_name.service", "pki-server start instance_name", "systemctl start pki-tomcatd-nuxwdog@instance_name.service", "#comment parameter=value", "log.instance.SignedAudit._000=## log.instance.SignedAudit._001=## Signed Audit Logging log.instance.SignedAudit._002=## log.instance.SignedAudit._003=## To list available audit events: log.instance.SignedAudit._004=## USD pki-server ca-audit-event-find log.instance.SignedAudit._005=## log.instance.SignedAudit._006=## To enable/disable audit event: log.instance.SignedAudit._007=## USD pki-server ca-audit-event-enable/disable <event name> log.instance.SignedAudit._008=## log.instance.SignedAudit.bufferSize=512 log.instance.SignedAudit.enable=true log.instance.SignedAudit.events=ACCESS_SESSION_ESTABLISH,ACCESS_SESSION_TERMINATED,AUDIT_LOG_SIGNING,AUDIT_LOG_STARTUP,AUTH,AUTHORITY_CONFIG,AUTHZ,CERT_PROFILE_APPROVAL,CERT_REQUEST_PROCESSED,CERT_SIGNING_INFO,CERT_STATUS_CHANGE_REQUEST,CERT_STATUS_CHANGE_REQUEST_PROCESSED,CLIENT_ACCESS_SESSION_ESTABLISH,CLIENT_ACCESS_SESSION_TERMINATED,CMC_REQUEST_RECEIVED,CMC_RESPONSE_SENT,CMC_SIGNED_REQUEST_SIG_VERIFY,CMC_USER_SIGNED_REQUEST_SIG_VERIFY,CONFIG_ACL,CONFIG_AUTH,CONFIG_CERT_PROFILE,CONFIG_CRL_PROFILE,CONFIG_ENCRYPTION,CONFIG_ROLE,CONFIG_SERIAL_NUMBER,CONFIG_SIGNED_AUDIT,CONFIG_TRUSTED_PUBLIC_KEY,CRL_SIGNING_INFO,DELTA_CRL_GENERATION,FULL_CRL_GENERATION,KEY_GEN_ASYMMETRIC,LOG_PATH_CHANGE,OCSP_GENERATION,OCSP_SIGNING_INFO,PROFILE_CERT_REQUEST,PROOF_OF_POSSESSION,RANDOM_GENERATION,ROLE_ASSUME,SCHEDULE_CRL_GENERATION,SECURITY_DOMAIN_UPDATE,SELFTESTS_EXECUTION,SERVER_SIDE_KEYGEN_REQUEST,SERVER_SIDE_KEYGEN_REQUEST_PROCESSED log.instance.SignedAudit.expirationTime=0 log.instance.SignedAudit.fileName=/var/lib/pki/rhcs10-ECC-SubCA/logs/ca/signedAudit/ca_audit log.instance.SignedAudit.filters.CMC_SIGNED_REQUEST_SIG_VERIFY=(Outcome=Failure) log.instance.SignedAudit.filters.CMC_USER_SIGNED_REQUEST_SIG_VERIFY=(Outcome=Failure) log.instance.SignedAudit.filters.DELTA_CRL_GENERATION=(Outcome=Failure) log.instance.SignedAudit.filters.FULL_CRL_GENERATION=(Outcome=Failure) log.instance.SignedAudit.filters.OCSP_GENERATION=(Outcome=Failure) log.instance.SignedAudit.filters.RANDOM_GENERATION=(Outcome=Failure) log.instance.SignedAudit.flushInterval=5 log.instance.SignedAudit.level=1 log.instance.SignedAudit.logSigning=true log.instance.SignedAudit.maxFileSize=2000 log.instance.SignedAudit.pluginName=file log.instance.SignedAudit.rolloverInterval=2592000 log.instance.SignedAudit.signedAudit=_002=## log.instance.SignedAudit.signedAuditCertNickname=NHSM-CONN-XC:auditSigningCert cert-rhcs10-ECC-SubCA CA log.instance.SignedAudit.type=signedAudit", "authz.impl._000=## authz.impl._001=## authorization manager implementations authz.impl._002=## authz.impl.BasicAclAuthz.class=com.netscape.cms.authorization.BasicAclAuthz authz.instance.BasicAclAuthz.pluginName=BasicAclAuthz", "[DEFAULT] pki_admin_password=Secret.123 pki_client_pkcs12_password=Secret.123 pki_ds_password=Secret.123 Optionally keep client databases pki_client_database_purge=False Separated CA instance name and ports pki_instance_name=pki-ca pki_http_port=18080 pki_https_port=18443 This Separated CA instance will be its own security domain pki_security_domain_https_port=18443 Separated CA Tomcat ports pki_ajp_port=18009 pki_tomcat_server_port=18005", "log.instance.SignedAudit._000=## log.instance.SignedAudit._001=## 
Signed Audit Logging log.instance.SignedAudit._002=## log.instance.SignedAudit._003=## To list available audit events: log.instance.SignedAudit._004=## USD pki-server ca-audit-event-find log.instance.SignedAudit._005=## log.instance.SignedAudit._006=## To enable/disable audit event: log.instance.SignedAudit._007=## USD pki-server ca-audit-event-enable/disable <event name> log.instance.SignedAudit._008=## log.instance.SignedAudit.bufferSize=512 log.instance.SignedAudit.enable=true log.instance.SignedAudit.events=ACCESS_SESSION_ESTABLISH,ACCESS_SESSION_TERMINATED,AUDIT_LOG_SIGNING,AUDIT_LOG_STARTUP,AUTH,AUTHORITY_CONFIG,AUTHZ,CERT_PROFILE_APPROVAL,CERT_REQUEST_PROCESSED,CERT_SIGNING_INFO,CERT_STATUS_CHANGE_REQUEST,CERT_STATUS_CHANGE_REQUEST_PROCESSED,CLIENT_ACCESS_SESSION_ESTABLISH,CLIENT_ACCESS_SESSION_TERMINATED,CMC_REQUEST_RECEIVED,CMC_RESPONSE_SENT,CMC_SIGNED_REQUEST_SIG_VERIFY,CMC_USER_SIGNED_REQUEST_SIG_VERIFY,CONFIG_ACL,CONFIG_AUTH,CONFIG_CERT_PROFILE,CONFIG_CRL_PROFILE,CONFIG_ENCRYPTION,CONFIG_ROLE,CONFIG_SERIAL_NUMBER,CONFIG_SIGNED_AUDIT,CONFIG_TRUSTED_PUBLIC_KEY,CRL_SIGNING_INFO,DELTA_CRL_GENERATION,FULL_CRL_GENERATION,KEY_GEN_ASYMMETRIC,LOG_PATH_CHANGE,OCSP_GENERATION,OCSP_SIGNING_INFO,PROFILE_CERT_REQUEST,PROOF_OF_POSSESSION,RANDOM_GENERATION,ROLE_ASSUME,SCHEDULE_CRL_GENERATION,SECURITY_DOMAIN_UPDATE,SELFTESTS_EXECUTION,SERVER_SIDE_KEYGEN_REQUEST,SERVER_SIDE_KEYGEN_REQUEST_PROCESSED log.instance.SignedAudit.expirationTime=0 log.instance.SignedAudit.fileName=/var/lib/pki/rhcs10-ECC-SubCA/logs/ca/signedAudit/ca_audit log.instance.SignedAudit.filters.CMC_SIGNED_REQUEST_SIG_VERIFY=(Outcome=Failure) log.instance.SignedAudit.filters.CMC_USER_SIGNED_REQUEST_SIG_VERIFY=(Outcome=Failure) log.instance.SignedAudit.filters.DELTA_CRL_GENERATION=(Outcome=Failure) log.instance.SignedAudit.filters.FULL_CRL_GENERATION=(Outcome=Failure) log.instance.SignedAudit.filters.OCSP_GENERATION=(Outcome=Failure) log.instance.SignedAudit.filters.RANDOM_GENERATION=(Outcome=Failure) log.instance.SignedAudit.flushInterval=5 log.instance.SignedAudit.level=1 log.instance.SignedAudit.logSigning=true log.instance.SignedAudit.maxFileSize=2000 log.instance.SignedAudit.pluginName=file log.instance.SignedAudit.rolloverInterval=2592000 log.instance.SignedAudit.signedAudit=_002=## log.instance.SignedAudit.signedAuditCertNickname=NHSM-CONN-XC:auditSigningCert cert-rhcs10-ECC-SubCA CA log.instance.SignedAudit.type=signedAudit", "auths.impl.SharedToken.class=com.netscape.cms.authentication.SharedSecret auths.instance.SharedToken.pluginName=SharedToken auths.instance.SharedToken.dnpattern= auths.instance.SharedToken.ldap.basedn=ou=People,dc=example,dc=org auths.instance.SharedToken.ldap.ldapauth.authtype=BasicAuth auths.instance.SharedToken.ldap.ldapauth.bindDN=cn=Directory Manager auths.instance.SharedToken.ldap.ldapauth.bindPWPrompt=Rule SharedToken auths.instance.SharedToken.ldap.ldapauth.clientCertNickname= auths.instance.SharedToken.ldap.ldapconn.host=server.example.com auths.instance.SharedToken.ldap.ldapconn.port=636 auths.instance.SharedToken.ldap.ldapconn.secureConn=true auths.instance.SharedToken.ldap.ldapconn.version=3 auths.instance.SharedToken.ldap.maxConns= auths.instance.SharedToken.ldap.minConns= auths.instance.SharedToken.ldapByteAttributes= auths.instance.SharedToken.ldapStringAttributes= auths.instance.SharedToken.shrTokAttr=shrTok", "authz.impl.DirAclAuthz.class=com.netscape.cms.authorization.DirAclAuthz authz.instance.DirAclAuthz.ldap=internaldb authz.instance.DirAclAuthz.pluginName=DirAclAuthz 
authz.instance.DirAclAuthz.ldap._000=## authz.instance.DirAclAuthz.ldap._001=## Internal Database authz.instance.DirAclAuthz.ldap._002=## authz.instance.DirAclAuthz.ldap.basedn=dc=server.example.com-pki-ca authz.instance.DirAclAuthz.ldap.database=server.example.com-pki-ca authz.instance.DirAclAuthz.ldap.maxConns=15 authz.instance.DirAclAuthz.ldap.minConns=3 authz.instance.DirAclAuthz.ldap.ldapauth.authtype=SslClientAuth authz.instance.DirAclAuthz.ldap.ldapauth.bindDN=cn=Directory Manager authz.instance.DirAclAuthz.ldap.ldapauth.bindPWPrompt=Internal LDAP Database authz.instance.DirAclAuthz.ldap.ldapauth.clientCertNickname= authz.instance.DirAclAuthz.ldap.ldapconn.host=localhost authz.instance.DirAclAuthz.ldap.ldapconn.port=11636 authz.instance.DirAclAuthz.ldap.ldapconn.secureConn=true authz.instance.DirAclAuthz.ldap.multipleSuffix.enable=false", "auths.impl.AgentCertAuth.class=com.netscape.cms.authentication.AgentCertAuthentication auths.instance.AgentCertAuth.agentGroup=Certificate Manager Agents auths.instance.AgentCertAuth.pluginName=AgentCertAuth", "ca.sslserver.cert=MIIDmDCCAoCgAwIBAgIBAzANBgkqhkiG9w0BAQUFADBAMR4wHAYDVQQKExVSZWR ca.sslserver.certreq=MIICizCCAXMCAQAwRjEeMBwGA1UEChMVUmVkYnVkY29tcHV0ZXIgRG9tYWluMSQwIgYDV ca.sslserver.nickname=Server-Cert cert-pki-ca ca.sslserver.tokenname=Internal Key Storage Token", "conn.ca1.clientNickname=subsystemCert cert-pki-tps conn.ca1.hostadminport=server.example.com:8443 conn.ca1.hostagentport=server.example.com:8443 conn.ca1.hostport=server.example.com:9443 conn.ca1.keepAlive=true conn.ca1.retryConnect=3 conn.ca1.servlet.enrollment=/ca/ee/ca/profileSubmitSSLClient conn.ca1.servlet.renewal=/ca/ee/ca/profileSubmitSSLClient conn.ca1.servlet.revoke=/ca/subsystem/ca/doRevoke conn.ca1.servlet.unrevoke=/ca/subsystem/ca/doUnrevoke conn.ca1.timeout=100", "internaldb._000=## internaldb._000=## internaldb._001=## Internal Database internaldb._002=## internaldb.basedn=o=pki-tomcat-ca-SD internaldb.database=pki-tomcat-ca internaldb.maxConns=15 internaldb.minConns=3 internaldb.ldapauth.authtype=SslClientAuth internaldb.ldapauth.clientCertNickname=HSM-A:subsystemCert pki-tomcat-ca internaldb.ldapconn.host=example.com internaldb.ldapconn.port=11636 internaldb.ldapconn.secureConn=true internaldb.multipleSuffix.enable=false", "systemctl stop pki-tomcatd-nuxwdog@instance_name.service", "vim /var/lib/pki/instance_name/ca/conf/CS.cfg", "ca.publish.queue.enable=true", "ca.publish.queue.maxNumberOfThreads=1 ca.publish.queue.priorityLevel=0 ca.publish.queue.pageSize=100 ca.publish.queue.saveStatus=200", "systemctl start pki-tomcatd-nuxwdog@instance_name.service", "kra.noOfRequiredRecoveryAgents=1", "X500Name.NEW_ATTRNAME.oid=n.n.n.n X500Name.NEW_ATTRNAME.class=string_to_DER_value_converter_class", "X500Name.MY_ATTR.oid=1.2.3.4.5.6 X500Name.MY_ATTR.class=netscape.security.x509.DirStrConverter", "systemctl stop pki-tomcatd-nuxwdog@instance_name.service", "X500Name.attr.MYATTR1.oid=1.2.3.4.5.6 X500Name.attr.MYATTR1.class=netscape.security.x509.DirStrConverter X500Name.attr.MYATTR2.oid=11.22.33.44.55.66 X500Name.attr.MYATTR2.class=netscape.security.x509.IA5StringConverter X500Name.attr.MYATTR3.oid=111.222.333.444.555.666 X500Name.attr.MYATTR3.class=netscape.security.x509.PrintableConverter", "systemctl start pki-tomcatd-nuxwdog@instance_name.service", "MYATTR1: a_value MYATTR2: a.Value MYATTR3: aValue cn: John Doe o: Example Corporation", "X500Name.directoryStringEncodingOrder=encoding_list_separated_by_commas", 
"X500Name.directoryStringEncodingOrder=PrintableString,BMPString", "systemctl stop pki-tomcatd-nuxwdog@instance_name.service", "X500Name.directoryStringEncodingOrder=PrintableString,UniversalString", "systemctl start pki-tomcatd-nuxwdog@instance_name.service", "policyset.userCertSet.6.default.class_id=keyUsageExtDefaultImpl policyset.userCertSet.6.default.params.keyUsageCrlSign=true", "pki-server stop instance_name", "cd /var/lib/pki/instance-name/ca/conf/", "ca.crl_signing.cacertnickname=nickname cert-instance_ID ca.crl_signing.defaultSigningAlgorithm=signing_algorithm ca.crl_signing.tokenname=token_name", "ca.crl_signing.cacertnickname=crlSigningCert cert-pki-ca ca.crl_signing.defaultSigningAlgorithm=SHA512withRSA ca.crl_signing.tokenname=Internal Key Storage Token", "pki-server restart instance_name", "systemctl stop pki-tomcatd-nuxwdog@instance_name.service*", "cd /var/lib/instance_name/conf/", "ca.crl.MasterCRL.enableCRLCache=true ca.crl.MasterCRL.enableCacheRecovery=true", "systemctl start pki-tomcatd-nuxwdog@instance_name.service", "ca.crl.MasterCRL.updateSchema=3 ca.crl.MasterCRL.enableDailyUpdates=true ca.crl.MasterCRL.enableUpdateInterval=true ca.crl.MasterCRL.autoUpdateInterval=240 ca.crl.MasterCRL.dailyUpdates=1:00 ca.crl.MasterCRL.nextUpdateGracePeriod=0", "systemctl stop pki-tomcatd-nuxwdog@instance_name.service", "cd /var/lib/instance_name/conf/", "ca.crl.MasterCRL.updateSchema=3", "ca.crl.MasterCRL.enableDailyUpdates=true ca.crl.MasterCRL.enableUpdateInterval=false ca.crl.MasterCRL.dailyUpdates=0:50,04:55,06:55", "ca.crl.MasterCRL.enableDailyUpdates=false ca.crl.MasterCRL.enableUpdateInterval=true ca.crl.MasterCRL.autoUpdateInterval=240", "ca.crl.MasterCRL.nextUpdateGracePeriod=0", "ca.crl.MasterCRL.nextUpdateGracePeriod=time_in_minutes", "ocsp.store.defStore.refreshInSec=time_in_seconds", "systemctl start pki-tomcatd-nuxwdog@instance_name.service", "ca.crl.MasterCRL.enableDailyUpdates=true ca.crl.MasterCRL.enableUpdateInterval=true ca.crl.MasterCRL.autoUpdateInterval=240 ca.crl.MasterCRL.dailyUpdates=1:00", "authz.evaluateOrder=deny,allow", "authz.sourceType=web.xml", "dbs.beginRequestNumber=1001001007001 dbs.endRequestNumber=11001001007000 dbs.requestIncrement=10000000000000 dbs.requestLowWaterMark=2000000000000 dbs.requestCloneTransferNumber=10000 dbs.requestDN=ou=ca, ou=requests dbs.requestRangeDN=ou=requests, ou=ranges dbs.beginSerialNumber=1001001007001 dbs.endSerialNumber=11001001007000 dbs.serialIncrement=10000000000000 dbs.serialLowWaterMark=2000000000000 dbs.serialCloneTransferNumber=10000 dbs.serialDN=ou=certificateRepository, ou=ca dbs.serialRangeDN=ou=certificateRepository, ou=ranges dbs.beginReplicaNumber=1 dbs.endReplicaNumber=100 dbs.replicaIncrement=100 dbs.replicaLowWaterMark=20 dbs.replicaCloneTransferNumber=5 dbs.replicaDN=ou=replica dbs.replicaRangeDN=ou=replica, ou=ranges dbs.ldap=internaldb dbs.newSchemaEntryAdded=true", "authType=sslclientauth", "pki-server stop rhcs10-RSA-SubCA", "ca.publish.rule.instance.ocsprule-<host/port info>.enable=false", "ca.publish.rule.instance.ocsprule-rhcs10-example-com-32443.enable=false", "pki-server start rhcs10-RSA-SubCA", "ocsp.store.ldapStore.validateConnCertWithCRL=true auths.revocationChecking.enabled=true", "ca.publish.enable=true ca.publish.cert.enable=true ca.publish.crl.enable=true", "ca.publish.enable=true ca.publish.cert.enable=false", "ca.publish.enable=true ca.publish.crl.enable=false", "cms.passwordlist=internaldb,replicationdb,CA LDAP Publishing cms.password.ignore.publishing.failure=true", 
"passwordFile=/var/lib/pki/instance_name/conf/password.conf", "name=password", "internal=413691159497", "hardware-name=password", "hardware-NHSM-CONN-XC=MyHSMUSDS8cret", "internal=376577078151 internaldb=secret12 replicationdb=1535106826 hardware-NHSM-CONN-XC=MyHSMUSDS8cret", "cp -p /var/lib/pki/instance_name/conf/server.xml /root/", "cp -p /var/lib/pki/instance_name/conf/password.conf /root/", "systemctl stop pki-tomcatd@instance_name.service", "systemctl disable pki-tomcatd@instance_name.service", "egrep \"^hardware-\" /var/lib/pki/instance_name/conf/password.conf hardware-HSM_token_name=password", "cms.tokenList=HMS_token_name", "pki-server instance-nuxwdog-enable instance_name", "pki-server nuxwdog-enable", "cp -p /usr/lib/systemd/system/[email protected] /etc/systemd/system/", "User new_user_name", "systemctl daemon-reload", "systemctl enable pki-tomcatd-nuxwdog@instance_name.service", "systemctl start pki-tomcatd-nuxwdog@instance_name.service", "systemctl is-enabled pki-tomcatd-nuxwdog@instance_name.service enabled", "systemctl is-disabled pki-tomcatd@instance_name.service disabled", "passwordFile=\"/var/lib/pki/instance_name/ca/CS.cfg\"", "passwordClass=\"com.netscape.cms.tomcat.NuxwdogPasswordStore\"", "systemctl stop pki-tomcatd-nuxwdog@instance_name.service", "systemctl disable pki-tomcatd-nuxwdog@instance_name.service", "pki-server instance-nuxwdog-disable instance_name", "systemctl enable pki-tomcatd@instance_name.service", "cp /root/password.conf.bak /var/lib/pki/instance_name/conf/password.conf", "systemctl start pki-tomcatd@instance_name.service", "<Connector name=\"Secure\" port=\"21443\" protocol=\"org.dogtagpki.tomcat.Http11NioProtocol\" SSLEnabled=\"true\" sslImplementationName=\"org.dogtagpki.tomcat.JSSImplementation\" scheme=\"https\" secure=\"true\" connectionTimeout=\"3000000\" keepAliveTimeout=\"300000\" maxHttpHeaderSize=\"8192\" acceptCount=\"100\" maxThreads=\"150\" minSpareThreads=\"25\" enableLookups=\"false\" disableUploadTimeout=\"true\" enableOCSP=\"true\" ocspCacheSize=\"1000\" ocspMinCacheEntryDuration=\"7200\" ocspMaxCacheEntryDuration=\"14400\" ocspTimeout=\"10\" serverCertNickFile=\"/var/lib/pki/rhcs10-ECC-SubCA/conf/serverCertNick.conf\" passwordFile=\"/var/lib/pki/rhcs10-ECC-SubCA/conf/password.conf\" passwordClass=\"org.apache.tomcat.util.net.jss.PlainPasswordFile\" certdbDir=\"/var/lib/pki/rhcs10-ECC-SubCA/alias\"> <SSLHostConfig sslProtocol=\"TLS\" protocols=\"TLSv1.2\" certificateVerification=\"optional\" ciphers=\"ECDHE-ECDSA-AES128-SHA256:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA- AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384\"> <Certificate certificateKeystoreType=\"pkcs11\" certificateKeystoreProvider=\"Mozilla-JSS\" certificateKeyAlias=\"NHSM-CONN-XC:Server-Cert cert-rhcs10-ECC-SubCA\"/> </SSLHostConfig> </Connector>", "systemctl stop pki-tomcatd@ <instance_name> .service", "<Connector name=\"Secure\" enableOCSP=\"true\" ocspCacheSize=\"1000\" ocspMinCacheEntryDuration=\"60\" ocspMaxCacheEntryDuration=\"120\" ocspTimeout=\"10\" />", "systemctl start pki-tomcatd@ <instance_name> .service", "systemctl stop pki-tomcatd@ <instance_name> .service", "ca.defaultOcspUri=http:// hostname :32080/ocsp/ee/ocsp", "systemctl start pki-tomcatd@ <instance_name> .service", "Server Service Connector name=\"Secure\" keepAliveTimeout=\"300000\" / /Service /Server", "web-app session-config session-timeout30/session-timeout /session-config /web-app", "pki-server stop instance_name", "systemctl stop pki-tomcatd-nuxwdog@instance_name.service", "cd 
/var/lib/pki/instance_name/ca/webapps/ca/WEB-INF", "cp web.xml web.xml.servlets", "<servlet> <servlet-name> caadminEnroll </servlet-name> <servlet-class> com.netscape.cms.servlet.cert.EnrollServlet </servlet-class> <init-param><param-name> GetClientCert </param-name> <param-value> false </param-value> </init-param> <init-param><param-name> successTemplate </param-name> <param-value> /admin/ca/EnrollSuccess.template </param-value> </init-param> <init-param><param-name> AuthzMgr </param-name> <param-value> BasicAclAuthz </param-value> </init-param> <init-param><param-name> authority </param-name> <param-value> ca </param-value> </init-param> <init-param><param-name> interface </param-name> <param-value> admin </param-value> </init-param> <init-param><param-name> ID </param-name> <param-value> caadminEnroll </param-value> </init-param> <init-param><param-name> resourceID </param-name> <param-value> certServer.admin.request.enrollment </param-value> </init-param> <init-param><param-name> AuthMgr </param-name> <param-value> passwdUserDBAuthMgr </param-value> </init-param> </servlet>", "<servlet-mapping> <servlet-name> caadminEnroll </servlet-name> <url-pattern> /admin/ca/adminEnroll </url-pattern> </servlet-mapping>", "<filter-mapping> <filter-name> EERequestFilter </filter-name> <url-pattern> /certbasedenrollment </url-pattern> </filter-mapping> <filter-mapping> <filter-name> EERequestFilter </filter-name> <url-pattern> /enrollment </url-pattern> </filter-mapping> <filter-mapping> <filter-name> EERequestFilter </filter-name> <url-pattern> /profileSubmit </url-pattern> </filter-mapping>", "pki-server start instance_name", "systemctl start pki-tomcatd-nuxwdog@instance_name.service", "<Context docBase=\"/usr/share/pki/ca/webapps/ca\" crossContext=\"true\" allowLinking=\"true\" </Context", "cp -r /usr/share/pki/ca/webapps/ca /var/lib/pki/pki-tomcat/webapps", "<Context docBase=\"ca\" crossContext=\"true\" allowLinking=\"true\" </Context", "rm -rf /var/lib/pki/pki-tomcat/webapps/ca", "<Context docBase=\"/usr/share/pki/common-ui\" crossContext=\"true\" allowLinking=\"true\" </Context", "cp -r /usr/share/pki/common-ui /var/lib/pki/pki-tomcat/webapps/pki", "<Context docBase=\"pki\" crossContext=\"true\" allowLinking=\"true\" </Context", "rm -rf /var/lib/pki/pki-tomcat/webapps/pki", "cp /usr/share/pki/tps/conf/token-states.properties /var/lib/pki/pki-tomcat/tps/conf", "rm /var/lib/pki/pki-tomcat/tps/conf/token-states.properties", "pki ca-cert-show 0x1 WARNING! Access to this service is restricted to those individuals with specific permissions. If you are not an authorized user, disconnect now. Any attempts to gain unauthorized access will be prosecuted to the fullest extent of the law. Do you want to proceed (y/N)? 
y ----------------- Certificate \"0x1\" ----------------- Serial Number: 0x1 Issuer: CN=CA Signing Certificate,OU=instance_name,O=EXAMPLE Subject: CN=CA Signing Certificate,OU=instance_name,O=EXAMPLE Status: VALID Not Before: Mon Feb 20 18:21:03 CET 2017 Not After: Fri Feb 20 18:21:03 CET 2037", "mv /etc/pki/instance_name/banner.txt /etc/pki/instance_name/banner.txt.UNUSED", "pki-server banner-show -i instance_name", "pki-server banner-validate -i instance_name --------------- Banner is valid ---------------", "cmc.popLinkWitnessRequired=true", "ldapmodify -D \"cn=Directory Manager\" -H ldaps://server.example.com:636 -W -x dn: cn=schema changetype: modify add: attributetypes attributetypes: ( 2.16.840.1.117370.3.1.123 NAME 'shrTok' DESC 'User Defined ObjectClass for SharedToken' SYNTAX 1.3.6.1.4.1.1466.115.121.1.40 SINGLE-VALUE X-ORIGIN 'custom for sharedToken')", "cmc.token=NHSM-CONN-XC", "auths.impl.SharedToken.class=com.netscape.cms.authentication.SharedSecret auths.instance.SharedToken.dnpattern= auths.instance.SharedToken.ldap.basedn=ou=People,dc=example,dc=org auths.instance.SharedToken.ldap.ldapauth.authtype=BasicAuth auths.instance.SharedToken.ldap.ldapauth.bindDN=cn=Directory Manager auths.instance.SharedToken.ldap.ldapauth.bindPWPrompt=Rule SharedToken auths.instance.SharedToken.ldap.ldapauth.clientCertNickname= auths.instance.SharedToken.ldap.ldapconn.host=server.example.com auths.instance.SharedToken.ldap.ldapconn.port=636 auths.instance.SharedToken.ldap.ldapconn.secureConn=true auths.instance.SharedToken.ldap.ldapconn.version=3 auths.instance.SharedToken.ldap.maxConns= auths.instance.SharedToken.ldap.minConns= auths.instance.SharedToken.ldapByteAttributes= auths.instance.SharedToken.ldapStringAttributes= auths.instance.SharedToken.pluginName=SharedToken auths.instance.SharedToken.shrTokAttr=shrTok", "ca.cert.issuance_protection.nickname=issuance_protection_certificate", "systemctl restart pki-tomcatd@instance_name.service", "cmc.bypassClientAuth=true" ]
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/planning_installation_and_deployment_guide_common_criteria_edition/configfiles
Chapter 11. Maintaining Satellite Server
Chapter 11. Maintaining Satellite Server This chapter provides information on how to maintain a Satellite Server, including information on how to work with audit records, how to clean unused tasks, and how to recover Pulp from a full disk. 11.1. Deleting Audit Records Audit records are created automatically in Satellite. You can use the foreman-rake audits:expire command to remove audits at any time. You can also use a cron job to schedule audit record deletions at the set interval that you want. By default, using the foreman-rake audits:expire command removes audit records that are older than 90 days. You can specify the number of days to keep the audit records by adding the days option and add the number of days. For example, if you want to delete audit records that are older than seven days, enter the following command: 11.2. Anonymizing Audit Records You can use the foreman-rake audits:anonymize command to remove any user account or IP information while maintaining the audit records in the database. You can also use a cron job to schedule anonymizing the audit records at the set interval that you want. By default, using the foreman-rake audits:anonymize command anonymizes audit records that are older than 90 days. You can specify the number of days to keep the audit records by adding the days option and add the number of days. For example, if you want to anonymize audit records that are older than seven days, enter the following command: 11.3. Deleting Report Records Report records are created automatically in Satellite. You can use the foreman-rake reports:expire command to remove reports at any time. You can also use a cron job to schedule report record deletions at the set interval that you want. By default, using the foreman-rake reports:expire command removes report records that are older than 90 days. You can specify the number of days to keep the report records by adding the days option and add the number of days. For example, if you want to delete report records that are older than seven days, enter the following command: 11.4. Configuring the Cleaning Unused Tasks Feature Satellite performs regular cleaning to reduce disc space in the database and limit the rate of disk growth. As a result, Satellite backup completes faster and overall performance is higher. By default, Satellite executes a cron job that cleans tasks every day at 19:45. Satellite removes the following tasks during the cleaning: Tasks that have run successfully and are older than thirty days All tasks that are older than a year You can configure the cleaning unused tasks feature using these options: To configure the time at which Satellite runs the cron job, set the --foreman-plugin-tasks-cron-line parameter to the time you want in cron format. For example, to schedule the cron job to run every day at 15:00, enter the following command: To configure the period after which Satellite deletes the tasks, edit the :rules: section in the /etc/foreman/plugins/foreman-tasks.yaml file. To disable regular task cleanup on Satellite, enter the following command: To reenable regular task cleanup on Satellite, enter the following command: 11.5. Deleting Task Records Task records are created automatically in Satellite. You can use the foreman-rake foreman_tasks:cleanup command to remove tasks at any time. You can also use a cron job to schedule Task record deletions at the set interval that you want. For example, if you want to delete task records from successful repository synchronizations, enter the following command: 11.6. 
Deleting a Task by ID You can delete tasks by ID, for example if you have submitted confidential data by mistake. Procedure Connect to your Satellite Server using SSH: Optional: View the task: Delete the task: Optional: Ensure the task has been removed from Satellite Server: Note that because the task is deleted, this command returns a non-zero exit code. 11.7. Recovering from a Full Disk The following procedure describes how to resolve the situation when a logical volume (LV) with the Pulp database on it has no free space. Procedure Let running Pulp tasks finish but do not trigger any new ones as they can fail due to the full disk. Ensure that the LV with the /var/lib/pulp directory on it has sufficient free space. Here are some ways to achieve that: Remove orphaned content: This is run weekly so it will not free much space. Change the download policy from Immediate to On Demand for as many repositories as possible and remove already downloaded packages. See the Red Hat Knowledgebase solution How to change syncing policy for Repositories on Satellite from "Immediate" to "On-Demand" on the Red Hat Customer Portal for instructions. Grow the file system on the LV with the /var/lib/pulp directory on it. For more information, see Growing a File System on a Logical Volume in the Red Hat Enterprise Linux 7 Logical Volume Manager Administration Guide . Note If you use an untypical file system (other than for example ext3, ext4, or xfs), you might need to unmount the file system so that it is not in use. In that case, complete the following steps: Stop Satellite services: Grow the file system on the LV. Start Satellite services: If some Pulp tasks failed due to the full disk, run them again. 11.8. Managing Packages on the Base Operating System of Satellite Server or Capsule Server To install and update packages on the Satellite Server or Capsule Server base operating system, you must enter the satellite-maintain packages command. Satellite prevents users from installing and updating packages with yum because yum might also update the packages related to Satellite Server or Capsule Server and result in system inconsistency. Important The satellite-maintain packages command restarts some services on the operating system where you run it because it runs the satellite-installer command after installing packages. Procedure To install packages on Satellite Server or Capsule Server, enter the following command: To update specific packages on Satellite Server or Capsule Server, enter the following command: To update all packages on Satellite Server or Capsule Server, enter the following command: Using yum to Check for Package Updates If you want to check for updates using yum , enter the command to install and update packages manually and then you can use yum to check for updates: Updating packages individually can lead to package inconsistencies in Satellite Server or Capsule Server. For more information about updating packages in Satellite Server, see Updating Satellite Server . Enabling yum for Satellite Server or Capsule Server Package Management If you want to install and update packages on your system using yum directly and control the stability of the system yourself, enter the following command: Restoring Package Management to the Default Settings If you want to restore the default settings and enable Satellite Server or Capsule Server to prevent users from installing and updating packages with yum and ensure the stability of the system, enter the following command: 11.9. 
Reclaiming PostgreSQL Space The PostgreSQL database can use a large amount of disk space especially in heavily loaded deployments. Use this procedure to reclaim some of this disk space on Satellite. Procedure Stop all services, except for the postgresql service: Switch to the postgres user and reclaim space on the database: Start the other services when the vacuum completes: 11.10. Reclaiming Space From On Demand Repositories If you set the download policy to on demand, Satellite downloads packages only when the clients request them. You can clean up these packages to reclaim space. For a single repository In the Satellite web UI, navigate to Content > Products . Select a product. On the Repositories tab, click the repository name. From the Select Actions list, select Reclaim Space . For multiple repositories In the Satellite web UI, navigate to Content > Products . Select the product name. On the Repositories tab, select the checkbox of the repositories. Click Reclaim Space at the top right corner. For Capsules In the Satellite web UI, navigate to Infrastructure > Capsules . Select the Capsule Server. Click Reclaim space .
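Before and after running the reclamation procedures in this chapter, it can be useful to record how much space the relevant locations actually consume. A minimal check is sketched below; /var/lib/pulp is the directory referenced in Section 11.7, while /var/lib/pgsql is the usual PostgreSQL data directory on Red Hat Enterprise Linux and is an assumption here, not a value taken from this guide:

```
# Free space on the file systems backing Pulp content and the PostgreSQL database
df -h /var/lib/pulp /var/lib/pgsql

# Size of the directories themselves, for a before/after comparison
du -sh /var/lib/pulp /var/lib/pgsql
```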
[ "foreman-rake audits:expire days=7", "foreman-rake audits:anonymize days=7", "foreman-rake reports:expire days=7", "satellite-installer --foreman-plugin-tasks-cron-line \"00 15 * * *\"", "satellite-installer --foreman-plugin-tasks-automatic-cleanup false", "satellite-installer --foreman-plugin-tasks-automatic-cleanup true", "foreman-rake foreman_tasks:cleanup TASK_SEARCH='label = Actions::Katello::Repository::Sync' STATES='stopped'", "ssh [email protected]", "hammer task info --id My_Task_ID", "foreman-rake foreman_tasks:cleanup TASK_SEARCH=\"id= My_Task_ID \"", "hammer task info --id My_Task_ID", "foreman-rake katello:delete_orphaned_content RAILS_ENV=production", "satellite-maintain service stop", "satellite-maintain service start", "satellite-maintain packages install package_1 package_2", "satellite-maintain packages update package_1 package_2", "satellite-maintain packages update", "satellite-maintain packages unlock yum check update satellite-maintain packages lock", "satellite-maintain packages unlock", "satellite-maintain packages lock", "satellite-maintain service stop --exclude postgresql", "su - postgres -c 'vacuumdb --full --all'", "satellite-maintain service start" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/administering_red_hat_satellite/maintaining_server_admin
Chapter 76. Kubernetes Nodes
Chapter 76. Kubernetes Nodes Since Camel 2.17 Both producer and consumer are supported The Kubernetes Nodes component is one of the Kubernetes Components which provides a producer to execute Kubernetes Node operations and a consumer to consume events related to Node objects. 76.1. Dependencies When using kubernetes-nodes with Red Hat build of Apache Camel for Spring Boot, use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-kubernetes-starter</artifactId> </dependency> 76.2. Configuring Options Camel components are configured on two separate levels: component level endpoint level 76.2.1. Configuring Component Options The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth. Some components only have a few options, and others may have many. Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all. Configuring components can be done with the Component DSL , in a configuration file (application.properties|yaml), or directly with Java code. 76.2.2. Configuring Endpoint Options Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints. A good practice when configuring options is to use Property Placeholders , which allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse. The following two sections lists all the options, firstly for the component followed by the endpoint. 76.3. Component Options The Kubernetes Nodes component supports 4 options, which are listed below. Name Description Default Type kubernetesClient (common) Autowired To use an existing kubernetes client. KubernetesClient bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. 
false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean 76.4. Endpoint Options The Kubernetes Nodes endpoint is configured using URI syntax: with the following path and query parameters: 76.4.1. Path Parameters (1 parameters) Name Description Default Type masterUrl (common) Required Kubernetes Master url. String 76.4.2. Query Parameters (33 parameters) Name Description Default Type apiVersion (common) The Kubernetes API Version to use. String dnsDomain (common) The dns domain, used for ServiceCall EIP. String kubernetesClient (common) Default KubernetesClient to use if provided. KubernetesClient namespace (common) The namespace. String portName (common) The port name, used for ServiceCall EIP. String portProtocol (common) The port protocol, used for ServiceCall EIP. tcp String crdGroup (consumer) The Consumer CRD Resource Group we would like to watch. String crdName (consumer) The Consumer CRD Resource name we would like to watch. String crdPlural (consumer) The Consumer CRD Resource Plural we would like to watch. String crdScope (consumer) The Consumer CRD Resource Scope we would like to watch. String crdVersion (consumer) The Consumer CRD Resource Version we would like to watch. String labelKey (consumer) The Consumer Label key when watching at some resources. String labelValue (consumer) The Consumer Label value when watching at some resources. String poolSize (consumer) The Consumer pool size. 1 int resourceName (consumer) The Consumer Resource Name we would like to watch. String bridgeErrorHandler (consumer (advanced)) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut ExchangePattern operation (producer) Producer operation to do on Kubernetes. String lazyStartProducer (producer (advanced)) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. 
false boolean connectionTimeout (advanced) Connection timeout in milliseconds to use when making requests to the Kubernetes API server. Integer caCertData (security) The CA Cert Data. String caCertFile (security) The CA Cert File. String clientCertData (security) The Client Cert Data. String clientCertFile (security) The Client Cert File. String clientKeyAlgo (security) The Key Algorithm used by the client. String clientKeyData (security) The Client Key data. String clientKeyFile (security) The Client Key file. String clientKeyPassphrase (security) The Client Key Passphrase. String oauthToken (security) The Auth Token. String password (security) Password to connect to Kubernetes. String trustCerts (security) Define if the certs we used are trusted anyway or not. Boolean username (security) Username to connect to Kubernetes. String 76.5. Message Headers The Kubernetes Nodes component supports 6 message header(s), which is/are listed below: Name Description Default Type CamelKubernetesOperation (producer) Constant: KUBERNETES_OPERATION The Producer operation. String CamelKubernetesNodesLabels (producer) Constant: KUBERNETES_NODES_LABELS The node labels. Map CamelKubernetesNodeName (producer) Constant: KUBERNETES_NODE_NAME The node name. String CamelKubernetesNodeSpec (producer) Constant: KUBERNETES_NODE_SPEC The spec for a node. NodeSpec CamelKubernetesEventAction (consumer) Constant: KUBERNETES_EVENT_ACTION Action watched by the consumer. Enum values: ADDED MODIFIED DELETED ERROR BOOKMARK Action CamelKubernetesEventTimestamp (consumer) Constant: KUBERNETES_EVENT_TIMESTAMP Timestamp of the action watched by the consumer. long 76.6. Supported producer operation listNodes listNodesByLabels getNode createNode updateNode deleteNode 76.7. Kubernetes Nodes Producer Examples listNodes: this operation list the nodes on a kubernetes cluster. from("direct:list"). toF("kubernetes-nodes:///?kubernetesClient=#kubernetesClient&operation=listNodes"). to("mock:result"); This operation returns a List of Nodes from your cluster. listNodesByLabels: this operation list the nodes by labels on a kubernetes cluster. from("direct:listByLabels").process(new Processor() { @Override public void process(Exchange exchange) throws Exception { Map<String, String> labels = new HashMap<>(); labels.put("key1", "value1"); labels.put("key2", "value2"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_NODES_LABELS, labels); } }); toF("kubernetes-deployments:///?kubernetesClient=#kubernetesClient&operation=listNodesByLabels"). to("mock:result"); This operation returns a List of Nodes from your cluster, using a label selector (with key1 and key2, with value value1 and value2). 76.8. Kubernetes Nodes Consumer Example fromF("kubernetes-nodes://%s?oauthToken=%s&resourceName=test", host, authToken).process(new KubernertesProcessor()).to("mock:result"); public class KubernertesProcessor implements Processor { @Override public void process(Exchange exchange) throws Exception { Message in = exchange.getIn(); Node node = exchange.getIn().getBody(Node.class); log.info("Got event with configmap name: " + node.getMetadata().getName() + " and action " + in.getHeader(KubernetesConstants.KUBERNETES_EVENT_ACTION)); } } This consumer returns a list of events for the node test. 76.9. Spring Boot Auto-Configuration The component supports 102 options, which are listed below. Name Description Default Type camel.cluster.kubernetes.attributes Custom service attributes. 
Map camel.cluster.kubernetes.cluster-labels Set the labels used to identify the pods composing the cluster. Map camel.cluster.kubernetes.config-map-name Set the name of the ConfigMap used to do optimistic locking (defaults to 'leaders'). String camel.cluster.kubernetes.connection-timeout-millis Connection timeout in milliseconds to use when making requests to the Kubernetes API server. Integer camel.cluster.kubernetes.enabled Sets if the Kubernetes cluster service should be enabled or not, default is false. false Boolean camel.cluster.kubernetes.id Cluster Service ID. String camel.cluster.kubernetes.jitter-factor A jitter factor to apply in order to prevent all pods to call Kubernetes APIs in the same instant. Double camel.cluster.kubernetes.kubernetes-namespace Set the name of the Kubernetes namespace containing the pods and the configmap (autodetected by default). String camel.cluster.kubernetes.lease-duration-millis The default duration of the lease for the current leader. Long camel.cluster.kubernetes.master-url Set the URL of the Kubernetes master (read from Kubernetes client properties by default). String camel.cluster.kubernetes.order Service lookup order/priority. Integer camel.cluster.kubernetes.pod-name Set the name of the current pod (autodetected from container host name by default). String camel.cluster.kubernetes.renew-deadline-millis The deadline after which the leader must stop its services because it may have lost the leadership. Long camel.cluster.kubernetes.retry-period-millis The time between two subsequent attempts to check and acquire the leadership. It is randomized using the jitter factor. Long camel.component.kubernetes-config-maps.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-config-maps.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-config-maps.enabled Whether to enable auto configuration of the kubernetes-config-maps component. This is enabled by default. Boolean camel.component.kubernetes-config-maps.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-config-maps.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. 
false Boolean camel.component.kubernetes-custom-resources.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-custom-resources.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-custom-resources.enabled Whether to enable auto configuration of the kubernetes-custom-resources component. This is enabled by default. Boolean camel.component.kubernetes-custom-resources.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-custom-resources.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-deployments.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-deployments.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-deployments.enabled Whether to enable auto configuration of the kubernetes-deployments component. This is enabled by default. Boolean camel.component.kubernetes-deployments.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-deployments.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. 
By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-events.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-events.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-events.enabled Whether to enable auto configuration of the kubernetes-events component. This is enabled by default. Boolean camel.component.kubernetes-events.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-events.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-hpa.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-hpa.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-hpa.enabled Whether to enable auto configuration of the kubernetes-hpa component. This is enabled by default. Boolean camel.component.kubernetes-hpa.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-hpa.lazy-start-producer Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-job.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-job.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-job.enabled Whether to enable auto configuration of the kubernetes-job component. This is enabled by default. Boolean camel.component.kubernetes-job.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-job.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-namespaces.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-namespaces.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-namespaces.enabled Whether to enable auto configuration of the kubernetes-namespaces component. This is enabled by default. Boolean camel.component.kubernetes-namespaces.kubernetes-client To use an existing kubernetes client. 
The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-namespaces.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-nodes.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-nodes.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-nodes.enabled Whether to enable auto configuration of the kubernetes-nodes component. This is enabled by default. Boolean camel.component.kubernetes-nodes.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-nodes.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-persistent-volumes-claims.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-persistent-volumes-claims.enabled Whether to enable auto configuration of the kubernetes-persistent-volumes-claims component. This is enabled by default. Boolean camel.component.kubernetes-persistent-volumes-claims.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-persistent-volumes-claims.lazy-start-producer Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-persistent-volumes.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-persistent-volumes.enabled Whether to enable auto configuration of the kubernetes-persistent-volumes component. This is enabled by default. Boolean camel.component.kubernetes-persistent-volumes.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-persistent-volumes.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-pods.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-pods.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-pods.enabled Whether to enable auto configuration of the kubernetes-pods component. This is enabled by default. Boolean camel.component.kubernetes-pods.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-pods.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. 
By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-replication-controllers.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-replication-controllers.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-replication-controllers.enabled Whether to enable auto configuration of the kubernetes-replication-controllers component. This is enabled by default. Boolean camel.component.kubernetes-replication-controllers.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-replication-controllers.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-resources-quota.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-resources-quota.enabled Whether to enable auto configuration of the kubernetes-resources-quota component. This is enabled by default. Boolean camel.component.kubernetes-resources-quota.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-resources-quota.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-secrets.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-secrets.enabled Whether to enable auto configuration of the kubernetes-secrets component. This is enabled by default. Boolean camel.component.kubernetes-secrets.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-secrets.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-service-accounts.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-service-accounts.enabled Whether to enable auto configuration of the kubernetes-service-accounts component. This is enabled by default. Boolean camel.component.kubernetes-service-accounts.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-service-accounts.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-services.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. 
true Boolean camel.component.kubernetes-services.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-services.enabled Whether to enable auto configuration of the kubernetes-services component. This is enabled by default. Boolean camel.component.kubernetes-services.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-services.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.openshift-build-configs.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.openshift-build-configs.enabled Whether to enable auto configuration of the openshift-build-configs component. This is enabled by default. Boolean camel.component.openshift-build-configs.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.openshift-build-configs.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.openshift-builds.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.openshift-builds.enabled Whether to enable auto configuration of the openshift-builds component. This is enabled by default. Boolean camel.component.openshift-builds.kubernetes-client To use an existing kubernetes client. 
The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.openshift-builds.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.openshift-deploymentconfigs.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.openshift-deploymentconfigs.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.openshift-deploymentconfigs.enabled Whether to enable auto configuration of the openshift-deploymentconfigs component. This is enabled by default. Boolean camel.component.openshift-deploymentconfigs.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.openshift-deploymentconfigs.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean
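The consumer example in Section 76.8 expects a Kubernetes API host and an OAuth token. If you are experimenting against a cluster that you already have CLI access to, one way to look up suitable values is sketched below. This is an assumption-laden convenience, not part of the component: it presumes a configured kubectl client, Kubernetes 1.24 or later for kubectl create token , and a service account named camel-client that you have created yourself as a placeholder:

```
# API server URL from the current kubeconfig context
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}{"\n"}'

# Short-lived token for a service account (the account name is a placeholder)
kubectl create token camel-client -n default
```

The resulting values can be passed to the route as the host and authToken parameters shown in the consumer example.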
[ "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-kubernetes-starter</artifactId> </dependency>", "kubernetes-nodes:masterUrl", "from(\"direct:list\"). toF(\"kubernetes-nodes:///?kubernetesClient=#kubernetesClient&operation=listNodes\"). to(\"mock:result\");", "from(\"direct:listByLabels\").process(new Processor() { @Override public void process(Exchange exchange) throws Exception { Map<String, String> labels = new HashMap<>(); labels.put(\"key1\", \"value1\"); labels.put(\"key2\", \"value2\"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_NODES_LABELS, labels); } }); toF(\"kubernetes-deployments:///?kubernetesClient=#kubernetesClient&operation=listNodesByLabels\"). to(\"mock:result\");", "fromF(\"kubernetes-nodes://%s?oauthToken=%s&resourceName=test\", host, authToken).process(new KubernertesProcessor()).to(\"mock:result\"); public class KubernertesProcessor implements Processor { @Override public void process(Exchange exchange) throws Exception { Message in = exchange.getIn(); Node node = exchange.getIn().getBody(Node.class); log.info(\"Got event with configmap name: \" + node.getMetadata().getName() + \" and action \" + in.getHeader(KubernetesConstants.KUBERNETES_EVENT_ACTION)); } }" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-kubernetes-nodes-component-starter
Appendix B. Working with certmonger
Appendix B. Working with certmonger Part of managing machine authentication is managing machine certificates. On clients, IdM manages the certificate lifecycle with the certmonger service, which works together with the certificate authority (CA) provided by IdM. The certmonger daemon and its command-line clients simplify the process of generating public/private key pairs, creating certificate requests, and submitting requests to the CA for signing. As part of managing certificates, the certmonger daemon monitors certificates for expiration and can renew certificates that are about to expire. The certificates that certmonger monitors are tracked in files stored in a configurable directory. The default location is /var/lib/certmonger/requests . certmonger uses the IdM getcert command to manage all certificates. As covered in Section 3.4, "Examples: Installing with Different CA Configurations" , an IdM server can be configured to use different types of certificate authorities. The most common (and recommended) configuration is to use a full CA server, but it is also possible to use a much more limited, self-signed CA. The exact getcert command used by certmonger to communicate with the IdM backend depends on which type of CA is used. The ipa-getcert command is used with a full CA, while the selfsign-getcert command is used with a self-signed CA. Note Because of general security issues, self-signed certificates are not typically used in production, but can be used for development and testing. B.1. Requesting a Certificate with certmonger With the IdM CA, certmonger uses the ipa-getcert command. Certificates and keys are stored locally in plaintext files ( .pem ) or in an NSS database, identified by the certificate nickname. When requesting a certificate, then, the request should identify the location where the certificate will be stored and the nickname of the certificate. For example: The /etc/pki/nssdb file is the global NSS database, and Server-Cert is the nickname of this certificate. The certificate nickname must be unique within this database. When requesting a certificate to be used with an IdM service, the -K option is required to specify the service principal. Otherwise, certmonger assumes the certificate is for a host. The -N option must specify the certificate subject DN, and the subject base DN must match the base DN for the IdM server, or the request is rejected. Example B.1. Using certmonger for a Service The options vary depending on whether you are using a self-signed certificate ( selfsign-getcert ) and the desired configuration for the final certificate, as well as other settings. In Example B.1, "Using certmonger for a Service" , these are common options: The -r option will automatically renew the certificate if the key pair already exists. This is used by default. The -f option stores the certificate in the given file. The -k option either stores the key in the given file or, if the key file already exists, uses the key in the file. The -N option gives the subject name. The -D option gives the DNS domain name. The -U option sets the extended key usage flag.
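Because certmonger keeps tracking the certificates it requests, it is often useful to confirm that a request is actually being monitored and to see its current status and expiration. The getcert tooling provides a list subcommand for this; a typical check, reusing the NSS database and nickname from the first example above, might look like the following (treat the exact options as a sketch and confirm them against the getcert man page on your system):

```
# Show every certificate request that certmonger is currently tracking
ipa-getcert list

# Narrow the output to the Server-Cert request in the global NSS database
ipa-getcert list -d /etc/pki/nssdb -n Server-Cert
```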
[ "ipa-getcert request -d /etc/pki/nssdb -n Server-Cert", "ipa-getcert request -d /etc/httpd/alias -n Server-Cert -K HTTP/client1.example.com -N 'CN=client1.example.com,O=EXAMPLE.COM'", "ipa-getcert request -r -f /etc/httpd/conf/ssl.crt/server.crt -k /etc/httpd/conf/ssl.key/server.key -N CN=`hostname --fqdn` -D `hostname` -U id-kp-serverAuth" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/identity_management_guide/certmongerX
Chapter 2. Deploy OpenShift Data Foundation using local storage devices
Chapter 2. Deploy OpenShift Data Foundation using local storage devices Deploying OpenShift Data Foundation on OpenShift Container Platform using local storage devices provides you with the option to create internal cluster resources. Follow this deployment method to use local storage to back persistent volumes for your OpenShift Container Platform applications. Use this section to deploy OpenShift Data Foundation on IBM Z infrastructure where OpenShift Container Platform is already installed. 2.1. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions. You must have at least three worker or infrastructure nodes in the Red Hat OpenShift Container Platform cluster. For additional resource requirements, see the Planning your deployment guide. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation section in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.18 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . Verification steps After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. In the Web Console: Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. Navigate to Storage and verify if the Data Foundation dashboard is available. 2.2. Installing Local Storage Operator Install the Local Storage Operator from the Operator Hub before creating Red Hat OpenShift Data Foundation clusters on local storage devices. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Type local storage in the Filter by keyword box to find the Local Storage Operator from the list of operators, and click on it. Set the following options on the Install Operator page: Update channel as stable . 
Installation mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-local-storage . Update approval as Automatic . Click Install . Verification steps Verify that the Local Storage Operator shows a green tick indicating successful installation. 2.3. Finding available storage devices (optional) This step is additional information and can be skipped because the disks are automatically discovered during storage cluster creation. Use this procedure to identify the device names for each of the three or more worker nodes that you have labeled with the OpenShift Data Foundation label cluster.ocs.openshift.io/openshift-storage='' before creating Persistent Volumes (PV) for IBM Z. Procedure List and verify the name of the worker nodes with the OpenShift Data Foundation label. Example output: Log in to each worker node that is used for OpenShift Data Foundation resources and find the unique by-id device name for each available raw block device. Example output: In this example, for bmworker01 , the available local device is sdb . Identify the unique ID for each of the devices selected in Step 2. In the example above, this is the by-id name for the local device sdb . Repeat the above step to identify the device ID for all the other nodes that have the storage devices to be used by OpenShift Data Foundation. See this Knowledge Base article for more details. 2.4. Enabling DASD devices If you are using DASD devices, you must enable them before creating an OpenShift Data Foundation cluster on IBM Z. Once the DASD devices are available to z/VM guests, complete the following steps from the compute or infrastructure node on which an OpenShift Data Foundation storage node is being installed. Procedure To enable the DASD device, run the following command: For <device_bus_id> , specify the DASD device bus ID. For example, 0.0.b100 . To verify the status of the DASD device, you can use the lsdasd and lsblk commands. To low-level format the device, run the following command: For <device_name> , specify the disk name. For example, dasdb . Important The use of quick-formatted Extent Space Efficient (ESE) DASD is not supported on OpenShift Data Foundation. If you are using ESE DASDs, make sure to disable quick-formatting with the --mode=full parameter. To auto-create one partition using the whole disk, run the following command: For <device_name> , enter the disk name that you specified in the previous step. For example, dasdb . Once these steps are completed, the device is available during OpenShift Data Foundation deployment as /dev/dasdb1 . Important During LocalVolumeSet creation, make sure to select only the Part option as device type.
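The exact commands for these steps are not reproduced in this section. As a rough, representative sketch only — assuming a DASD with bus ID 0.0.b100 , a resulting device name of dasdb , and the standard s390-tools utilities — the sequence might look like the following. Verify the invocations against the IBM documentation referenced below before using them:

```
# Bring the DASD online (the bus ID is an example)
chccwdev -e 0.0.b100

# Confirm that the device is online and note its device name (for example, dasdb)
lsdasd
lsblk

# Low-level format the whole device; --mode=full disables quick formatting on ESE DASDs
dasdfmt -b 4096 -y --mode=full /dev/dasdb

# Auto-create a single partition spanning the whole disk (becomes /dev/dasdb1)
fdasd -a /dev/dasdb
```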
Click on the OpenShift Data Foundation operator and then click Create StorageSystem . In the Backing storage page, perform the following: Select the Create a new StorageClass using the local storage devices for Backing storage type option. Select Full Deployment for the Deployment type option. Click . Important You are prompted to install the Local Storage Operator if it is not already installed. Click Install , and follow the procedure as described in Installing Local Storage Operator . In the Create local volume set page, provide the following information: Enter a name for the LocalVolumeSet and the StorageClass . By default, the local volume set name appears for the storage class name. You can change the name. Choose one of the following: Disks on all nodes Uses the available disks that match the selected filters on all the nodes. Disks on selected nodes Uses the available disks that match the selected filters only on the selected nodes. Important The flexible scaling feature is enabled only when the storage cluster that you created with three or more nodes are spread across fewer than the minimum requirement of three availability zones. For information about flexible scaling, see knowledgebase article on Scaling OpenShift Data Foundation cluster using YAML when flexible scaling is enabled . Flexible scaling features get enabled at the time of deployment and can not be enabled or disabled later on. If the nodes selected do not match the OpenShift Data Foundation cluster requirement of an aggregated 30 CPUs and 72 GiB of RAM, a minimal cluster is deployed. For minimum starting node requirements, see the Resource requirements section in the Planning guide. From the available list of Disk Type , select SSD/NVME . Expand the Advanced section and set the following options: Volume Mode Block is selected by default. Device Type Select one or more device type from the dropdown list. By default, Disk and Part options are included in the Device Type field. Note For a multi-path device, select the Mpath option from the drop-down exclusively. For a DASD-based cluster, ensure that only the Part option is included in the Device Type and remove the Disk option. Disk Size Set a minimum size of 100GB for the device and maximum available size of the device that needs to be included. Maximum Disks Limit This indicates the maximum number of PVs that can be created on a node. If this field is left empty, then PVs are created for all the available disks on the matching nodes. Click . A pop-up to confirm the creation of LocalVolumeSet is displayed. Click Yes to continue. In the Capacity and nodes page, configure the following: Available raw capacity is populated with the capacity value based on all the attached disks associated with the storage class. This takes some time to show up. The Selected nodes list shows the nodes based on the storage class. You can check the box to select Taint nodes. Click . Optional: In the Security and network page, configure the following based on your requirement: To enable encryption, select Enable data encryption for block and file storage . Choose one or both of the following Encryption level : Cluster-wide encryption Encrypts the entire cluster (block and file). StorageClass encryption Creates encrypted persistent volume (block only) using encryption enabled storage class. Select Connect to an external key management service checkbox. This is optional for cluster-wide encryption. Key Management Service Provider is set to Vault by default. 
Enter Vault Service Name , host Address of Vault server ('https:// <hostname or ip> '), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide CA Certificate , Client Certificate and Client Private Key . Click Save . Select Default (SDN) as Multus is not yet supported on OpenShift Data Foundation on IBM Z. Click . In the Data Protection page, if you are configuring the Regional-DR solution for OpenShift Data Foundation, select the Prepare cluster for disaster recovery (Regional-DR only) checkbox; otherwise, click . In the Review and create page: Review the configuration details. To modify any configuration settings, click Back to go back to the configuration page. Click Create StorageSystem . Verification steps To verify the final Status of the installed storage cluster: In the OpenShift Web Console, navigate to Installed Operators OpenShift Data Foundation Storage System ocs-storagecluster-storagesystem Resources . Verify that the Status of StorageCluster is Ready and has a green tick mark next to it. To verify if flexible scaling is enabled on your storage cluster, perform the following steps: In the OpenShift Web Console, navigate to Installed Operators OpenShift Data Foundation Storage System ocs-storagecluster-storagesystem Resources ocs-storagecluster . In the YAML tab, search for the keys flexibleScaling in the spec section and failureDomain in the status section. If flexibleScaling is true and failureDomain is set to host, the flexible scaling feature is enabled. To verify that all components for OpenShift Data Foundation are successfully installed, see Verifying your OpenShift Data Foundation deployment . Additional resources To expand the capacity of the initial cluster, see the Scaling Storage guide.
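As a command-line alternative to the console-based verification steps above, you can query the storage cluster directly. This is a minimal sketch that assumes the default names created by this procedure (the openshift-storage namespace and the ocs-storagecluster resource):

# Confirm the storage cluster phase reported by the operator (expect Ready)
oc get storagecluster ocs-storagecluster -n openshift-storage

# Print the flexible scaling flag and the failure domain in one call
oc get storagecluster ocs-storagecluster -n openshift-storage -o jsonpath='{.spec.flexibleScaling}{" "}{.status.failureDomain}{"\n"}'

If flexible scaling is enabled, the second command prints true host .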
[ "oc annotate namespace openshift-storage openshift.io/node-selector=", "oc get nodes -l=cluster.ocs.openshift.io/openshift-storage=", "NAME STATUS ROLES AGE VERSION bmworker01 Ready worker 6h45m v1.16.2 bmworker02 Ready worker 6h45m v1.16.2 bmworker03 Ready worker 6h45m v1.16.2", "oc debug node/<node name>", "oc debug node/bmworker01 Starting pod/bmworker01-debug To use host binaries, run `chroot /host` Pod IP: 10.0.135.71 If you don't see a command prompt, try pressing enter. sh-4.2# chroot /host sh-4.4# lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT loop0 7:0 0 500G 0 loop sda 8:0 0 120G 0 disk |-sda1 8:1 0 384M 0 part /boot `-sda4 8:4 0 119.6G 0 part `-coreos-luks-root-nocrypt 253:0 0 119.6G 0 dm /sysroot sdb 8:16 0 500G 0 disk", "sh-4.4#ls -l /dev/disk/by-id/ | grep sdb lrwxrwxrwx. 1 root root 9 Feb 3 16:49 scsi-360050763808104bc2800000000000259 -> ../../sdb lrwxrwxrwx. 1 root root 9 Feb 3 16:49 scsi-SIBM_2145_00e020412f0aXX00 -> ../../sdb lrwxrwxrwx. 1 root root 9 Feb 3 16:49 scsi-0x60050763808104bc2800000000000259 -> ../../sdb", "scsi-0x60050763808104bc2800000000000259", "sudo chzdev -e <device_bus_id> 1", "sudo dasdfmt /dev/<device_name> -b 4096 -p --mode=full 1", "sudo fdasd -a /dev/<device_name> 1", "spec: flexibleScaling: true [...] status: failureDomain: host" ]
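The DASD preparation steps in section 2.4 reference lsdasd and lsblk without showing their use. A short check, assuming the example bus ID 0.0.b100 and disk name dasdb used above (adjust both to your environment):

# Verify that the DASD is online and low-level formatted
lsdasd 0.0.b100

# Confirm that the block device and the auto-created partition are visible
lsblk /dev/dasdb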
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/deploying_openshift_data_foundation_using_ibm_z/deploy-using-local-storage-devices-ibmz
Chapter 15. Accessing the RADOS Object Gateway S3 endpoint
Chapter 15. Accessing the RADOS Object Gateway S3 endpoint Users can access the RADOS Object Gateway (RGW) endpoint directly. In previous versions of Red Hat OpenShift Data Foundation, the RGW service needed to be manually exposed to create an RGW public route. As of OpenShift Data Foundation version 4.7, the RGW route is created by default and is named rook-ceph-rgw-ocs-storagecluster-cephobjectstore .
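A quick way to locate the endpoint from the command line is to read the host name off the default route. This is a sketch that assumes the route was created in the openshift-storage namespace used by the deployment:

# Print the host name of the default RGW route
oc get route rook-ceph-rgw-ocs-storagecluster-cephobjectstore -n openshift-storage -o jsonpath='{.spec.host}{"\n"}'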
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/managing_hybrid_and_multicloud_resources/accessing-the-rados-object-gateway-s3-endpoint_rhodf
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. Let us know how we can improve it. Submitting feedback through Jira (account required) Log in to the Jira website. Click Create in the top navigation bar. Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Click Create at the bottom of the dialogue.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/managing_certificates_in_idm/proc_providing-feedback-on-red-hat-documentation_working-with-idm-certificates
Chapter 1. Introduction to Automation content navigator
Chapter 1. Introduction to Automation content navigator As a content creator, you can use Automation content navigator to develop Ansible playbooks, collections, and roles that are compatible with the Red Hat Ansible Automation Platform. You can use Automation content navigator in the following environments, with seamless and predictable results across them all: Local development machines Automation execution environments Automation content navigator also produces an artifact file you can use to help you develop your playbooks and troubleshoot problem areas. 1.1. Uses for Automation content navigator Automation content navigator is a command line, content-creator-focused tool with a text-based user interface. You can use Automation content navigator to: Launch and watch jobs and playbooks. Share stored, completed playbook and job run artifacts in JSON format. Browse and introspect automation execution environments. Browse your file-based inventory. Render Ansible module documentation and extract examples you can use in your playbooks. View a detailed command output on the user interface. 1.2. Automation content navigator modes Automation content navigator operates in two modes: stdout mode Accepts most of the existing Ansible commands and extensions at the command line. text-based user interface mode Provides an interactive, text-based interface to the Ansible commands. Use this mode to evaluate content, run playbooks, and troubleshoot playbooks after they run using artifact files. 1.2.1. stdout mode Use the -m stdout subcommand with Automation content navigator to use the familiar Ansible commands, such as ansible-playbook within automation execution environments or on your local development environment. You can use commands you are familiar with for quick tasks. Automation content navigator also provides extensive help in this mode: --help Accessible from ansible-navigator command or from any subcommand, such as ansible-navigator config --help . subcommand help Accessible from the subcommand, for example ansible-navigator config --help-config . This help displays the details of all the parameters supported from the related Ansible command. 1.2.2. Text-based user interface mode The text-based user interface mode provides enhanced interaction with automation execution environments, collections, playbooks, and inventory. This mode is compatible with integrated development environments (IDE), such as Visual Studio Code. This mode includes a number of helpful user interface options: colon commands You can access all the Automation content navigator commands with a colon, such as :run or :collections navigating the text-based interface The screen shows how to page up or down, scroll, escape to a prior screen or access :help . output by line number You can access any line number in the displayed output by preceding it with a colon, for example :12 . color-coded output With colors enabled, Automation content navigator displays items, such as deprecated modules, in red. pagination and scrolling You can page up or down, scroll, or escape by using the options displayed at the bottom of each Automation content navigator screen. You cannot switch between modes after Automation content navigator is running. This document uses the text-based user interface mode for most procedures. 1.3. Automation content navigator commands The Automation content navigator commands run familiar Ansible CLI commands in -m stdout mode. You can use all the subcommands and options from the related Ansible CLI command. 
Use ansible-navigator --help for details. Table 1.1. Automation content navigator commands Command Description CLI example collections Explore available collections ansible-navigator collections --help config Explore the current Ansible configuration ansible-navigator config --help doc Review documentation for a module or plugin ansible-navigator doc --help images Explore execution environment images ansible-navigator images --help inventory Explore an inventory ansible-navigator inventory --help replay Explore a run by using a playbook artifact ansible-navigator replay --help run Run a playbook ansible-navigator run --help welcome Start at the welcome page ansible-navigator welcome --help 1.4. Relationship between Ansible and Automation content navigator commands The Automation content navigator commands run familiar Ansible CLI commands in -m stdout mode. You can use all the subcommands and options available in the related Ansible CLI command. Use ansible-navigator --help for details. Table 1.2. Comparison of Automation content navigator and Ansible CLI commands Automation content navigator command Ansible CLI command ansible-navigator collections ansible-galaxy collection ansible-navigator config ansible-config ansible-navigator doc ansible-doc ansible-navigator inventory ansible-inventory ansible-navigator run ansible-playbook
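To make stdout mode concrete, the following commands run familiar Ansible workflows through Automation content navigator; the playbook name site.yml is only a placeholder for one of your own playbooks:

# Run a playbook with ansible-playbook style output
ansible-navigator run site.yml -m stdout

# Look up module documentation without leaving the terminal
ansible-navigator doc ansible.builtin.copy -m stdout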
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/automation_content_navigator_creator_guide/assembly-intro-navigator_ansible-navigator
6.3.6. Activating and Mounting the Original Logical Volume
6.3.6. Activating and Mounting the Original Logical Volume Since you had to deactivate the logical volume mylv , you need to activate it again before you can mount it.
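A short sketch of the activation and mount, using the example volume group and logical volume names from this chapter (myvg and mylv) and adding a quick check that the volume is active before mounting:

# Activate the logical volume that was deactivated earlier
lvchange -a y /dev/myvg/mylv

# An "a" in the fifth character of the attribute field confirms the volume is active
lvs -o lv_name,lv_attr myvg

# Mount it back at the original mount point
mount /dev/myvg/mylv /mnt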
[ "lvchange -a y /dev/myvg/mylv mount /dev/myvg/mylv /mnt df Filesystem 1K-blocks Used Available Use% Mounted on /dev/yourvg/yourlv 24507776 32 24507744 1% /mnt /dev/myvg/mylv 24507776 32 24507744 1% /mnt" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/active_mount_ex3
Preface
Preface Thank you for your interest in Red Hat Ansible Automation Platform. Ansible Automation Platform is a commercial offering that helps teams manage complex multi-tier deployments by adding control, knowledge, and delegation to Ansible-powered environments. This guide describes how to use Ansible plug-ins for Red Hat Developer Hub. This document has been updated to include information for the latest release of Ansible Automation Platform.
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/using_ansible_plug-ins_for_red_hat_developer_hub/pr01
Chapter 16. Account Console
Chapter 16. Account Console Red Hat build of Keycloak users can manage their accounts through the Account Console. Users can configure their profiles, add two-factor authentication, include identity provider accounts, and oversee device activity. Additional resources The Account Console can be configured in terms of appearance and language preferences. An example is adding attributes to the Personal info page by clicking Personal info link and completing and saving details. For more information, see the Server Developer Guide . 16.1. Accessing the Account Console Any user can access the Account Console. Procedure Make note of the realm name and IP address for the Red Hat build of Keycloak server where your account exists. In a web browser, enter a URL in this format: server-root /realms/{realm-name}/account. Enter your login name and password. Account Console 16.2. Configuring ways to sign in You can sign in to this console using basic authentication (a login name and password) or two-factor authentication. For two-factor authentication, use one of the following procedures. 16.2.1. Two-factor authentication with OTP Prerequisites OTP is a valid authentication mechanism for your realm. Procedure Click Account security in the menu. Click Signing in . Click Set up authenticator application . Signing in Follow the directions that appear on the screen to use either FreeOTP or Google Authenticator on your mobile device as your OTP generator. Scan the QR code in the screen shot into the OTP generator on your mobile device. Log out and log in again. Respond to the prompt by entering an OTP that is provided on your mobile device. 16.2.2. Two-factor authentication with WebAuthn Prerequisites WebAuthn is a valid two-factor authentication mechanism for your realm. Please follow the WebAuthn section for more details. Procedure Click Account Security in the menu. Click Signing In . Click Set up Security Key . Signing In Prepare your WebAuthn Security Key. How you prepare this key depends on the type of WebAuthn security key you use. For example, for a USB based Yubikey, you may need to put your key into the USB port on your laptop. Click Register to register your security key. Log out and log in again. Assuming authentication flow was correctly set, a message appears asking you to authenticate with your Security Key as second factor. 16.2.3. Passwordless authentication with WebAuthn Prerequisites WebAuthn is a valid passwordless authentication mechanism for your realm. Please follow the Passwordless WebAuthn section for more details. Procedure Click Account Security in the menu. Click Signing In . Click Set up Security Key in the Passwordless section. Signing In Prepare your WebAuthn Security Key. How you prepare this key depends on the type of WebAuthn security key you use. For example, for a USB based Yubikey, you may need to put your key into the USB port on your laptop. Click Register to register your security key. Log out and log in again. Assuming authentication flow was correctly set, a message appears asking you to authenticate with your Security Key as second factor. You no longer need to provide your password to log in. 16.3. Viewing device activity You can view the devices that are logged in to your account. Procedure Click Account security in the menu. Click Device activity . Log out a device if it looks suspicious. Devices 16.4. Adding an identity provider account You can link your account with an identity broker . This option is often used to link social provider accounts. 
Procedure Log into the Admin Console. Click Identity providers in the menu. Select a provider and complete the fields. Return to the Account Console. Click Account security in the menu. Click Linked accounts . The identity provider you added appears on this page. Linked Accounts 16.5. Accessing other applications The Applications menu item shows you which applications you can access. In this case, only the Account Console is available. Applications 16.6. Viewing group memberships You can view the groups you are associated with by clicking the Groups menu. If you select the Direct membership checkbox, you will see only the groups you are directly associated with. Prerequisites You need to have the view-groups account role to be able to view the Groups menu. View group memberships
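If a user cannot see the Groups menu, an administrator can grant the required role from the command line as well as from the Admin Console. This is only a sketch; the server URL, realm name, and user name are placeholders:

# Authenticate the admin CLI against the server
kcadm.sh config credentials --server https://keycloak.example.com --realm master --user admin

# Grant the view-groups role of the built-in account client to a user in the realm
kcadm.sh add-roles -r myrealm --uusername jdoe --cclientid account --rolename view-groups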
null
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/22.0/html/server_administration_guide/account-service
Chapter 2. Preparing your OpenShift cluster
Chapter 2. Preparing your OpenShift cluster This chapter explains how to install Red Hat Integration - Camel K and OpenShift Serverless on OpenShift, and how to install the required Camel K and OpenShift Serverless command-line client tools in your development environment. Section 2.1, "Installing Camel K" Section 2.2, "Installing OpenShift Serverless" 2.1. Installing Camel K You can install the Red Hat Integration - Camel K Operator on your OpenShift cluster from the OperatorHub. The OperatorHub is available from the OpenShift Container Platform web console and provides an interface for cluster administrators to discover and install Operators. After you install the Camel K Operator, you can install the Camel K CLI tool for command line access to all Camel K features. Prerequisites You have access to an OpenShift 4.6 (or later) cluster with the correct access level, the ability to create projects and install operators, and the ability to install CLI tools on your local system. Note You installed the OpenShift CLI tool ( oc ) so that you can interact with the OpenShift cluster at the command line. For details on how to install the OpenShift CLI, see Installing the OpenShift CLI . Procedure In the OpenShift Container Platform web console, log in by using an account with cluster administrator privileges. Create a new OpenShift project: In the left navigation menu, click Home > Project > Create Project . Enter a project name, for example, my-camel-k-project , and then click Create . In the left navigation menu, click Operators > OperatorHub . In the Filter by keyword text box, type Camel K and then click the Red Hat Integration - Camel K Operator card. Read the information about the operator and then click Install . The Operator installation page opens. Select the following subscription settings: Update Channel > latest Choose among the following 2 options: Installation Mode > A specific namespace on the cluster > my-camel-k-project Installation Mode > All namespaces on the cluster (default) > Openshift operator Note Approval Strategy > Automatic Note The Installation mode > All namespaces on the cluster and Approval Strategy > Manual settings are also available if required by your environment. Click Install , and wait a few moments until the Camel K Operator is ready for use. Download and install the Camel K CLI tool: From the Help menu (?) at the top of the OpenShift web console, select Command line tools . Scroll down to the kamel - Red Hat Integration - Camel K - Command Line Interface section. Click the link to download the binary for your local operating system (Linux, Mac, Windows). Unzip and install the CLI in your system path. To verify that you can access the Kamel K CLI, open a command window and then type the following: kamel --help This command shows information about Camel K CLI commands. Note If you uninstall the Camel K operator from OperatorHub using OLM, the CRDs are not removed. To shift back to a Camel K operator, you must remove the CRDs manually by using the following command. oc get crd -l app=camel-k -o name | xargs oc delete step (optional) Specifying Camel K resource limits 2.1.1. Consistent integration platform settings You can create namespace local Integration Platform resources to overwrite settings used in the operator. These namespace local platform settings must be derived from the Integration Platform being used by the operator by default. That is, only explicitly specified settings overwrite the platform defaults used in the operator. 
Therefore, you must use a consistent platform settings hierarchy where the global operator platform settings always represent the basis for user specified platform settings. In case of global Camel K operator, if IntegrationPlatform specifies non-default spec.build.buildStrategy, this value is also propagated to namespaced Camel-K operators installed thereafter. Default value for buildStrategy is routine. USD oc get ip camel-k -o yaml -n openshift-operators apiVersion: camel.apache.org/v1 kind: IntegrationPlatform metadata: labels: app: camel-k name: camel-k namespace: openshift-operators spec: build: buildStrategy: pod The parameter buildStrategy of global operator IntegrationPlatform can be edited by one of the following ways; From Dashboard Administrator view: Operators Installed operators in namespace openshift-operators (that is, globally installed operators), select Red Hat Integration - Camel K Integration Platform YAML Now add or edit (if already present) spec.build.buildStrategy: pod Click Save Using the following command. Any namespaced Camel K operators installed subsequently would inherit settings from the global IntegrationPlatform. oc patch ip/camel-k -p '{"spec":{"build":{"buildStrategy": "pod"}}}' --type merge -n openshift-operators 2.1.2. Specifying Camel K resource limits When you install Camel K, the OpenShift pod for Camel K does not have any limits set for CPU and memory (RAM) resources. If you want to define resource limits for Camel K, you must edit the Camel K subscription resource that was created during the installation process. Prerequisite You have cluster administrator access to an OpenShift project in which the Camel K Operator is installed as described in Installing Camel K . You know the resource limits that you want to apply to the Camel K subscription. For more information about resource limits, see the following documentation: Setting deployment resources in the OpenShift documentation. Managing Resources for Containers in the Kubernetes documentation. Procedure Log in to the OpenShift Web console. Select Operators > Installed Operators > Operator Details > Subscription . Select Actions > Edit Subscription . The file for the subscription opens in the YAML editor. Under the spec section, add a config.resources section and provide values for memory and cpu as shown in the following example: Save your changes. OpenShift updates the subscription and applies the resource limits that you specified. 2.2. Installing OpenShift Serverless You can install the OpenShift Serverless Operator on your OpenShift cluster from the OperatorHub. The OperatorHub is available from the OpenShift Container Platform web console and provides an interface for cluster administrators to discover and install Operators. The OpenShift Serverless Operator supports both Knative Serving and Knative Eventing features. For more details, see installing OpenShift Serverless Operator . Prerequisites You have cluster administrator access to an OpenShift project in which the Camel K Operator is installed. You installed the OpenShift CLI tool ( oc ) so that you can interact with the OpenShift cluster at the command line. For details on how to install the OpenShift CLI, see Installing the OpenShift CLI . Procedure In the OpenShift Container Platform web console, log in by using an account with cluster administrator privileges. In the left navigation menu, click Operators > OperatorHub . In the Filter by keyword text box, enter Serverless to find the OpenShift Serverless Operator . 
Read the information about the Operator and then click Install to display the Operator subscription page. Select the default subscription settings: Update Channel > Select the channel that matches your OpenShift version, for example, 4.14 Installation Mode > All namespaces on the cluster Approval Strategy > Automatic Note The Approval Strategy > Manual setting is also available if required by your environment. Click Install , and wait a few moments until the Operator is ready for use. Install the required Knative components using the steps in the OpenShift documentation: Installing Knative Serving Installing Knative Eventing (Optional) Download and install the OpenShift Serverless CLI tool: From the Help menu (?) at the top of the OpenShift web console, select Command line tools . Scroll down to the kn - OpenShift Serverless - Command Line Interface section. Click the link to download the binary for your local operating system (Linux, Mac, Windows) Unzip and install the CLI in your system path. To verify that you can access the kn CLI, open a command window and then type the following: kn --help This command shows information about OpenShift Serverless CLI commands. For more details, see the OpenShift Serverless CLI documentation . Additional resources Installing OpenShift Serverless in the OpenShift documentation 2.3. Configuring Maven repository for Camel K For Camel K operator, you can provide the Maven settings in a ConfigMap or a secret. Procedure To create a ConfigMap from a file, run the following command. Created ConfigMap can be then referenced in the IntegrationPlatform resource, from the spec.build.maven.settings field. Example Or you can edit the IntegrationPlatform resource directly to reference the ConfigMap that contains the Maven settings using following command: Configuring CA certificates for remote Maven repositories You can provide the CA certificates, used by the Maven commands to connect to the remote Maven repositories, in a Secret. Procedure Create a Secret from file using following command: Reference the created Secret in the IntegrationPlatform resource, from the spec.build.maven.caSecret field as shown below.
[ "You do not need to create a pull secret when installing Camel K from the OpenShift OperatorHub. The Camel K Operator automatically reuses the OpenShift cluster-level authentication to pull the Camel K image from `registry.redhat.io`.", "If you do not choose among the above two options, the system by default chooses a global namespace on the cluster then leading to openshift operator.", "oc get ip camel-k -o yaml -n openshift-operators apiVersion: camel.apache.org/v1 kind: IntegrationPlatform metadata: labels: app: camel-k name: camel-k namespace: openshift-operators spec: build: buildStrategy: pod", "patch ip/camel-k -p '{\"spec\":{\"build\":{\"buildStrategy\": \"pod\"}}}' --type merge -n openshift-operators", "spec: channel: default config: resources: limits: memory: 512Mi cpu: 500m requests: cpu: 200m memory: 128Mi", "create configmap maven-settings --from-file=settings.xml", "apiVersion: camel.apache.org/v1 kind: IntegrationPlatform metadata: name: camel-k spec: build: maven: settings: configMapKeyRef: key: settings.xml name: maven-settings", "edit ip camel-k", "create secret generic maven-ca-certs --from-file=ca.crt", "apiVersion: camel.apache.org/v1 kind: IntegrationPlatform metadata: name: camel-k spec: build: maven: caSecret: key: tls.crt name: tls-secret" ]
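To confirm that the build strategy set on the global IntegrationPlatform has propagated, you can read the field back from both the global and the namespaced resource. A small check, assuming the project name my-camel-k-project used earlier in this chapter (list the platforms with oc get ip if your resource names differ):

# Build strategy on the global platform
oc get ip camel-k -n openshift-operators -o jsonpath='{.spec.build.buildStrategy}{"\n"}'

# Build strategy seen by the namespaced platform
oc get ip camel-k -n my-camel-k-project -o jsonpath='{.spec.build.buildStrategy}{"\n"}'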
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.5/html/getting_started_with_camel_k/preparing-openshift-cluster-camel-k
Chapter 3. Preparing Storage for Red Hat Virtualization
Chapter 3. Preparing Storage for Red Hat Virtualization Prepare storage to be used for storage domains in the new environment. A Red Hat Virtualization environment must have at least one data storage domain, but adding more is recommended. A data domain holds the virtual hard disks and OVF files of all the virtual machines and templates in a data center, and cannot be shared across data centers while active (but can be migrated between data centers). Data domains of multiple storage types can be added to the same data center, provided they are all shared, rather than local, domains. Self-hosted engines must have an additional data domain dedicated to the Manager virtual machine. This domain is created during the self-hosted engine deployment, and must be at least 74 GiB. You must prepare the storage for this domain before beginning the deployment. You can use one of the following storage types: NFS iSCSI Fibre Channel (FCP) Red Hat Gluster Storage Important If you are using iSCSI storage, the self-hosted engine storage domain must use its own iSCSI target. Any additional storage domains must use a different iSCSI target. Warning Creating additional data storage domains in the same data center as the self-hosted engine storage domain is highly recommended. If you deploy the self-hosted engine in a data center with only one active data storage domain, and that storage domain is corrupted, you will not be able to add new storage domains or remove the corrupted storage domain; you will have to redeploy the self-hosted engine. 3.1. Preparing NFS Storage Set up NFS shares on your file storage or remote server to serve as storage domains on Red Hat Enterprise Virtualization Host systems. After exporting the shares on the remote storage and configuring them in the Red Hat Virtualization Manager, the shares will be automatically imported on the Red Hat Virtualization hosts. For information on setting up and configuring NFS, see Network File System (NFS) in the Red Hat Enterprise Linux 7 Storage Administration Guide . For information on how to export an 'NFS' share, see How to export 'NFS' share from NetApp Storage / EMC SAN in Red Hat Virtualization Specific system user accounts and system user groups are required by Red Hat Virtualization so the Manager can store data in the storage domains represented by the exported directories. The following procedure sets the permissions for one directory. You must repeat the chown and chmod steps for all of the directories you intend to use as storage domains in Red Hat Virtualization. Procedure Create the group kvm : Create the user vdsm in the group kvm : Set the ownership of your exported directory to 36:36, which gives vdsm:kvm ownership: Change the mode of the directory so that read and write access is granted to the owner, and so that read and execute access is granted to the group and other users: 3.2. Preparing iSCSI Storage Red Hat Virtualization supports iSCSI storage, which is a storage domain created from a volume group made up of LUNs. Volume groups and LUNs cannot be attached to more than one storage domain at a time. For information on setting up and configuring iSCSI storage, see Online Storage Management in the Red Hat Enterprise Linux 7 Storage Administration Guide . Important If you are using block storage and you intend to deploy virtual machines on raw devices or direct LUNs and to manage them with the Logical Volume Manager, you must create a filter to hide the guest logical volumes. 
This will prevent guest logical volumes from being activated when the host is booted, a situation that could lead to stale logical volumes and cause data corruption. See https://access.redhat.com/solutions/2662261 for details. Important Red Hat Virtualization currently does not support block storage with a block size of 4K. You must configure block storage in legacy (512b block) mode. Important If your host is booting from SAN storage and loses connectivity to the storage, the storage file systems become read-only and remain in this state after connectivity is restored. To prevent this situation, Red Hat recommends adding a drop-in multipath configuration file on the root file system of the SAN for the boot LUN to ensure that it is queued when there is a connection: 3.3. Preparing FCP Storage Red Hat Virtualization supports SAN storage by creating a storage domain from a volume group made of pre-existing LUNs. Neither volume groups nor LUNs can be attached to more than one storage domain at a time. Red Hat Virtualization system administrators need a working knowledge of Storage Area Networks (SAN) concepts. SAN usually uses Fibre Channel Protocol (FCP) for traffic between hosts and shared external storage. For this reason, SAN may occasionally be referred to as FCP storage. For information on setting up and configuring FCP or multipathing on Red Hat Enterprise Linux, see the Storage Administration Guide and DM Multipath Guide . Important If you are using block storage and you intend to deploy virtual machines on raw devices or direct LUNs and to manage them with the Logical Volume Manager, you must create a filter to hide the guest logical volumes. This will prevent guest logical volumes from being activated when the host is booted, a situation that could lead to stale logical volumes and cause data corruption. See https://access.redhat.com/solutions/2662261 for details. Important Red Hat Virtualization currently does not support block storage with a block size of 4K. You must configure block storage in legacy (512b block) mode. Important If your host is booting from SAN storage and loses connectivity to the storage, the storage file systems become read-only and remain in this state after connectivity is restored. To prevent this situation, Red Hat recommends adding a drop-in multipath configuration file on the root file system of the SAN for the boot LUN to ensure that it is queued when there is a connection: 3.4. Preparing Red Hat Gluster Storage For information on setting up and configuring Red Hat Gluster Storage, see the Red Hat Gluster Storage Installation Guide . For the Red Hat Gluster Storage versions that are supported with Red Hat Virtualization, see https://access.redhat.com/articles/2356261 . 3.5. Customizing Multipath Configurations for SAN Vendors To customize the multipath configuration settings, do not modify /etc/multipath.conf . Instead, create a new configuration file that overrides /etc/multipath.conf . Warning Upgrading Virtual Desktop and Server Manager (VDSM) overwrites the /etc/multipath.conf file. If multipath.conf contains customizations, overwriting it can trigger storage issues. Prerequisites This topic only applies to systems that have been configured to use multipath connections storage domains, and therefore have a /etc/multipath.conf file. Do not override the user_friendly_names and find_multipaths settings. 
For more information, see Section 3.6, "Recommended Settings for Multipath.conf" Avoid overriding no_path_retry and polling_interval unless required by the storage vendor. For more information, see Section 3.6, "Recommended Settings for Multipath.conf" Procedure To override the values of settings in /etc/multipath.conf , create a new configuration file in the /etc/multipath/conf.d/ directory. Note The files in /etc/multipath/conf.d/ execute in alphabetical order. Follow the convention of naming the file with a number at the beginning of its name. For example, /etc/multipath/conf.d/90-myfile.conf . Copy the settings you want to override from /etc/multipath.conf to the new configuration file in /etc/multipath/conf.d/ . Edit the setting values and save your changes. Apply the new configuration settings by entering the systemctl reload multipathd command. Note Avoid restarting the multipathd service. Doing so generates errors in the VDSM logs. Verification steps If you override the VDSM-generated settings in /etc/multipath.conf , verify that the new configuration performs as expected in a variety of failure scenarios. For example, disable all of the storage connections. Then enable one connection at a time and verify that doing so makes the storage domain reachable. Troubleshooting If a Red Hat Virtualization Host has trouble accessing shared storage, check /etc/multpath.conf and files under /etc/multipath/conf.d/ for values that are incompatible with the SAN. Additional resources Red Hat Enterprise Linux DM Multipath in the RHEL documentation. Configuring iSCSI Multipathing in the Administration Guide. How do I customize /etc/multipath.conf on my RHVH hypervisors? What values must not change and why? on the Red Hat Customer Portal, which shows an example multipath.conf file and was the basis for this topic. 3.6. Recommended Settings for Multipath.conf When overriding /etc/multipath.conf , Do not override the following settings: user_friendly_names no This setting controls whether user-friendly names are assigned to devices in addition to the actual device names. Multiple hosts must use the same name to access devices. Disabling this setting prevents user-friendly names from interfering with this requirement. find_multipaths no This setting controls whether RHVH tries to access all devices through multipath, even if only one path is available. Disabling this setting prevents RHV from using the too-clever behavior when this setting is enabled. Avoid overriding the following settings unless required by the storage system vendor: no_path_retry 4 This setting controls the number of polling attempts to retry when no paths are available. Before RHV version 4.2, the value of no_path_retry was fail because QEMU had trouble with the I/O queuing when no paths were available. The fail value made it fail quickly and paused the virtual machine. RHV version 4.2 changed this value to 4 so when multipathd detects the last path has failed, it checks all of the paths four more times. Assuming the default 5-second polling interval, checking the paths takes 20 seconds. If no path is up, multipathd tells the kernel to stop queuing and fails all outstanding and future I/O until a path is restored. When a path is restored, the 20-second delay is reset for the time all paths fail. For more details, see the commit that changed this setting . polling_interval 5 This setting determines the number of seconds between polling attempts to detect whether a path is open or has failed. 
Unless the vendor provides a clear reason for increasing the value, keep the VDSM-generated default so the system responds to path failures sooner.
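Section 3.5 describes the override mechanism without a worked example. The following sketch shows the general shape of a vendor override file; the vendor and product strings are placeholders and the real values must come from your SAN vendor's documentation:

# /etc/multipath/conf.d/90-vendor.conf
# Only the settings listed here override the VDSM-generated defaults.
devices {
    device {
        # Placeholder vendor and product strings; use the values your SAN vendor specifies.
        vendor "EXAMPLE"
        product "EXAMPLE-LUN"
        no_path_retry 4
    }
}

Apply the file with systemctl reload multipathd , and avoid restarting the multipathd service, as noted above.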
[ "groupadd kvm -g 36", "useradd vdsm -u 36 -g 36", "chown -R 36:36 /exports/data", "chmod 0755 /exports/data", "cat /etc/multipath/conf.d/host.conf multipaths { multipath { wwid boot_LUN_wwid no_path_retry queue }", "cat /etc/multipath/conf.d/host.conf multipaths { multipath { wwid boot_LUN_wwid no_path_retry queue }" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/installing_red_hat_virtualization_as_a_self-hosted_engine_using_the_command_line/Preparing_Storage_for_RHV_SHE_cli_deploy
23.21. A Sample Virtual Machine XML Configuration
23.21. A Sample Virtual Machine XML Configuration The following table shows a sample XML configuration of a guest virtual machine (VM), also referred to as domain XML , and explains the content of the configuration. To obtain the XML configuration of a VM, use the virsh dumpxml command. For information about editing VM configuration, see the Virtualization Getting Started Guide . Table 23.33. A Sample Domain XML Configuration Domain XML section Description <domain type='kvm'> <name>Testguest1</name> <uuid>ec6fbaa1-3eb4-49da-bf61-bb02fbec4967</uuid> <memory unit='KiB'>1048576</memory> <currentMemory unit='KiB'>1048576</currentMemory> <vcpu placement='static'>1</vcpu> This is a KVM called Testguest1 with 1024 MiB allocated RAM. For information about configuring general VM parameters, see Section 23.1, "General Information and Metadata" . <vcpu placement='static'>1</vcpu> The guest VM has 1 allocated vCPU. For information about CPU allocation, see Section 23.4, "CPU allocation" . <os> <type arch='x86_64' machine='pc-i440fx-2.9'>hvm</type> <boot dev='hd'/> </os> The machine architecture is set to AMD64 and Intel 64 architecture, and uses the Intel 440FX machine type to determine feature compatibility. The OS is booted from the hard drive. For information about modifying OS parameters, see Section 23.2, "Operating System Booting" . <features> <acpi/> <apic/> <vmport state='off'/> </features> The hypervisor features acpi and apic are disabled and the VMWare IO port is turned off. For information about modifying Hypervisor features, see - Section 23.14, "Hypervisor Features" . <cpu mode='host-passthrough' check='none'/> The guest CPU features are set to be the same as those on the host CPU. For information about modifying CPU features, see - Section 23.12, "CPU Models and Topology" . <clock offset='utc'> <timer name='rtc' tickpolicy='catchup'/> <timer name='pit' tickpolicy='delay'/> <timer name='hpet' present='no'/> </clock> The guest's virtual hardware clock uses the UTC time zone. In addition, three different timers are set up for synchronization with the QEMU hypervisor. For information about modifying time-keeping settings, see - Section 23.15, "Timekeeping" . <on_poweroff>destroy</on_poweroff> <on_reboot>restart</on_reboot> <on_crash>destroy</on_crash> When the VM powers off, or its OS terminates unexpectedly, libvirt terminates the guest and releases all its allocated resources. When the guest is rebooted, it is restarted with the same configuration. For more information about configuring these settings, see - Section 23.13, "Events Configuration" . <pm> <suspend-to-mem enabled='no'/> <suspend-to-disk enabled='no'/> </pm> The S3 and S4 ACPI sleep states for this guest VM are disabled. " />. <devices> <emulator>/usr/bin/qemu-kvm</emulator> <disk type='file' device='disk'> <driver name='qemu' type='qcow2'/> <source file='/var/lib/libvirt/images/Testguest.qcow2'/> <target dev='hda' bus='ide'/> <address type='drive' controller='0' bus='0' target='0' unit='0'/> </disk> <disk type='file' device='cdrom'> <driver name='qemu' type='raw'/> <target dev='hdb' bus='ide'/> <readonly/> <address type='drive' controller='0' bus='0' target='0' unit='1'/> </disk> The VM uses the /usr/bin/qemu-kvm binary file for emulation. In addition, it has two disks attached. The first disk is a virtualized hard-drive based on the /var/lib/libvirt/images/Testguest.qcow2 stored on the host, and its logical device name is set to hda . 
For more information about managing disks, see - Section 23.17.1, "Hard Drives, Floppy Disks, and CD-ROMs" . <controller type='usb' index='0' model='ich9-ehci1'> <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x7'/> </controller> <controller type='usb' index='0' model='ich9-uhci1'> <master startport='0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0' multifunction='on'/> </controller> <controller type='usb' index='0' model='ich9-uhci2'> <master startport='2'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x1'/> </controller> <controller type='usb' index='0' model='ich9-uhci3'> <master startport='4'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x2'/> </controller> <controller type='pci' index='0' model='pci-root'/> <controller type='ide' index='0'> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/> </controller> <controller type='virtio-serial' index='0'> <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/> </controller> The VM uses four controllers for attaching USB devices, and a root controller for PCI-Express (PCIe) devices. In addition, a virtio-serial controller is available, which enables the VM to interact with the host in a variety of ways, such as the serial console. For more information about configuring controllers, see - Section 23.17.3, "Controllers" . <interface type='network'> <mac address='52:54:00:65:29:21'/> <source network='default'/> <model type='rtl8139'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/> </interface> A network interface is set up in the VM that uses the default virtual network and the rtl8139 network device model. For more information about configuring network interfaces, see - Section 23.17.8, "Network Interfaces" . <serial type='pty'> <target port='0'/> </serial> <console type='pty'> <target type='serial' port='0'/> </console> <channel type='spicevmc'> <target type='virtio' name='com.redhat.spice.0'/> <address type='virtio-serial' controller='0' bus='0' port='1'/> </channel> A pty serial console is set up on the VM, which enables the most rudimentary VM communication with the host. The console uses the paravirtualized SPICE channel. This is set up automatically and changing these settings is not recommended. For an overview of character devices, see - Section 23.17.8, "Network Interfaces" . For detailed information about serial ports and consoles , see Section 23.17.14, "Guest Virtual Machine Interfaces" . For more information about channels , see Section 23.17.15, "Channel" . <input type='mouse' bus='ps2'/> <input type='keyboard' bus='ps2'/> The VM uses a virtual ps2 port which is set up to receive mouse and keyboard input. This is set up automatically and changing these settings is not recommended. For more information, see Section 23.17.9, "Input Devices" . <graphics type='spice' autoport='yes'> <listen type='address'/> <image compression='off'/> </graphics> The VM uses the SPICE protocol for rendering its graphical output with auto-allocated port numbers and image compression turned off. For information about configuring graphic devices, see Section 23.17.11, "Graphical Framebuffers" . 
<sound model='ich6'> <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/> </sound> <video> <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/> </video> An ICH6 HDA sound device is set up for the VM, and the QEMU QXL paravirtualized framebuffer device is set up as the video accelerator. This is set up automatically and changing these settings is not recommended. For information about configuring sound devices , see Section 23.17.17, "Sound Devices" . For configuring video devices , see Section 23.17.12, "Video Devices" . <redirdev bus='usb' type='spicevmc'> <address type='usb' bus='0' port='1'/> </redirdev> <redirdev bus='usb' type='spicevmc'> <address type='usb' bus='0' port='2'/> </redirdev> <memballoon model='virtio'> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/> </memballoon> </devices> </domain> The VM has two redirectors for attaching USB devices remotely, and memory ballooning is turned on. This is set up automatically and changing these settings is not recommended. For detailed information, see Section 23.17.6, "Redirected devices"
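The configuration shown in this table is the kind of output produced by virsh dumpxml . A small sketch of capturing and then editing the configuration for the example guest Testguest1 :

# Save the current domain XML of the guest to a file for review
virsh dumpxml Testguest1 > Testguest1.xml

# Edit the persistent configuration in the default editor; libvirt validates it on save
virsh edit Testguest1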
[ "<domain type='kvm'> <name>Testguest1</name> <uuid>ec6fbaa1-3eb4-49da-bf61-bb02fbec4967</uuid> <memory unit='KiB'>1048576</memory> <currentMemory unit='KiB'>1048576</currentMemory> <vcpu placement='static'>1</vcpu>", "<vcpu placement='static'>1</vcpu>", "<os> <type arch='x86_64' machine='pc-i440fx-2.9'>hvm</type> <boot dev='hd'/> </os>", "<features> <acpi/> <apic/> <vmport state='off'/> </features>", "<cpu mode='host-passthrough' check='none'/>", "<clock offset='utc'> <timer name='rtc' tickpolicy='catchup'/> <timer name='pit' tickpolicy='delay'/> <timer name='hpet' present='no'/> </clock>", "<on_poweroff>destroy</on_poweroff> <on_reboot>restart</on_reboot> <on_crash>destroy</on_crash>", "<pm> <suspend-to-mem enabled='no'/> <suspend-to-disk enabled='no'/> </pm>", "<devices> <emulator>/usr/bin/qemu-kvm</emulator> <disk type='file' device='disk'> <driver name='qemu' type='qcow2'/> <source file='/var/lib/libvirt/images/Testguest.qcow2'/> <target dev='hda' bus='ide'/> <address type='drive' controller='0' bus='0' target='0' unit='0'/> </disk> <disk type='file' device='cdrom'> <driver name='qemu' type='raw'/> <target dev='hdb' bus='ide'/> <readonly/> <address type='drive' controller='0' bus='0' target='0' unit='1'/> </disk>", "<controller type='usb' index='0' model='ich9-ehci1'> <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x7'/> </controller> <controller type='usb' index='0' model='ich9-uhci1'> <master startport='0'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0' multifunction='on'/> </controller> <controller type='usb' index='0' model='ich9-uhci2'> <master startport='2'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x1'/> </controller> <controller type='usb' index='0' model='ich9-uhci3'> <master startport='4'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x2'/> </controller> <controller type='pci' index='0' model='pci-root'/> <controller type='ide' index='0'> <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/> </controller> <controller type='virtio-serial' index='0'> <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/> </controller>", "<interface type='network'> <mac address='52:54:00:65:29:21'/> <source network='default'/> <model type='rtl8139'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/> </interface>", "<serial type='pty'> <target port='0'/> </serial> <console type='pty'> <target type='serial' port='0'/> </console> <channel type='spicevmc'> <target type='virtio' name='com.redhat.spice.0'/> <address type='virtio-serial' controller='0' bus='0' port='1'/> </channel>", "<input type='mouse' bus='ps2'/> <input type='keyboard' bus='ps2'/>", "<graphics type='spice' autoport='yes'> <listen type='address'/> <image compression='off'/> </graphics>", "<sound model='ich6'> <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/> </sound> <video> <model type='qxl' ram='65536' vram='65536' vgamem='16384' heads='1' primary='yes'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/> </video>", "<redirdev bus='usb' type='spicevmc'> <address type='usb' bus='0' port='1'/> </redirdev> <redirdev bus='usb' type='spicevmc'> <address type='usb' bus='0' port='2'/> </redirdev> <memballoon model='virtio'> <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/> </memballoon> </devices> </domain>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-manipulating_the_domain_xml-a_sample_configuration_file
Chapter 3. Installing a cluster on vSphere with customizations
Chapter 3. Installing a cluster on vSphere with customizations In OpenShift Container Platform version 4.12, you can install a cluster on your VMware vSphere instance by using installer-provisioned infrastructure. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. Note OpenShift Container Platform supports deploying a cluster to a single VMware vCenter only. Deploying a cluster with machines/machine sets on multiple vCenters is not supported. 3.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You provisioned persistent storage for your cluster. To deploy a private image registry, your storage must provide ReadWriteMany access modes. The OpenShift Container Platform installer requires access to port 443 on the vCenter and ESXi hosts. You verified that port 443 is accessible. If you use a firewall, you confirmed with the administrator that port 443 is accessible. Control plane nodes must be able to reach vCenter and ESXi hosts on port 443 for the installation to succeed. If you use a firewall, you configured it to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. 3.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.12, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 3.3. VMware vSphere infrastructure requirements You must install an OpenShift Container Platform cluster on one of the following versions of a VMware vSphere instance that meets the requirements for the components that you use: Version 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later Version 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later You can host the VMware vSphere infrastructure on-premise or on a VMware Cloud Verified provider that meets the requirements outlined in the following table: Table 3.1. Version requirements for vSphere virtual environments Virtual environment product Required version VMware virtual hardware 15 or later vSphere ESXi hosts 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later vCenter host 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later Important Installing a cluster on VMware vSphere versions 7.0 and 7.0 Update 1 is deprecated. 
These versions are still fully supported, but all vSphere 6.x versions are no longer supported. Version 4.12 of OpenShift Container Platform requires VMware virtual hardware version 15 or later. To update the hardware version for your vSphere virtual machines, see the "Updating hardware on nodes running in vSphere" article in the Updating clusters section. Table 3.2. Minimum supported vSphere version for VMware components Component Minimum supported versions Description Hypervisor vSphere 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; vSphere 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later with virtual hardware version 15 This hypervisor version is the minimum version that Red Hat Enterprise Linux CoreOS (RHCOS) supports. For more information about supported hardware on the latest version of Red Hat Enterprise Linux (RHEL) that is compatible with RHCOS, see Hardware on the Red Hat Customer Portal. Storage with in-tree drivers vSphere 7.0 Update 2 or later; 8.0 Update 1 or later This plugin creates vSphere storage by using the in-tree storage drivers for vSphere included in OpenShift Container Platform. Optional: Networking (NSX-T) vSphere 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; vSphere 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later For more information about the compatibility of NSX and OpenShift Container Platform, see the Release Notes section of VMware's NSX container plugin documentation . Important You must ensure that the time on your ESXi hosts is synchronized before you install OpenShift Container Platform. See Edit Time Configuration for a Host in the VMware documentation. 3.4. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Review the following details about the required network ports. Table 3.3. Ports used for all-machine to all-machine communications Protocol Port Description VRRP N/A Required for keepalived ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves 10256 openshift-sdn UDP 4789 virtual extensible LAN (VXLAN) 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 3.4. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 3.5. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports 3.5. VMware vSphere CSI Driver Operator requirements To install the vSphere CSI Driver Operator, the following requirements must be met: VMware vSphere version: 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later vCenter version: 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later Virtual machines of hardware version 15 or later No third-party vSphere CSI driver already installed in the cluster If a third-party vSphere CSI driver is present in the cluster, OpenShift Container Platform does not overwrite it. 
The presence of a third-party vSphere CSI driver prevents OpenShift Container Platform from updating to OpenShift Container Platform 4.13 or later. Note The VMware vSphere CSI Driver Operator is supported only on clusters deployed with platform: vsphere in the installation manifest. Additional resources To remove a third-party vSphere CSI driver, see Removing a third-party vSphere CSI Driver . To update the hardware version for your vSphere nodes, see Updating hardware on nodes running in vSphere . 3.6. vCenter requirements Before you install an OpenShift Container Platform cluster on your vCenter that uses infrastructure that the installer provisions, you must prepare your environment. Required vCenter account privileges To install an OpenShift Container Platform cluster in a vCenter, the installation program requires access to an account with privileges to read and create the required resources. Using an account that has global administrative privileges is the simplest way to access all of the necessary permissions. If you cannot use an account with global administrative privileges, you must create roles to grant the privileges necessary for OpenShift Container Platform cluster installation. While most of the privileges are always required, some are required only if you plan for the installation program to provision a folder to contain the OpenShift Container Platform cluster on your vCenter instance, which is the default behavior. You must create or amend vSphere roles for the specified objects to grant the required privileges. An additional role is required if the installation program is to create a vSphere virtual machine folder. Example 3.1. Roles and privileges required for installation in vSphere API vSphere object for role When required Required privileges in vSphere API vSphere vCenter Always Cns.Searchable InventoryService.Tagging.AttachTag InventoryService.Tagging.CreateCategory InventoryService.Tagging.CreateTag InventoryService.Tagging.DeleteCategory InventoryService.Tagging.DeleteTag InventoryService.Tagging.EditCategory InventoryService.Tagging.EditTag Sessions.ValidateSession StorageProfile.Update StorageProfile.View vSphere vCenter Cluster If VMs will be created in the cluster root Host.Config.Storage Resource.AssignVMToPool VApp.AssignResourcePool VApp.Import VirtualMachine.Config.AddNewDisk vSphere vCenter Resource Pool If an existing resource pool is provided Host.Config.Storage Resource.AssignVMToPool VApp.AssignResourcePool VApp.Import VirtualMachine.Config.AddNewDisk vSphere Datastore Always Datastore.AllocateSpace Datastore.Browse Datastore.FileManagement InventoryService.Tagging.ObjectAttachable vSphere Port Group Always Network.Assign Virtual Machine Folder Always InventoryService.Tagging.ObjectAttachable Resource.AssignVMToPool VApp.Import VirtualMachine.Config.AddExistingDisk VirtualMachine.Config.AddNewDisk VirtualMachine.Config.AddRemoveDevice VirtualMachine.Config.AdvancedConfig VirtualMachine.Config.Annotation VirtualMachine.Config.CPUCount VirtualMachine.Config.DiskExtend VirtualMachine.Config.DiskLease VirtualMachine.Config.EditDevice VirtualMachine.Config.Memory VirtualMachine.Config.RemoveDisk VirtualMachine.Config.Rename VirtualMachine.Config.ResetGuestInfo VirtualMachine.Config.Resource VirtualMachine.Config.Settings VirtualMachine.Config.UpgradeVirtualHardware VirtualMachine.Interact.GuestControl VirtualMachine.Interact.PowerOff VirtualMachine.Interact.PowerOn VirtualMachine.Interact.Reset VirtualMachine.Inventory.Create 
VirtualMachine.Inventory.CreateFromExisting VirtualMachine.Inventory.Delete VirtualMachine.Provisioning.Clone VirtualMachine.Provisioning.MarkAsTemplate VirtualMachine.Provisioning.DeployTemplate vSphere vCenter Datacenter If the installation program creates the virtual machine folder. For UPI, VirtualMachine.Inventory.Create and VirtualMachine.Inventory.Delete privileges are optional if your cluster does not use the Machine API. InventoryService.Tagging.ObjectAttachable Resource.AssignVMToPool VApp.Import VirtualMachine.Config.AddExistingDisk VirtualMachine.Config.AddNewDisk VirtualMachine.Config.AddRemoveDevice VirtualMachine.Config.AdvancedConfig VirtualMachine.Config.Annotation VirtualMachine.Config.CPUCount VirtualMachine.Config.DiskExtend VirtualMachine.Config.DiskLease VirtualMachine.Config.EditDevice VirtualMachine.Config.Memory VirtualMachine.Config.RemoveDisk VirtualMachine.Config.Rename VirtualMachine.Config.ResetGuestInfo VirtualMachine.Config.Resource VirtualMachine.Config.Settings VirtualMachine.Config.UpgradeVirtualHardware VirtualMachine.Interact.GuestControl VirtualMachine.Interact.PowerOff VirtualMachine.Interact.PowerOn VirtualMachine.Interact.Reset VirtualMachine.Inventory.Create VirtualMachine.Inventory.CreateFromExisting VirtualMachine.Inventory.Delete VirtualMachine.Provisioning.Clone VirtualMachine.Provisioning.DeployTemplate VirtualMachine.Provisioning.MarkAsTemplate Folder.Create Folder.Delete Example 3.2. Roles and privileges required for installation in vCenter graphical user interface (GUI) vSphere object for role When required Required privileges in vCenter GUI vSphere vCenter Always Cns.Searchable "vSphere Tagging"."Assign or Unassign vSphere Tag" "vSphere Tagging"."Create vSphere Tag Category" "vSphere Tagging"."Create vSphere Tag" vSphere Tagging"."Delete vSphere Tag Category" "vSphere Tagging"."Delete vSphere Tag" "vSphere Tagging"."Edit vSphere Tag Category" "vSphere Tagging"."Edit vSphere Tag" Sessions."Validate session" "Profile-driven storage"."Profile-driven storage update" "Profile-driven storage"."Profile-driven storage view" vSphere vCenter Cluster If VMs will be created in the cluster root Host.Configuration."Storage partition configuration" Resource."Assign virtual machine to resource pool" VApp."Assign resource pool" VApp.Import "Virtual machine"."Change Configuration"."Add new disk" vSphere vCenter Resource Pool If an existing resource pool is provided Host.Configuration."Storage partition configuration" Resource."Assign virtual machine to resource pool" VApp."Assign resource pool" VApp.Import "Virtual machine"."Change Configuration"."Add new disk" vSphere Datastore Always Datastore."Allocate space" Datastore."Browse datastore" Datastore."Low level file operations" "vSphere Tagging"."Assign or Unassign vSphere Tag on Object" vSphere Port Group Always Network."Assign network" Virtual Machine Folder Always "vSphere Tagging"."Assign or Unassign vSphere Tag on Object" Resource."Assign virtual machine to resource pool" VApp.Import "Virtual machine"."Change Configuration"."Add existing disk" "Virtual machine"."Change Configuration"."Add new disk" "Virtual machine"."Change Configuration"."Add or remove device" "Virtual machine"."Change Configuration"."Advanced configuration" "Virtual machine"."Change Configuration"."Set annotation" "Virtual machine"."Change Configuration"."Change CPU count" "Virtual machine"."Change Configuration"."Extend virtual disk" "Virtual machine"."Change Configuration"."Acquire disk lease" "Virtual machine"."Change 
Configuration"."Modify device settings" "Virtual machine"."Change Configuration"."Change Memory" "Virtual machine"."Change Configuration"."Remove disk" "Virtual machine"."Change Configuration".Rename "Virtual machine"."Change Configuration"."Reset guest information" "Virtual machine"."Change Configuration"."Change resource" "Virtual machine"."Change Configuration"."Change Settings" "Virtual machine"."Change Configuration"."Upgrade virtual machine compatibility" "Virtual machine".Interaction."Guest operating system management by VIX API" "Virtual machine".Interaction."Power off" "Virtual machine".Interaction."Power on" "Virtual machine".Interaction.Reset "Virtual machine"."Edit Inventory"."Create new" "Virtual machine"."Edit Inventory"."Create from existing" "Virtual machine"."Edit Inventory"."Remove" "Virtual machine".Provisioning."Clone virtual machine" "Virtual machine".Provisioning."Mark as template" "Virtual machine".Provisioning."Deploy template" vSphere vCenter Datacenter If the installation program creates the virtual machine folder. For UPI, VirtualMachine.Inventory.Create and VirtualMachine.Inventory.Delete privileges are optional if your cluster does not use the Machine API. "vSphere Tagging"."Assign or Unassign vSphere Tag on Object" Resource."Assign virtual machine to resource pool" VApp.Import "Virtual machine"."Change Configuration"."Add existing disk" "Virtual machine"."Change Configuration"."Add new disk" "Virtual machine"."Change Configuration"."Add or remove device" "Virtual machine"."Change Configuration"."Advanced configuration" "Virtual machine"."Change Configuration"."Set annotation" "Virtual machine"."Change Configuration"."Change CPU count" "Virtual machine"."Change Configuration"."Extend virtual disk" "Virtual machine"."Change Configuration"."Acquire disk lease" "Virtual machine"."Change Configuration"."Modify device settings" "Virtual machine"."Change Configuration"."Change Memory" "Virtual machine"."Change Configuration"."Remove disk" "Virtual machine"."Change Configuration".Rename "Virtual machine"."Change Configuration"."Reset guest information" "Virtual machine"."Change Configuration"."Change resource" "Virtual machine"."Change Configuration"."Change Settings" "Virtual machine"."Change Configuration"."Upgrade virtual machine compatibility" "Virtual machine".Interaction."Guest operating system management by VIX API" "Virtual machine".Interaction."Power off" "Virtual machine".Interaction."Power on" "Virtual machine".Interaction.Reset "Virtual machine"."Edit Inventory"."Create new" "Virtual machine"."Edit Inventory"."Create from existing" "Virtual machine"."Edit Inventory"."Remove" "Virtual machine".Provisioning."Clone virtual machine" "Virtual machine".Provisioning."Deploy template" "Virtual machine".Provisioning."Mark as template" Folder."Create folder" Folder."Delete folder" Additionally, the user requires some ReadOnly permissions, and some of the roles require permission to propogate the permissions to child objects. These settings vary depending on whether or not you install the cluster into an existing folder. Example 3.3. 
Required permissions and propagation settings vSphere object When required Propagate to children Permissions required vSphere vCenter Always False Listed required privileges vSphere vCenter Datacenter Existing folder False ReadOnly permission Installation program creates the folder True Listed required privileges vSphere vCenter Cluster Existing resource pool False ReadOnly permission VMs in cluster root True Listed required privileges vSphere vCenter Datastore Always False Listed required privileges vSphere Switch Always False ReadOnly permission vSphere Port Group Always False Listed required privileges vSphere vCenter Virtual Machine Folder Existing folder True Listed required privileges vSphere vCenter Resource Pool Existing resource pool True Listed required privileges For more information about creating an account with only the required privileges, see vSphere Permissions and User Management Tasks in the vSphere documentation. Using OpenShift Container Platform with vMotion If you intend on using vMotion in your vSphere environment, consider the following before installing an OpenShift Container Platform cluster. OpenShift Container Platform generally supports compute-only vMotion, where generally implies that you meet all VMware best practices for vMotion. To help ensure the uptime of your compute and control plane nodes, ensure that you follow the VMware best practices for vMotion, and use VMware anti-affinity rules to improve the availability of OpenShift Container Platform during maintenance or hardware issues. For more information about vMotion and anti-affinity rules, see the VMware vSphere documentation for vMotion networking requirements and VM anti-affinity rules . Using Storage vMotion can cause issues and is not supported. If you are using vSphere volumes in your pods, migrating a VM across datastores, either manually or through Storage vMotion, causes invalid references within OpenShift Container Platform persistent volume (PV) objects that can result in data loss. OpenShift Container Platform does not support selective migration of VMDKs across datastores, using datastore clusters for VM provisioning or for dynamic or static provisioning of PVs, or using a datastore that is part of a datastore cluster for dynamic or static provisioning of PVs. Cluster resources When you deploy an OpenShift Container Platform cluster that uses installer-provisioned infrastructure, the installation program must be able to create several resources in your vCenter instance. A standard OpenShift Container Platform installation creates the following vCenter resources: 1 Folder 1 Tag category 1 Tag Virtual machines: 1 template 1 temporary bootstrap node 3 control plane nodes 3 compute machines Although these resources use 856 GB of storage, the bootstrap node is destroyed during the cluster installation process. A minimum of 800 GB of storage is required to use a standard cluster. If you deploy more compute machines, the OpenShift Container Platform cluster will use more storage. Cluster limits Available resources vary between clusters. The number of possible clusters within a vCenter is limited primarily by available storage space and any limitations on the number of required resources. Be sure to consider both limitations to the vCenter resources that the cluster creates and the resources that you require to deploy a cluster, such as IP addresses and networks. 
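Before you begin an installation, it can be useful to confirm that the target datastore has enough free space for the virtual machines listed above. The following commands are only a convenience sketch that uses the open source govc command-line tool, which is referenced later in this chapter for tagging and is not maintained by Red Hat; the environment variable values and the datastore name are placeholders that you must replace with values from your environment:
USD export GOVC_URL='<vcenter_server>' GOVC_USERNAME='<username>' GOVC_PASSWORD='<password>' GOVC_INSECURE=1
USD govc datastore.info <datastore_name>
The Capacity and Free values in the command output indicate whether the datastore can hold the template, bootstrap node, control plane nodes, and compute machines that a standard installation creates.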
Networking requirements You can use Dynamic Host Configuration Protocol (DHCP) for the network and configure the DHCP server to set persistent IP addresses to machines in your cluster. In the DHCP lease, you must configure the DHCP to use the default gateway. Note You do not need to use the DHCP for the network if you want to provision nodes with static IP addresses. If you are installing to a restricted environment, the VM in your restricted network must have access to vCenter so that it can provision and manage nodes, persistent volume claims (PVCs), and other resources. Note Ensure that each OpenShift Container Platform node in the cluster has access to a Network Time Protocol (NTP) server that is discoverable by DHCP. Installation is possible without an NTP server. However, asynchronous server clocks can cause errors, which the NTP server prevents. Additionally, you must create the following networking resources before you install the OpenShift Container Platform cluster: Required IP Addresses An installer-provisioned vSphere installation requires two static IP addresses: The API address is used to access the cluster API. The Ingress address is used for cluster ingress traffic. You must provide these IP addresses to the installation program when you install the OpenShift Container Platform cluster. DNS records You must create DNS records for two static IP addresses in the appropriate DNS server for the vCenter instance that hosts your OpenShift Container Platform cluster. In each record, <cluster_name> is the cluster name and <base_domain> is the cluster base domain that you specify when you install the cluster. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 3.6. Required DNS records Component Record Description API VIP api.<cluster_name>.<base_domain>. This DNS A/AAAA or CNAME record must point to the load balancer for the control plane machines. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster. Ingress VIP *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that points to the load balancer that targets the machines that run the Ingress router pods, which are the worker nodes by default. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster. 3.7. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . 
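If you are not sure whether your local user already has a usable key pair, you can list the public keys in your ~/.ssh directory and the identities that your ssh-agent process currently holds before you start the procedure. This is only an optional convenience check that uses standard OpenSSH commands; it is not a required step:
USD ls ~/.ssh/*.pub
USD ssh-add -L
If ssh-add -L reports that the agent has no identities, or that it cannot connect to an agent, the following procedure covers starting the agent and adding a key.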
Procedure
If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command:
USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1
1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory.
Note
If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm.
View the public SSH key:
USD cat <path>/<file_name>.pub
For example, run the following to view the ~/.ssh/id_ed25519.pub public key:
USD cat ~/.ssh/id_ed25519.pub
Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command.
Note
On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically.
If the ssh-agent process is not already running for your local user, start it as a background task:
USD eval "USD(ssh-agent -s)"
Example output
Agent pid 31874
Note
If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA.
Add your SSH private key to the ssh-agent :
USD ssh-add <path>/<file_name> 1
1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 .
Example output
Identity added: /home/<you>/<path>/<file_name> (<computer_name>)
Next steps
When you install OpenShift Container Platform, provide the SSH public key to the installation program.
3.8. Obtaining the installation program
Before you install OpenShift Container Platform, download the installation file on the host you are using for installation.
Prerequisites
You have a machine that runs Linux, for example Red Hat Enterprise Linux 8, with 500 MB of local disk space.
Important
If you attempt to run the installation program on macOS, a known issue related to the golang compiler causes the installation of the OpenShift Container Platform cluster to fail. For more information about this issue, see the section named "Known Issues" in the OpenShift Container Platform 4.12 release notes document.
Procedure
Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account.
Select your infrastructure provider.
Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files.
Important
The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster.
Important
Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation.
To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 3.9. Adding vCenter root CA certificates to your system trust Because the installation program requires access to your vCenter's API, you must add your vCenter's trusted root CA certificates to your system trust before you install an OpenShift Container Platform cluster. Procedure From the vCenter home page, download the vCenter's root CA certificates. Click Download trusted root CA certificates in the vSphere Web Services SDK section. The <vCenter>/certs/download.zip file downloads. Extract the compressed file that contains the vCenter root CA certificates. The contents of the compressed file resemble the following file structure: Add the files for your operating system to the system trust. For example, on a Fedora operating system, run the following command: # cp certs/lin/* /etc/pki/ca-trust/source/anchors Update your system trust. For example, on a Fedora operating system, run the following command: # update-ca-trust extract 3.10. VMware vSphere region and zone enablement You can deploy an OpenShift Container Platform cluster to multiple vSphere datacenters that run in a single VMware vCenter. Each datacenter can run multiple clusters. This configuration reduces the risk of a hardware failure or network outage that can cause your cluster to fail. To enable regions and zones, you must define multiple failure domains for your OpenShift Container Platform cluster. Important VMware vSphere region and zone enablement is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . The default installation configuration deploys a cluster to a single vSphere datacenter. If you want to deploy a cluster to multiple vSphere datacenters, you must create an installation configuration file that enables the region and zone feature. The default install-config.yaml file includes vcenters and failureDomains fields, where you can specify multiple vSphere datacenters and clusters for your OpenShift Container Platform cluster. You can leave these fields blank if you want to install an OpenShift Container Platform cluster in a vSphere environment that consists of single datacenter. The following list describes terms associated with defining zones and regions for your cluster: Failure domain: Establishes the relationships between a region and zone. You define a failure domain by using vCenter objects, such as a datastore object. A failure domain defines the vCenter location for OpenShift Container Platform cluster nodes. Region: Specifies a vCenter datacenter. 
You define a region by using a tag from the openshift-region tag category.
Zone: Specifies a vCenter cluster. You define a zone by using a tag from the openshift-zone tag category.
Note
If you plan on specifying more than one failure domain in your install-config.yaml file, you must create tag categories, zone tags, and region tags in advance of creating the configuration file. You must create a vCenter tag for each vCenter datacenter, which represents a region. Additionally, you must create a vCenter tag for each cluster that runs in a datacenter, which represents a zone. After you create the tags, you must attach each tag to their respective datacenters and clusters.
The following table outlines an example of the relationship among regions, zones, and tags for a configuration with multiple vSphere datacenters running in a single VMware vCenter.
Table 3.7. Example of a configuration with multiple vSphere datacenters that run in a single VMware vCenter
Datacenter (region): us-east
Cluster (zone): us-east-1, tags: us-east-1a, us-east-1b
Cluster (zone): us-east-2, tags: us-east-2a, us-east-2b
Datacenter (region): us-west
Cluster (zone): us-west-1, tags: us-west-1a, us-west-1b
Cluster (zone): us-west-2, tags: us-west-2a, us-west-2b
3.11. Creating the installation configuration file
You can customize the OpenShift Container Platform cluster you install on VMware vSphere.
Prerequisites
Obtain the OpenShift Container Platform installation program and the pull secret for your cluster.
Obtain service principal permissions at the subscription level.
Procedure
Create the install-config.yaml file.
Change to the directory that contains the installation program and run the following command:
USD ./openshift-install create install-config --dir <installation_directory> 1
1 For <installation_directory> , specify the directory name to store the files that the installation program creates.
When specifying the directory:
Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory.
Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.
At the prompts, provide the configuration details for your cloud:
Optional: Select an SSH key to use to access your cluster machines.
Note
For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses.
Select vsphere as the platform to target.
Specify the name of your vCenter instance.
Specify the user name and password for the vCenter account that has the required permissions to create the cluster. The installation program connects to your vCenter instance.
Select the data center in your vCenter instance to connect to.
Select the default vCenter datastore to use.
Select the vCenter cluster to install the OpenShift Container Platform cluster in. The installation program uses the root resource pool of the vSphere cluster as the default resource pool.
Select the network in the vCenter instance that contains the virtual IP addresses and DNS records that you configured.
Enter the virtual IP address that you configured for control plane API access.
Enter the virtual IP address that you configured for cluster ingress. Enter the base domain. This base domain must be the same one that you used in the DNS records that you configured. Enter a descriptive name for your cluster. The cluster name you enter must match the cluster name you specified when configuring the DNS records. Paste the pull secret from the Red Hat OpenShift Cluster Manager . Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. 3.11.1. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform. Note After installation, you cannot modify these parameters in the install-config.yaml file. 3.11.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 3.8. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installation program may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters and hyphens ( - ), such as dev . platform The configuration for the specific platform upon which to perform the installation: alibabacloud , aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 3.11.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Note Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. 
For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster. Table 3.9. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The Red Hat OpenShift Networking network plugin to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI plugin for all-Linux networks. OVNKubernetes is a CNI plugin for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OVNKubernetes . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt. For libvirt, the default value is 192.168.126.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 3.11.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 3.10. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String capabilities Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. For more information, see the "Cluster capabilities" page in Installing . String array capabilities.baselineCapabilitySet Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 , v4.12 and vCurrent . The default value is vCurrent . String capabilities.additionalEnabledCapabilities Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . You may specify multiple capabilities in this parameter. 
String array compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . featureSet Enables the cluster for a feature set. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates". String. The name of the feature set to enable, such as TechPreviewNoUpgrade . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough or Manual . Mint , Passthrough , Manual or an empty string ( "" ). fips Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . 
The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 , ppc64le , and s390x architectures. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . The default value is External . Setting this field to Internal is not supported on non-cloud platforms. Important If the value of the field is set to Internal , the cluster will become non-functional. For more information, refer to BZ#1953035 . sshKey The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . 3.11.1.4. Additional VMware vSphere configuration parameters Additional VMware vSphere configuration parameters are described in the following table. Note The platform.vsphere parameter prefixes each parameter listed in the table. Table 3.11. Additional VMware vSphere cluster parameters Parameter Description Values vCenter The fully-qualified hostname or IP address of the vCenter server. String username The user name to use to connect to the vCenter instance with. This user must have at least the roles and privileges that are required for static or dynamic persistent volume provisioning in vSphere. String password The password for the vCenter user name. String datacenter The name of the data center to use in the vCenter instance. String defaultDatastore The name of the default datastore to use for provisioning volumes. String folder Optional. The absolute path of an existing folder where the installation program creates the virtual machines. If you do not provide this value, the installation program creates a folder that is named with the infrastructure ID in the data center virtual machine folder. String, for example, /<datacenter_name>/vm/<folder_name>/<subfolder_name> . resourcePool Optional. The absolute path of an existing resource pool where the installation program creates the virtual machines. If you do not specify a value, the installation program installs the resources in the root of the cluster under /<datacenter_name>/host/<cluster_name>/Resources . String, for example, /<datacenter_name>/host/<cluster_name>/Resources/<resource_pool_name>/<optional_nested_resource_pool_name> . network The network in the vCenter instance that contains the virtual IP addresses and DNS records that you configured. String cluster The vCenter cluster to install the OpenShift Container Platform cluster in. String apiVIPs The virtual IP (VIP) address that you configured for control plane API access. Note In OpenShift Container Platform 4.12 and later, the apiVIP configuration setting is deprecated. Instead, use a List format to enter a value in the apiVIPs configuration setting. 
An IP address, for example 128.0.0.1 . ingressVIPs The virtual IP (VIP) address that you configured for cluster ingress. Note In OpenShift Container Platform 4.12 and later, the ingressVIP configuration setting is deprecated. Instead, use a List format to enter a value in the ingressVIPs configuration setting. An IP address, for example 128.0.0.1 . diskType Optional. The disk provisioning method. This value defaults to the vSphere default storage policy if not set. Valid values are thin , thick , or eagerZeroedThick . 3.11.1.5. Optional VMware vSphere machine pool configuration parameters Optional VMware vSphere machine pool configuration parameters are described in the following table. Note The platform.vsphere parameter prefixes each parameter listed in the table. Table 3.12. Optional VMware vSphere machine pool parameters Parameter Description Values clusterOSImage The location from which the installation program downloads the RHCOS image. You must set this parameter to perform an installation in a restricted network. An HTTP or HTTPS URL, optionally with a SHA-256 checksum. For example, https://mirror.openshift.com/images/rhcos-<version>-vmware.<architecture>.ova . osDisk.diskSizeGB The size of the disk in gigabytes. Integer cpus The total number of virtual processor cores to assign a virtual machine. The value of platform.vsphere.cpus must be a multiple of platform.vsphere.coresPerSocket value. Integer coresPerSocket The number of cores per socket in a virtual machine. The number of virtual sockets on the virtual machine is platform.vsphere.cpus / platform.vsphere.coresPerSocket . The default value for control plane nodes and worker nodes is 4 and 2 , respectively. Integer memoryMB The size of a virtual machine's memory in megabytes. Integer 3.11.1.6. Region and zone enablement configuration parameters To use the region and zone enablement feature, you must specify region and zone enablement parameters in your installation file. Important Before you modify the install-config.yaml file to configure a region and zone enablement environment, read the "VMware vSphere region and zone enablement" and the "Configuring regions and zones for a VMware vCenter" sections. Note The platform.vsphere parameter prefixes each parameter listed in the table. Table 3.13. Region and zone enablement parameters Parameter Description Values failureDomains Establishes the relationships between a region and zone. You define a failure domain by using vCenter objects, such as a datastore object. A failure domain defines the vCenter location for OpenShift Container Platform cluster nodes. String failureDomains.name The name of the failure domain. The machine pools use this name to reference the failure domain. String failureDomains.server Specifies the fully-qualified hostname or IP address of the VMware vCenter server, so that a client can access failure domain resources. You must apply the server role to the vSphere vCenter server location. String failureDomains.region You define a region by using a tag from the openshift-region tag category. The tag must be attached to the vCenter datacenter. String failureDomains.zone You define a zone by using a tag from the openshift-zone tag category. The tag must be attached to the vCenter datacenter. String failureDomains.topology.computeCluster This parameter defines the compute cluster associated with the failure domain. 
If you do not define this parameter in your configuration, the compute cluster takes the value of platform.vsphere.cluster and platform.vsphere.datacenter . String failureDomains.topology.folder The absolute path of an existing folder where the installation program creates the virtual machines. If you do not define this parameter in your configuration, the folder takes the value of platform.vsphere.folder . String failureDomains.topology.datacenter Defines the datacenter where OpenShift Container Platform virtual machines (VMs) operate. If you do not define this parameter in your configuration, the datacenter defaults to platform.vsphere.datacenter . String failureDomains.topology.datastore Specifies the path to a vSphere datastore that stores virtual machine files for a failure domain. You must apply the datastore role to the vSphere vCenter datastore location. String failureDomains.topology.networks Lists any network in the vCenter instance that contains the virtual IP addresses and DNS records that you configured. If you do not define this parameter in your configuration, the network takes the value of platform.vsphere.network . String failureDomains.topology.resourcePool Optional: The absolute path of an existing resource pool where the installation program creates the virtual machines, for example, /<datacenter_name>/host/<cluster_name>/Resources/<resource_pool_name>/<optional_nested_resource_pool_name> . If you do not specify a value, the installation program installs the resources in the root of the cluster under /<datacenter_name>/host/<cluster_name>/Resources . String
3.11.2. Sample install-config.yaml file for an installer-provisioned VMware vSphere cluster
You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters.
apiVersion: v1
baseDomain: example.com 1
compute: 2
- name: worker
  replicas: 3
  platform:
    vsphere: 3
      cpus: 2
      coresPerSocket: 2
      memoryMB: 8192
      osDisk:
        diskSizeGB: 120
controlPlane: 4
  name: master
  replicas: 3
  platform:
    vsphere: 5
      cpus: 4
      coresPerSocket: 2
      memoryMB: 16384
      osDisk:
        diskSizeGB: 120
metadata:
  name: cluster 6
platform:
  vsphere:
    vcenter: your.vcenter.server
    username: username
    password: password
    datacenter: datacenter
    defaultDatastore: datastore
    folder: folder
    resourcePool: resource_pool 7
    diskType: thin 8
    network: VM_Network
    cluster: vsphere_cluster_name 9
    apiVIPs:
    - api_vip
    ingressVIPs:
    - ingress_vip
fips: false
pullSecret: '{"auths": ...}'
sshKey: 'ssh-ed25519 AAAA...'
1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name.
2 4 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used.
3 5 Optional: Provide additional configuration for the machine pool parameters for the compute and control plane machines.
6 The cluster name that you specified in your DNS records.
7 Optional: Provide an existing resource pool for machine creation. If you do not specify a value, the installation program uses the root resource pool of the vSphere cluster.
8 The vSphere disk provisioning method.
9 The vSphere cluster to install the OpenShift Container Platform cluster in.
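After you edit the sample file with values for your environment, a quick syntax check can catch indentation mistakes before the installation program consumes the file. The following sketch is an optional convenience, not part of the documented procedure, and it assumes that Python 3 with the PyYAML module is available on the installation host:
USD cp install-config.yaml install-config.yaml.bak
USD python3 -c 'import yaml; yaml.safe_load(open("install-config.yaml")); print("install-config.yaml parses cleanly")'
The backup copy also preserves the file for reuse, because the installation process consumes the install-config.yaml file in the installation directory.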
Note In OpenShift Container Platform 4.12 and later, the apiVIP and ingressVIP configuration settings are deprecated. Instead, use a list format to enter values in the apiVIPs and ingressVIPs configuration settings. 3.11.3. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. You must include vCenter's IP address and the IP range that you use for its machines. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. 
For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 3.11.4. Configuring regions and zones for a VMware vCenter You can modify the default installation configuration file to deploy an OpenShift Container Platform cluster to multiple vSphere datacenters that run in a single VMware vCenter. Important VMware vSphere region and zone enablement is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Important The example uses the govc command. The govc command is an open source command available from VMware. The govc command is not available from Red Hat. Red Hat Support does not maintain the govc command. Instructions for downloading and installing govc are found on the VMware documentation website. Prerequisites You have an existing install-config.yaml installation configuration file. Important You must specify at least one failure domain for your OpenShift Container Platform cluster, so that you can provision datacenter objects for your VMware vCenter server. Consider specifying multiple failure domains if you need to provision virtual machine nodes in different datacenters, clusters, datastores, and other components. To enable regions and zones, you must define multiple failure domains for your OpenShift Container Platform cluster. Note You cannot change a failure domain after you installed an OpenShift Container Platform cluster on the VMware vSphere platform. You can add additional failure domains after cluster installation. Procedure Enter the following govc command-line tool commands to create the openshift-region and openshift-zone vCenter tag categories: Important If you specify different names for the openshift-region and openshift-zone vCenter tag categories, the installation of the OpenShift Container Platform cluster fails. 
USD govc tags.category.create -d "OpenShift region" openshift-region
USD govc tags.category.create -d "OpenShift zone" openshift-zone
To create a region tag for each vSphere datacenter (region) where you want to deploy your cluster, enter the following command in your terminal:
USD govc tags.create -c <region_tag_category> <region_tag>
To create a zone tag for each vSphere cluster where you want to deploy your cluster, enter the following command:
USD govc tags.create -c <zone_tag_category> <zone_tag>
Attach region tags to each vCenter datacenter object by entering the following command:
USD govc tags.attach -c <region_tag_category> <region_tag_1> /<datacenter_1>
Attach the zone tags to each vCenter datacenter object by entering the following command:
USD govc tags.attach -c <zone_tag_category> <zone_tag_1> /<datacenter_1>/host/vcs-mdcnc-workload-1
Change to the directory that contains the installation program and initialize the cluster deployment according to your chosen installation requirements.
Sample install-config.yaml file with multiple datacenters defined in a vSphere center
apiVersion: v1
baseDomain: example.com
featureSet: TechPreviewNoUpgrade 1
compute:
  name: worker
  replicas: 3
  vsphere:
    zones: 2
    - "<machine_pool_zone_1>"
    - "<machine_pool_zone_2>"
controlPlane:
  name: master
  replicas: 3
  vsphere:
    zones: 3
    - "<machine_pool_zone_1>"
    - "<machine_pool_zone_2>"
metadata:
  name: cluster
platform:
  vsphere:
    vcenter: <vcenter_server> 4
    username: <username> 5
    password: <password> 6
    datacenter: datacenter 7
    defaultDatastore: datastore 8
    folder: "/<datacenter_name>/vm/<folder_name>/<subfolder_name>" 9
    cluster: cluster 10
    resourcePool: "/<datacenter_name>/host/<cluster_name>/Resources/<resource_pool_name>" 11
    diskType: thin
    failureDomains: 12
    - name: <machine_pool_zone_1> 13
      region: <region_tag_1> 14
      zone: <zone_tag_1> 15
      topology: 16
        datacenter: <datacenter1> 17
        computeCluster: "/<datacenter1>/host/<cluster1>" 18
        resourcePool: "/<datacenter1>/host/<cluster1>/Resources/<resourcePool1>" 19
        networks: 20
        - <VM_Network1_name>
        datastore: "/<datacenter1>/datastore/<datastore1>" 21
    - name: <machine_pool_zone_2>
      region: <region_tag_2>
      zone: <zone_tag_2>
      topology:
        datacenter: <datacenter2>
        computeCluster: "/<datacenter2>/host/<cluster2>"
        networks:
        - <VM_Network2_name>
        datastore: "/<datacenter2>/datastore/<datastore2>"
        resourcePool: "/<datacenter2>/host/<cluster2>/Resources/<resourcePool2>"
        folder: "/<datacenter2>/vm/<folder2>"
# ...
1 You must set TechPreviewNoUpgrade as the value for this parameter, so that you can use the VMware vSphere region and zone enablement feature.
2 3 An optional parameter for specifying a vCenter cluster. You define a zone by using a tag from the openshift-zone tag category. If you do not define this parameter, nodes will be distributed among all defined failure-domains.
4 5 6 7 8 9 10 11 The default vCenter topology. The installation program uses this topology information to deploy the bootstrap node. Additionally, the topology defines the default datastore for vSphere persistent volumes.
12 Establishes the relationships between a region and zone. You define a failure domain by using vCenter objects, such as a datastore object. A failure domain defines the vCenter location for OpenShift Container Platform cluster nodes. If you do not define this parameter, the installation program uses the default vCenter topology.
13 Defines the name of the failure domain. Each failure domain is referenced in the zones parameter to scope a machine pool to the failure domain.
14 You define a region by using a tag from the openshift-region tag category. The tag must be attached to the vCenter datacenter. 15 You define a zone by using a tag from the openshift-zone tag category. The tag must be attached to the vCenter datacenter. 16 Specifies the vCenter resources associated with the failure domain. 17 An optional parameter for defining the vSphere datacenter that is associated with a failure domain. If you do not define this parameter, the installation program uses the default vCenter topology. 18 An optional parameter for stating the absolute file path for the compute cluster that is associated with the failure domain. If you do not define this parameter, the installation program uses the default vCenter topology. 19 An optional parameter for the installer-provisioned infrastructure. The parameter sets the absolute path of an existing resource pool where the installation program creates the virtual machines, for example, /<datacenter_name>/host/<cluster_name>/Resources/<resource_pool_name>/<optional_nested_resource_pool_name> . If you do not specify a value, resources are installed in the root of the cluster /example_datacenter/host/example_cluster/Resources . 20 An optional parameter that lists any network in the vCenter instance that contains the virtual IP addresses and DNS records that you configured. If you do not define this parameter, the installation program uses the default vCenter topology. 21 An optional parameter for specifying a datastore to use for provisioning volumes. If you do not define this parameter, the installation program uses the default vCenter topology. 3.12. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Verify the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Note If the cloud provider account that you configured on your host does not have sufficient permissions to deploy the cluster, the installation process stops, and the missing permissions are displayed. Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 3.13. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.12. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.12 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.12 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.12 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.12 macOS arm64 Client entry. Unpack and unzip the archive. 
Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 3.14. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 3.15. Creating registry storage After you install the cluster, you must create storage for the registry Operator. 3.15.1. Image registry removed during installation On platforms that do not provide shareable object storage, the OpenShift Image Registry Operator bootstraps itself as Removed . This allows openshift-installer to complete installations on these platform types. After installation, you must edit the Image Registry Operator configuration to switch the managementState from Removed to Managed . When this has completed, you must configure storage. 3.15.2. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 3.15.2.1. Configuring registry storage for VMware vSphere As a cluster administrator, following installation you must configure your registry to use storage. Prerequisites Cluster administrator permissions. A cluster on VMware vSphere. Persistent storage provisioned for your cluster, such as Red Hat OpenShift Data Foundation. Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. ReadWriteOnce access also requires that the registry uses the Recreate rollout strategy. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. Must have "100Gi" capacity. Important Testing shows issues with using the NFS server on RHEL as storage backend for core services. This includes the OpenShift Container Registry and Quay, Prometheus for monitoring storage, and Elasticsearch for logging storage. Therefore, using RHEL NFS to back PVs used by core services is not recommended. Other NFS implementations on the marketplace might not have these issues. 
Contact the individual NFS implementation vendor for more information on any testing that was possibly completed against these OpenShift Container Platform core components. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When you use shared storage, review your security settings to prevent outside access. Verify that you do not have a registry pod: USD oc get pod -n openshift-image-registry -l docker-registry=default Example output No resourses found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure. Check the registry configuration: USD oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: 1 1 Leave the claim field blank to allow the automatic creation of an image-registry-storage persistent volume claim (PVC). The PVC is generated based on the default storage class. However, be aware that the default storage class might provide ReadWriteOnce (RWO) volumes, such as a RADOS Block Device (RBD), which can cause issues when you replicate to more than one replica. Check the clusteroperator status: USD oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m 3.15.2.2. Configuring block registry storage for VMware vSphere To allow the image registry to use block storage types such as vSphere Virtual Machine Disk (VMDK) during upgrades as a cluster administrator, you can use the Recreate rollout strategy. Important Block storage volumes are supported but not recommended for use with image registry on production clusters. An installation where the registry is configured on block storage is not highly available because the registry cannot have more than one replica. Procedure Enter the following command to set the image registry storage as a block storage type, patch the registry so that it uses the Recreate rollout strategy, and runs with only 1 replica: USD oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}' Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. Create a pvc.yaml file with the following contents to define a VMware vSphere PersistentVolumeClaim object: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4 1 A unique name that represents the PersistentVolumeClaim object. 2 The namespace for the PersistentVolumeClaim object, which is openshift-image-registry . 3 The access mode of the persistent volume claim. With ReadWriteOnce , the volume can be mounted with read and write permissions by a single node. 4 The size of the persistent volume claim. Enter the following command to create the PersistentVolumeClaim object from the file: USD oc create -f pvc.yaml -n openshift-image-registry Enter the following command to edit the registry configuration so that it references the correct PVC: USD oc edit config.imageregistry.operator.openshift.io -o yaml Example output storage: pvc: claim: 1 1 By creating a custom PVC, you can leave the claim field blank for the default automatic creation of an image-registry-storage PVC. 
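If the Image Registry Operator bootstrapped itself as Removed during installation, as described in "Image registry removed during installation", remember to switch its managementState to Managed so that the Operator starts the registry with the storage that you configured. The following command is a hedged sketch; verify the resource name against your cluster before running it:

$ oc patch configs.imageregistry.operator.openshift.io cluster --type merge --patch '{"spec":{"managementState":"Managed"}}'

After the patch, run oc get clusteroperator image-registry again and confirm that the Operator reports an Available status.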
For instructions about configuring registry storage so that it references the correct PVC, see Configuring the registry for vSphere . 3.16. Backing up VMware vSphere volumes OpenShift Container Platform provisions new volumes as independent persistent disks to freely attach and detach the volume on any node in the cluster. As a consequence, it is not possible to back up volumes that use snapshots, or to restore volumes from snapshots. See Snapshot Limitations for more information. Procedure To create a backup of persistent volumes: Stop the application that is using the persistent volume. Clone the persistent volume. Restart the application. Create a backup of the cloned volume. Delete the cloned volume. 3.17. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.12, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console . After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 3.18. Services for an external load balancer You can configure an OpenShift Container Platform cluster to use an external load balancer in place of the default load balancer. Important Configuring an external load balancer depends on your vendor's load balancer. The information and examples in this section are for guideline purposes only. Consult the vendor documentation for more specific information about the vendor's load balancer. Red Hat supports the following services for an external load balancer: Ingress Controller OpenShift API OpenShift MachineConfig API You can choose whether you want to configure one or all of these services for an external load balancer. Configuring only the Ingress Controller service is a common configuration option. To better understand each service, view the following diagrams: Figure 3.1. Example network workflow that shows an Ingress Controller operating in an OpenShift Container Platform environment Figure 3.2. Example network workflow that shows an OpenShift API operating in an OpenShift Container Platform environment Figure 3.3. Example network workflow that shows an OpenShift MachineConfig API operating in an OpenShift Container Platform environment The following configuration options are supported for external load balancers: Use a node selector to map the Ingress Controller to a specific set of nodes. You must assign a static IP address to each node in this set, or configure each node to receive the same IP address from the Dynamic Host Configuration Protocol (DHCP). Infrastructure nodes commonly receive this type of configuration. Target all IP addresses on a subnet. This configuration can reduce maintenance overhead, because you can create and destroy nodes within those networks without reconfiguring the load balancer targets. If you deploy your ingress pods by using a machine set on a smaller network, such as a /27 or /28 , you can simplify your load balancer targets. 
Tip You can list all IP addresses that exist in a network by checking the machine config pool's resources. Before you configure an external load balancer for your OpenShift Container Platform cluster, consider the following information: For a front-end IP address, you can use the same IP address for the front-end IP address, the Ingress Controller's load balancer, and API load balancer. Check the vendor's documentation for this capability. For a back-end IP address, ensure that an IP address for an OpenShift Container Platform control plane node does not change during the lifetime of the external load balancer. You can achieve this by completing one of the following actions: Assign a static IP address to each control plane node. Configure each node to receive the same IP address from the DHCP every time the node requests a DHCP lease. Depending on the vendor, the DHCP lease might be in the form of an IP reservation or a static DHCP assignment. Manually define each node that runs the Ingress Controller in the external load balancer for the Ingress Controller back-end service. For example, if the Ingress Controller moves to an undefined node, a connection outage can occur. 3.18.1. Configuring an external load balancer You can configure an OpenShift Container Platform cluster to use an external load balancer in place of the default load balancer. Important Before you configure an external load balancer, ensure that you read the "Services for an external load balancer" section. Read the following prerequisites that apply to the service that you want to configure for your external load balancer. Note MetalLB, that runs on a cluster, functions as an external load balancer. OpenShift API prerequisites You defined a front-end IP address. TCP ports 6443 and 22623 are exposed on the front-end IP address of your load balancer. Check the following items: Port 6443 provides access to the OpenShift API service. Port 22623 can provide ignition startup configurations to nodes. The front-end IP address and port 6443 are reachable by all users of your system with a location external to your OpenShift Container Platform cluster. The front-end IP address and port 22623 are reachable only by OpenShift Container Platform nodes. The load balancer backend can communicate with OpenShift Container Platform control plane nodes on port 6443 and 22623. Ingress Controller prerequisites You defined a front-end IP address. TCP ports 443 and 80 are exposed on the front-end IP address of your load balancer. The front-end IP address, port 80 and port 443 are be reachable by all users of your system with a location external to your OpenShift Container Platform cluster. The front-end IP address, port 80 and port 443 are reachable to all nodes that operate in your OpenShift Container Platform cluster. The load balancer backend can communicate with OpenShift Container Platform nodes that run the Ingress Controller on ports 80, 443, and 1936. Prerequisite for health check URL specifications You can configure most load balancers by setting health check URLs that determine if a service is available or unavailable. OpenShift Container Platform provides these health checks for the OpenShift API, Machine Configuration API, and Ingress Controller backend services. 
The following examples demonstrate health check specifications for the previously listed backend services: Example of a Kubernetes API health check specification Path: HTTPS:6443/readyz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10 Example of a Machine Config API health check specification Path: HTTPS:22623/healthz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10 Example of an Ingress Controller health check specification Path: HTTP:1936/healthz/ready Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 5 Interval: 10 Procedure Configure the HAProxy Ingress Controller, so that you can enable access to the cluster from your load balancer on ports 6443, 443, and 80: Example HAProxy configuration #... listen my-cluster-api-6443 bind 192.168.1.100:6443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /readyz http-check expect status 200 server my-cluster-master-2 192.168.1.101:6443 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:6443 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:6443 check inter 10s rise 2 fall 2 listen my-cluster-machine-config-api-22623 bind 192.168.1.100:22623 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz http-check expect status 200 server my-cluster-master-2 192.168.1.101:22623 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:22623 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:22623 check inter 10s rise 2 fall 2 listen my-cluster-apps-443 bind 192.168.1.100:443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:443 check port 1936 inter 10s rise 2 fall 2 listen my-cluster-apps-80 bind 192.168.1.100:80 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:80 check port 1936 inter 10s rise 2 fall 2 # ... 
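After you add these listeners, it can be useful to validate the HAProxy configuration syntax and restart the service before you run the curl checks in the next step. The following lines are a hedged sketch that assumes the default configuration path of /etc/haproxy/haproxy.cfg; adjust the path and service name for your load balancer host:

$ haproxy -c -f /etc/haproxy/haproxy.cfg
$ sudo systemctl restart haproxy

If the syntax check reports errors, correct them before restarting, because a failed restart leaves the load balancer front end unreachable.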
Use the curl CLI command to verify that the external load balancer and its resources are operational: Verify that the cluster machine configuration API is accessible to the Kubernetes API server resource, by running the following command and observing the response: USD curl https://<loadbalancer_ip_address>:6443/version --insecure If the configuration is correct, you receive a JSON object in response: { "major": "1", "minor": "11+", "gitVersion": "v1.11.0+ad103ed", "gitCommit": "ad103ed", "gitTreeState": "clean", "buildDate": "2019-01-09T06:44:10Z", "goVersion": "go1.10.3", "compiler": "gc", "platform": "linux/amd64" } Verify that the cluster machine configuration API is accessible to the Machine config server resource, by running the following command and observing the output: USD curl -v https://<loadbalancer_ip_address>:22623/healthz --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK Content-Length: 0 Verify that the controller is accessible to the Ingress Controller resource on port 80, by running the following command and observing the output: USD curl -I -L -H "Host: console-openshift-console.apps.<cluster_name>.<base_domain>" http://<load_balancer_front_end_IP_address> If the configuration is correct, the output from the command shows the following response: HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.ocp4.private.opequon.net/ cache-control: no-cache Verify that the controller is accessible to the Ingress Controller resource on port 443, by running the following command and observing the output: USD curl -I -L --insecure --resolve console-openshift-console.apps.<cluster_name>.<base_domain>:443:<Load Balancer Front End IP Address> https://console-openshift-console.apps.<cluster_name>.<base_domain> If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private Configure the DNS records for your cluster to target the front-end IP addresses of the external load balancer. You must update records to your DNS server for the cluster API and applications over the load balancer. Examples of modified DNS records <load_balancer_ip_address> A api.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End <load_balancer_ip_address> A apps.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End Important DNS propagation might take some time for each DNS record to become available. Ensure that each DNS record propagates before validating each record. 
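Because DNS propagation can lag, a quick name-resolution check before the curl validation can save troubleshooting time. The following is a minimal sketch that uses the dig utility; the record names mirror the examples above, and <cluster_name> and <base_domain> are placeholders for your environment:

$ dig +short api.<cluster_name>.<base_domain>
$ dig +short console-openshift-console.apps.<cluster_name>.<base_domain>

Both queries should return the front-end IP address of the external load balancer before you continue.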
Use the curl CLI command to verify that the external load balancer and DNS record configuration are operational: Verify that you can access the cluster API, by running the following command and observing the output: USD curl https://api.<cluster_name>.<base_domain>:6443/version --insecure If the configuration is correct, you receive a JSON object in response: { "major": "1", "minor": "11+", "gitVersion": "v1.11.0+ad103ed", "gitCommit": "ad103ed", "gitTreeState": "clean", "buildDate": "2019-01-09T06:44:10Z", "goVersion": "go1.10.3", "compiler": "gc", "platform": "linux/amd64" } Verify that you can access the cluster machine configuration, by running the following command and observing the output: USD curl -v https://api.<cluster_name>.<base_domain>:22623/healthz --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK Content-Length: 0 Verify that you can access each cluster application on port, by running the following command and observing the output: USD curl http://console-openshift-console.apps.<cluster_name>.<base_domain -I -L --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.<cluster-name>.<base domain>/ cache-control: no-cacheHTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=39HoZgztDnzjJkq/JuLJMeoKNXlfiVv2YgZc09c3TBOBU4NI6kDXaJH1LdicNhN1UsQWzon4Dor9GWGfopaTEQ==; Path=/; Secure x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Tue, 17 Nov 2020 08:42:10 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=9b714eb87e93cf34853e87a92d6894be; path=/; HttpOnly; Secure; SameSite=None cache-control: private Verify that you can access each cluster application on port 443, by running the following command and observing the output: USD curl https://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private 3.19. steps Customize your cluster . If necessary, you can opt out of remote health reporting . Set up your registry and configure registry storage . Optional: View the events from the vSphere Problem Detector Operator to determine if the cluster has permission or storage configuration issues.
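To view the events from the vSphere Problem Detector Operator that are mentioned in the last step, you can query the cluster directly. This is a hedged example that assumes the detector runs in its default openshift-cluster-storage-operator namespace:

$ oc get events -n openshift-cluster-storage-operator

Review any warning events for vSphere permission or storage configuration problems before you hand the cluster over to users.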
[ "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "certs ├── lin │ ├── 108f4d17.0 │ ├── 108f4d17.r1 │ ├── 7e757f6a.0 │ ├── 8e4f8471.0 │ └── 8e4f8471.r0 ├── mac │ ├── 108f4d17.0 │ ├── 108f4d17.r1 │ ├── 7e757f6a.0 │ ├── 8e4f8471.0 │ └── 8e4f8471.r0 └── win ├── 108f4d17.0.crt ├── 108f4d17.r1.crl ├── 7e757f6a.0.crt ├── 8e4f8471.0.crt └── 8e4f8471.r0.crl 3 directories, 15 files", "cp certs/lin/* /etc/pki/ca-trust/source/anchors", "update-ca-trust extract", "./openshift-install create install-config --dir <installation_directory> 1", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "apiVersion: v1 baseDomain: example.com 1 compute: 2 name: worker replicas: 3 platform: vsphere: 3 cpus: 2 coresPerSocket: 2 memoryMB: 8192 osDisk: diskSizeGB: 120 controlPlane: 4 name: master replicas: 3 platform: vsphere: 5 cpus: 4 coresPerSocket: 2 memoryMB: 16384 osDisk: diskSizeGB: 120 metadata: name: cluster 6 platform: vsphere: vcenter: your.vcenter.server username: username password: password datacenter: datacenter defaultDatastore: datastore folder: folder resourcePool: resource_pool 7 diskType: thin 8 network: VM_Network cluster: vsphere_cluster_name 9 apiVIPs: - api_vip ingressVIPs: - ingress_vip fips: false pullSecret: '{\"auths\": ...}' sshKey: 'ssh-ed25519 AAAA...'", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "govc tags.category.create -d \"OpenShift region\" openshift-region", "govc tags.category.create -d \"OpenShift zone\" openshift-zone", "govc tags.create -c <region_tag_category> <region_tag>", "govc tags.create -c <zone_tag_category> <zone_tag>", "govc tags.attach -c <region_tag_category> <region_tag_1> /<datacenter_1>", "govc tags.attach -c <zone_tag_category> <zone_tag_1> /<datacenter_1>/host/vcs-mdcnc-workload-1", "apiVersion: v1 baseDomain: example.com featureSet: TechPreviewNoUpgrade 1 compute: name: worker replicas: 3 vsphere: zones: 2 - \"<machine_pool_zone_1>\" - \"<machine_pool_zone_2>\" controlPlane: name: master replicas: 3 vsphere: zones: 3 - \"<machine_pool_zone_1>\" - \"<machine_pool_zone_2>\" metadata: name: cluster platform: vsphere: vcenter: <vcenter_server> 4 username: <username> 5 password: <password> 6 datacenter: datacenter 7 defaultDatastore: datastore 8 folder: \"/<datacenter_name>/vm/<folder_name>/<subfolder_name>\" 9 cluster: cluster 10 resourcePool: \"/<datacenter_name>/host/<cluster_name>/Resources/<resource_pool_name>\" 11 diskType: thin failureDomains: 12 - name: <machine_pool_zone_1> 13 region: <region_tag_1> 14 zone: <zone_tag_1> 15 topology: 16 datacenter: <datacenter1> 17 computeCluster: \"/<datacenter1>/host/<cluster1>\" 18 resourcePool: 
\"/<datacenter1>/host/<cluster1>/Resources/<resourcePool1>\" 19 networks: 20 - <VM_Network1_name> datastore: \"/<datacenter1>/datastore/<datastore1>\" 21 - name: <machine_pool_zone_2> region: <region_tag_2> zone: <zone_tag_2> topology: datacenter: <datacenter2> computeCluster: \"/<datacenter2>/host/<cluster2>\" networks: - <VM_Network2_name> datastore: \"/<datacenter2>/datastore/<datastore2>\" resourcePool: \"/<datacenter2>/host/<cluster2>/Resources/<resourcePool2>\" folder: \"/<datacenter2>/vm/<folder2>\"", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get pod -n openshift-image-registry -l docker-registry=default", "No resourses found in openshift-image-registry namespace", "oc edit configs.imageregistry.operator.openshift.io", "storage: pvc: claim: 1", "oc get clusteroperator image-registry", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m", "oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4", "oc create -f pvc.yaml -n openshift-image-registry", "oc edit config.imageregistry.operator.openshift.io -o yaml", "storage: pvc: claim: 1", "Path: HTTPS:6443/readyz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10", "Path: HTTPS:22623/healthz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10", "Path: HTTP:1936/healthz/ready Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 5 Interval: 10", "# listen my-cluster-api-6443 bind 192.168.1.100:6443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /readyz http-check expect status 200 server my-cluster-master-2 192.168.1.101:6443 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:6443 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:6443 check inter 10s rise 2 fall 2 listen my-cluster-machine-config-api-22623 bind 192.168.1.100:22623 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz http-check expect status 200 server my-cluster-master-2 192.168.1.101:22623 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:22623 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:22623 check inter 10s rise 2 fall 2 listen my-cluster-apps-443 bind 192.168.1.100:443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 
192.168.1.113:443 check port 1936 inter 10s rise 2 fall 2 listen my-cluster-apps-80 bind 192.168.1.100:80 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:80 check port 1936 inter 10s rise 2 fall 2", "curl https://<loadbalancer_ip_address>:6443/version --insecure", "{ \"major\": \"1\", \"minor\": \"11+\", \"gitVersion\": \"v1.11.0+ad103ed\", \"gitCommit\": \"ad103ed\", \"gitTreeState\": \"clean\", \"buildDate\": \"2019-01-09T06:44:10Z\", \"goVersion\": \"go1.10.3\", \"compiler\": \"gc\", \"platform\": \"linux/amd64\" }", "curl -v https://<loadbalancer_ip_address>:22623/healthz --insecure", "HTTP/1.1 200 OK Content-Length: 0", "curl -I -L -H \"Host: console-openshift-console.apps.<cluster_name>.<base_domain>\" http://<load_balancer_front_end_IP_address>", "HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.ocp4.private.opequon.net/ cache-control: no-cache", "curl -I -L --insecure --resolve console-openshift-console.apps.<cluster_name>.<base_domain>:443:<Load Balancer Front End IP Address> https://console-openshift-console.apps.<cluster_name>.<base_domain>", "HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private", "<load_balancer_ip_address> A api.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End", "<load_balancer_ip_address> A apps.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End", "curl https://api.<cluster_name>.<base_domain>:6443/version --insecure", "{ \"major\": \"1\", \"minor\": \"11+\", \"gitVersion\": \"v1.11.0+ad103ed\", \"gitCommit\": \"ad103ed\", \"gitTreeState\": \"clean\", \"buildDate\": \"2019-01-09T06:44:10Z\", \"goVersion\": \"go1.10.3\", \"compiler\": \"gc\", \"platform\": \"linux/amd64\" }", "curl -v https://api.<cluster_name>.<base_domain>:22623/healthz --insecure", "HTTP/1.1 200 OK Content-Length: 0", "curl http://console-openshift-console.apps.<cluster_name>.<base_domain -I -L --insecure", "HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.<cluster-name>.<base domain>/ cache-control: no-cacheHTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=39HoZgztDnzjJkq/JuLJMeoKNXlfiVv2YgZc09c3TBOBU4NI6kDXaJH1LdicNhN1UsQWzon4Dor9GWGfopaTEQ==; Path=/; Secure x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Tue, 17 Nov 2020 08:42:10 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=9b714eb87e93cf34853e87a92d6894be; path=/; HttpOnly; Secure; SameSite=None cache-control: private", "curl https://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure", "HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: 
csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/installing_on_vsphere/installing-vsphere-installer-provisioned-customizations
Part VI. Technical Appendixes
Part VI. Technical Appendixes The appendixes in this section do not contain instructions on installing Red Hat Enterprise Linux. Instead, they provide technical background that you might find helpful to understand the options that Red Hat Enterprise Linux offers you at various points in the installation process.
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/installation_guide/part-technical-appendixes
29.2. Rekeying Kerberos Principals
29.2. Rekeying Kerberos Principals Rekeying a Kerberos principal adds a new keytab entry with a higher key version number (KVNO) to the principal's keytab. The original entry remains in the keytab, but is no longer used to issue tickets. Find all keytabs issued within the required time period. For example, the following commands use the ldapsearch utility to display all host and service principals created between midnight on January 1, 2016, and 11:59 PM on December 31, 2016 in Greenwich Mean Time (GMT): The searchbase ( -b ) defines the subtree where ldapsearch looks for the principals: Host principals are stored under the cn=computers,cn=accounts,dc=example,dc=com subtree. Service principals are stored under the cn=services,cn=accounts,dc=example,dc=com subtree. The krblastpwdchange parameter filters the search results by the last change date. The parameter accepts the YYYYMMDD format for the date and the HHMMSS format for the time of day in GMT. Specifying the dn and krbprincipalname attributes limits the search results to the entry name and principal. For every service and host that requires rekeying the principal, use the ipa-getkeytab utility to retrieve a new keytab entry. Pass the following options: --principal ( -p ) to specify the principal --keytab ( -k ) to specify the location of the original keytab --server ( -s ) to specify the Identity Management server host name For example: To rekey a host principal with its keytab in the default location of /etc/krb5.keytab : To rekey the keytab for the Apache service in the default location of /etc/httpd/conf/ipa.keytab : Important Some services, such as NFS version 4, support only a limited set of encryption types. Pass the appropriate arguments to the ipa-getkeytab command to configure the keytab properly. Optional. Verify that you rekeyed the principals successfully. Use the klist utility to list all Kerberos tickets. For example, to list all keytab entries in /etc/krb5.keytab : The output shows that the keytab entry for client.example.com was rekeyed with a higher KVNO. The original keytab still exists in the database with the KVNO. Tickets issued against the earlier keytab continue to work, while new tickets are issued using the key with the highest KVNO. This avoids any disruption to system operations.
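For services that support only a limited set of encryption types, such as NFS version 4, you can restrict the keys that ipa-getkeytab generates by passing the --enctypes ( -e ) option. The following line is an illustrative sketch only; the principal name and encryption type are placeholders that you must replace with values appropriate to your deployment:

$ ipa-getkeytab -p nfs/[email protected] -s server.example.com -k /etc/krb5.keytab -e aes256-cts

Afterwards, run klist -kt as shown above to confirm that the new entry was added with the expected encryption type and a higher KVNO.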
[ "ldapsearch -x -b \"cn=computers,cn=accounts,dc=example,dc=com\" \"(&(krblastpwdchange>=20160101000000)(krblastpwdchange<=20161231235959))\" dn krbprincipalname", "ldapsearch -x -b \"cn=services,cn=accounts,dc=example,dc=com\" \"(&(krblastpwdchange>=20160101000000)(krblastpwdchange<=20161231235959))\" dn krbprincipalname", "ipa-getkeytab -p host/ [email protected] -s server.example.com -k /etc/krb5.keytab", "ipa-getkeytab -p HTTP/ [email protected] -s server.example.com -k /etc/httpd/conf/ipa.keytab", "klist -kt /etc/krb5.keytab Keytab: WRFILE:/etc/krb5.keytab KVNO Timestamp Principal ---- ----------------- -------------------------------------------------------- 1 06/09/16 05:58:47 host/[email protected](aes256-cts-hmac-sha1-96) 2 06/09/16 11:23:01 host/[email protected](aes256-cts-hmac-sha1-96) 1 03/09/16 13:57:16 krbtgt/[email protected](aes256-cts-hmac-sha1-96) 1 03/09/16 13:57:16 HTTP/[email protected](aes256-cts-hmac-sha1-96) 1 03/09/16 13:57:16 ldap/[email protected](aes256-cts-hmac-sha1-96)" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/rekeying-tickets
Java SDK Guide
Java SDK Guide Red Hat Virtualization 4.4 Using the Red Hat Virtualization Java SDK Red Hat Virtualization Documentation Team Red Hat Customer Content Services [email protected] Abstract This guide describes how to install and work with version 4 of the Red Hat Virtualization Java software development kit.
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/java_sdk_guide/index
2.4. Data Roles
2.4. Data Roles All authenticated users have access to a VDB. To restrict access, configure data roles. Set these in the Teiid Designer or the dynamic VDB's vdb.xml file. As part of the data role definition, you can map them to JAAS roles specified in <mapped-role-name> tags. (Establish these mappings using the addDataRoleMapping() method.) How these JAAS roles are associated with users depends on which particular JAAS login module you use. For instance, the default UsersRolesLoginModule associates users with JAAS roles in plain text files. For more information about data roles, see Red Hat JBoss Data Virtualization Development Guide: Reference Material . Important Do not use "admin" or "user" as JAAS role names as these are reserved specifically for Dashboard Builder permissions.
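As an illustration of how these pieces fit together, the following is a minimal sketch of a data role definition in a dynamic VDB's vdb.xml file; the role name, model name, and mapped JAAS role are placeholders rather than values taken from this guide:

<data-role name="ReadOnlyRole" any-authenticated="false">
    <description>Grants read-only access to the Accounts model</description>
    <permission>
        <resource-name>Accounts</resource-name>
        <allow-read>true</allow-read>
    </permission>
    <mapped-role-name>readonly-jaas-role</mapped-role-name>
</data-role>

A user whose JAAS login module assigns the readonly-jaas-role role can then query, but not modify, the Accounts model through this VDB.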
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/security_guide/data_roles3
Chapter 10. Configuring virtual GPUs for instances
Chapter 10. Configuring virtual GPUs for instances To support GPU-based rendering on your instances, you can define and manage virtual GPU (vGPU) resources according to your available physical GPU devices and your hypervisor type. You can use this configuration to divide the rendering workloads between all your physical GPU devices more effectively, and to have more control over scheduling your vGPU-enabled instances. To enable vGPU in the Compute (nova) service, create flavors that your cloud users can use to create Red Hat Enterprise Linux (RHEL) instances with vGPU devices. Each instance can then support GPU workloads with virtual GPU devices that correspond to the physical GPU devices. The Compute service tracks the number of vGPU devices that are available for each GPU profile you define on each host. The Compute service schedules instances to these hosts based on the flavor, attaches the devices, and monitors usage on an ongoing basis. When an instance is deleted, the Compute service adds the vGPU devices back to the available pool. Important Red Hat enables the use of NVIDIA vGPU in RHOSP without the requirement for support exceptions. However, Red Hat does not provide technical support for the NVIDIA vGPU drivers. The NVIDIA vGPU drivers are shipped and supported by NVIDIA. You require an NVIDIA Certified Support Services subscription to obtain NVIDIA Enterprise Support for NVIDIA vGPU software. For issues that result from the use of NVIDIA vGPUs where you are unable to reproduce the issue on a supported component, the following support policies apply: When Red Hat does not suspect that the third-party component is involved in the issue, the normal Scope of Support and Red Hat SLA apply. When Red Hat suspects that the third-party component is involved in the issue, the customer will be directed to NVIDIA in line with the Red Hat third party support and certification policies . For more information, see the Knowledge Base article Obtaining Support from NVIDIA . 10.1. Supported configurations and limitations Supported GPU cards For a list of supported NVIDIA GPU cards, see Virtual GPU Software Supported Products on the NVIDIA website. Limitations when using vGPU devices Each instance can use only one vGPU resource. Live migration of vGPU instances between hosts is not supported. Evacuation of vGPU instances is not supported. If you need to reboot the Compute node that hosts the vGPU instances, the vGPUs are not automatically reassigned to the recreated instances. You must either cold migrate the instances before you reboot the Compute node, or manually allocate each vGPU to the correct instance after reboot. To manually allocate each vGPU, you must retrieve the mdev UUID from the instance XML for each vGPU instance that runs on the Compute node before you reboot. You can use one of the following commands to discover the mdev UUID for each instance: For a RHEL version 8.4 Compute node: For a RHEL version 9.2 Compute node: Replace <instance_name> with the libvirt instance name, OS-EXT-SRV-ATTR:instance_name , returned in a /servers request to the Compute API. Suspend operations on a vGPU-enabled instance is not supported due to a libvirt limitation. Instead, you can snapshot or shelve the instance. By default, vGPU types on Compute hosts are not exposed to API users. To expose the vGPU types on Compute hosts to API users, you must configure resource provider traits and create flavors that require the traits. For more information, see Creating a custom vGPU resource provider trait . 
Alternatively, if you only have one vGPU type, you can grant access by adding the hosts to a host aggregate. For more information, see Creating and managing host aggregates . If you use NVIDIA accelerator hardware, you must comply with the NVIDIA licensing requirements. For example, NVIDIA vGPU GRID requires a licensing server. For more information about the NVIDIA licensing requirements, see NVIDIA License Server Release Notes on the NVIDIA website. 10.2. Configuring vGPU on the Compute nodes To enable your cloud users to create instances that use a virtual GPU (vGPU), you must configure the Compute nodes that have the physical GPUs: Designate Compute nodes for vGPU. Configure the Compute node for vGPU. Deploy the overcloud. Optional: Create custom traits for vGPU types. Optional: Create a custom GPU instance image. Create a vGPU flavor for launching instances that have vGPU. Tip If the GPU hardware is limited, you can also configure a host aggregate to optimize scheduling on the vGPU Compute nodes. To schedule only instances that request vGPUs on the vGPU Compute nodes, create a host aggregate of the vGPU Compute nodes, and configure the Compute scheduler to place only vGPU instances on the host aggregate. For more information, see Creating and managing host aggregates and Filtering by isolating host aggregates . Note To use an NVIDIA GRID vGPU, you must comply with the NVIDIA GRID licensing requirements and you must have the URL of your self-hosted license server. For more information, see the NVIDIA License Server Release Notes web page. 10.2.1. Prerequisites You have downloaded the NVIDIA GRID host driver RPM package that corresponds to your GPU device from the NVIDIA website. To determine which driver you need, see the NVIDIA Driver Downloads Portal . You must be a registered NVIDIA customer to download the drivers from the portal. You have built a custom overcloud image that has the NVIDIA GRID host driver installed. 10.2.2. Designating Compute nodes for vGPU To designate Compute nodes for vGPU workloads, you must create a new role file to configure the vGPU role, and configure the bare metal nodes with a GPU resource class to use to tag the GPU-enabled Compute nodes. Note The following procedure applies to new overcloud nodes that have not yet been provisioned. To assign a resource class to an existing overcloud node that has already been provisioned, you must use the scale down procedure to unprovision the node, then use the scale up procedure to reprovision the node with the new resource class assignment. For more information, see Scaling overcloud nodes . Procedure Log in to the undercloud as the stack user. Source the stackrc file: Generate a new roles data file named roles_data_gpu.yaml that includes the Controller , Compute , and ComputeGpu roles, along with any other roles that you need for the overcloud: Open roles_data_gpu.yaml and edit or add the following parameters and sections: Section/Parameter Current value New value Role comment Role: Compute Role: ComputeGpu Role name name: Compute name: ComputeGpu description Basic Compute Node role GPU Compute Node role HostnameFormatDefault -compute- -computegpu- deprecated_nic_config_name compute.yaml compute-gpu.yaml Register the GPU-enabled Compute nodes for the overcloud by adding them to your node definition template, node.json or node.yaml . For more information, see Registering nodes for the overcloud in the Installing and managing Red Hat OpenStack Platform with director guide. 
Inspect the node hardware: For more information, see Creating an inventory of the bare-metal node hardware in the Installing and managing Red Hat OpenStack Platform with director guide. Tag each bare metal node that you want to designate for GPU workloads with a custom GPU resource class: Replace <node> with the ID of the baremetal node. Add the ComputeGpu role to your node definition file, overcloud-baremetal-deploy.yaml , and define any predictive node placements, resource classes, network topologies, or other attributes that you want to assign to your nodes: 1 You can reuse an existing network topology or create a new custom network interface template for the role. For more information, see Custom network interface templates in the Installing and managing Red Hat OpenStack Platform with director guide. If you do not define the network definitions by using the network_config property, then the default network definitions are used. For more information about the properties you can use to configure node attributes in your node definition file, see Bare metal node provisioning attributes . For an example node definition file, see Example node definition file . Run the provisioning command to provision the new nodes for your role: Replace <stack> with the name of the stack for which the bare-metal nodes are provisioned. If not specified, the default is overcloud . Include the --network-config optional argument to provide the network definitions to the cli-overcloud-node-network-config.yaml Ansible playbook. If you do not define the network definitions by using the network_config property, then the default network definitions are used. Monitor the provisioning progress in a separate terminal. When provisioning is successful, the node state changes from available to active : If you did not run the provisioning command with the --network-config option, then configure the <Role>NetworkConfigTemplate parameters in your network-environment.yaml file to point to your NIC template files: Replace <gpu_net_top> with the name of the file that contains the network topology of the ComputeGpu role, for example, compute.yaml to use the default network topology. 10.2.3. Configuring the Compute node for vGPU and deploying the overcloud You need to retrieve and assign the vGPU type that corresponds to the physical GPU device in your environment, and prepare the environment files to configure the Compute node for vGPU. Procedure Install Red Hat Enterprise Linux and the NVIDIA GRID driver on a temporary Compute node and launch the node. Virtual GPUs are mediated devices, or mdev type devices. Retrieve the PCI address for each mdev device on each Compute node: The PCI address is used as the device driver directory name, for example, 0000:84:00.0 . Review the supported mdev types for each available pGPU device on each Compute node to discover the available vGPU types: Replace <mdev_device> with the PCI address for the mdev device, for example, 0000:84:00.0 . For example, the following Compute node has 4 pGPUs, and each pGPU supports the same 11 vGPU types: Create a gpu.yaml file to specify the vGPU types that each GPU device supports: Optional: To configure more than one vGPU type, map the supported vGPU types to the pGPUs: Replace <vgpu_type> with the name of the vGPU type to create a label for the vGPU group, for example, vgpu_nvidia-35 . Use a comma-separated list of vgpu_<vgpu_type> definitions to map additional vGPU types. 
Replace <pci_address> with the PCI address of a pGPU device that supports the vGPU type, for example, 0000:84:00.0 . Use a comma-separated list of <pci_address> definitions to map the vGPU group to additional pGPUs. Example: NovaVGPUTypesDeviceAddressesMapping: {'vgpu_nvidia-35': ['0000:84:00.0', '0000:85:00.0'],'vgpu_nvidia-36': ['0000:86:00.0']} The nvidia-35 vGPU type is supported by the pGPUs that are in the PCI addresses 0000:84:00.0 and 0000:85:00.0 . The nvidia-36 vGPU type is supported only by the pGPUs that are in the PCI address 0000:86:00.0 . Save the updates to your Compute environment file. Add your new role and environment files to the stack with your other environment files and deploy the overcloud: 10.3. Creating a custom vGPU resource provider trait You can create custom resource provider traits for each vGPU type that your RHOSP environment supports. You can then create flavors that your cloud users can use to launch instances on hosts that have those custom traits. Custom traits are defined in uppercase letters, and must begin with the prefix CUSTOM_ . For more information on resource provider traits, see Filtering by resource provider traits . Procedure Create a new trait: Replace <TRAIT_NAME> with the name of the trait. The name can contain only the letters A through Z, the numbers 0 through 9 and the underscore "_" character. Collect the existing resource provider traits of each host: Check the existing resource provider traits for the traits you require a host or host aggregate to have: If the traits you require are not already added to the resource provider, then add the existing traits and your required traits to the resource providers for each host: Replace <TRAIT_NAME> with the name of the trait that you want to add to the resource provider. You can use the --trait option more than once to add additional traits, as required. Note This command performs a full replacement of the traits for the resource provider. Therefore, you must retrieve the list of existing resource provider traits on the host and set them again to prevent them from being removed. 10.4. Creating a custom GPU instance image To enable your cloud users to create instances that use a virtual GPU (vGPU), you can create a custom vGPU-enabled image for launching instances. Use the following procedure to create a custom vGPU-enabled instance image with the NVIDIA GRID guest driver and license file. Prerequisites You have configured and deployed the overcloud with GPU-enabled Compute nodes. Procedure Log in to the undercloud as the stack user. Source the overcloudrc credential file: Create an instance with the hardware and software profile that your vGPU instances require: Replace <flavor> with the name or ID of the flavor that has the hardware profile that your vGPU instances require. For information about creating a vGPU flavor, see Creating a vGPU flavor for instances . Replace <image> with the name or ID of the image that has the software profile that your vGPU instances require. For information about downloading RHEL cloud images, see Creating RHEL KVM or RHOSP-compatible images in Creating and managing images . Log in to the instance as a cloud-user. Create the gridd.conf NVIDIA GRID license file on the instance, following the NVIDIA guidance: Licensing an NVIDIA vGPU on Linux by Using a Configuration File . Install the GPU driver on the instance. For more information about installing an NVIDIA driver, see Installing the NVIDIA vGPU Software Graphics Driver on Linux . 
Note: Use the hw_video_model image property to define the GPU driver type. You can choose none if you want to disable the emulated GPUs for your vGPU instances. For more information about supported drivers, see Image configuration parameters.
Create an image snapshot of the instance:
Optional: Delete the instance.
10.5. Creating a vGPU flavor for instances
To enable your cloud users to create instances for GPU workloads, you can create a GPU flavor that can be used to launch vGPU instances, and assign the vGPU resource to that flavor.
Prerequisites
You have configured and deployed the overcloud with GPU-designated Compute nodes.
Procedure
Create an NVIDIA GPU flavor, for example:
Assign a vGPU resource to the flavor:
Note: You can assign only one vGPU for each instance.
Optional: To customize the flavor for a specific vGPU type, add a required trait to the flavor:
For information on how to create custom resource provider traits for each vGPU type, see Creating a custom vGPU resource provider trait.
10.6. Launching a vGPU instance
You can create a GPU-enabled instance for GPU workloads.
Procedure
Create an instance using a GPU flavor and image, for example:
Log in to the instance as a cloud-user.
To verify that the GPU is accessible from the instance, enter the following command from the instance:
10.7. Enabling PCI passthrough for a GPU device
You can use PCI passthrough to attach a physical PCI device, such as a graphics card, to an instance. If you use PCI passthrough for a device, the instance reserves exclusive access to the device for performing tasks, and the device is not available to the host.
Prerequisites
The pciutils package is installed on the physical servers that have the PCI cards.
The driver for the GPU device must be installed on the instance that the device is passed through to. Therefore, you need to have created a custom instance image that has the required GPU driver installed. For more information about how to create a custom instance image with the GPU driver installed, see Creating a custom GPU instance image.
Procedure
To determine the vendor ID and product ID for each passthrough device type, enter the following command on the physical server that has the PCI cards:
For example, to determine the vendor and product ID for an NVIDIA GPU, enter the following command:
To determine if each PCI device has Single Root I/O Virtualization (SR-IOV) capabilities, enter the following command on the physical server that has the PCI cards:
To configure the Controller node on the overcloud for PCI passthrough, create an environment file, for example, pci_passthru_controller.yaml.
Add PciPassthroughFilter to the NovaSchedulerEnabledFilters parameter in pci_passthru_controller.yaml:
To specify the PCI alias for the devices on the Controller node, add the following configuration to pci_passthru_controller.yaml:
If the PCI device has SR-IOV capabilities:
If the PCI device does not have SR-IOV capabilities:
For more information on configuring the device_type field, see PCI passthrough device type field.
Note: If the nova-api service is running in a role other than the Controller, then replace ControllerExtraConfig with the user role, in the format <Role>ExtraConfig.
To configure the Compute node on the overcloud for PCI passthrough, create an environment file, for example, pci_passthru_compute.yaml.
To specify the available PCIs for the devices on the Compute node, add the following to pci_passthru_compute.yaml:
You must create a copy of the PCI alias on the Compute node for instance migration and resize operations. To specify the PCI alias for the devices on the Compute node, add the following to pci_passthru_compute.yaml:
If the PCI device has SR-IOV capabilities:
If the PCI device does not have SR-IOV capabilities:
Note: The Compute node aliases must be identical to the aliases on the Controller node.
To enable IOMMU in the server BIOS of the Compute nodes to support PCI passthrough, add the KernelArgs parameter to pci_passthru_compute.yaml:
Note: When you first add the KernelArgs parameter to the configuration of a role, the overcloud nodes are automatically rebooted. If required, you can disable the automatic rebooting of nodes and instead perform node reboots manually after each overcloud deployment. For more information, see Configuring manual node reboot to define KernelArgs.
Add your custom environment files to the stack with your other environment files and deploy the overcloud:
Configure a flavor to request the PCI devices. The following example requests two devices through the t4 alias, each with a vendor ID of 10de and a product ID of 1eb8:
Verification
Create an instance with a PCI passthrough device:
Replace <custom_gpu> with the name of your custom instance image that has the required GPU driver installed.
Log in to the instance as a cloud user. For more information, see Connecting to an instance.
To verify that the GPU is accessible from the instance, enter the following command from the instance:
To check the NVIDIA System Management Interface status, enter the following command from the instance:
Example output:
[ "sudo podman exec -it nova_libvirt virsh dumpxml <instance_name> | grep mdev", "sudo podman exec -it nova_virtqemud virsh dumpxml <instance_name> | grep mdev", "[stack@director ~]USD source ~/stackrc", "(undercloud)USD openstack overcloud roles generate -o /home/stack/templates/roles_data_gpu.yaml Compute:ComputeGpu Compute Controller", "(undercloud)USD openstack overcloud node introspect --all-manageable --provide", "(undercloud)USD openstack baremetal node set --resource-class baremetal.GPU <node>", "- name: Controller count: 3 - name: Compute count: 3 - name: ComputeGpu count: 1 defaults: resource_class: baremetal.GPU network_config: template: /home/stack/templates/nic-config/myRoleTopology.j2 1", "(undercloud)USD openstack overcloud node provision --stack <stack> [--network-config \\] --output /home/stack/templates/overcloud-baremetal-deployed.yaml /home/stack/templates/overcloud-baremetal-deploy.yaml", "(undercloud)USD watch openstack baremetal node list", "parameter_defaults: ComputeNetworkConfigTemplate: /home/stack/templates/nic-configs/compute.j2 ComputeGpuNetworkConfigTemplate: /home/stack/templates/nic-configs/<gpu_net_top>.j2 ControllerNetworkConfigTemplate: /home/stack/templates/nic-configs/controller.j2", "ls /sys/class/mdev_bus/", "ls /sys/class/mdev_bus/<mdev_device>/mdev_supported_types", "ls /sys/class/mdev_bus/0000:84:00.0/mdev_supported_types: nvidia-35 nvidia-36 nvidia-37 nvidia-38 nvidia-39 nvidia-40 nvidia-41 nvidia-42 nvidia-43 nvidia-44 nvidia-45 ls /sys/class/mdev_bus/0000:85:00.0/mdev_supported_types: nvidia-35 nvidia-36 nvidia-37 nvidia-38 nvidia-39 nvidia-40 nvidia-41 nvidia-42 nvidia-43 nvidia-44 nvidia-45 ls /sys/class/mdev_bus/0000:86:00.0/mdev_supported_types: nvidia-35 nvidia-36 nvidia-37 nvidia-38 nvidia-39 nvidia-40 nvidia-41 nvidia-42 nvidia-43 nvidia-44 nvidia-45 ls /sys/class/mdev_bus/0000:87:00.0/mdev_supported_types: nvidia-35 nvidia-36 nvidia-37 nvidia-38 nvidia-39 nvidia-40 nvidia-41 nvidia-42 nvidia-43 nvidia-44 nvidia-45", "parameter_defaults: ComputeGpuExtraConfig: nova::compute::vgpu::enabled_vgpu_types: - nvidia-35 - nvidia-36", "parameter_defaults: ComputeGpuExtraConfig: nova::compute::vgpu::enabled_vgpu_types: - nvidia-35 - nvidia-36 NovaVGPUTypesDeviceAddressesMapping: {'vgpu_<vgpu_type>': ['<pci_address>', '<pci_address>'],'vgpu_<vgpu_type>': ['<pci_address>', '<pci_address>']}", "(undercloud)USD openstack overcloud deploy --templates -e [your environment files] -r /home/stack/templates/roles_data_gpu.yaml -e /home/stack/templates/network-environment.yaml -e /home/stack/templates/gpu.yaml -e /home/stack/templates/overcloud-baremetal-deployed.yaml -e /home/stack/templates/node-info.yaml", "(overcloud)USD openstack --os-placement-api-version 1.6 trait create CUSTOM_<TRAIT_NAME>", "(overcloud)USD existing_traits=USD(openstack --os-placement-api-version 1.6 resource provider trait list -f value <host_uuid> | sed 's/^/--trait /')", "(overcloud)USD echo USDexisting_traits", "(overcloud)USD openstack --os-placement-api-version 1.6 resource provider trait set USDexisting_traits --trait CUSTOM_<TRAIT_NAME> <host_uuid>", "source ~/overcloudrc", "(overcloud)USD openstack server create --flavor <flavor> --image <image> temp_vgpu_instance", "(overcloud)USD openstack server image create --name vgpu_image temp_vgpu_instance", "(overcloud)USD openstack flavor create --vcpus 6 --ram 8192 --disk 100 m1.small-gpu", "(overcloud)USD openstack flavor set m1.small-gpu --property \"resources:VGPU=1\"", "(overcloud)USD openstack flavor set m1.small-gpu 
--property trait:CUSTOM_NVIDIA_11=required", "(overcloud)USD openstack server create --flavor m1.small-gpu --image vgpu_image --security-group web --nic net-id=internal0 --key-name lambda vgpu-instance", "lspci -nn | grep <gpu_name>", "lspci -nn | grep -i <gpu_name>", "lspci -nn | grep -i nvidia 3b:00.0 3D controller [0302]: NVIDIA Corporation TU104GL [Tesla T4] [10de:1eb8] (rev a1) d8:00.0 3D controller [0302]: NVIDIA Corporation TU104GL [Tesla T4] [10de:1db4] (rev a1)", "lspci -v -s 3b:00.0 3b:00.0 3D controller: NVIDIA Corporation TU104GL [Tesla T4] (rev a1) Capabilities: [bcc] Single Root I/O Virtualization (SR-IOV)", "parameter_defaults: NovaSchedulerEnabledFilters: - AvailabilityZoneFilter - ComputeFilter - ComputeCapabilitiesFilter - ImagePropertiesFilter - ServerGroupAntiAffinityFilter - ServerGroupAffinityFilter - PciPassthroughFilter - NUMATopologyFilter", "ControllerExtraConfig: nova::pci::aliases: - name: \"t4\" product_id: \"1eb8\" vendor_id: \"10de\" device_type: \"type-PF\" - name: \"v100\" product_id: \"1db4\" vendor_id: \"10de\" device_type: \"type-PF\"", "ControllerExtraConfig: nova::pci::aliases: - name: \"t4\" product_id: \"1eb8\" vendor_id: \"10de\" - name: \"v100\" product_id: \"1db4\" vendor_id: \"10de\"", "parameter_defaults: NovaPCIPassthrough: - vendor_id: \"10de\" product_id: \"1eb8\"", "ComputeExtraConfig: nova::pci::aliases: - name: \"t4\" product_id: \"1eb8\" vendor_id: \"10de\" device_type: \"type-PF\" - name: \"v100\" product_id: \"1db4\" vendor_id: \"10de\" device_type: \"type-PF\"", "ComputeExtraConfig: nova::pci::aliases: - name: \"t4\" product_id: \"1eb8\" vendor_id: \"10de\" - name: \"v100\" product_id: \"1db4\" vendor_id: \"10de\"", "parameter_defaults: ComputeParameters: KernelArgs: \"intel_iommu=on iommu=pt\"", "(undercloud)USD openstack overcloud deploy --templates -e [your environment files] -e /home/stack/templates/pci_passthru_controller.yaml -e /home/stack/templates/pci_passthru_compute.yaml", "openstack flavor set m1.large --property \"pci_passthrough:alias\"=\"t4:2\"", "openstack server create --flavor m1.large --image <custom_gpu> --wait test-pci", "lspci -nn | grep <gpu_name>", "nvidia-smi", "----------------------------------------------------------------------------- | NVIDIA-SMI 440.33.01 Driver Version: 440.33.01 CUDA Version: 10.2 | |------------------------------- ---------------------- ----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | |=============================== ====================== ======================| | 0 Tesla T4 Off | 00000000:01:00.0 Off | 0 | | N/A 43C P0 20W / 70W | 0MiB / 15109MiB | 0% Default | ------------------------------- ---------------------- ---------------------- ----------------------------------------------------------------------------- | Processes: GPU Memory | | GPU PID Type Process name Usage | |=============================================================================| | No running processes found | -----------------------------------------------------------------------------" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/configuring_the_compute_service_for_instance_creation/assembly_configuring-virtual-gpus-for-instances_vgpu
Appendix F. High Availability LVM (HA-LVM)
Appendix F. High Availability LVM (HA-LVM)
The Red Hat High Availability Add-On provides support for high availability LVM volumes (HA-LVM) in a failover configuration. This is distinct from active/active configurations enabled by the Clustered Logical Volume Manager (CLVM), which is a set of clustering extensions to LVM that allow a cluster of computers to manage shared storage.
When to use CLVM or HA-LVM should be based on the needs of the applications or services being deployed.
If the applications are cluster-aware and have been tuned to run simultaneously on multiple machines at a time, then CLVM should be used. Specifically, if more than one node of your cluster will require access to your storage which is then shared among the active nodes, then you must use CLVM. CLVM allows a user to configure logical volumes on shared storage by locking access to physical storage while a logical volume is being configured, and uses clustered locking services to manage the shared storage. For information on CLVM, and on LVM configuration in general, see Logical Volume Manager Administration.
If the applications run optimally in active/passive (failover) configurations where only a single node that accesses the storage is active at any one time, you should use High Availability Logical Volume Management agents (HA-LVM).
Most applications will run better in an active/passive configuration, as they are not designed or optimized to run concurrently with other instances. Choosing to run an application that is not cluster-aware on clustered logical volumes may result in degraded performance if the logical volume is mirrored. This is because there is cluster communication overhead for the logical volumes themselves in these instances. A cluster-aware application must be able to achieve performance gains above the performance losses introduced by cluster file systems and cluster-aware logical volumes. This is achievable for some applications and workloads more easily than others. Determining what the requirements of the cluster are and whether the extra effort toward optimizing for an active/active cluster will pay dividends is the way to choose between the two LVM variants. Most users will achieve the best HA results from using HA-LVM.
HA-LVM and CLVM are similar in that they prevent corruption of LVM metadata and its logical volumes, which could otherwise occur if multiple machines are allowed to make overlapping changes. HA-LVM imposes the restriction that a logical volume can only be activated exclusively; that is, active on only one machine at a time. This means that only local (non-clustered) implementations of the storage drivers are used. Avoiding the cluster coordination overhead in this way increases performance. CLVM does not impose these restrictions - a user is free to activate a logical volume on all machines in a cluster; this forces the use of cluster-aware storage drivers, which allow for cluster-aware file systems and applications to be put on top.
HA-LVM can be set up to use one of two methods for achieving its mandate of exclusive logical volume activation.
The preferred method uses CLVM, but it will only ever activate the logical volumes exclusively. This has the advantage of easier setup and better prevention of administrative mistakes (like removing a logical volume that is in use). In order to use CLVM, the High Availability Add-On and Resilient Storage Add-On software, including the clvmd daemon, must be running. The procedure for configuring HA-LVM using this method is described in Section F.1, "Configuring HA-LVM Failover with CLVM (preferred)".
The second method uses local machine locking and LVM "tags". This method has the advantage of not requiring any LVM cluster packages; however, there are more steps involved in setting it up and it does not prevent an administrator from mistakenly removing a logical volume from a node in the cluster where it is not active. The procedure for configuring HA-LVM using this method is described in Section F.2, "Configuring HA-LVM Failover with Tagging".
F.1. Configuring HA-LVM Failover with CLVM (preferred)
To set up HA-LVM failover (using the preferred CLVM variant), perform the following steps:
Ensure that your system is configured to support CLVM, which requires the following:
The High Availability Add-On and Resilient Storage Add-On are installed, including the cmirror package if the CLVM logical volumes are to be mirrored.
The locking_type parameter in the global section of the /etc/lvm/lvm.conf file is set to the value '3'.
The High Availability Add-On and Resilient Storage Add-On software, including the clvmd daemon, must be running. For CLVM mirroring, the cmirrord service must be started as well.
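A quick way to confirm these prerequisites on each cluster node might look like the following sketch. It assumes the lvmconf helper shipped with the LVM packages is available; lvmconf --enable-cluster sets locking_type to 3 in /etc/lvm/lvm.conf:
# Verify the cluster locking type (the value should be 3)
grep locking_type /etc/lvm/lvm.conf
# If necessary, enable cluster locking and start the cluster LVM daemon
lvmconf --enable-cluster
service clvmd start
chkconfig clvmd on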
Create the logical volume and file system using standard LVM and file system commands, as in the following example. For information on creating LVM logical volumes, refer to Logical Volume Manager Administration.
Edit the /etc/cluster/cluster.conf file to include the newly created logical volume as a resource in one of your services. Alternatively, you can use Conga or the ccs command to configure LVM and file system resources for the cluster. The following is a sample resource manager section from the /etc/cluster/cluster.conf file that configures a CLVM logical volume as a cluster resource:
[ "pvcreate /dev/sd[cde]1 vgcreate -cy shared_vg /dev/sd[cde]1 lvcreate -L 10G -n ha_lv shared_vg mkfs.ext4 /dev/shared_vg/ha_lv lvchange -an shared_vg/ha_lv", "<rm> <failoverdomains> <failoverdomain name=\"FD\" ordered=\"1\" restricted=\"0\"> <failoverdomainnode name=\"neo-01\" priority=\"1\"/> <failoverdomainnode name=\"neo-02\" priority=\"2\"/> </failoverdomain> </failoverdomains> <resources> <lvm name=\"lvm\" vg_name=\"shared_vg\" lv_name=\"ha-lv\"/> <fs name=\"FS\" device=\"/dev/shared_vg/ha-lv\" force_fsck=\"0\" force_unmount=\"1\" fsid=\"64050\" fstype=\"ext4\" mountpoint=\"/mnt\" options=\"\" self_fence=\"0\"/> </resources> <service autostart=\"1\" domain=\"FD\" name=\"serv\" recovery=\"relocate\"> <lvm ref=\"lvm\"/> <fs ref=\"FS\"/> </service> </rm>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/ap-ha-halvm-ca
Chapter 6. Configuring fencing for an HA cluster on Red Hat OpenStack Platform
Chapter 6. Configuring fencing for an HA cluster on Red Hat OpenStack Platform
Fencing configuration ensures that a malfunctioning node on your HA cluster is automatically isolated. This prevents the node from consuming the cluster's resources or compromising the cluster's functionality.
Use the fence_openstack fence agent to configure a fence device for an HA cluster on RHOSP. You can view the options for the RHOSP fence agent with the following command.
Prerequisites
A configured HA cluster running on RHOSP
Access to the RHOSP APIs, using the RHOSP authentication method you will use for cluster configuration, as described in Setting up an authentication method for RHOSP
The cluster property stonith-enabled set to true, which is the default value. Red Hat does not support clusters when fencing is disabled, as it is not suitable for a production environment. Run the following command to ensure that fencing is enabled.
Procedure
Complete the following steps from any node in the cluster.
Determine the UUID for each node in your cluster. The following command displays the full list of all of the RHOSP instance names within the ha-example project along with the UUID for the cluster node associated with that RHOSP instance, under the heading ID. The node host name might not match the RHOSP instance name.
Create the fencing device, using the pcmk_host_map parameter to map each node in the cluster to the UUID for that node. Each of the following example fence device creation commands uses a different authentication method.
The following command creates a fence_openstack fencing device for a 3-node cluster, using a clouds.yaml configuration file for authentication. For the cloud= parameter, specify the name of the cloud in your clouds.yaml file.
The following command creates a fence_openstack fencing device, using an OpenRC environment script for authentication.
The following command creates a fence_openstack fencing device, using a user name and password for authentication. The authentication parameters, including username, password, project_name, and auth_url, are provided by the RHOSP administrator.
To ensure immediate and complete fencing, disable ACPI Soft-Off on all cluster nodes. For information about disabling ACPI Soft-Off, see Configuring ACPI For use with integrated fence devices.
Verification
From one node in the cluster, fence a different node in the cluster and check the cluster status. If the fenced node is offline, the fencing operation was successful.
Restart the node that you fenced and check the status to verify that the node started.
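As an additional check, either before or after the fence test, you can confirm that the cluster has registered and started the fence device. The following is a minimal sketch; the device name fenceopenstack matches the example creation commands above:
pcs stonith config fenceopenstack
pcs stonith status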
[ "pcs stonith describe fence_openstack", "pcs property config --all Cluster Properties: . . . stonith-enabled: true", "openstack --os-cloud=\"ha-example\" server list ... | ID | Name | | 6d86fa7d-b31f-4f8a-895e-b3558df9decb|testnode-node03-vm| | 43ed5fe8-6cc7-4af0-8acd-a4fea293bc62|testnode-node02-vm| | 4df08e9d-2fa6-4c04-9e66-36a6f002250e|testnode-node01-vm|", "pcs stonith create fenceopenstack fence_openstack pcmk_host_map=\"node01:4df08e9d-2fa6-4c04-9e66-36a6f002250e;node02:43ed5fe8-6cc7-4af0-8acd-a4fea293bc62;node03:6d86fa7d-b31f-4f8a-895e-b3558df9decb\" power_timeout=\"240\" pcmk_reboot_timeout=\"480\" pcmk_reboot_retries=\"4\" cloud=\"ha-example\"", "pcs stonith create fenceopenstack fence_openstack pcmk_host_map=\"node01:4df08e9d-2fa6-4c04-9e66-36a6f002250e;node02:43ed5fe8-6cc7-4af0-8acd-a4fea293bc62;node03:6d86fa7d-b31f-4f8a-895e-b3558df9decb\" power_timeout=\"240\" pcmk_reboot_timeout=\"480\" pcmk_reboot_retries=\"4\" openrc=\"/root/openrc\"", "pcs stonith create fenceopenstack fence_openstack pcmk_host_map=\"node01:4df08e9d-2fa6-4c04-9e66-36a6f002250e;node02:43ed5fe8-6cc7-4af0-8acd-a4fea293bc62;node03:6d86fa7d-b31f-4f8a-895e-b3558df9decb\" power_timeout=\"240\" pcmk_reboot_timeout=\"480\" pcmk_reboot_retries=\"4\" username=\"XXX\" password=\"XXX\" project_name=\"rhelha\" auth_url=\"XXX\" user_domain_name=\"Default\"", "pcs stonith fence node02 pcs status", "pcs cluster start node02 pcs status" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_a_red_hat_high_availability_cluster_on_red_hat_openstack_platform/configuring-fencing-for-an-ha-cluster-on-red-hat-openstack-platform_configurng-a-red-hat-high-availability-cluster-on-red-hat-openstack-platform
Configuring client-side notifications for Cryostat
Configuring client-side notifications for Cryostat Red Hat build of Cryostat 2 Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_build_of_cryostat/2/html/configuring_client-side_notifications_for_cryostat/index
20.10. Connecting the Serial Console for the Guest Virtual Machine
20.10. Connecting the Serial Console for the Guest Virtual Machine
The virsh console domain [--devname devicename] [--force] [--safe] command connects the virtual serial console for the guest virtual machine. This is very useful, for example, for guests that do not provide VNC or SPICE protocols (and thus do not offer video display for GUI tools) and that do not have a network connection (and thus cannot be interacted with using SSH).
The optional --devname parameter refers to the device alias of an alternate console, serial, or parallel device configured for the guest virtual machine. If this parameter is omitted, the primary console will be opened. If the --safe option is specified, the connection is only attempted if the driver supports safe console handling. This option specifies that the server has to ensure exclusive access to console devices. Optionally, the --force option may be specified, which requests to disconnect any existing sessions, such as in the case of a broken connection.
Example 20.19. How to start a guest virtual machine in console mode
The following example connects to the serial console of the previously created guest1 virtual machine using safe console handling:
# virsh console guest1 --safe
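If the guest defines more than one console or serial device, the --devname parameter selects which one to attach to. The following sketch connects to a second serial device; the serial1 alias is hypothetical and depends on the device aliases defined in the guest virtual machine's XML configuration:
# virsh console guest1 --devname serial1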
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-domain_commands-connecting_the_serial_console_for_the_guest_virtual_machine