Chapter 4. Installing the JBoss Data Virtualization Development Tools
Chapter 4. Installing the JBoss Data Virtualization Development Tools
4.1. Installing JBoss Data Virtualization Development Tools
Prerequisites
The following software must be installed:
- Red Hat Developer Studio 12.0 with Integration Stack. On the Red Hat Customer Portal, click Downloads > Red Hat JBoss Data Virtualization and download Red Hat Developer Studio Integration Stack 12.0.0 Stand-alone Installer.
  Important: Make sure you use Red Hat Developer Studio 12.0.0, as it is the last version that works with Teiid Designer.
  Important: During installation, make sure you select Red Hat Data Virtualization Development on the Select Additional Features to Install screen.
- An archiving tool for extracting the contents of compressed files.
- OpenJDK (or another supported Java Virtual Machine).
Procedure 4.1. Install the Latest Version of Teiid Designer
1. Go to https://access.redhat.com/ and log in to the Customer Portal with your Red Hat login.
2. Click Downloads > Red Hat JBoss Data Virtualization.
3. Click Download next to the Red Hat JBoss Data Virtualization Teiid Designer [VERSION] Update Site Zip option and save the datavirt-teiid-designer-[VERSION]-updatesite.zip file (an optional command-line check of the download follows this procedure).
4. Start Red Hat Developer Studio.
5. In Red Hat Developer Studio, select Help > Install New Software... from the main menu.
6. On the Available Software page, click the Add... button.
7. In the Add Repository dialog, enter "Red Hat Data Virtualization Teiid Designer" (or another unique name) in the Name field. Click the Archive... button, navigate to the location where the datavirt-teiid-designer-[VERSION]-updatesite.zip file was downloaded, and click OK. Click Add.
8. Back on the Available Software page, select Data Virtualization and all of its children from the list of available items, then click Next.
9. On the Install Details page, review the items to be installed and click Next.
10. Accept any additional dependencies and license agreements, then click Finish.
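As an optional command-line check before you add the archive in Developer Studio (a sketch only; the [VERSION] placeholder stays as in the procedure above):

unzip -l datavirt-teiid-designer-[VERSION]-updatesite.zip   # confirm the archive is readable and lists its features/ and plugins/ entries
java -version                                               # confirm OpenJDK or another supported Java Virtual Machine is on the PATH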
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/installation_guide/chap-installing_the_product
Chapter 23. Enhancing security with the kernel integrity subsystem
Chapter 23. Enhancing security with the kernel integrity subsystem You can improve the protection of your system by using components of the kernel integrity subsystem. Learn more about the relevant components and their configuration. Note Red Hat products distributed through methods such as RPMs, ISOs, and zip files are signed with cryptographic signatures. The RHEL Kernel keyring system includes certificates for Red Hat product signing keys only. Therefore, to ensure kernels are tamper-proof, you must not use other hash features. 23.1. The kernel integrity subsystem The integrity subsystem is the kernel component that maintains the overall integrity of system data. This subsystem helps in maintaining the system in the same state from the time it was built. Using this subsystem, you can protect executable files, libraries, and configuration files. The kernel integrity subsystem consists of two major components: Integrity Measurement Architecture (IMA) IMA measures file content whenever it is executed or accessed by cryptographically hashing or signing with cryptographic keys. The keys are stored in the kernel keyring subsystem. IMA places the measured values within the kernel's memory space. This prevents users of the system from modifying the measured values. IMA allows local and remote parties to verify the measured values. IMA provides local validation of the current content of files against the values previously stored in the measurement list within the kernel memory. This extension forbids performing any operation on a specific file in case the current and the measures do not match. Extended Verification Module (EVM) EVM protects extended attributes of files (also known as xattr ) related to system security, such as IMA measurements and SELinux attributes. EVM cryptographically hashes their corresponding values or signs them with cryptographic keys. The keys are stored in the kernel keyring subsystem. The kernel integrity subsystem can use the Trusted Platform Module (TPM) to further harden system security. A TPM is a hardware, firmware, or virtual component with integrated cryptographic keys that are built according to the TPM specification by the Trusted Computing Group (TCG) for important cryptographic functions. By providing cryptographic functions from a protected and tamper-proof area of the hardware chip, TPMs are protected from software-based attacks. TPMs provide the following features: Random-number generator Generator and secure storage for cryptographic keys Hashing generator Remote attestation Additional resources Security hardening Basic and advanced configuration of Security-Enhanced Linux (SELinux) 23.2. Trusted and encrypted keys Trusted keys and encrypted keys are an important part of enhancing system security. Trusted and encrypted keys are variable-length symmetric keys generated by the kernel that use the kernel keyring service. You can verify the integrity of the keys, for example, to allow the extended verification module (EVM) to verify and confirm the integrity of a running system. User-level programs can only access the keys in the form of encrypted blobs . Trusted keys Trusted keys need the Trusted Platform Module (TPM) chip, which is used to both create and encrypt (seal) the keys. Each TPM has a master wrapping key, called the storage root key, which is stored within the TPM itself. Note RHEL 9 supports only TPM 2.0. If you must use TPM 1.2, use RHEL 8. 
For more information, see the Red Hat Knowledgebase solution Is Trusted Platform Module (TPM) supported by Red Hat? . You can verify the status of TPM 2.0 chip: You can also enable a TPM 2.0 chip and manage the TPM 2.0 device through settings in the machine firmware. In addition to that, you can seal the trusted keys with a specific set of the TPM's platform configuration register (PCR) values. PCR contains a set of integrity-management values that reflect the firmware, boot loader, and operating system. PCR-sealed keys can only be decrypted by the TPM on the system where they were encrypted. However, when you load a PCR-sealed trusted key to a keyring, its associated PCR values are verified. After verification, you can update the key with new or future PCR values, for example, to support booting a new kernel. Also, you can save a single key as multiple blobs, each with a different PCR value. Encrypted keys Encrypted keys do not require a TPM, because they use the kernel Advanced Encryption Standard (AES), which makes them faster than trusted keys. Encrypted keys are created using kernel-generated random numbers and encrypted by a master key when they are exported into user-space blobs. The master key is either a trusted key or a user key. If the master key is not trusted, the security of the encrypted key depends on the user key that was used to encrypt it. 23.3. Working with trusted keys You can improve system security by using the keyctl utility to create, export, load and update trusted keys. Prerequisites Trusted Platform Module (TPM) is enabled and active. See The kernel integrity subsystem and Trusted and encrypted keys . You can verify that your system has a TPM by entering the tpm2_pcrread command. If the output from this command displays several hashes, you have a TPM. Procedure Create a 2048-bit RSA key with an SHA-256 primary storage key with a persistent handle of, for example, 81000001 , by using one of the following utilities: By using the tss2 package: By using the tpm2-tools package: Create a trusted key by using a TPM 2.0 with the syntax of keyctl add trusted <NAME> "new <KEY_LENGTH> keyhandle= <PERSISTENT-HANDLE> [options]" <KEYRING> . In this example, the persistent handle is 81000001 . The command creates a trusted key called kmk with the length of 32 bytes (256 bits) and places it in the user keyring ( @u ). The keys may have a length of 32 to 128 bytes (256 to 1024 bits). List the current structure of the kernel keyrings: Export the key to a user-space blob by using the serial number of the trusted key: The command uses the pipe subcommand and the serial number of kmk . Load the trusted key from the user-space blob: Create secure encrypted keys that use the TPM-sealed trusted key ( kmk ). Follow this syntax: keyctl add encrypted <NAME> "new [FORMAT] <KEY_TYPE>:<PRIMARY_KEY_NAME> <KEY_LENGTH>" <KEYRING> : Additional resources the keyctl(1) manual page 23.4. Working with encrypted keys You can improve system security on systems where a Trusted Platform Module (TPM) is not available by managing encrypted keys. Encrypted keys, unless sealed by a trusted primary key, inherit the security level of the user primary key (random-number key) used for encryption. Therefore, it is highly recommended to load the primary user key securely, ideally early in the boot process. Procedure Generate a user key by using a random sequence of numbers: The command generates a user key called kmk-user which acts as a primary key and is used to seal the actual encrypted keys. 
Generate an encrypted key using the primary key from the step: Verification List all keys in the specified user keyring: Additional resources The keyctl(1) manual page 23.5. Enabling IMA and EVM You can enable and configure Integrity measurement architecture (IMA) and extended verification module (EVM) to improve the security of the operating system. Important Always enable EVM together with IMA. Although you can enable EVM alone, EVM appraisal is only triggered by an IMA appraisal rule. Therefore, EVM does not protect file metadata such as SELinux attributes. If file metadata is tampered with offline, EVM can only prevent file metadata changes. It does not prevent file access, such as executing the file. Prerequisites Secure Boot is temporarily disabled. Note When Secure Boot is enabled, the ima_appraise=fix kernel command-line parameter does not work. The securityfs file system is mounted on the /sys/kernel/security/ directory and the /sys/kernel/security/integrity/ima/ directory exists. You can verify where securityfs is mounted by using the mount command: The systemd service manager is patched to support IMA and EVM on boot time. Verify by using the following command: For example: Procedure Enable IMA and EVM in the fix mode for the current boot entry and allow users to gather and update the IMA measurements by adding the following kernel command-line parameters: The command enables IMA and EVM in the fix mode for the current boot entry to gather and update the IMA measurements. The ima_policy=appraise_tcb kernel command-line parameter ensures that the kernel uses the default Trusted Computing Base (TCB) measurement policy and the appraisal step. The appraisal step forbids access to files whose prior and current measures do not match. Reboot to make the changes come into effect. Optional: Verify the parameters added to the kernel command line: Create a kernel master key to protect the EVM key: The kmk is kept entirely in the kernel space memory. The 32-byte long value of the kmk is generated from random bytes from the /dev/urandom file and placed in the user ( @u ) keyring. The key serial number is on the first line of the output. Create an encrypted EVM key based on the kmk : The command uses the kmk to generate and encrypt a 64-byte long user key (named evm-key ) and places it in the user ( @u ) keyring. The key serial number is on the first line of the output. Important It is necessary to name the user key as evm-key because that is the name the EVM subsystem is expecting and is working with. Create a directory for exported keys. Search for the kmk and export its unencrypted value into the new directory. Search for the evm-key and export its encrypted value into the new directory. The evm-key has been encrypted by the kernel master key earlier. Optional: View the newly created keys: Optional: If the keys are removed from the keyring, for example after system reboot, you can import the already exported kmk and evm-key instead of creating new ones. Import the kmk . Import the evm-key . Activate EVM. Relabel the whole system. Warning Enabling IMA and EVM without relabeling the system might make the majority of the files on the system inaccessible. Verification Verify that EVM has been initialized: 23.6. Collecting file hashes with integrity measurement architecture In the measurement phase, you can create file hashes and store them as extended attributes ( xattrs ) of those files. 
With the file hashes, you can generate either an RSA-based digital signature or a Hash-based Message Authentication Code (HMAC-SHA1) and prevent offline tampering attacks on the extended attributes. Prerequisites IMA and EVM are enabled. For more information, see Enabling integrity measurement architecture and extended verification module . A valid trusted key or encrypted key is stored in the kernel keyring. The ima-evm-utils , attr , and keyutils packages are installed. Procedure Create a test file: IMA and EVM ensure that the test_file example file has assigned hash values that are stored as its extended attributes. Inspect the file's extended attributes: The example output shows extended attributes with the IMA and EVM hash values and SELinux context. EVM adds a security.evm extended attribute related to the other attributes. At this point, you can use the evmctl utility on security.evm to generate either an RSA-based digital signature or a Hash-based Message Authentication Code (HMAC-SHA1). Additional resources Security hardening 23.7. Adding IMA signatures to package files To allow the kernel, Keylime, fapolicyd , and debuginfo packages to perform their integrity checks, you need to add IMA signatures to RPM files. After installing the rpm-plugin-ima plug-in, newly installed RPM files automatically have IMA signatures placed in the security.ima extended file attribute. However, you need to reinstall existing packages to obtain IMA signatures. Procedure Install the rpm-plugin-ima plug-in: Reinstall all packages: Verification Confirm that the reinstalled package file has a valid IMA signature. For example, to check the IMA signature of the /usr/bin/bash file, run the following command: Verify the IMA signature of a file with a specified certificate. For example, to check that the IMA signature of /usr/bin/bash is accessible by /usr/share/doc/kernel-keys/USD(uname -r)/ima.cer , run the following command:". 23.8. Enabling kernel runtime integrity monitoring You can enable kernel runtime integrity monitoring that IMA appraisal provides. Prerequisites The kernel installed on your system has version 5.14.0-359 or higher. The dracut package has version 057-43.git20230816 or higher. The keyutils package is installed. The ima-evm-utils package is installed. The files covered by the policy have valid signatures. For instructions, see Adding IMA signatures to package files . Procedure To copy the Red Hat IMA code signing key to the /etc/ima/keys file, run: To add the IMA code signing key to the .ima keyring, run: Depending on your threat model, define an IMA policy in the /etc/sysconfig/ima-policy file. For example, the following IMA policy checks the integrity of both executables and involved memory mapping library files: To load the IMA policy to make sure the kernel accepts this IMA policy, run: To enable the dracut integrity module to automatically load the IMA code signing key and the IMA policy, run: 23.9. Creating custom IMA keys using OpenSSL You can use OpenSSL to generate a CSR for your digital certificates to secure your code. The kernel searches the .ima keyring for a code signing key to verify an IMA signature. Before you add a code signing key to the .ima keyring, you need to ensure that IMA CA key signed this key in the .builtin_trusted_keys or .secondary_trusted_keys keyrings. Prerequisites The custom IMA CA key has the following extensions: the basic constraints extension with the CA boolean asserted. 
the KeyUsage extension with the keyCertSign bit asserted but without the digitalSignature asserted. The custom IMA code signing key falls under the following criteria: The IMA CA key signed this custom IMA code signing key. The custom key includes the subjectKeyIdentifier extension. Procedure To generate a custom IMA CA key pair, run: Optional: To check the content of the ima_ca.conf file, run: To generate a private key and a certificate signing request (CSR) for the IMA code signing key, run: Optional: To check the content of the ima.conf file, run: Use the IMA CA private key to sign the CSR to create the IMA code signing certificate: 23.10. Deploying a custom signed IMA policy for UEFI systems In the Secure Boot environment, you may want to only load a signed IMA policy signed by your custom IMA key. Prerequisites The MOK list contains the custom IMA key. For guidance, see Enrolling public key on target system by adding the public key to the MOK list . The kernel installed on your system has version 5.14.0-335 or higher. Procedure Enable Secure Boot. Permanently add the ima_policy=secure_boot kernel parameter. For instructions, see Configuring kernel parameters permanently with sysctl . Prepare your IMA policy by running the command: Sign the policy with your custom IMA code signing key by running the command: Load the IMA policy by running the command:
[ "cat /sys/class/tpm/tpm0/tpm_version_major 2", "TPM_DEVICE=/dev/tpm0 tsscreateprimary -hi o -st Handle 80000000 TPM_DEVICE=/dev/tpm0 tssevictcontrol -hi o -ho 80000000 -hp 81000001", "tpm2_createprimary --key-algorithm=rsa2048 --key-context=key.ctxt name-alg: value: sha256 raw: 0xb ... sym-keybits: 128 rsa: xxxxxx... tpm2_evictcontrol -c key.ctxt 0x81000001 persistentHandle: 0x81000001 action: persisted", "keyctl add trusted kmk \"new 32 keyhandle=0x81000001\" @u 642500861", "keyctl show Session Keyring -3 --alswrv 500 500 keyring: ses 97833714 --alswrv 500 -1 \\ keyring: uid.1000 642500861 --alswrv 500 500 \\ trusted: kmk", "keyctl pipe 642500861 > kmk.blob", "keyctl add trusted kmk \"load `cat kmk.blob`\" @u 268728824", "keyctl add encrypted encr-key \"new trusted:kmk 32\" @u 159771175", "keyctl add user kmk-user \"USD(dd if=/dev/urandom bs=1 count=32 2>/dev/null)\" @u 427069434", "keyctl add encrypted encr-key \"new user:kmk-user 32\" @u 1012412758", "keyctl list @u 2 keys in keyring: 427069434: --alswrv 1000 1000 user: kmk-user 1012412758: --alswrv 1000 1000 encrypted: encr-key", "mount securityfs on /sys/kernel/security type securityfs (rw,nosuid,nodev,noexec,relatime)", "grep < options > pattern < files >", "dmesg | grep -i -e EVM -e IMA -w [ 0.943873] ima: No TPM chip found, activating TPM-bypass! [ 0.944566] ima: Allocated hash algorithm: sha256 [ 0.944579] ima: No architecture policies found [ 0.944601] evm: Initialising EVM extended attributes: [ 0.944602] evm: security.selinux [ 0.944604] evm: security.SMACK64 (disabled) [ 0.944605] evm: security.SMACK64EXEC (disabled) [ 0.944607] evm: security.SMACK64TRANSMUTE (disabled) [ 0.944608] evm: security.SMACK64MMAP (disabled) [ 0.944609] evm: security.apparmor (disabled) [ 0.944611] evm: security.ima [ 0.944612] evm: security.capability [ 0.944613] evm: HMAC attrs: 0x1 [ 1.314520] systemd[1]: systemd 252-18.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified) [ 1.717675] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
[ 4.799436] systemd[1]: systemd 252-18.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)", "grubby --update-kernel=/boot/vmlinuz-USD(uname -r) --args=\"ima_policy=appraise_tcb ima_appraise=fix evm=fix\"", "cat /proc/cmdline BOOT_IMAGE=(hd0,msdos1)/vmlinuz-5.14.0-1.el9.x86_64 root=/dev/mapper/rhel-root ro crashkernel=1G-4G:192M,4G-64G:256M,64G-:512M resume=/dev/mapper/rhel-swap rd.lvm.lv=rhel/root rd.lvm.lv=rhel/swap rhgb quiet ima_policy=appraise_tcb ima_appraise=fix evm=fix", "keyctl add user kmk \"USD(dd if=/dev/urandom bs=1 count=32 2> /dev/null)\" @u 748544121", "keyctl add encrypted evm-key \"new user:kmk 64\" @u 641780271", "mkdir -p /etc/keys/", "keyctl pipe USD(keyctl search @u user kmk) > /etc/keys/kmk", "keyctl pipe USD(keyctl search @u encrypted evm-key) > /etc/keys/evm-key", "keyctl show Session Keyring 974575405 --alswrv 0 0 keyring: ses 299489774 --alswrv 0 65534 \\ keyring: uid.0 748544121 --alswrv 0 0 \\ user: kmk 641780271 --alswrv 0 0 \\_ encrypted: evm-key ls -l /etc/keys/ total 8 -rw-r--r--. 1 root root 246 Jun 24 12:44 evm-key -rw-r--r--. 1 root root 32 Jun 24 12:43 kmk", "keyctl add user kmk \"USD(cat /etc/keys/kmk)\" @u 451342217", "keyctl add encrypted evm-key \"load USD(cat /etc/keys/evm-key)\" @u 924537557", "echo 1 > /sys/kernel/security/evm", "find / -fstype xfs -type f -uid 0 -exec head -n 1 '{}' >/dev/null \\;", "dmesg | tail -1 [... ] evm: key initialized", "echo < Test_text > > test_file", "getfattr -m . -d test_file file: test_file security.evm=0sAnDIy4VPA0HArpPO/EqiutnNyBql security.ima=0sAQOEDeuUnWzwwKYk+n66h/vby3eD", "dnf install rpm-plugin-ima -y", "dnf reinstall '*' -y", "getfattr -m security.ima -d /usr/bin/bash", "'security.ima=0sAwIE0zIESQBnMGUCMFhf0iBeM7NjjhCCHVt4/ORx1eCegjrWSHzFbJMCsAhR9bYU2hNGjiWUYT2IIqWaaAIxALFGUkqGP5vDLuxQXibO9g7HFcfyZzRBY4rbKPsXcAIZRtDHVS5dQBZqM3hyS5v1MA=='", "evmctl ima_verify -k /usr/share/doc/kernel-keys/USD(uname -r)/ima.cer /usr/bin/bash", "'key 1: d3320449 /usr/share/doc/kernel-keys/5.14.0-359.el9.x86-64/ima.cer /usr/bin/bash:' verification is OK", "mkdir -p /etc/keys/ima cp /usr/share/doc/kernel-keys/USD(uname -r)/ima.cer /etc/ima/keys", "keyctl padd asymmetric RedHat-IMA %:.ima < /etc/ima/keys/ima.cer", "PROC_SUPER_MAGIC = 0x9fa0 dont_appraise fsmagic=0x9fa0 SYSFS_MAGIC = 0x62656572 dont_appraise fsmagic=0x62656572 DEBUGFS_MAGIC = 0x64626720 dont_appraise fsmagic=0x64626720 TMPFS_MAGIC = 0x01021994 dont_appraise fsmagic=0x1021994 RAMFS_MAGIC dont_appraise fsmagic=0x858458f6 DEVPTS_SUPER_MAGIC=0x1cd1 dont_appraise fsmagic=0x1cd1 BINFMTFS_MAGIC=0x42494e4d dont_appraise fsmagic=0x42494e4d SECURITYFS_MAGIC=0x73636673 dont_appraise fsmagic=0x73636673 SELINUX_MAGIC=0xf97cff8c dont_appraise fsmagic=0xf97cff8c SMACK_MAGIC=0x43415d53 dont_appraise fsmagic=0x43415d53 NSFS_MAGIC=0x6e736673 dont_appraise fsmagic=0x6e736673 EFIVARFS_MAGIC dont_appraise fsmagic=0xde5e81e4 CGROUP_SUPER_MAGIC=0x27e0eb dont_appraise fsmagic=0x27e0eb CGROUP2_SUPER_MAGIC=0x63677270 dont_appraise fsmagic=0x63677270 appraise func=BPRM_CHECK appraise func=FILE_MMAP mask=MAY_EXEC", "echo /etc/sysconfig/ima-policy > /sys/kernel/security/ima/policy echo USD? 
0", "echo 'add_dracutmodules+=\" integrity \"' > /etc/dracut.conf.d/98-integrity.conf dracut -f", "openssl req -new -x509 -utf8 -sha256 -days 3650 -batch -config ima_ca.conf -outform DER -out custom_ima_ca.der -keyout custom_ima_ca.priv", "cat ima_ca.conf [ req ] default_bits = 2048 distinguished_name = req_distinguished_name prompt = no string_mask = utf8only x509_extensions = ca [ req_distinguished_name ] O = YOUR_ORG CN = YOUR_COMMON_NAME IMA CA emailAddress = YOUR_EMAIL [ ca ] basicConstraints=critical,CA:TRUE subjectKeyIdentifier=hash authorityKeyIdentifier=keyid:always,issuer keyUsage=critical,keyCertSign,cRLSign", "openssl req -new -utf8 -sha256 -days 365 -batch -config ima.conf -out custom_ima.csr -keyout custom_ima.priv", "cat ima.conf [ req ] default_bits = 2048 distinguished_name = req_distinguished_name prompt = no string_mask = utf8only x509_extensions = code_signing [ req_distinguished_name ] O = YOUR_ORG CN = YOUR_COMMON_NAME IMA signing key emailAddress = YOUR_EMAIL [ code_signing ] basicConstraints=critical,CA:FALSE keyUsage=digitalSignature subjectKeyIdentifier=hash authorityKeyIdentifier=keyid:always,issuer", "openssl x509 -req -in custom_ima.csr -days 365 -extfile ima.conf -extensions code_signing -CA custom_ima_ca.der -CAkey custom_ima_ca.priv -CAcreateserial -outform DER -out ima.der", "evmctl ima_sign /etc/sysconfig/ima-policy -k < PATH_TO_YOUR_CUSTOM_IMA_KEY > Place your public certificate under /etc/keys/ima/ and add it to the .ima keyring", "keyctl padd asymmetric CUSTOM_IMA1 %:.ima < /etc/ima/keys/my_ima.cer", "echo /etc/sysconfig/ima-policy > /sys/kernel/security/ima/policy echo USD? 0" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_monitoring_and_updating_the_kernel/enhancing-security-with-the-kernel-integrity-subsystem_managing-monitoring-and-updating-the-kernel
Chapter 3. New features and enhancements
Chapter 3. New features and enhancements JBoss EAP 8.0 introduces the following new features and enhancements. 3.1. Jakarta EE 10 support JBoss EAP 8 provides support for Jakarta EE 10 and implements the Jakarta EE 10 Core Profile, Web Profile and Full Platform standards, including: Jakarta Activation 2.1 Jakarta Annotations 2.1 Jakarta Authentication 3.0 Jakarta Authorization 2.1 Jakarta Batch 2.1 Jakarta Bean Validation 3.0 Jakarta Concurrency 3.0 Jakarta Connectors 2.1 Jakarta Contexts and Dependency Injection 4.0 Jakarta Debugging Support for Other Languages 2.0 Jakarta Dependency Injection 2.0 Jakarta Enterprise Beans 4.0 Jakarta Enterprise Web Services 2.0 Jakarta Expression Language 5.0 Jakarta Interceptors 2.1 Jakarta JSON Binding 3.0 Jakarta JSON Processing 2.1 Jakarta Mail 2.1 Jakarta Messaging 3.1 Jakarta Persistence 3.1 Jakarta RESTful Web Services 3.1 Jakarta Security 3.0 Jakarta Server Faces 4.0 Jakarta Server Pages 3.1 Jakarta Servlet 6.0 Jakarta SOAP with Attachments 1.3 Jakarta Standard Tag Library 3.0 Jakarta Transactions 2.0 Jakarta WebSocket 2.1 Jakarta XML Binding 4.0 Jakarta XML Web Services 4.0 Jakarta EE 10 has many changes when compared to Jakarta EE 8. For more information, see How to migrate your JBoss EAP applications from Jakarta EE 8 to Jakarta EE 10 . Package Namespace Change The packages used for all EE APIs have changed from javax to jakarta . This follows the move of Java EE to the Eclipse Foundation and the establishment of Jakarta EE. Note This change does not affect javax packages that are part of Java SE. Additional resources For more information, see The javax to jakarta Package Namespace Change . 3.2. Red Hat Insights Java client JBoss EAP 8.0 version onward contains the Red Hat Insights Java client. The Red Hat Insights Java client is enabled for JBoss EAP only if JBoss EAP is installed on Red Hat Enterprise Linux (RHEL), and the RHEL system has Red Hat Insights client installed, configured, and registered. For more information, see the Client Configuration Guide for Red Hat Insights . The Red Hat Insights dashboard for Runtimes will be available in a future release on Red Hat Hybrid Cloud Console . Similar to the RHEL dashboard which is available on the Red Hat Hybrid Cloud Console, the Runtimes dashboard will show the inventory of the Runtimes installations, CVE details, and help you select the JVM options. You can opt-out of the Red Hat Insights client by setting the environment variable RHT_INSIGHTS_JAVA_OPT_OUT to true . For more information, see the knowledge base article Red Hat Insights for Runtimes . 3.3. Management console Inclusive language, label changes Toward Red Hat's commitment to replacing problematic language in our code, documentation, and web properties, beginning with 8.0, the JBoss EAP management console will display more inclusive wording and labels. Specifically, you will notice the following changes to the management console resource addresses and user interface elements: New term term primary master secondary slave blocklist blacklist allowlist whitelist Adding, editing, and removing constant HTTP headers to response messages In the JBoss EAP 8.0 management console, you can now add, edit, or remove constant HTTP response headers. To add a new path and header, from the Server page, select Constant Headers , then click Add . To edit or remove an existing path header, select the path whose header you want to modify, then click either Edit or Remove . 
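The Constant Headers workflow described above also has a management CLI equivalent. The sketch below assumes the console screen maps to the constant-headers attribute of the HTTP management interface and that the attribute takes a list of path/headers entries; confirm the exact structure with :read-resource-description before relying on it. Run the commands in bin/jboss-cli.sh --connect:

/core-service=management/management-interface=http-interface:write-attribute(name=constant-headers,value=[{path="/",headers=[{name="X-Frame-Options",value="SAMEORIGIN"}]}])
reload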
Displaying Java Message Service bridge statistics for processed messages A message bridge consumes messages from a source queue or topic, then sends them on to a target queue or topic, usually on a different server. A bridge can also send messages from one cluster to another. The Java Message Service (JMS) bridge provides statistics about messages that the bridge processed. Specifically, it collects the following data: number of messages successfully committed (message count) number of messages aborted (messages aborted) With this update, the JBoss EAP 8.0 management console includes a new JMS Bridge column to display these statistics in the Runtime section. Note that this new feature affects the /subsystem=messaging-activemq/jms-bridge=* resource. Configuring enhanced audit logging In the JBoss EAP 8.0 management console, you can configure the following two additional audit logging attributes in your /subsystem=elytron/syslog-audit-log=* resource: syslog-format Define the format for your audit log messages. Supported values are RFC3164 and RFC5424 . ("RFC" stands for "request for comments.") reconnect-attempts Define the maximum number of failed attempts JBoss EAP should make to connect to the syslog server before closing the endpoint. /deployment subresources require include-runtime=true With Red Hat JBoss Enterprise Application Platform 8.0, the submodel of /deployment has changed to runtime. For management operations that use /deployment subresources you must add include-runtime=true . Starting servers in suspended mode You can now use the JBoss EAP 8.0 management console to start servers in suspended mode. Select the new Start in suspended mode option, available in the following drop-down menus: Runtime > Topology Runtime > Server Groups Runtime > Server Groups > Server Runtime > Host > Server Configuring the certificate-authority attribute for the certificate-authority-account resource With JBoss EAP 8.0, you can use any certificate authority for your certificate-authority-account Elytron resource. Previously, JBoss EAP supported only the Let's Encrypt certificate authority, and the certificate-authority attribute was not configurable. With this update, you can add, configure, or remove any certificate authority by opening the JBoss EAP management console and clicking Configuration > Subsystems > Security > Other Settings > Other Settings > Certificate Authority . From there, click Add to add a new certificate authority. To modify one you already have, select it, then click Edit . To remove a certificate authority, select it, then click Remove . Configuring the OCSP as an Elytron trust manager With JBoss EAP 8.0, you can configure the Online Certificate Status Protocol (OCSP) as the trust manager for the Elytron undertow subsystem. Previously, JBoss EAP supported only a certificate revocation list (CRL) as trust manager. With this update, you can configure the OCSP as your trust manager by opening the JBoss EAP management console and clicking Configuration > Subsystems > Elytron > Other Settings > SSL > Trust Manager . , either select or create a trust manager and then, from the Trust Manager window, select the OCSP tab and click Add . Pausing Java Message Service topics From the JBoss EAP 8.0 management console, you can now navigate to Runtime > Messaging > Server > Server Name > Destination to select and then pause a Java Message Service (JMS) topic. After you address the related messaging issue, you can also resume the paused topic. 
JMS previously sent messages to all active subscribers without any way to interrupt them. Non-heap memory usage added to server status preview With JBoss EAP 8.0, you can see more information in the server status preview about the memory consumption of your server. Previously, the preview displayed only heap memory usage: Used and Committed . With this update, it also displays the same information for non-heap memory usage. Automatically add or update credential store passwords when you add or update a datasource Beginning with JBoss EAP 8.0, when you create a datasource from the management console, you can automatically add a password for that datasource to your credential store. From the management console, select Configuration > Subsystems > Datasources , then click Add to add a new datasource. , enter the credential store name where you want to save the password for the new datasource, an alias for the credential, and the plain text password you want to use. To modify an existing datasource, select it, then click Edit . Create, read, update, and delete Elytron resources From the JBoss EAP 8.0 management console, you can now create, read, update, or delete any of the following four evidence decoders: Aggregate Evidence Decoders Custom Evidence Decoders X500 Subject Evidence Decoders X509 Subject Alt Name Evidence Decoder To take one of these actions, navigate to Configuration > Subsystems > Security > Mappers & Decoders > Evidence Decoder . Viewing the deployment hash value The JBoss EAP 8.0 management console can now display your deployment hash value in the deployment preview. This means that you can determine at a glance whether your deployment was valid and successful. Adding and configuring interceptors in the EJB 3 subsystem From the JBoss EAP 8.0 management console, you can now add and configure system-wide, server-side interceptors in the ejb3 subsystem. From the console, select Configuration > EJB > Container to make your additions or changes. Configuring Infinispan distributed web session affinity With JBoss EAP 8.0, in the distributable-web subsystem, you now have more control over the affinity, or load balancer "stickiness", of a distributed web session. To change your session affinity to something other than the Primary-owner default, in the management console, click Configuration > Distributable Web > View > Infinispan Session . , choose a session and select Affinity to make your changes. Affinity options now include the following: Local None Primary-owner Ranked Previously, the only available affinity was Primary-owner . Configuring global directories in EE subsystem With the JBoss EAP 8.0 management console, you can now configure a new ee subsystem resource, /subsystem=ee/global-directory=* . You can use a global directory to add content to a deployment class path without listing the contents of the directory. To configure a global directory resource, navigate to Configuration > Subsystems > EE > Globals . Configuring cipher suites in Elytron With the JBoss EAP 8.0 management console, you can now enable TLS 1.3 cipher suites using the cipher-suite-names attribute to secure your network connection. 
Specifically, you can now configure the following elytron subsystem resources: /subsystem=elytron/client-ssl-context=* /subsystem=elytron/server-ssl-context=* To configure the cipher-suite-names attribute for the /subsystem=elytron/client-ssl-context=* resource from the management console, navigate to Configuration > Subsystems > Security > Other Settings > SSL > Client SSL Context . To configure the cipher-suite-names attribute for the /subsystem=elytron/server-ssl-context=* resource from the management console, navigate to Configuration > Subsystems > Security > Other Settings > SSL > Server SSL Context . Securing applications and management console with OIDC With the JBoss EAP 8.0, you can secure applications deployed to JBoss EAP, and the JBoss EAP management console with OpenID Connect (OIDC) from the management console. JBoss EAP 8.0 provides native support for OpenID Connect (OIDC) with the elytron-oidc-client subsystem. To configure the elytron-oidc-client subsystem from the management console, navigate to Configuration > Subsystems > Elytron OIDC Client . To secure applications deployed to JBoss EAP, configure the following resources: provider secure-deployment For more information, see Securing applications with OIDC in the Using single sign-on with JBoss EAP guide. To secure the JBoss EAP management interfaces, configure the following resources: provider secure-deployment secure-server Additionally, you can configure role-based access control (RBAC) for management console when securing it with OIDC by navigating to Access Control and clicking Enable RBAC . For more information, see Securing the JBoss EAP management console with an OpenID provider in the Using single sign-on with JBoss EAP guide. Note You can use the realm resource to configure a Red Hat build of Keycloak realm. This is provided for convenience. You can copy the configuration in the keycloak client adapter and use it in the realm resource configuration. However, using the provider resource is recommended instead. 3.4. Management CLI Registering web context when deploying an application You can use the deployment deploy-file command from the management command-line interface (CLI) to deploy applications to a standalone server or in a managed domain. Deploy an application to a standalone server Deploy an application to all server groups in a managed domain Deploy an application to specific server groups in a managed domain In the preceding examples, the default value for the runtime-name attribute is test-application.war . When specifying the runtime-name attribute with the --runtime-name option, you must include the .war extension in the name or the web context will not be registered by JBoss EAP. For example: 3.5. Security JAAS realm in the elytron subsystem In JBoss EAP 8.0, the legacy security subsystem has been removed. To continue using your custom login modules with the elytron subsystem, use the new Java Authentication and Authorization Service (JAAS) security realm, jaas-realm . Note jaas-realm only supports JAAS-compatible login modules. For information about JAAS, see Java Authentication and Authorization Service (JAAS) Reference Guide . jaas-realm does not support custom login modules that extend or are dependent upon PicketBox APIs. Although elytron subsystem provides jaas-realm , it is preferable to use other existing security realms that the subsystem provides. These include jdbc-realm , ldap-realm , token-realm , and others. 
You can also combine different security realms by configuring aggregate-realm , distributed-realm , or failover-realm . If none of these suits your purpose, implement a custom security realm and use it instead of custom login module. The following are cases where you should use jaas-realm instead of implementing a custom security realm: You are migrating to the elytron subsystem from legacy security and already have custom login modules implemented. You are migrating from other application servers to JBoss EAP and already have the login modules implemented. You require combining multiple login modules with various flags and options provided to those login modules. These flags and options might not be configurable for the provided security realms in the elytron subsystem. For more information, see Creating a JAAS realm in the Securing applications and management interfaces using multiple identity stores guide. Configure multiple certificate revocation lists in Elytron and Elytron client You can now configure multiple certificate revocation lists (CRL) in the elytron subsystem and WildFly Elytron client when you use several Certificate Authorities (CA). You can specify the list of CRLs to use in the certificate-revocation-lists attribute in the trust-manager . For more information, see Configuring certificate revocation checks in Elytron in the Configuring SSL/TLS in JBoss EAP guide. Keycloak SAML adapter feature pack The archive distribution of Keycloak SAML adapter is no longer provided with JBoss EAP. Instead, you can use the Keycloak SAML adapter feature pack to install the keycloak-saml subsystem and related configurations. The Keycloak SAML adapter feature pack provides the following layers that you can install depending on your use case: keycloak-saml keycloak-client-saml keycloak-client-saml-ejb For more information, see Using single sign-on with JBoss EAP guide . Native OpenID Connect client JBoss EAP now provides native support for OpenID Connect (OIDC) with the elytron-oidc-client subsystem. Therefore, Red Hat build of Keycloak Client Adapter is not provided in this release. The elytron-oidc-client subsystem acts as the Relying Party (RP). The elytron-oidc-client subsystem supports bearer-only authentication, and also provides multi-tenancy support. You can use the multi-tenancy support, for example, to authenticate users for an application from multiple Red Hat build of Keycloak realms. Note The JBoss EAP native OIDC client does not support RP-Initiated logout. You can use the elytron-oidc-client subsystem to secure applications deployed to JBoss EAP and the JBoss EAP management console with OIDC. Additionally, you can propagate the security identity, obtained from an OIDC provider, from a Servlet to Jakarta Enterprise Beans in both of the following cases: The Servlet and the Jakarta Enterprise Beans are in the same deployment. The Servlet and the Jakarta Enterprise Beans are in different deployments. For more information, see Using single sign-on with JBoss EAP guide . New hash-encoding and hash-charset attributes for hashed passwords You can now specify the character set and the string format for the hashed passwords that are stored in elytron subsystem security realms by using the hash-charset and hash-encoding attributes. The default hash-charset value is UTF-8 . You can set the hash-encoding value to either base64 or hex ; base64 is the default for all realms except the properties-realm where hex is the default. 
The new attributes are included in the following security realms: filesystem-realm jdbc-realm ldap-realm properties-realm For more information, see the Securing applications and management interfaces using an identity store guide. New encoding attribute for Elytron file-based audit log You can now specify the encoding for file-based audit logs in Elytron by using the encoding attribute. The default value is UTF-8 . The following values are possible: UTF-8 UTF-16BE UTF-16LE UTF-16 US-ASCII ISO-8859-1 For more information, see Elytron audit logging in the Securing applications and management interfaces using an identity store guide. SSLv2Hello Beginning with JBoss EAP 8.0 Beta, you can specify the SSLv2Hello protocol for server-ssl-context and client-ssl-context in the elytron subsystem. Warning You must configure another encryption protocol if you want to configure SSLv2Hello because the purpose of the latter is to determine which encryption protocols the connected server supports. IBM JDK does not support SSLv2Hello in its client, although a server-side connection always accepts this protocol. Updates to filesystem-realm You can now encrypt the clear passwords, hashed passwords, and attributes associated with identities in a filesystem-realm for better security. You can do this in two ways: Create an encrypted filesystem-realm by referencing a secret key in the add operation. Encrypt an existing filesystem-realm using the new filesystem-realm-encrypt command in the WildFly Elytron Tool. You can now also enable integrity checks for a filesystem-realm to ensure that the identities in the filesystem-realm were not tampered with since the last authorized write. You can do this by referencing a key pair when you create the filesystem-realm using the add operation. WildFly Elytron generates a signature for the identity file using the key pair. An integrity check runs whenever an identity file is read. For more information, see Filesystem realm in Elytron in the Securing applications and management interfaces using an identity store guide. Updates to distributed-realm You can now configure distributed-realm to continue searching the referenced security realms even when the connection to any identity store fails by setting the new attribute ignore-unavailable-realms to true . By default, in case the connection to any identity store fails before an identity is matched, the authentication fails with an exception RealmUnavailableException as before. When you set ignore-unavailable-realms to true , a SecurityEvent is emitted in case any of the queried realms are unavailable. You can configure this behavior by setting emit-events to false . For more information, see the following resources in the Securing applications and management interfaces using multiple identity stores guide: Distributed realm in Elytron distributed-realm attibutes Elytron support provided for SSLContexts in Artemis In JBoss EAP 8, Elytron support is provided to instantiate the SSLContext variable in Messaging subsystem. This feature saves you from configuring SSLContext in multiple places as Elytron instantiates this variable. The connectors for the SSLContext must be defined on the elytron subsystem of the client's JBoss EAP server, which means that you cannot define it from a standalone messaging client application. 
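As an illustration of the encrypted filesystem-realm described earlier in this section, the following management CLI sketch creates a secret-key credential store and references it from a new realm. The resource and attribute names (secret-key-credential-store, credential-store, secret-key) and the default key alias key are assumptions to confirm against your server's :read-resource-description output. Run the commands in bin/jboss-cli.sh --connect:

/subsystem=elytron/secret-key-credential-store=fs-realm-store:add(path=fs-realm.cs,relative-to=jboss.server.config.dir)
/subsystem=elytron/filesystem-realm=encrypted-users:add(path=fs-realm-enc,relative-to=jboss.server.config.dir,credential-store=fs-realm-store,secret-key=key)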
New Elytron client java security provider Elytron client now provides a Java security provider, org.wildfly.security.auth.client.WildFlyElytronClientDefaultSSLContextProvider , that you can use to register a Java virtual machine (JVM)-wide default SSLContext . When you register the provider in your JVM with high enough priority, then all client libraries that use SSLContext.getDefault() method obtain an instance of the SSL context that is configured to be default in Elytron client configuration. This way you can make use of Elytron client's SSL context configuration without interacting with Elytron API directly. For more information, see Using Elytron client default SSLcontext security provider in JBoss EAP clients in the Configuring SSL/TLS in JBoss EAP guide. Ability to obtain custom principal from Elytron In JBoss EAP 8.0, you can now obtain a custom principal from Elytron. Previously, Elytron required principal to be an instance of NamePrincipal for authentication. While it was possible to use SecurityIdentity obtained from the current SecurityDomain and utilize SecurityIdentity attributes to obtain information from realms, it required reliance on SecurityDomain and SecurityIdentity instead of more generic and standardized methods like jakarta.security.enterprise.SecurityContext.getCallerPrincipal() . You can now obtain a custom principal from the getCallerPrincipal() method when using Elytron. If your application code using legacy security relies on getting a custom principal from the getCallerPrincipal() method, you can migrate your application without requiring code changes. 3.6. Clustering Configuring web session replication using a ProtoStream You can now configure web session replication using a ProtoStream instead of JBoss Marshalling in JBoss EAP 8.0. See How to configure web session replication to use ProtoStream instead of JBoss Marshalling in JBoss EAP 8.0 . Stopping batch job execution from a different node You can now stop batch job execution from a different clustered node in JBoss EAP 8.0. For more information see using Batch Processing JBeret with a clustering of nodes sharing the same job repository in JBoss EAP 8.0 . 3.7. Jakarta EE Jakarta EE Core Profile Jakarta EE 10 Core Profile is now available in JBoss EAP 8.0. The Core Profile is a small, lightweight profile that provides Jakarta EE specifications suitable for smaller runtimes, such as microservices and cloud services. The Jakarta EE 10 Core Profile is available as a Galleon provisioning layer, ee-core-profile-server . For more information about the Core Profile Galleon layer, see Capability trimming in JBoss EAP for OpenShift: Base layers . 3.8. Datasource subsystem Configuring custom exception-sorter or valid-connection-checker for a datasource You can now configure a custom exception-sorter or valid-connection-checker for a datasource using a JBoss Module. See How to configure a custom exception-sorter or valid-connection-checker for a datasource in JBoss EAP 8 . Support for eap-datasources-galleon-pack for JBoss EAP 8.0 You can now use the eap-datasources-galleon-pack Galleon feature-pack to provision a JBoss EAP 8.0 server that can connect to your databases. 3.9. Hibernate Hibernate Search 6 replaces Hibernate Search 5 APIs Hibernate Search 5 APIs have been removed and are replaced with Hibernate Search 6 APIs in JBoss EAP 8.0. To view a list of the removed features, see Hibernate Search 5 APIs Deprecated in JBoss EAP 7.4 and removed in EAP 8.0 . 
Note Hibernate Search 6 APIs are backwards-incompatible with Hibernate Search 5 APIs. You will need to migrate your applications to Hibernate Search 6. The latest version of Hibernate Search 6 included in JBoss EAP 8.0 is 6.2. If you are migrating from Hibernate Search 5, you should take into account the migration to version 6.0, 6.1, and 6.2. See the following migrations guides for more information: To migrate your applications from Hibernate Search 5, see the Hibernate Search 6.0 migration guide . To migrate your applications from Hibernate Search 6.0 to 6.1, see the Hibernate Search 6.1 migration guide . To migrate your applications from Hibernate Search 6.1 to 6.2, see the Hibernate Search 6.2 migration guide Note Hibernate Search 6.2 is compatible with Hibernate ORM 6.2. For more information, see the section Hibernate ORM 6 in the Hibernate Search 6.2 Reference documentation. Hibernate Search 6 supports Elasticsearch JBoss EAP 8.0 also provides support for using an Elasticsearch backend in Hibernate Search 6 to index data into remote Elasticsearch or OpenSearch clusters. To see a list of possible Hibernate Search architectures and backends, see Table 2. Comparison of architectures in the Hibernate Search 6.2 reference documentation. For more information about configuring Hibernate Search 6, see Using Hibernate Search in the WildFly Developer guide. 3.10. Infinispan Support for Infinispan distributed query, counter, and lock APIs and CDI modules You can now use the Infinispan APIs for distributed query, counters, and locks in JBoss EAP 8.0. The Infinispan CDI module is also available in JBoss EAP 8.0 for creating and injecting caches. For more information, see EAP 8 now supports Infinispan query, counters, locks, and CDI . 3.11. Messaging Addition of a new Galleon layer A new Galleon layer is added to provide support for the Jakarta Messaging Service (JMS) integration with an embedded ActiveMQ Artemis broker. For more information, refer to the section Galleon layer for embedded broker messaging in the Migration Guide. 3.12. Web server (Undertow) Configuring a cookie for web request affinity You can now configure a separate cookie to store session affinity information for load balancers by using the affinity-cookie resource at the address /subsystem=undertow/servlet-container=default/setting=affinity-cookie . For more information, see the Red Hat Knowledgebase solution How to configure the affinity-cookie and session-cookie in JBoss EAP 8 . 3.13. ejb3 subsystem JBoss EAP 8.0 server interoperability with JBoss EAP 7 and JBoss EAP 6 In JBoss EAP 8.0 you can enable interoperability between JBoss EAP 8.0 and older versions of your JBoss EAP server. JBoss EAP supports Jakarta EE 10 whose API class uses the jakarta package namespace. However, older versions of JBoss EAP use the javax package namespace. Important The older versions supported are JBoss EAP 6 and JBoss EAP 7 interoperability between JBoss EAP 6 and JBoss EAP 7 is not affected by this issue as both servers support the javax package namespace. For more information about how to enable interoperability between JBoss EAP 8.0 and older versions of JBoss EAP see, how to enable interoperability . Infinispan-based distributed timers In JBoss EAP 8.0, you can now use Infinispan-based distributed timers to schedule persistent Jakarta Enterprise Bean timers within a cluster, which you can scale to large clusters. For more information, see EAP 8 - how to configure Infinispan based distributed timers . 
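Returning to the affinity cookie described in section 3.12 above: the resource address is quoted there, and the following management CLI sketch adds the cookie. The name attribute and the cookie name AFFINITY are illustrative; check the resource description for the full attribute list. Run in bin/jboss-cli.sh --connect:

/subsystem=undertow/servlet-container=default/setting=affinity-cookie:add(name=AFFINITY)
reload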
Distributable EJB subsystem Use the distributable-ejb subsystem to configure clustering abstractions providers required for ejb3 subsystem functionalities, such as: Stateful session beans (SFSB) cache factories Client mappings registries for EJB client applications Distributed EJB timers You can currently define these providers at a system-wide level. It is planned to develop functionality to enable deployment-specific providers by customizing the ejb3 subsystem. For more information, see What is the distributable-ejb subsystem in EAP 8 . 3.14. OpenShift Red Hat build of Keycloak SAML support for JBoss EAP 8.0 Using Red Hat build of Keycloak SAML adapters with JBoss EAP 8.0 Source-to-Image (S2I) image will be supported when the adapters are released. For more information, see OpenShift, SSO SAML support for EAP 8 . Provisioning a JBoss EAP server using the Maven plug-in You can now use the JBoss EAP Maven plug-in on OpenShift to: Provision a trimmed server using Galleon. Install your application on the provisioned server. Tune the server configuration using the JBoss EAP management CLI. Package extra files into the server installation, such as a keystore file. Integrate the plug-in into your JBoss EAP 8.0 source-to-image application build. For more information, see Provisioning a JBoss EAP server using the Maven plug-in . OpenID Connect support for JBoss EAP source-to-image You can now secure applications deployed to JBoss EAP with OpenID Connect (OIDC) using the new elytron-oidc-client subsystem instead of installing the previously required Red Hat build of Keycloak Client Adapter. You can configure an elytron-oidc-client subsystem by using the environment variables to secure the application with OIDC. The Red Hat build of Keycloak Client Adapter is not provided in this release. For more information, see Using OpenID Connect to secure JBoss EAP applications on OpenShift . Building application images using Source-to-Image In JBoss EAP 8.0, an installed server has been removed from Source-to-Image (S2I) builder images. Galleon feature-packs and layers are now used to provision the server during the S2I build phase. To provision the server, include and configure the JBoss EAP Maven plug-in in the pom.xml file of your application. For more information, see Building application images using source-to-image in OpenShift . Override management attributes with environment variables To more easily adapt your JBoss EAP server configuration to your server environment, you can use an environment variable to override the value of any management attribute, without editing your configuration file. You cannot override management attributes of type LIST , OBJECT , or PROPERTY . In JBoss EAP 8.0 OpenShift runtime image, this feature is enabled by default. For more information, see Overriding management attributes with environment variables . Environment variable checks for resolving management model expressions JBoss EAP now supports environment variable checks when resolving management model expressions. In versions of JBoss EAP, the JBoss EAP server only checked for Java system properties in the management expression. Now, the server checks for relevant environment variables and system properties. If you use both, JBoss EAP will use the Java system property, rather than the environment variable, to resolve the management model expression. For more information about using environment variables to resolve management model expressions, see Using environment variables and model expression resolution . 
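A small sketch of the environment-variable resolution described above. The app.greeting.text property is hypothetical, and the mapping shown (dots replaced by underscores, letters uppercased) is an assumption to verify against the linked documentation:

# In the management CLI (bin/jboss-cli.sh --connect), an attribute value references an expression:
#   /system-property=app.greeting:add(value="${app.greeting.text:Hello}")
# On the next server start, the expression can resolve from an environment variable as well as
# from a Java system property; if both are set, the Java system property wins:
APP_GREETING_TEXT="Hello from the environment" $JBOSS_HOME/bin/standalone.sh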
Maven compatibility Maven, versions 3.8.5 or earlier, include a version of the Apache Maven WAR plugin that is earlier than 3.3.2. This causes packaging errors with eap-maven-plugin . To resolve this issue, you must upgrade to Maven version 3.8.6 or later. Alternatively, you can add the maven-war-plugin dependency, version 3.3.2 or later, to your application pom.xml . Enhancements to node naming The value of the jboss.node.name system property is generated from the pod hostname and can be customized by using the JBOSS_NODE_NAME environment variable. This system property does not serve anymore as a transaction ID and does not have a limit of 23 characters in length, as it used to be in versions of JBoss EAP. However, in JBoss EAP 8.0, a new system property, jboss.tx.node.id , is also generated from the pod hostname and can be customized by using the JBOSS_NODE_NAME environment variable. This system property is now limited to 23 characters in length and serves as the transaction ID. Changes to Java options in JBoss EAP 8.0 images The JVM automatically tunes the memory and cpu limits and Garbage Collector configuration in JBoss EAP 8.0 images. Instead of computing -Xms and -Xmx options, images use -XX:InitialRAMPercentage and -XX:MaxRAMPercentage options to achieve the same capability dynamically. CONTAINER_CORE_LIMIT and JAVA_CORE_LIMIT have been removed. Additionally, -XX:ParallelGCThreads, -Djava.util.concurrent.ForkJoinPool.common.parallelism , and -XX:CICompilerCount are no longer used. Deploying a third-party application on OpenShift With JBoss EAP 8.0, you can create application images for OpenShift deployments by using compiled WAR files or EAR archives. By using a Dockerfile, you can deploy these archives to a JBoss EAP server with the complete runtime stack, including the operating system, Java, and JBoss EAP components. You can create the application image without depending on Source-to-Image (S2I). Excluded files in the JBoss EAP 8.0 server installation on OpenShift When installing JBoss EAP 8.0 server on OpenShift, the following files are not required and are intentionally excluded: bin/appclient.sh bin/wsprovide.sh bin/wsconsume.sh bin/jconsole.sh bin/client 3.15. Operator Enhanced Health Probe configuration with JBoss EAP 8.0 Operator The JBoss EAP 8.0 Operator now offers improved configuration options for health probes, focusing on better probe customization and compatibility between JBoss EAP 8.0 and JBoss EAP 7.4 images. This enhancement ensures smooth interoperability between both images, allowing probes to adjust their execution method flexibly. Key Improvements in your JBoss EAP 8.0 instance: Ability to work with JBoss EAP 8.0 and JBoss EAP 7-based images. Ability to configure LivenessProbe , ReadinessProbe , and StartupProbes . Startup Probe example configuration: Note By default, JBoss EAP 8.0 applications retain shell probes to ensure backward compatibility for JBoss EAP 7-based applications. 3.16. Quickstarts and BOMs Supported EAP 8 quickstarts All supported JBoss EAP 8 quickstarts are located at jboss-eap-quickstarts . New JBoss EAP BOMs for Maven JBoss EAP BOMs provide the Maven BOM files that specify the versions of JBoss EAP dependencies that are needed for building or testing your Maven projects. In addition, Jakarta EE 10 BOMs provide dependency management for related frameworks such as Hibernate, RESTasy, and proprietary components like Infinispan and Client BOMs. 3.17. 
Server Migration Tool JBoss EAP Server Migration Tool The Server Migration Tool is now a standalone migration tool and is no longer included with JBoss EAP 8.0. You can download the migration tool separately. 3.18. ActiveMQ Artemis Failure to add bridge on the ActiveMQ server In JBoss EAP 7, you could create a Java Message Service (JMS) bridge in the messaging-activemq subsystem before creating the source queue. The bridge remained inactive until the source queue was created. In JBoss EAP 8, you must create the source queue before creating a JMS bridge with the bridge:add command. If you create the JMS bridge before you create the source queue, the bridge:add command will fail. Adding a new connector in the messaging-activemq subsystem In JBoss EAP 8.0, when a new connector is added to a configuration model using the CLI in the messaging-activemq subsystem, you must restart or reload the server so that the connector can be accessed by the other parts of the system. In JBoss EAP 7.4, a connector would be added and referenced by other parts of the system but it cannot be used without restarting or reloading the server. 3.19. Jakarta Faces implementation Changes in Jakarta Faces implementation for MyFaces In releases, you could replace the Jakarta Faces implementation with an alternative. However, for MyFaces in JBoss EAP 8.0, this functionality has been moved to an external feature pack that requires provisioning by using the Galleon tool. If you want to use a non-default Mojarra version, manual configuration is necessary. For more information, see How to configure the Multi-JSF feature in EAP 8 . 3.20. High availability Updates to the JGroup protocol stack A new "RED" protocol has been added to the JGroup protocol stack in JBoss EAP 8.0. Additionally, the existing protocols have been upgraded. The following table lists the protocol updates: Old protocol Upgraded protocol FD_SOCK FD_SOCK2 FD_ALL FD_ALL3 VERIFY_SUSPECT VERIFY_SUSPECT2 FRAG3 FRAG4 While the old protocol stack will still work in JBoss EAP 8.0, use the upgraded stack for optimal results. 3.21. The jboss-eap-installation-manager You can now install and update JBoss EAP 8.0 using the jboss-eap-installation-manager . You can also perform server management operations, including updating, reverting, and various channel management tasks. For more information, see The Installation guide . 3.22. Management CLI integration of jboss-eap-installation-manager In JBoss EAP 8.0, a significant enhancement has been introduced with the integration of the jboss-eap-installation-manger with the Management CLI under the installer command. This enhancement allows you to seamlessly perform a wide range of server management operations such as updating, reverting, and managing channel operations in a standalone or a managed domain mode. For more information, see The Update guide . 3.23. Web Console integration of jboss-eap-installation-manager In JBoss EAP 8.0, you can now use the web console to update, revert, and manage channels in your JBoss EAP installation. However, it is recommended to use the jboss-eap-installation-manager . For more information, see The Update guide . 3.24. 
JBoss EAP Application Migration If you used the galleon/provisioning.xml configuration file to provision your JBoss EAP 7.4 installation with a valid S2I, and you want to convert the file to a valid configuration for JBoss EAP 8, note the following changes: In your galleon/provisioning.xml configuration file, you must use the org.jboss.eap:wildfly-ee-galleon-pack and org.jboss.eap:eap-cloud-galleon-pack feature packs instead of the eap-s2i feature pack. To successfully use these feature packs, you must also enable the use of JBoss EAP 8 channels by either configuring the eap-maven-plugin in the application pom.xml or using the S2I environment variable. Additional resources The Galleon provisioning file . Creating an S2I build using the legacy S2I provisioning capabilities . The Maven plug-in configuration attributes .
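For illustration, a converted galleon/provisioning.xml might look like the following minimal sketch (written here with a shell heredoc). The schema URN and the practice of omitting feature-pack versions, so that they resolve from the configured JBoss EAP 8 channels, are assumptions to verify against the Galleon provisioning file reference linked above.

cat > galleon/provisioning.xml <<'EOF'
<installation xmlns="urn:jboss:galleon:provisioning:3.0">
    <!-- Replaces the old eap-s2i feature pack; versions resolve from the EAP 8 channels -->
    <feature-pack location="org.jboss.eap:wildfly-ee-galleon-pack"/>
    <feature-pack location="org.jboss.eap:eap-cloud-galleon-pack"/>
    <!-- Optional: trim the server by listing only the Galleon layers your application needs -->
</installation>
EOF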
[ "deployment deploy-file /path/to/test-application.war", "deployment deploy-file /path/to/test-application.war --all-server-groups", "deployment deploy-file /path/to/test-application.war --server-groups=main-server-group,other-server-group", "--runtime-name=my-application.war", "apiVersion: wildfly.org/v1alpha1 kind: WildFlyServer metadata: name: spec: applicationImage: '...' livenessProbe: httpGet: path: /health/live port: 9990 scheme: HTTP initialDelaySeconds: 30 readinessProbe: httpGet: path: /health/ready port: 9990 scheme: HTTP initialDelaySeconds: 10 replicas: 1 startupProbe: httpGet: path: /health/started port: 9990 scheme: HTTP initialDelaySeconds: 60" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/release_notes_for_red_hat_jboss_enterprise_application_platform_8.0/new_features_and_enhancements
Chapter 3. A practical example of an Ansible Playbook
Chapter 3. A practical example of an Ansible Playbook Ansible can communicate with many different device classifications, from cloud-based REST APIs, to Linux and Windows systems, networking hardware, and much more. The following is a sample of two Ansible modules automatically updating two types of servers. 3.1. Playbook execution A playbook runs in order from top to bottom. Within each play, tasks also run in order from top to bottom. Playbooks with multiple 'plays' can orchestrate multi-machine deployments, running one play on your webservers, then another play on your database servers, then a third play on your network infrastructure, and so on. At a minimum, each play defines two things: the managed nodes to target, using a pattern at least one task to execute Note In Ansible 2.10 and later, use the fully-qualified collection name in your playbooks to ensure the correct module is selected, because multiple collections can contain modules with the same name (for example, user ). For further information, see Using collections in a playbook . In this example, the first play targets the web servers; the second play targets the database servers. --- - name: Update web servers hosts: webservers become: true tasks: - name: Ensure apache is at the latest version ansible.builtin.yum: name: httpd state: latest - name: Write the apache config file ansible.builtin.template: src: /srv/httpd.j2 dest: /etc/httpd.conf mode: "0644" - name: Update db servers hosts: databases become: true tasks: - name: Ensure postgresql is at the latest version ansible.builtin.yum: name: postgresql state: latest - name: Ensure that postgresql is started ansible.builtin.service: name: postgresql state: started The playbook contains two plays: The first checks if the web server software is up to date and runs the update if necessary. The second checks if database server software is up to date and runs the update if necessary. Your playbook can include more than just a hosts line and tasks. For example, this example playbook sets a remote_user for each play. This is the user account for the SSH connection. You can add other Playbook Keywords at the playbook, play, or task level to influence how Ansible behaves. Playbook keywords can control the connection plugin, whether to use privilege escalation, how to handle errors, and more. To support a variety of environments, Ansible enables you to set many of these parameters as command-line flags, in your Ansible configuration, or in your inventory. Learning the precedence rules for these sources of data can help you as you expand your Ansible ecosystem
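Assuming the playbook above is saved as update_servers.yml and your inventory file defines the webservers and databases groups used in the example (the file names here are illustrative), a typical run looks like this:

# Preview the changes without applying them
ansible-playbook -i inventory.ini update_servers.yml --check

# Apply the changes
ansible-playbook -i inventory.ini update_servers.yml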
[ "--- - name: Update web servers hosts: webservers become: true tasks: - name: Ensure apache is at the latest version ansible.builtin.yum: name: httpd state: latest - name: Write the apache config file ansible.builtin.template: src: /srv/httpd.j2 dest: /etc/httpd.conf mode: \"0644\" - name: Update db servers hosts: databases become: true tasks: - name: Ensure postgresql is at the latest version ansible.builtin.yum: name: postgresql state: latest - name: Ensure that postgresql is started ansible.builtin.service: name: postgresql state: started" ]
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/getting_started_with_playbooks/assembly-playbook-practical-example
Chapter 2. LocalResourceAccessReview [authorization.openshift.io/v1]
Chapter 2. LocalResourceAccessReview [authorization.openshift.io/v1] Description LocalResourceAccessReview is a means to request a list of which users and groups are authorized to perform the action specified by spec in a particular namespace Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required namespace verb resourceAPIGroup resourceAPIVersion resource resourceName path isNonResourceURL 2.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources content RawExtension Content is the actual content of the request for create and update isNonResourceURL boolean IsNonResourceURL is true if this is a request for a non-resource URL (outside of the resource hierarchy) kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds namespace string Namespace is the namespace of the action being requested. Currently, there is no distinction between no namespace and all namespaces path string Path is the path of a non resource URL resource string Resource is one of the existing resource types resourceAPIGroup string Group is the API group of the resource Serialized as resourceAPIGroup to avoid confusion with the 'groups' field when inlined resourceAPIVersion string Version is the API version of the resource Serialized as resourceAPIVersion to avoid confusion with TypeMeta.apiVersion and ObjectMeta.resourceVersion when inlined resourceName string ResourceName is the name of the resource being requested for a "get" or deleted for a "delete" verb string Verb is one of: get, list, watch, create, update, delete 2.2. API endpoints The following API endpoints are available: /apis/authorization.openshift.io/v1/namespaces/{namespace}/localresourceaccessreviews POST : create a LocalResourceAccessReview 2.2.1. /apis/authorization.openshift.io/v1/namespaces/{namespace}/localresourceaccessreviews Table 2.1. Global query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. HTTP method POST Description create a LocalResourceAccessReview Table 2.2. Body parameters Parameter Type Description body LocalResourceAccessReview schema Table 2.3. HTTP responses HTTP code Response body 200 - OK LocalResourceAccessReview schema 201 - Created LocalResourceAccessReview schema 202 - Accepted LocalResourceAccessReview schema 401 - Unauthorized Empty
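As a sketch of how this API is typically exercised (the namespace, verb, and resource values below are placeholders), you can POST a LocalResourceAccessReview directly with the CLI; the oc policy who-can command offers a higher-level wrapper for the same kind of query.

oc create -f - <<'EOF'
apiVersion: authorization.openshift.io/v1
kind: LocalResourceAccessReview
namespace: my-project
verb: get
resourceAPIGroup: ""
resourceAPIVersion: ""
resource: pods
resourceName: ""
path: ""
isNonResourceURL: false
EOF

# Roughly equivalent convenience command
oc policy who-can get pods -n my-project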
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/authorization_apis/localresourceaccessreview-authorization-openshift-io-v1
Appendix B. Audit System Reference
Appendix B. Audit System Reference B.1. Audit Event Fields Table B.1, "Event Fields" lists all currently-supported Audit event fields. An event field is the value preceding the equal sign in the Audit log files. Table B.1. Event Fields Event Field Explanation a0 , a1 , a2 , a3 Records the first four arguments of the system call, encoded in hexadecimal notation. acct Records a user's account name. addr Records the IPv4 or IPv6 address. This field usually follows a hostname field and contains the address the host name resolves to. arch Records information about the CPU architecture of the system, encoded in hexadecimal notation. auid Records the Audit user ID. This ID is assigned to a user upon login and is inherited by every process even when the user's identity changes (for example, by switching user accounts with su - john ). capability Records the number of bits that were used to set a particular Linux capability. For more information on Linux capabilities, see the capabilities (7) man page. cap_fi Records data related to the setting of an inherited file system-based capability. cap_fp Records data related to the setting of a permitted file system-based capability. cap_pe Records data related to the setting of an effective process-based capability. cap_pi Records data related to the setting of an inherited process-based capability. cap_pp Records data related to the setting of a permitted process-based capability. cgroup Records the path to the cgroup that contains the process at the time the Audit event was generated. cmd Records the entire command line that is executed. This is useful in case of shell interpreters where the exe field records, for example, /bin/bash as the shell interpreter and the cmd field records the rest of the command line that is executed, for example helloworld.sh --help . comm Records the command that is executed. This is useful in case of shell interpreters where the exe field records, for example, /bin/bash as the shell interpreter and the comm field records the name of the script that is executed, for example helloworld.sh . cwd Records the path to the directory in which a system call was invoked. data Records data associated with TTY records. dev Records the minor and major ID of the device that contains the file or directory recorded in an event. devmajor Records the major device ID. devminor Records the minor device ID. egid Records the effective group ID of the user who started the analyzed process. euid Records the effective user ID of the user who started the analyzed process. exe Records the path to the executable that was used to invoke the analyzed process. exit Records the exit code returned by a system call. This value varies by system call. You can interpret the value to its human-readable equivalent with the following command: ausearch --interpret --exit exit_code family Records the type of address protocol that was used, either IPv4 or IPv6. filetype Records the type of the file. flags Records the file system name flags. fsgid Records the file system group ID of the user who started the analyzed process. fsuid Records the file system user ID of the user who started the analyzed process. gid Records the group ID. hostname Records the host name. icmptype Records the type of a Internet Control Message Protocol (ICMP) package that is received. Audit messages containing this field are usually generated by iptables . id Records the user ID of an account that was changed. 
inode Records the inode number associated with the file or directory recorded in an Audit event. inode_gid Records the group ID of the inode's owner. inode_uid Records the user ID of the inode's owner. items Records the number of path records that are attached to this record. key Records the user defined string associated with a rule that generated a particular event in the Audit log. list Records the Audit rule list ID. The following is a list of known IDs: 0 - user 1 - task 4 - exit 5 - exclude mode Records the file or directory permissions, encoded in numerical notation. msg Records a time stamp and a unique ID of a record, or various event-specific <name> = <value> pairs provided by the kernel or user space applications. msgtype Records the message type that is returned in case of a user-based AVC denial. The message type is determined by D-Bus. name Records the full path of the file or directory that was passed to the system call as an argument. new-disk Records the name of a new disk resource that is assigned to a virtual machine. new-mem Records the amount of a new memory resource that is assigned to a virtual machine. new-vcpu Records the number of a new virtual CPU resource that is assigned to a virtual machine. new-net Records the MAC address of a new network interface resource that is assigned to a virtual machine. new_gid Records a group ID that is assigned to a user. oauid Records the user ID of the user that has logged in to access the system (as opposed to, for example, using su ) and has started the target process. This field is exclusive to the record of type OBJ_PID . ocomm Records the command that was used to start the target process.This field is exclusive to the record of type OBJ_PID . opid Records the process ID of the target process. This field is exclusive to the record of type OBJ_PID . oses Records the session ID of the target process. This field is exclusive to the record of type OBJ_PID . ouid Records the real user ID of the target process obj Records the SELinux context of an object. An object can be a file, a directory, a socket, or anything that is receiving the action of a subject. obj_gid Records the group ID of an object. obj_lev_high Records the high SELinux level of an object. obj_lev_low Records the low SELinux level of an object. obj_role Records the SELinux role of an object. obj_uid Records the UID of an object obj_user Records the user that is associated with an object. ogid Records the object owner's group ID. old-disk Records the name of an old disk resource when a new disk resource is assigned to a virtual machine. old-mem Records the amount of an old memory resource when a new amount of memory is assigned to a virtual machine. old-vcpu Records the number of an old virtual CPU resource when a new virtual CPU is assigned to a virtual machine. old-net Records the MAC address of an old network interface resource when a new network interface is assigned to a virtual machine. old_prom Records the value of the network promiscuity flag. ouid Records the real user ID of the user who started the target process. path Records the full path of the file or directory that was passed to the system call as an argument in case of AVC-related Audit events perm Records the file permission that was used to generate an event (that is, read, write, execute, or attribute change) pid The pid field semantics depend on the origin of the value in this field. In fields generated from user-space, this field holds a process ID. 
In fields generated by the kernel, this field holds a thread ID. The thread ID is equal to process ID for single-threaded processes. Note that the value of this thread ID is different from the values of pthread_t IDs used in user-space. For more information, see the gettid (2) man page. ppid Records the Parent Process ID (PID). prom Records the network promiscuity flag. proto Records the networking protocol that was used. This field is specific to Audit events generated by iptables . res Records the result of the operation that triggered the Audit event. result Records the result of the operation that triggered the Audit event. saddr Records the socket address. sauid Records the sender Audit login user ID. This ID is provided by D-Bus as the kernel is unable to see which user is sending the original auid . ses Records the session ID of the session from which the analyzed process was invoked. sgid Records the set group ID of the user who started the analyzed process. sig Records the number of a signal that causes a program to end abnormally. Usually, this is a sign of a system intrusion. subj Records the SELinux context of a subject. A subject can be a process, a user, or anything that is acting upon an object. subj_clr Records the SELinux clearance of a subject. subj_role Records the SELinux role of a subject. subj_sen Records the SELinux sensitivity of a subject. subj_user Records the user that is associated with a subject. success Records whether a system call was successful or failed. suid Records the set user ID of the user who started the analyzed process. syscall Records the type of the system call that was sent to the kernel. terminal Records the terminal name (without /dev/ ). tty Records the name of the controlling terminal. The value (none) is used if the process has no controlling terminal. uid Records the real user ID of the user who started the analyzed process. vm Records the name of a virtual machine from which the Audit event originated.
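These fields are most often consumed through ausearch, which can interpret many numeric values into a human-readable form. A few illustrative commands follow; the watch path, key name, and exit code are examples only.

# Watch writes and attribute changes to /etc/passwd and tag matching events with a key
auditctl -w /etc/passwd -p wa -k passwd-changes

# Search for events recorded with that key, interpreting numeric fields such as uid and syscall
ausearch -i -k passwd-changes

# Interpret a raw exit value, as mentioned for the exit field above
ausearch --interpret --exit -13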
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/app-audit_reference
5.21. c-ares
5.21. c-ares 5.21.1. RHBA-2012:0922 - c-ares bug fix update Updated c-ares packages that fix several bugs are now available for Red Hat Enterprise Linux 6. The c-ares C library defines asynchronous DNS (Domain Name System) requests and provides a name-resolving API. Bug Fixes BZ# 730695 Previously, when searching for AF_UNSPEC or AF_INET6 address families, the c-ares library fell back to the AF_INET family if no AF_INET6 addresses were found. Consequently, IPv4 addresses were returned even if only IPv6 addresses were requested. With this update, c-ares performs the fallback only when searching for AF_UNSPEC addresses. BZ# 730693 The ares_parse_a_reply() function leaked memory when the user attempted to parse an invalid reply. With this update, the allocated memory is freed properly and the memory leak no longer occurs. BZ# 713133 A switch statement inside the ares_malloc_data() public function was missing a terminating break statement. This could result in unpredictable behavior and sometimes the application terminated unexpectedly. This update adds the missing break statement, and the ares_malloc_data() function now works as intended. BZ# 695426 When parsing SeRVice (SRV) record queries, c-ares was accessing memory incorrectly on architectures that require data to be aligned in memory. This caused the program to terminate unexpectedly with the SIGBUS signal. With this update, c-ares has been modified to access the memory correctly in the scenario described. BZ# 640944 Previously, the ares_gethostbyname manual page did not document the ARES_ENODATA error code as a valid and expected error code. With this update, the manual page has been modified accordingly. All users of c-ares are advised to upgrade to these updated packages, which fix these bugs.
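On an affected Red Hat Enterprise Linux 6 system, applying this advisory is a standard package update; the commands below assume the system is entitled to the appropriate base channel.

# Check the currently installed version
rpm -q c-ares

# Apply the updated packages
yum update c-ares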
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/c-ares
function::user_string
function::user_string Name function::user_string - Retrieves string from user space Synopsis Arguments addr the user space address to retrieve the string from Description Returns the null-terminated C string from a given user space memory address. Reports an error in the rare cases when userspace data is not accessible.
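As a brief usage sketch, user_string is typically applied to a user space pointer available in probe context. The probe point and argument name below are illustrative and depend on the kernel and SystemTap version in use.

stap -e 'probe kernel.function("sys_open") { printf("%s opened %s\n", execname(), user_string($filename)) }'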
[ "user_string:string(addr:long)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-user-string
2.4.3. The Root File System and GRUB
2.4.3. The Root File System and GRUB The use of the term root file system has a different meaning in regard to GRUB. It is important to remember that GRUB's root file system has nothing to do with the Linux root file system. The GRUB root file system is the top level of the specified device. For example, the image file (hd0,0)/grub/splash.xpm.gz is located within the /grub/ directory at the top-level (or root) of the (hd0,0) partition (which is actually the /boot/ partition for the system). Next, the kernel command is executed with the location of the kernel file as an option. Once the Linux kernel boots, it sets up the root file system that Linux users are familiar with. The original GRUB root file system and its mounts are forgotten; they only existed to boot the kernel file. Refer to the root and kernel commands in Section 2.6, "GRUB Commands" for more information.
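To make the distinction concrete, a typical GRUB legacy grub.conf stanza from this era looks roughly like the following (device names, kernel version, and root= value are illustrative). The root (hd0,0) line selects GRUB's root file system, that is, the /boot/ partition, while the root= kernel option names the Linux root file system that the kernel mounts later.

title Red Hat Enterprise Linux
        root (hd0,0)
        kernel /vmlinuz-2.6.9-5.EL ro root=LABEL=/
        initrd /initrd-2.6.9-5.EL.img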
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-grub-terminology-rootfs
Chapter 11. Pacemaker Rules
Chapter 11. Pacemaker Rules Rules can be used to make your configuration more dynamic. One use of rules might be to assign machines to different processing groups (using a node attribute) based on time and to then use that attribute when creating location constraints. Each rule can contain a number of expressions, date-expressions and even other rules. The results of the expressions are combined based on the rule's boolean-op field to determine if the rule ultimately evaluates to true or false . What happens depends on the context in which the rule is being used. Table 11.1. Properties of a Rule Field Description role Limits the rule to apply only when the resource is in that role. Allowed values: Started , Slave, and Master . NOTE: A rule with role="Master" cannot determine the initial location of a clone instance. It will only affect which of the active instances will be promoted. score The score to apply if the rule evaluates to true . Limited to use in rules that are part of location constraints. score-attribute The node attribute to look up and use as a score if the rule evaluates to true . Limited to use in rules that are part of location constraints. boolean-op How to combine the result of multiple expression objects. Allowed values: and and or . The default value is and . 11.1. Node Attribute Expressions Node attribute expressions are used to control a resource based on the attributes defined by a node or nodes. Table 11.2. Properties of an Expression Field Description attribute The node attribute to test type Determines how the value(s) should be tested. Allowed values: string , integer , version . The default value is string operation The comparison to perform. Allowed values: * lt - True if the node attribute's value is less than value * gt - True if the node attribute's value is greater than value * lte - True if the node attribute's value is less than or equal to value * gte - True if the node attribute's value is greater than or equal to value * eq - True if the node attribute's value is equal to value * ne - True if the node attribute's value is not equal to value * defined - True if the node has the named attribute * not_defined - True if the node does not have the named attribute value User supplied value for comparison (required) In addition to any attributes added by the administrator, the cluster defines special, built-in node attributes for each node that can also be used, as described in Table 11.3, "Built-in Node Attributes" . Table 11.3. Built-in Node Attributes Name Description #uname Node name #id Node ID #kind Node type. Possible values are cluster , remote , and container . The value of kind is remote . for Pacemaker Remote nodes created with the ocf:pacemaker:remote resource, and container for Pacemaker Remote guest nodes and bundle nodes. #is_dc true if this node is a Designated Controller (DC), false otherwise #cluster_name The value of the cluster-name cluster property, if set #site_name The value of the site-name node attribute, if set, otherwise identical to #cluster-name #role The role the relevant multistate resource has on this node. Valid only within a rule for a location constraint for a multistate resource.
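As a small illustration of the processing-group scenario mentioned at the start of this chapter (the resource, attribute, and value names are hypothetical, and the exact pcs rule syntax should be checked against your pcs version), a node attribute expression can be used inside a location constraint rule:

# Prefer nodes whose processing_group attribute equals fast
pcs constraint location Webserver rule score=500 processing_group eq fast

# Rules can also test the built-in attributes from Table 11.3, for example #uname
pcs constraint location Webserver rule score=-INFINITY '#uname' eq standby-node-1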
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_reference/ch-pacemakerrules-HAAR
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/21/html/using_shenandoah_garbage_collector_with_red_hat_build_of_openjdk_21/making-open-source-more-inclusive
Chapter 9. Volume Snapshots
Chapter 9. Volume Snapshots A volume snapshot is the state of the storage volume in a cluster at a particular point in time. These snapshots help to use storage more efficiently by not having to make a full copy each time and can be used as building blocks for developing an application. Volume snapshot class allows an administrator to specify different attributes belonging to a volume snapshot object. The OpenShift Data Foundation operator installs default volume snapshot classes depending on the platform in use. The operator owns and controls these default volume snapshot classes and they cannot be deleted or modified. You can create many snapshots of the same persistent volume claim (PVC) but cannot schedule periodic creation of snapshots. For CephFS, you can create up to 100 snapshots per PVC. For RADOS Block Device (RBD), you can create up to 512 snapshots per PVC. Note Persistent Volume encryption now supports volume snapshots. 9.1. Creating volume snapshots You can create a volume snapshot either from the Persistent Volume Claim (PVC) page or the Volume Snapshots page. Prerequisites For a consistent snapshot, the PVC should be in Bound state and not be in use. Ensure to stop all IO before taking the snapshot. Note OpenShift Data Foundation only provides crash consistency for a volume snapshot of a PVC if a pod is using it. For application consistency, be sure to first tear down a running pod to ensure consistent snapshots or use any quiesce mechanism provided by the application to ensure it. Procedure From the Persistent Volume Claims page Click Storage Persistent Volume Claims from the OpenShift Web Console. To create a volume snapshot, do one of the following: Beside the desired PVC, click Action menu (...) Create Snapshot . Click on the PVC for which you want to create the snapshot and click Actions Create Snapshot . Enter a Name for the volume snapshot. Choose the Snapshot Class from the drop-down list. Click Create . You will be redirected to the Details page of the volume snapshot that is created. From the Volume Snapshots page Click Storage Volume Snapshots from the OpenShift Web Console. In the Volume Snapshots page, click Create Volume Snapshot . Choose the required Project from the drop-down list. Choose the Persistent Volume Claim from the drop-down list. Enter a Name for the snapshot. Choose the Snapshot Class from the drop-down list. Click Create . You will be redirected to the Details page of the volume snapshot that is created. Verification steps Go to the Details page of the PVC and click the Volume Snapshots tab to see the list of volume snapshots. Verify that the new volume snapshot is listed. Click Storage Volume Snapshots from the OpenShift Web Console. Verify that the new volume snapshot is listed. Wait for the volume snapshot to be in Ready state. 9.2. Restoring volume snapshots When you restore a volume snapshot, a new Persistent Volume Claim (PVC) gets created. The restored PVC is independent of the volume snapshot and the parent PVC. You can restore a volume snapshot from either the Persistent Volume Claim page or the Volume Snapshots page. Procedure From the Persistent Volume Claims page You can restore volume snapshot from the Persistent Volume Claims page only if the parent PVC is present. Click Storage Persistent Volume Claims from the OpenShift Web Console. Click on the PVC name with the volume snapshot to restore a volume snapshot as a new PVC. In the Volume Snapshots tab, click the Action menu (...) to the volume snapshot you want to restore. 
Click Restore as new PVC . Enter a name for the new PVC. Select the Storage Class name. Note For Rados Block Device (RBD), you must select a storage class with the same pool as that of the parent PVC. Restoring the snapshot of an encrypted PVC using a storage class where encryption is not enabled and vice versa is not supported. Select the Access Mode of your choice. Important The ReadOnlyMany (ROX) access mode is a Developer Preview feature and is subject to Developer Preview support limitations. Developer Preview releases are not intended to be run in production environments and are not supported through the Red Hat Customer Portal case management system. If you need assistance with ReadOnlyMany feature, reach out to the [email protected] mailing list and a member of the Red Hat Development Team will assist you as quickly as possible based on availability and work schedules. See Creating a clone or restoring a snapshot with the new readonly access mode to use the ROX access mode. Optional: For RBD, select Volume mode . Click Restore . You are redirected to the new PVC details page. From the Volume Snapshots page Click Storage Volume Snapshots from the OpenShift Web Console. In the Volume Snapshots tab, click the Action menu (...) to the volume snapshot you want to restore. Click Restore as new PVC . Enter a name for the new PVC. Select the Storage Class name. Note For Rados Block Device (RBD), you must select a storage class with the same pool as that of the parent PVC. Restoring the snapshot of an encrypted PVC using a storage class where encryption is not enabled and vice versa is not supported. Select the Access Mode of your choice. Important The ReadOnlyMany (ROX) access mode is a Developer Preview feature and is subject to Developer Preview support limitations. Developer Preview releases are not intended to be run in production environments and are not supported through the Red Hat Customer Portal case management system. If you need assistance with ReadOnlyMany feature, reach out to the [email protected] mailing list and a member of the Red Hat Development Team will assist you as quickly as possible based on availability and work schedules. See Creating a clone or restoring a snapshot with the new readonly access mode to use the ROX access mode. Optional: For RBD, select Volume mode . Click Restore . You are redirected to the new PVC details page. Verification steps Click Storage Persistent Volume Claims from the OpenShift Web Console and confirm that the new PVC is listed in the Persistent Volume Claims page. Wait for the new PVC to reach Bound state. 9.3. Deleting volume snapshots Prerequisites For deleting a volume snapshot, the volume snapshot class which is used in that particular volume snapshot should be present. Procedure From Persistent Volume Claims page Click Storage Persistent Volume Claims from the OpenShift Web Console. Click on the PVC name which has the volume snapshot that needs to be deleted. In the Volume Snapshots tab, beside the desired volume snapshot, click Action menu (...) Delete Volume Snapshot . From Volume Snapshots page Click Storage Volume Snapshots from the OpenShift Web Console. In the Volume Snapshots page, beside the desired volume snapshot click Action menu (...) Delete Volume Snapshot . Verfication steps Ensure that the deleted volume snapshot is not present in the Volume Snapshots tab of the PVC details page. Click Storage Volume Snapshots and ensure that the deleted volume snapshot is not listed.
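The console procedures above have direct CLI equivalents. The following sketch creates a snapshot of a PVC and then restores it into a new PVC; the project, PVC, snapshot class, storage class, access mode, and size values are placeholders that must match your environment.

oc create -f - <<'EOF'
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: my-pvc-snapshot
  namespace: my-project
spec:
  volumeSnapshotClassName: ocs-storagecluster-rbdplugin-snapclass
  source:
    persistentVolumeClaimName: my-pvc
EOF

# After the snapshot reports readyToUse: true, restore it as a new, independent PVC
oc create -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc-restore
  namespace: my-project
spec:
  storageClassName: ocs-storagecluster-ceph-rbd
  dataSource:
    name: my-pvc-snapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
EOF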
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/managing_and_allocating_storage_resources/volume-snapshots_rhodf
Chapter 1. Backup and restore
Chapter 1. Backup and restore 1.1. Control plane backup and restore operations As a cluster administrator, you might need to stop an OpenShift Container Platform cluster for a period and restart it later. Some reasons for restarting a cluster are that you need to perform maintenance on a cluster or want to reduce resource costs. In OpenShift Container Platform, you can perform a graceful shutdown of a cluster so that you can easily restart the cluster later. You must back up etcd data before shutting down a cluster; etcd is the key-value store for OpenShift Container Platform, which persists the state of all resource objects. An etcd backup plays a crucial role in disaster recovery. In OpenShift Container Platform, you can also replace an unhealthy etcd member . When you want to get your cluster running again, restart the cluster gracefully . Note A cluster's certificates expire one year after the installation date. You can shut down a cluster and expect it to restart gracefully while the certificates are still valid. Although the cluster automatically retrieves the expired control plane certificates, you must still approve the certificate signing requests (CSRs) . You might run into several situations where OpenShift Container Platform does not work as expected, such as: You have a cluster that is not functional after the restart because of unexpected conditions, such as node failure, or network connectivity issues. You have deleted something critical in the cluster by mistake. You have lost the majority of your control plane hosts, leading to etcd quorum loss. You can always recover from a disaster situation by restoring your cluster to its state using the saved etcd snapshots. Additional resources Quorum protection with machine lifecycle hooks 1.2. Application backup and restore operations As a cluster administrator, you can back up and restore applications running on OpenShift Container Platform by using the OpenShift API for Data Protection (OADP). OADP backs up and restores Kubernetes resources and internal images, at the granularity of a namespace, by using the version of Velero that is appropriate for the version of OADP you install, according to the table in Downloading the Velero CLI tool . OADP backs up and restores persistent volumes (PVs) by using snapshots or Restic. For details, see OADP features . 1.2.1. OADP requirements OADP has the following requirements: You must be logged in as a user with a cluster-admin role. You must have object storage for storing backups, such as one of the following storage types: OpenShift Data Foundation Amazon Web Services Microsoft Azure Google Cloud Platform S3-compatible object storage Note If you want to use CSI backup on OCP 4.11 and later, install OADP 1.1. x . OADP 1.0. x does not support CSI backup on OCP 4.11 and later. OADP 1.0. x includes Velero 1.7. x and expects the API group snapshot.storage.k8s.io/v1beta1 , which is not present on OCP 4.11 and later. Important The CloudStorage API for S3 storage is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . 
To back up PVs with snapshots, you must have cloud storage that has a native snapshot API or supports Container Storage Interface (CSI) snapshots, such as the following providers: Amazon Web Services Microsoft Azure Google Cloud Platform CSI snapshot-enabled cloud storage, such as Ceph RBD or Ceph FS Note If you do not want to back up PVs by using snapshots, you can use Restic , which is installed by the OADP Operator by default. 1.2.2. Backing up and restoring applications You back up applications by creating a Backup custom resource (CR). See Creating a Backup CR . You can configure the following backup options: Creating backup hooks to run commands before or after the backup operation Scheduling backups Restic backups You restore application backups by creating a Restore custom resource (CR). See Creating a Restore CR . You can configure restore hooks to run commands in init containers or in the application container during the restore operation.
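As a minimal sketch of the Backup CR mentioned above (the backup name and the application namespace are placeholders, and the CR is assumed to be created in the namespace where the OADP Operator is installed, typically openshift-adp):

oc create -f - <<'EOF'
apiVersion: velero.io/v1
kind: Backup
metadata:
  name: my-app-backup
  namespace: openshift-adp
spec:
  includedNamespaces:
    - my-app
EOF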
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/backup_and_restore/backup-restore-overview
Chapter 24. Multiple networks
Chapter 24. Multiple networks 24.1. Understanding multiple networks In Kubernetes, container networking is delegated to networking plugins that implement the Container Network Interface (CNI). OpenShift Container Platform uses the Multus CNI plugin to allow chaining of CNI plugins. During cluster installation, you configure your default pod network. The default network handles all ordinary network traffic for the cluster. You can define an additional network based on the available CNI plugins and attach one or more of these networks to your pods. You can define more than one additional network for your cluster, depending on your needs. This gives you flexibility when you configure pods that deliver network functionality, such as switching or routing. 24.1.1. Usage scenarios for an additional network You can use an additional network in situations where network isolation is needed, including data plane and control plane separation. Isolating network traffic is useful for the following performance and security reasons: Performance You can send traffic on two different planes to manage how much traffic is along each plane. Security You can send sensitive traffic onto a network plane that is managed specifically for security considerations, and you can separate private data that must not be shared between tenants or customers. All of the pods in the cluster still use the cluster-wide default network to maintain connectivity across the cluster. Every pod has an eth0 interface that is attached to the cluster-wide pod network. You can view the interfaces for a pod by using the oc exec -it <pod_name> -- ip a command. If you add additional network interfaces that use Multus CNI, they are named net1 , net2 , ... , netN . To attach additional network interfaces to a pod, you must create configurations that define how the interfaces are attached. You specify each interface by using a NetworkAttachmentDefinition custom resource (CR). A CNI configuration inside each of these CRs defines how that interface is created. 24.1.2. Additional networks in OpenShift Container Platform OpenShift Container Platform provides the following CNI plugins for creating additional networks in your cluster: bridge : Configure a bridge-based additional network to allow pods on the same host to communicate with each other and the host. host-device : Configure a host-device additional network to allow pods access to a physical Ethernet network device on the host system. ipvlan : Configure an ipvlan-based additional network to allow pods on a host to communicate with other hosts and pods on those hosts, similar to a macvlan-based additional network. Unlike a macvlan-based additional network, each pod shares the same MAC address as the parent physical network interface. vlan : Configure a vlan-based additional network to allow VLAN-based network isolation and connectivity for pods. macvlan : Configure a macvlan-based additional network to allow pods on a host to communicate with other hosts and pods on those hosts by using a physical network interface. Each pod that is attached to a macvlan-based additional network is provided a unique MAC address. tap : Configure a tap-based additional network to create a tap device inside the container namespace. A tap device enables user space programs to send and receive network packets. SR-IOV : Configure an SR-IOV based additional network to allow pods to attach to a virtual function (VF) interface on SR-IOV capable hardware on the host system. 24.2. 
Configuring an additional network As a cluster administrator, you can configure an additional network for your cluster. The following network types are supported: Bridge Host device VLAN IPVLAN MACVLAN TAP OVN-Kubernetes 24.2.1. Approaches to managing an additional network You can manage the lifecycle of an additional network in OpenShift Container Platform by using one of two approaches: modifying the Cluster Network Operator (CNO) configuration or applying a YAML manifest. Each approach is mutually exclusive and you can only use one approach for managing an additional network at a time. For either approach, the additional network is managed by a Container Network Interface (CNI) plugin that you configure. The two different approaches are summarized here: Modifying the Cluster Network Operator (CNO) configuration: Configuring additional networks through CNO is only possible for cluster administrators. The CNO automatically creates and manages the NetworkAttachmentDefinition object. By using this approach, you can define NetworkAttachmentDefinition objects at install time through configuration of the install-config . Applying a YAML manifest: You can manage the additional network directly by creating an NetworkAttachmentDefinition object. Compared to modifying the CNO configuration, this approach gives you more granular control and flexibility when it comes to configuration. Note When deploying OpenShift Container Platform nodes with multiple network interfaces on Red Hat OpenStack Platform (RHOSP) with OVN Kubernetes, DNS configuration of the secondary interface might take precedence over the DNS configuration of the primary interface. In this case, remove the DNS nameservers for the subnet ID that is attached to the secondary interface: USD openstack subnet set --dns-nameserver 0.0.0.0 <subnet_id> 24.2.2. IP address assignment for additional networks For additional networks, IP addresses can be assigned using an IP Address Management (IPAM) CNI plugin, which supports various assignment methods, including Dynamic Host Configuration Protocol (DHCP) and static assignment. The DHCP IPAM CNI plugin responsible for dynamic assignment of IP addresses operates with two distinct components: CNI Plugin : Responsible for integrating with the Kubernetes networking stack to request and release IP addresses. DHCP IPAM CNI Daemon : A listener for DHCP events that coordinates with existing DHCP servers in the environment to handle IP address assignment requests. This daemon is not a DHCP server itself. For networks requiring type: dhcp in their IPAM configuration, ensure the following: A DHCP server is available and running in the environment. The DHCP server is external to the cluster and is expected to be part of the customer's existing network infrastructure. The DHCP server is appropriately configured to serve IP addresses to the nodes. In cases where a DHCP server is unavailable in the environment, it is recommended to use the Whereabouts IPAM CNI plugin instead. The Whereabouts CNI provides similar IP address management capabilities without the need for an external DHCP server. Note Use the Whereabouts CNI plugin when there is no external DHCP server or where static IP address management is preferred. The Whereabouts plugin includes a reconciler daemon to manage stale IP address allocations. A DHCP lease must be periodically renewed throughout the container's lifetime, so a separate daemon, the DHCP IPAM CNI Daemon, is required. 
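For reference, a minimal sketch of the Whereabouts alternative described above follows; the attachment name, namespace, master interface, and address range are placeholders. A pod requests the interface by listing the attachment in its k8s.v1.cni.cncf.io/networks annotation. The DHCP-based daemon deployment is covered next.

oc apply -f - <<'EOF'
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: whereabouts-net
  namespace: my-project
spec:
  config: |-
    {
      "cniVersion": "0.3.1",
      "name": "whereabouts-net",
      "type": "macvlan",
      "master": "eth1",
      "ipam": {
        "type": "whereabouts",
        "range": "192.0.2.0/24"
      }
    }
EOF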
To deploy the DHCP IPAM CNI daemon, modify the Cluster Network Operator (CNO) configuration to trigger the deployment of this daemon as part of the additional network setup. Additional resources Dynamic IP address (DHCP) assignment configuration Dynamic IP address assignment configuration with Whereabouts 24.2.3. Configuration for an additional network attachment An additional network is configured by using the NetworkAttachmentDefinition API in the k8s.cni.cncf.io API group. Important Do not store any sensitive information or a secret in the NetworkAttachmentDefinition CRD because this information is accessible by the project administration user. The configuration for the API is described in the following table: Table 24.1. NetworkAttachmentDefinition API fields Field Type Description metadata.name string The name for the additional network. metadata.namespace string The namespace that the object is associated with. spec.config string The CNI plugin configuration in JSON format. 24.2.3.1. Configuration of an additional network through the Cluster Network Operator The configuration for an additional network attachment is specified as part of the Cluster Network Operator (CNO) configuration. The following YAML describes the configuration parameters for managing an additional network with the CNO: Cluster Network Operator configuration apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: # ... additionalNetworks: 1 - name: <name> 2 namespace: <namespace> 3 rawCNIConfig: |- 4 { ... } type: Raw 1 An array of one or more additional network configurations. 2 The name for the additional network attachment that you are creating. The name must be unique within the specified namespace . 3 The namespace to create the network attachment in. If you do not specify a value then the default namespace is used. Important To prevent namespace issues for the OVN-Kubernetes network plugin, do not name your additional network attachment default , because this namespace is reserved for the default additional network attachment. 4 A CNI plugin configuration in JSON format. 24.2.3.2. Configuration of an additional network from a YAML manifest The configuration for an additional network is specified from a YAML configuration file, such as in the following example: apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: <name> 1 spec: config: |- 2 { ... } 1 The name for the additional network attachment that you are creating. 2 A CNI plugin configuration in JSON format. 24.2.4. Configurations for additional network types The specific configuration fields for additional networks is described in the following sections. 24.2.4.1. Configuration for a bridge additional network The following object describes the configuration parameters for the bridge CNI plugin: Table 24.2. Bridge CNI plugin JSON configuration object Field Type Description cniVersion string The CNI specification version. The 0.3.1 value is required. name string The value for the name parameter you provided previously for the CNO configuration. type string The name of the CNI plugin to configure: bridge . ipam object The configuration object for the IPAM CNI plugin. The plugin manages IP address assignment for the attachment definition. bridge string Optional: Specify the name of the virtual bridge to use. If the bridge interface does not exist on the host, it is created. The default value is cni0 . 
ipMasq boolean Optional: Set to true to enable IP masquerading for traffic that leaves the virtual network. The source IP address for all traffic is rewritten to the bridge's IP address. If the bridge does not have an IP address, this setting has no effect. The default value is false . isGateway boolean Optional: Set to true to assign an IP address to the bridge. The default value is false . isDefaultGateway boolean Optional: Set to true to configure the bridge as the default gateway for the virtual network. The default value is false . If isDefaultGateway is set to true , then isGateway is also set to true automatically. forceAddress boolean Optional: Set to true to allow assignment of a previously assigned IP address to the virtual bridge. When set to false , if an IPv4 address or an IPv6 address from overlapping subsets is assigned to the virtual bridge, an error occurs. The default value is false . hairpinMode boolean Optional: Set to true to allow the virtual bridge to send an Ethernet frame back through the virtual port it was received on. This mode is also known as reflective relay . The default value is false . promiscMode boolean Optional: Set to true to enable promiscuous mode on the bridge. The default value is false . vlan string Optional: Specify a virtual LAN (VLAN) tag as an integer value. By default, no VLAN tag is assigned. preserveDefaultVlan string Optional: Indicates whether the default vlan must be preserved on the veth end connected to the bridge. Defaults to true. vlanTrunk list Optional: Assign a VLAN trunk tag. The default value is none . mtu integer Optional: Set the maximum transmission unit (MTU) to the specified value. The default value is automatically set by the kernel. enabledad boolean Optional: Enables duplicate address detection for the container side veth . The default value is false . macspoofchk boolean Optional: Enables mac spoof check, limiting the traffic originating from the container to the mac address of the interface. The default value is false . Note The VLAN parameter configures the VLAN tag on the host end of the veth and also enables the vlan_filtering feature on the bridge interface. Note To configure uplink for a L2 network you need to allow the vlan on the uplink interface by using the following command: USD bridge vlan add vid VLAN_ID dev DEV 24.2.4.1.1. bridge configuration example The following example configures an additional network named bridge-net : { "cniVersion": "0.3.1", "name": "bridge-net", "type": "bridge", "isGateway": true, "vlan": 2, "ipam": { "type": "dhcp" } } 24.2.4.2. Configuration for a host device additional network Note Specify your network device by setting only one of the following parameters: device , hwaddr , kernelpath , or pciBusID . The following object describes the configuration parameters for the host-device CNI plugin: Table 24.3. Host device CNI plugin JSON configuration object Field Type Description cniVersion string The CNI specification version. The 0.3.1 value is required. name string The value for the name parameter you provided previously for the CNO configuration. type string The name of the CNI plugin to configure: host-device . device string Optional: The name of the device, such as eth0 . hwaddr string Optional: The device hardware MAC address. kernelpath string Optional: The Linux kernel device path, such as /sys/devices/pci0000:00/0000:00:1f.6 . pciBusID string Optional: The PCI address of the network device, such as 0000:00:1f.6 . 24.2.4.2.1. 
host-device configuration example The following example configures an additional network named hostdev-net : { "cniVersion": "0.3.1", "name": "hostdev-net", "type": "host-device", "device": "eth1" } 24.2.4.3. Configuration for a VLAN additional network The following object describes the configuration parameters for the VLAN, vlan , CNI plugin: Table 24.4. VLAN CNI plugin JSON configuration object Field Type Description cniVersion string The CNI specification version. The 0.3.1 value is required. name string The value for the name parameter you provided previously for the CNO configuration. type string The name of the CNI plugin to configure: vlan . master string The Ethernet interface to associate with the network attachment. If a master is not specified, the interface for the default network route is used. vlanId integer Set the ID of the vlan . ipam object The configuration object for the IPAM CNI plugin. The plugin manages IP address assignment for the attachment definition. mtu integer Optional: Set the maximum transmission unit (MTU) to the specified value. The default value is automatically set by the kernel. dns integer Optional: DNS information to return. For example, a priority-ordered list of DNS nameservers. linkInContainer boolean Optional: Specifies whether the master interface is in the container network namespace or the main network namespace. Set the value to true to request the use of a container namespace master interface. Important A NetworkAttachmentDefinition custom resource definition (CRD) with a vlan configuration can be used only on a single pod in a node because the CNI plugin cannot create multiple vlan subinterfaces with the same vlanId on the same master interface. 24.2.4.3.1. VLAN configuration example The following example demonstrates a vlan configuration with an additional network that is named vlan-net : { "name": "vlan-net", "cniVersion": "0.3.1", "type": "vlan", "master": "eth0", "mtu": 1500, "vlanId": 5, "linkInContainer": false, "ipam": { "type": "host-local", "subnet": "10.1.1.0/24" }, "dns": { "nameservers": [ "10.1.1.1", "8.8.8.8" ] } } 24.2.4.4. Configuration for an IPVLAN additional network The following object describes the configuration parameters for the IPVLAN, ipvlan , CNI plugin: Table 24.5. IPVLAN CNI plugin JSON configuration object Field Type Description cniVersion string The CNI specification version. The 0.3.1 value is required. name string The value for the name parameter you provided previously for the CNO configuration. type string The name of the CNI plugin to configure: ipvlan . ipam object The configuration object for the IPAM CNI plugin. The plugin manages IP address assignment for the attachment definition. This is required unless the plugin is chained. mode string Optional: The operating mode for the virtual network. The value must be l2 , l3 , or l3s . The default value is l2 . master string Optional: The Ethernet interface to associate with the network attachment. If a master is not specified, the interface for the default network route is used. mtu integer Optional: Set the maximum transmission unit (MTU) to the specified value. The default value is automatically set by the kernel. linkInContainer boolean Optional: Specifies whether the master interface is in the container network namespace or the main network namespace. Set the value to true to request the use of a container namespace master interface. Note The ipvlan object does not allow virtual interfaces to communicate with the master interface. 
Therefore the container will not be able to reach the host by using the ipvlan interface. Be sure that the container joins a network that provides connectivity to the host, such as a network supporting the Precision Time Protocol ( PTP ). A single master interface cannot simultaneously be configured to use both macvlan and ipvlan . For IP allocation schemes that cannot be interface agnostic, the ipvlan plugin can be chained with an earlier plugin that handles this logic. If the master is omitted, then the result must contain a single interface name for the ipvlan plugin to enslave. If ipam is omitted, then the result is used to configure the ipvlan interface. 24.2.4.4.1. ipvlan configuration example The following example configures an additional network named ipvlan-net : { "cniVersion": "0.3.1", "name": "ipvlan-net", "type": "ipvlan", "master": "eth1", "linkInContainer": false, "mode": "l3", "ipam": { "type": "static", "addresses": [ { "address": "192.168.10.10/24" } ] } } 24.2.4.5. Configuration for a MACVLAN additional network The following object describes the configuration parameters for the MAC Virtual LAN (MACVLAN) Container Network Interface (CNI) plugin: Table 24.6. MACVLAN CNI plugin JSON configuration object Field Type Description cniVersion string The CNI specification version. The 0.3.1 value is required. name string The value for the name parameter you provided previously for the CNO configuration. type string The name of the CNI plugin to configure: macvlan . ipam object The configuration object for the IPAM CNI plugin. The plugin manages IP address assignment for the attachment definition. mode string Optional: Configures traffic visibility on the virtual network. Must be either bridge , passthru , private , or vepa . If a value is not provided, the default value is bridge . master string Optional: The host network interface to associate with the newly created macvlan interface. If a value is not specified, then the default route interface is used. mtu integer Optional: The maximum transmission unit (MTU) to the specified value. The default value is automatically set by the kernel. linkInContainer boolean Optional: Specifies whether the master interface is in the container network namespace or the main network namespace. Set the value to true to request the use of a container namespace master interface. Note If you specify the master key for the plugin configuration, use a different physical network interface than the one that is associated with your primary network plugin to avoid possible conflicts. 24.2.4.5.1. MACVLAN configuration example The following example configures an additional network named macvlan-net : { "cniVersion": "0.3.1", "name": "macvlan-net", "type": "macvlan", "master": "eth1", "linkInContainer": false, "mode": "bridge", "ipam": { "type": "dhcp" } } 24.2.4.6. Configuration for a TAP additional network The following object describes the configuration parameters for the TAP CNI plugin: Table 24.7. TAP CNI plugin JSON configuration object Field Type Description cniVersion string The CNI specification version. The 0.3.1 value is required. name string The value for the name parameter you provided previously for the CNO configuration. type string The name of the CNI plugin to configure: tap . mac string Optional: Request the specified MAC address for the interface. mtu integer Optional: Set the maximum transmission unit (MTU) to the specified value. The default value is automatically set by the kernel. 
selinuxcontext string Optional: The SELinux context to associate with the tap device. Note The value system_u:system_r:container_t:s0 is required for OpenShift Container Platform. multiQueue boolean Optional: Set to true to enable multi-queue. owner integer Optional: The user owning the tap device. group integer Optional: The group owning the tap device. bridge string Optional: Set the tap device as a port of an already existing bridge. 24.2.4.6.1. Tap configuration example The following example configures an additional network named mynet : { "name": "mynet", "cniVersion": "0.3.1", "type": "tap", "mac": "00:11:22:33:44:55", "mtu": 1500, "selinuxcontext": "system_u:system_r:container_t:s0", "multiQueue": true, "owner": 0, "group": 0, "bridge": "br1" } 24.2.4.6.2. Setting SELinux boolean for the TAP CNI plugin To create the tap device with the container_t SELinux context, enable the container_use_devices boolean on the host by using the Machine Config Operator (MCO). Prerequisites You have installed the OpenShift CLI ( oc ). Procedure Create a new YAML file, such as setsebool-container-use-devices.yaml , with the following details: apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: worker name: 99-worker-setsebool spec: config: ignition: version: 3.2.0 systemd: units: - enabled: true name: setsebool.service contents: | [Unit] Description=Set SELinux boolean for the TAP CNI plugin Before=kubelet.service [Service] Type=oneshot ExecStart=/usr/sbin/setsebool container_use_devices=on RemainAfterExit=true [Install] WantedBy=multi-user.target graphical.target Create the new MachineConfig object by running the following command: USD oc apply -f setsebool-container-use-devices.yaml Note Applying any changes to the MachineConfig object causes all affected nodes to gracefully reboot after the change is applied. This update can take some time to be applied. Verify the change is applied by running the following command: USD oc get machineconfigpools Expected output NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE master rendered-master-e5e0c8e8be9194e7c5a882e047379cfa True False False 3 3 3 0 7d2h worker rendered-worker-d6c9ca107fba6cd76cdcbfcedcafa0f2 True False False 3 3 3 0 7d Note All nodes should be in the updated and ready state. Additional resources For more information about enabling an SELinux boolean on a node, see Setting SELinux booleans 24.2.4.7. Configuration for an OVN-Kubernetes additional network The Red Hat OpenShift Networking OVN-Kubernetes network plugin allows the configuration of secondary network interfaces for pods. To configure secondary network interfaces, you must define the configurations in the NetworkAttachmentDefinition custom resource definition (CRD). Note Pod and multi-network policy creation might remain in a pending state until the OVN-Kubernetes control plane agent in the nodes processes the associated network-attachment-definition CRD. You can configure an OVN-Kubernetes additional network in either layer 2 or localnet topologies. A layer 2 topology supports east-west cluster traffic, but does not allow access to the underlying physical network. A localnet topology allows connections to the physical network, but requires additional configuration of the underlying Open vSwitch (OVS) bridge on cluster nodes.
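Regardless of the topology, the ovn-k8s-cni-overlay configuration is embedded in the spec.config field of a NetworkAttachmentDefinition CRD. The following is a minimal sketch, assuming a namespace named ns1 and a network named l2-network ; the netAttachDefName value must stay in step with the CRD metadata, and the subnets field is omitted here, so pod IP addresses would have to be configured manually:

apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: l2-network       # illustrative name
  namespace: ns1         # illustrative namespace
spec:
  config: |-
    {
      "cniVersion": "0.3.1",
      "name": "l2-network",
      "type": "ovn-k8s-cni-overlay",
      "topology": "layer2",
      "netAttachDefName": "ns1/l2-network"
    }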
The following sections provide example configurations for each of the topologies that OVN-Kubernetes currently allows for secondary networks. Note Networks names must be unique. For example, creating multiple NetworkAttachmentDefinition CRDs with different configurations that reference the same network is unsupported. 24.2.4.7.1. Supported platforms for OVN-Kubernetes additional network You can use an OVN-Kubernetes additional network with the following supported platforms: Bare metal IBM Power(R) IBM Z(R) IBM(R) LinuxONE VMware vSphere Red Hat OpenStack Platform (RHOSP) 24.2.4.7.2. OVN-Kubernetes network plugin JSON configuration table The following table describes the configuration parameters for the OVN-Kubernetes CNI network plugin: Table 24.8. OVN-Kubernetes network plugin JSON configuration table Field Type Description cniVersion string The CNI specification version. The required value is 0.3.1 . name string The name of the network. These networks are not namespaced. For example, you can have a network named l2-network referenced from two different NetworkAttachmentDefinition CRDs that exist on two different namespaces. This ensures that pods making use of the NetworkAttachmentDefinition CRD on their own different namespaces can communicate over the same secondary network. However, those two different NetworkAttachmentDefinition CRDs must also share the same network specific parameters such as topology , subnets , mtu , and excludeSubnets . type string The name of the CNI plugin to configure. This value must be set to ovn-k8s-cni-overlay . topology string The topological configuration for the network. Must be one of layer2 or localnet . subnets string The subnet to use for the network across the cluster. For "topology":"layer2" deployments, IPv6 ( 2001:DBB::/64 ) and dual-stack ( 192.168.100.0/24,2001:DBB::/64 ) subnets are supported. When omitted, the logical switch implementing the network only provides layer 2 communication, and users must configure IP addresses for the pods. Port security only prevents MAC spoofing. mtu string The maximum transmission unit (MTU). The default value, 1300 , is automatically set by the kernel. netAttachDefName string The metadata namespace and name of the network attachment definition CRD where this configuration is included. For example, if this configuration is defined in a NetworkAttachmentDefinition CRD in namespace ns1 named l2-network , this should be set to ns1/l2-network . excludeSubnets string A comma-separated list of CIDRs and IP addresses. IP addresses are removed from the assignable IP address pool and are never passed to the pods. vlanID integer If topology is set to localnet , the specified VLAN tag is assigned to traffic from this additional network. The default is to not assign a VLAN tag. 24.2.4.7.3. Compatibility with multi-network policy The multi-network policy API, which is provided by the MultiNetworkPolicy custom resource definition (CRD) in the k8s.cni.cncf.io API group, is compatible with an OVN-Kubernetes secondary network. When defining a network policy, the network policy rules that can be used depend on whether the OVN-Kubernetes secondary network defines the subnets field. Refer to the following table for details: Table 24.9. 
Supported multi-network policy selectors based on subnets CNI configuration subnets field specified Allowed multi-network policy selectors Yes podSelector and namespaceSelector ipBlock No ipBlock For example, the following multi-network policy is valid only if the subnets field is defined in the additional network CNI configuration for the additional network named blue2 : Example multi-network policy that uses a pod selector apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: allow-same-namespace annotations: k8s.v1.cni.cncf.io/policy-for: blue2 spec: podSelector: ingress: - from: - podSelector: {} The following example uses the ipBlock network policy selector, which is always valid for an OVN-Kubernetes additional network: Example multi-network policy that uses an IP block selector apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: ingress-ipblock annotations: k8s.v1.cni.cncf.io/policy-for: default/flatl2net spec: podSelector: matchLabels: name: access-control policyTypes: - Ingress ingress: - from: - ipBlock: cidr: 10.200.0.0/30 24.2.4.7.4. Configuration for a layer 2 switched topology The switched (layer 2) topology networks interconnect the workloads through a cluster-wide logical switch. This configuration can be used for IPv6 and dual-stack deployments. Note Layer 2 switched topology networks only allow for the transfer of data packets between pods within a cluster. The following JSON example configures a switched secondary network: { "cniVersion": "0.3.1", "name": "l2-network", "type": "ovn-k8s-cni-overlay", "topology":"layer2", "subnets": "10.100.200.0/24", "mtu": 1300, "netAttachDefName": "ns1/l2-network", "excludeSubnets": "10.100.200.0/29" } 24.2.4.7.5. Configuration for a localnet topology The switched localnet topology interconnects the workloads created as Network Attachment Definitions (NADs) through a cluster-wide logical switch to a physical network. 24.2.4.7.5.1. Prerequisites for configuring OVN-Kubernetes additional network The NMState Operator is installed. For more information, see About the Kubernetes NMState Operator . 24.2.4.7.5.2. Configuration for an OVN-Kubernetes additional network mapping You must map an additional network to the OVN bridge to use it as an OVN-Kubernetes additional network. Bridge mappings allow network traffic to reach the physical network. A bridge mapping associates a physical network name, also known as an interface label, to a bridge created with Open vSwitch (OVS). You can create an NodeNetworkConfigurationPolicy object, part of the nmstate.io/v1 API group, to declaratively create the mapping. This API is provided by the NMState Operator. By using this API you can apply the bridge mapping to nodes that match your specified nodeSelector expression, such as node-role.kubernetes.io/worker: '' . When attaching an additional network, you can either use the existing br-ex bridge or create a new bridge. Which approach to use depends on your specific network infrastructure. If your nodes include only a single network interface, you must use the existing bridge. This network interface is owned and managed by OVN-Kubernetes and you must not remove it from the br-ex bridge or alter the interface configuration. If you remove or alter the network interface, your cluster network will stop working correctly. If your nodes include several network interfaces, you can attach a different network interface to a new bridge, and use that for your additional network. 
This approach provides for traffic isolation from your primary cluster network. The localnet1 network is mapped to the br-ex bridge in the following example: Example mapping for sharing a bridge apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: mapping 1 spec: nodeSelector: node-role.kubernetes.io/worker: '' 2 desiredState: ovn: bridge-mappings: - localnet: localnet1 3 bridge: br-ex 4 state: present 5 1 The name for the configuration object. 2 A node selector that specifies the nodes to apply the node network configuration policy to. 3 The name for the additional network from which traffic is forwarded to the OVS bridge. This additional network must match the name of the spec.config.name field of the NetworkAttachmentDefinition CRD that defines the OVN-Kubernetes additional network. 4 The name of the OVS bridge on the node. This value is required only if you specify state: present . 5 The state for the mapping. Must be either present to add the bridge or absent to remove the bridge. The default value is present . In the following example, the localnet2 network interface is attached to the ovs-br1 bridge. Through this attachment, the network interface is available to the OVN-Kubernetes network plugin as an additional network. Example mapping for nodes with multiple interfaces apiVersion: nmstate.io/v1 kind: NodeNetworkConfigurationPolicy metadata: name: ovs-br1-multiple-networks 1 spec: nodeSelector: node-role.kubernetes.io/worker: '' 2 desiredState: interfaces: - name: ovs-br1 3 description: |- A dedicated OVS bridge with eth1 as a port allowing all VLANs and untagged traffic type: ovs-bridge state: up bridge: allow-extra-patch-ports: true options: stp: false port: - name: eth1 4 ovn: bridge-mappings: - localnet: localnet2 5 bridge: ovs-br1 6 state: present 7 1 The name for the configuration object. 2 A node selector that specifies the nodes to apply the node network configuration policy to. 3 A new OVS bridge, separate from the default bridge used by OVN-Kubernetes for all cluster traffic. 4 A network device on the host system to associate with this new OVS bridge. 5 The name for the additional network from which traffic is forwarded to the OVS bridge. This additional network must match the name of the spec.config.name field of the NetworkAttachmentDefinition CRD that defines the OVN-Kubernetes additional network. 6 The name of the OVS bridge on the node. This value is required only if you specify state: present . 7 The state for the mapping. Must be either present to add the bridge or absent to remove the bridge. The default value is present . This declarative approach is recommended because the NMState Operator applies additional network configuration to all nodes specified by the node selector automatically and transparently. The following JSON example configures a localnet secondary network: { "cniVersion": "0.3.1", "name": "ns1-localnet-network", "type": "ovn-k8s-cni-overlay", "topology":"localnet", "subnets": "202.10.130.112/28", "vlanID": 33, "mtu": 1500, "netAttachDefName": "ns1/localnet-network", "excludeSubnets": "10.100.200.0/29" } 24.2.4.7.6. Configuring pods for additional networks You must specify the secondary network attachments through the k8s.v1.cni.cncf.io/networks annotation. The following example provisions a pod with two secondary attachments, one for each of the attachment configurations presented in this guide.
apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/networks: l2-network name: tinypod namespace: ns1 spec: containers: - args: - pause image: k8s.gcr.io/e2e-test-images/agnhost:2.36 imagePullPolicy: IfNotPresent name: agnhost-container 24.2.4.7.7. Configuring pods with a static IP address The following example provisions a pod with a static IP address. Note You can only specify the IP address for a pod's secondary network attachment for layer 2 attachments. Specifying a static IP address for the pod is only possible when the attachment configuration does not feature subnets. apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/networks: '[ { "name": "l2-network", 1 "mac": "02:03:04:05:06:07", 2 "interface": "myiface1", 3 "ips": [ "192.0.2.20/24" ] 4 } ]' name: tinypod namespace: ns1 spec: containers: - args: - pause image: k8s.gcr.io/e2e-test-images/agnhost:2.36 imagePullPolicy: IfNotPresent name: agnhost-container 1 The name of the network. This value must be unique across all NetworkAttachmentDefinition CRDs. 2 The MAC address to be assigned for the interface. 3 The name of the network interface to be created for the pod. 4 The IP addresses to be assigned to the network interface. 24.2.5. Configuration of IP address assignment for an additional network The IP address management (IPAM) Container Network Interface (CNI) plugin provides IP addresses for other CNI plugins. You can use the following IP address assignment types: Static assignment. Dynamic assignment through a DHCP server. The DHCP server you specify must be reachable from the additional network. Dynamic assignment through the Whereabouts IPAM CNI plugin. 24.2.5.1. Static IP address assignment configuration The following table describes the configuration for static IP address assignment: Table 24.10. ipam static configuration object Field Type Description type string The IPAM address type. The value static is required. addresses array An array of objects specifying IP addresses to assign to the virtual interface. Both IPv4 and IPv6 IP addresses are supported. routes array An array of objects specifying routes to configure inside the pod. dns array Optional: An array of objects specifying the DNS configuration. The addresses array requires objects with the following fields: Table 24.11. ipam.addresses[] array Field Type Description address string An IP address and network prefix that you specify. For example, if you specify 10.10.21.10/24 , then the additional network is assigned an IP address of 10.10.21.10 and the netmask is 255.255.255.0 . gateway string The default gateway to route egress network traffic to. Table 24.12. ipam.routes[] array Field Type Description dst string The IP address range in CIDR format, such as 192.168.17.0/24 or 0.0.0.0/0 for the default route. gw string The gateway where network traffic is routed. Table 24.13. ipam.dns object Field Type Description nameservers array An array of one or more IP addresses to send DNS queries to. domain array The default domain to append to a hostname. For example, if the domain is set to example.com , a DNS lookup query for example-host is rewritten as example-host.example.com . search array An array of domain names to append to an unqualified hostname, such as example-host , during a DNS lookup query. Static IP address assignment configuration example { "ipam": { "type": "static", "addresses": [ { "address": "191.168.1.7/24" } ] } }
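The example above assigns only an address. As a sketch of how the optional routes and dns fields described in the preceding tables fit together, the following configuration adds a default route and DNS settings; the addresses, gateway, and name server values are placeholders:

{
  "ipam": {
    "type": "static",
    "addresses": [
      {
        "address": "192.168.10.10/24",
        "gateway": "192.168.10.1"
      }
    ],
    "routes": [
      { "dst": "0.0.0.0/0", "gw": "192.168.10.1" }
    ],
    "dns": {
      "nameservers": ["192.168.10.53"],
      "domain": "example.com",
      "search": ["example.com"]
    }
  }
}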
24.2.5.2. Dynamic IP address (DHCP) assignment configuration The following JSON describes the configuration for dynamic IP address assignment with DHCP. Renewal of DHCP leases A pod obtains its original DHCP lease when it is created. The lease must be periodically renewed by a minimal DHCP server deployment running on the cluster. To trigger the deployment of the DHCP server, you must create a shim network attachment by editing the Cluster Network Operator configuration, as in the following example: Example shim network attachment definition apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: additionalNetworks: - name: dhcp-shim namespace: default type: Raw rawCNIConfig: |- { "name": "dhcp-shim", "cniVersion": "0.3.1", "type": "bridge", "ipam": { "type": "dhcp" } } # ... Table 24.14. ipam DHCP configuration object Field Type Description type string The IPAM address type. The value dhcp is required. Dynamic IP address (DHCP) assignment configuration example { "ipam": { "type": "dhcp" } } 24.2.5.3. Dynamic IP address assignment configuration with Whereabouts The Whereabouts CNI plugin allows the dynamic assignment of an IP address to an additional network without the use of a DHCP server. The following table describes the configuration for dynamic IP address assignment with Whereabouts: Table 24.15. ipam whereabouts configuration object Field Type Description type string The IPAM address type. The value whereabouts is required. range string An IP address and range in CIDR notation. IP addresses are assigned from within this range of addresses. exclude array Optional: A list of zero or more IP addresses and ranges in CIDR notation. IP addresses within an excluded address range are not assigned. Dynamic IP address assignment configuration example that uses Whereabouts { "ipam": { "type": "whereabouts", "range": "192.0.2.192/27", "exclude": [ "192.0.2.192/30", "192.0.2.196/32" ] } } 24.2.5.4. Creating a whereabouts-reconciler daemon set The Whereabouts reconciler is responsible for managing dynamic IP address assignments for the pods within a cluster by using the Whereabouts IP Address Management (IPAM) solution. It ensures that each pod gets a unique IP address from the specified IP address range. It also handles IP address releases when pods are deleted or scaled down. Note You can also use a NetworkAttachmentDefinition custom resource definition (CRD) for dynamic IP address assignment. The whereabouts-reconciler daemon set is automatically created when you configure an additional network through the Cluster Network Operator. It is not automatically created when you configure an additional network from a YAML manifest. To trigger the deployment of the whereabouts-reconciler daemon set, you must manually create a whereabouts-shim network attachment by editing the Cluster Network Operator custom resource (CR) file. Use the following procedure to deploy the whereabouts-reconciler daemon set. Procedure Edit the Network.operator.openshift.io custom resource (CR) by running the following command: USD oc edit network.operator.openshift.io cluster Include the additionalNetworks section shown in this example YAML extract within the spec definition of the custom resource (CR): apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster # ... spec: additionalNetworks: - name: whereabouts-shim namespace: default rawCNIConfig: |- { "name": "whereabouts-shim", "cniVersion": "0.3.1", "type": "bridge", "ipam": { "type": "whereabouts" } } type: Raw # ...
Save the file and exit the text editor. Verify that the whereabouts-reconciler daemon set deployed successfully by running the following command: USD oc get all -n openshift-multus | grep whereabouts-reconciler Example output pod/whereabouts-reconciler-jnp6g 1/1 Running 0 6s pod/whereabouts-reconciler-k76gg 1/1 Running 0 6s pod/whereabouts-reconciler-k86t9 1/1 Running 0 6s pod/whereabouts-reconciler-p4sxw 1/1 Running 0 6s pod/whereabouts-reconciler-rvfdv 1/1 Running 0 6s pod/whereabouts-reconciler-svzw9 1/1 Running 0 6s daemonset.apps/whereabouts-reconciler 6 6 6 6 6 kubernetes.io/os=linux 6s 24.2.5.5. Configuring the Whereabouts IP reconciler schedule The Whereabouts IPAM CNI plugin runs the IP reconciler daily. This process cleans up any stranded IP allocations that might result in exhausting IPs and therefore prevent new pods from getting an IP allocated to them. Use this procedure to change the frequency at which the IP reconciler runs. Prerequisites You installed the OpenShift CLI ( oc ). You have access to the cluster as a user with the cluster-admin role. You have deployed the whereabouts-reconciler daemon set, and the whereabouts-reconciler pods are up and running. Procedure Run the following command to create a ConfigMap object named whereabouts-config in the openshift-multus namespace with a specific cron expression for the IP reconciler: USD oc create configmap whereabouts-config -n openshift-multus --from-literal=reconciler_cron_expression="*/15 * * * *" This cron expression indicates the IP reconciler runs every 15 minutes. Adjust the expression based on your specific requirements. Note The whereabouts-reconciler daemon set can only consume a cron expression pattern that includes five asterisks. The sixth, which is used to denote seconds, is currently not supported. Retrieve information about resources related to the whereabouts-reconciler daemon set and pods within the openshift-multus namespace by running the following command: USD oc get all -n openshift-multus | grep whereabouts-reconciler Example output pod/whereabouts-reconciler-2p7hw 1/1 Running 0 4m14s pod/whereabouts-reconciler-76jk7 1/1 Running 0 4m14s pod/whereabouts-reconciler-94zw6 1/1 Running 0 4m14s pod/whereabouts-reconciler-mfh68 1/1 Running 0 4m14s pod/whereabouts-reconciler-pgshz 1/1 Running 0 4m14s pod/whereabouts-reconciler-xn5xz 1/1 Running 0 4m14s daemonset.apps/whereabouts-reconciler 6 6 6 6 6 kubernetes.io/os=linux 4m16s Run the following command to verify that the whereabouts-reconciler pod runs the IP reconciler with the configured interval: USD oc -n openshift-multus logs whereabouts-reconciler-2p7hw Example output 2024-02-02T16:33:54Z [debug] event not relevant: "/cron-schedule/..2024_02_02_16_33_54.1375928161": CREATE 2024-02-02T16:33:54Z [debug] event not relevant: "/cron-schedule/..2024_02_02_16_33_54.1375928161": CHMOD 2024-02-02T16:33:54Z [debug] event not relevant: "/cron-schedule/..data_tmp": RENAME 2024-02-02T16:33:54Z [verbose] using expression: */15 * * * * 2024-02-02T16:33:54Z [verbose] configuration updated to file "/cron-schedule/..data". 
New cron expression: */15 * * * * 2024-02-02T16:33:54Z [verbose] successfully updated CRON configuration id "00c2d1c9-631d-403f-bb86-73ad104a6817" - new cron expression: */15 * * * * 2024-02-02T16:33:54Z [debug] event not relevant: "/cron-schedule/config": CREATE 2024-02-02T16:33:54Z [debug] event not relevant: "/cron-schedule/..2024_02_02_16_26_17.3874177937": REMOVE 2024-02-02T16:45:00Z [verbose] starting reconciler run 2024-02-02T16:45:00Z [debug] NewReconcileLooper - inferred connection data 2024-02-02T16:45:00Z [debug] listing IP pools 2024-02-02T16:45:00Z [debug] no IP addresses to cleanup 2024-02-02T16:45:00Z [verbose] reconciler success 24.2.5.6. Creating a configuration for assignment of dual-stack IP addresses dynamically Dual-stack IP address assignment can be configured with the ipRanges parameter for: IPv4 addresses IPv6 addresses multiple IP address assignment Procedure Set type to whereabouts . Use ipRanges to allocate IP addresses as shown in the following example: apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: additionalNetworks: - name: whereabouts-shim namespace: default type: Raw rawCNIConfig: |- { "name": "whereabouts-dual-stack", "cniVersion": "0.3.1", "type": "bridge", "ipam": { "type": "whereabouts", "ipRanges": [ {"range": "192.168.10.0/24"}, {"range": "2001:db8::/64"} ] } } Attach network to a pod. For more information, see "Adding a pod to an additional network". Verify that all IP addresses are assigned. Run the following command to ensure the IP addresses are assigned as metadata. USD oc exec -it mypod -- ip a Additional resources Attaching a pod to an additional network 24.2.6. Creating an additional network attachment with the Cluster Network Operator The Cluster Network Operator (CNO) manages additional network definitions. When you specify an additional network to create, the CNO creates the NetworkAttachmentDefinition CRD automatically. Important Do not edit the NetworkAttachmentDefinition CRDs that the Cluster Network Operator manages. Doing so might disrupt network traffic on your additional network. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Optional: Create the namespace for the additional networks: USD oc create namespace <namespace_name> To edit the CNO configuration, enter the following command: USD oc edit networks.operator.openshift.io cluster Modify the CR that you are creating by adding the configuration for the additional network that you are creating, as in the following example CR. apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: # ... additionalNetworks: - name: tertiary-net namespace: namespace2 type: Raw rawCNIConfig: |- { "cniVersion": "0.3.1", "name": "tertiary-net", "type": "ipvlan", "master": "eth1", "mode": "l2", "ipam": { "type": "static", "addresses": [ { "address": "192.168.1.23/24" } ] } } Save your changes and quit the text editor to commit your changes. Verification Confirm that the CNO created the NetworkAttachmentDefinition CRD by running the following command. There might be a delay before the CNO creates the CRD. USD oc get network-attachment-definitions -n <namespace> where: <namespace> Specifies the namespace for the network attachment that you added to the CNO configuration. Example output NAME AGE test-network-1 14m
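Optional: To review the CNI configuration that the CNO rendered into the CRD, you can print the full object. This is a sketch that reuses the same placeholders as the preceding command:

$ oc get network-attachment-definitions <name> -n <namespace> -o yaml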
24.2.7. Creating an additional network attachment by applying a YAML manifest Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create a YAML file with your additional network configuration, such as in the following example: apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: next-net spec: config: |- { "cniVersion": "0.3.1", "name": "work-network", "type": "host-device", "device": "eth1", "ipam": { "type": "dhcp" } } To create the additional network, enter the following command: USD oc apply -f <file>.yaml where: <file> Specifies the name of the file containing the YAML manifest. 24.2.8. About configuring the master interface in the container network namespace You can create a MAC-VLAN, an IP-VLAN, or a VLAN subinterface that is based on a master interface that exists in a container namespace. You can also create a master interface as part of the pod network configuration in a separate network attachment definition CRD. To use a container namespace master interface, you must specify true for the linkInContainer parameter that exists in the subinterface configuration of the NetworkAttachmentDefinition CRD. 24.2.8.1. Creating multiple VLANs on SR-IOV VFs An example use case for utilizing this feature is to create multiple VLANs based on SR-IOV VFs. To do so, begin by creating an SR-IOV network and then define the network attachments for the VLAN interfaces. The following example shows how to configure the setup illustrated in this diagram. Figure 24.1. Creating VLANs Prerequisites You installed the OpenShift CLI ( oc ). You have access to the cluster as a user with the cluster-admin role. You have installed the SR-IOV Network Operator. Procedure Create a dedicated container namespace where you want to deploy your pod by using the following command: USD oc new-project test-namespace Create an SR-IOV node policy: Create an SriovNetworkNodePolicy object, and then save the YAML in the sriov-node-network-policy.yaml file: apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetworkNodePolicy metadata: name: sriovnic namespace: openshift-sriov-network-operator spec: deviceType: netdevice isRdma: false needVhostNet: true nicSelector: vendor: "15b3" 1 deviceID: "101b" 2 rootDevices: ["00:05.0"] numVfs: 10 priority: 99 resourceName: sriovnic nodeSelector: feature.node.kubernetes.io/network-sriov.capable: "true" Note The SR-IOV network node policy configuration example, with the setting deviceType: netdevice , is tailored specifically for Mellanox Network Interface Cards (NICs). 1 The vendor hexadecimal code of the SR-IOV network device. The value 15b3 is associated with a Mellanox NIC. 2 The device hexadecimal code of the SR-IOV network device. Apply the YAML by running the following command: USD oc apply -f sriov-node-network-policy.yaml Note Applying this might take some time due to the node requiring a reboot. Create an SR-IOV network: Create the SriovNetwork custom resource (CR) for the additional SR-IOV network attachment as in the following example CR.
Save the YAML as the file sriov-network-attachment.yaml : apiVersion: sriovnetwork.openshift.io/v1 kind: SriovNetwork metadata: name: sriov-network namespace: openshift-sriov-network-operator spec: networkNamespace: test-namespace resourceName: sriovnic spoofChk: "off" trust: "on" Apply the YAML by running the following command: USD oc apply -f sriov-network-attachment.yaml Create the VLAN additional network: Using the following YAML example, create a file named vlan100-additional-network-configuration.yaml : apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: vlan-100 namespace: test-namespace spec: config: | { "cniVersion": "0.4.0", "name": "vlan-100", "plugins": [ { "type": "vlan", "master": "ext0", 1 "mtu": 1500, "vlanId": 100, "linkInContainer": true, 2 "ipam": {"type": "whereabouts", "ipRanges": [{"range": "1.1.1.0/24"}]} } ] } 1 The VLAN configuration needs to specify the master name. This can be configured in the pod networks annotation. 2 The linkInContainer parameter must be specified. Apply the YAML file by running the following command: USD oc apply -f vlan100-additional-network-configuration.yaml Create a pod definition by using the earlier specified networks: Using the following YAML example, create a file named pod-a.yaml file: Note The manifest below includes 2 resources: Namespace with security labels Pod definition with appropriate network annotation apiVersion: v1 kind: Namespace metadata: name: test-namespace labels: pod-security.kubernetes.io/enforce: privileged pod-security.kubernetes.io/audit: privileged pod-security.kubernetes.io/warn: privileged security.openshift.io/scc.podSecurityLabelSync: "false" --- apiVersion: v1 kind: Pod metadata: name: nginx-pod namespace: test-namespace annotations: k8s.v1.cni.cncf.io/networks: '[ { "name": "sriov-network", "namespace": "test-namespace", "interface": "ext0" 1 }, { "name": "vlan-100", "namespace": "test-namespace", "interface": "ext0.100" } ]' spec: securityContext: runAsNonRoot: true containers: - name: nginx-container image: nginxinc/nginx-unprivileged:latest securityContext: allowPrivilegeEscalation: false capabilities: drop: ["ALL"] ports: - containerPort: 80 seccompProfile: type: "RuntimeDefault" 1 The name to be used as the master for the VLAN interface. Apply the YAML file by running the following command: USD oc apply -f pod-a.yaml Get detailed information about the nginx-pod within the test-namespace by running the following command: USD oc describe pods nginx-pod -n test-namespace Example output Name: nginx-pod Namespace: test-namespace Priority: 0 Node: worker-1/10.46.186.105 Start Time: Mon, 14 Aug 2023 16:23:13 -0400 Labels: <none> Annotations: k8s.ovn.org/pod-networks: {"default":{"ip_addresses":["10.131.0.26/23"],"mac_address":"0a:58:0a:83:00:1a","gateway_ips":["10.131.0.1"],"routes":[{"dest":"10.128.0.0... k8s.v1.cni.cncf.io/network-status: [{ "name": "ovn-kubernetes", "interface": "eth0", "ips": [ "10.131.0.26" ], "mac": "0a:58:0a:83:00:1a", "default": true, "dns": {} },{ "name": "test-namespace/sriov-network", "interface": "ext0", "mac": "6e:a7:5e:3f:49:1b", "dns": {}, "device-info": { "type": "pci", "version": "1.0.0", "pci": { "pci-address": "0000:d8:00.2" } } },{ "name": "test-namespace/vlan-100", "interface": "ext0.100", "ips": [ "1.1.1.1" ], "mac": "6e:a7:5e:3f:49:1b", "dns": {} }] k8s.v1.cni.cncf.io/networks: [ { "name": "sriov-network", "namespace": "test-namespace", "interface": "ext0" }, { "name": "vlan-100", "namespace": "test-namespace", "i... 
openshift.io/scc: privileged Status: Running IP: 10.131.0.26 IPs: IP: 10.131.0.26 24.2.8.2. Creating a subinterface based on a bridge master interface in a container namespace You can create a subinterface based on a bridge master interface that exists in a container namespace. Creating a subinterface can be applied to other types of interfaces. Prerequisites You have installed the OpenShift CLI ( oc ). You are logged in to the OpenShift Container Platform cluster as a user with cluster-admin privileges. Procedure Create a dedicated container namespace where you want to deploy your pod by entering the following command: USD oc new-project test-namespace Using the following YAML example, create a bridge NetworkAttachmentDefinition custom resource definition (CRD) file named bridge-nad.yaml : apiVersion: "k8s.cni.cncf.io/v1" kind: NetworkAttachmentDefinition metadata: name: bridge-network spec: config: '{ "cniVersion": "0.4.0", "name": "bridge-network", "type": "bridge", "bridge": "br-001", "isGateway": true, "ipMasq": true, "hairpinMode": true, "ipam": { "type": "host-local", "subnet": "10.0.0.0/24", "routes": [{"dst": "0.0.0.0/0"}] } }' Run the following command to apply the NetworkAttachmentDefinition CRD to your OpenShift Container Platform cluster: USD oc apply -f bridge-nad.yaml Verify that you successfully created a NetworkAttachmentDefinition CRD by entering the following command: USD oc get network-attachment-definitions Example output NAME AGE bridge-network 15s Using the following YAML example, create a file named ipvlan-additional-network-configuration.yaml for the IPVLAN additional network configuration: apiVersion: k8s.cni.cncf.io/v1 kind: NetworkAttachmentDefinition metadata: name: ipvlan-net namespace: test-namespace spec: config: '{ "cniVersion": "0.3.1", "name": "ipvlan-net", "type": "ipvlan", "master": "ext0", 1 "mode": "l3", "linkInContainer": true, 2 "ipam": {"type": "whereabouts", "ipRanges": [{"range": "10.0.0.0/24"}]} }' 1 Specifies the ethernet interface to associate with the network attachment. This is subsequently configured in the pod networks annotation. 2 Specifies that the master interface is in the container network namespace. Apply the YAML file by running the following command: USD oc apply -f ipvlan-additional-network-configuration.yaml Verify that the NetworkAttachmentDefinition CRD has been created successfully by running the following command: USD oc get network-attachment-definitions Example output NAME AGE bridge-network 87s ipvlan-net 9s Using the following YAML example, create a file named pod-a.yaml for the pod definition: apiVersion: v1 kind: Pod metadata: name: pod-a namespace: test-namespace annotations: k8s.v1.cni.cncf.io/networks: '[ { "name": "bridge-network", "interface": "ext0" 1 }, { "name": "ipvlan-net", "interface": "ext1" } ]' spec: securityContext: runAsNonRoot: true seccompProfile: type: RuntimeDefault containers: - name: test-pod image: quay.io/openshifttest/hello-sdn@sha256:c89445416459e7adea9a5a416b3365ed3d74f2491beb904d61dc8d1eb89a72a4 securityContext: allowPrivilegeEscalation: false capabilities: drop: [ALL] 1 Specifies the name to be used as the master for the IPVLAN interface. 
Apply the YAML file by running the following command: USD oc apply -f pod-a.yaml Verify that the pod is running by using the following command: USD oc get pod -n test-namespace Example output NAME READY STATUS RESTARTS AGE pod-a 1/1 Running 0 2m36s Show network interface information about the pod-a resource within the test-namespace by running the following command: USD oc exec -n test-namespace pod-a -- ip a Example output 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 3: eth0@if105: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1400 qdisc noqueue state UP group default link/ether 0a:58:0a:d9:00:5d brd ff:ff:ff:ff:ff:ff link-netnsid 0 inet 10.217.0.93/23 brd 10.217.1.255 scope global eth0 valid_lft forever preferred_lft forever inet6 fe80::488b:91ff:fe84:a94b/64 scope link valid_lft forever preferred_lft forever 4: ext0@if107: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default link/ether be:da:bd:7e:f4:37 brd ff:ff:ff:ff:ff:ff link-netnsid 0 inet 10.0.0.2/24 brd 10.0.0.255 scope global ext0 valid_lft forever preferred_lft forever inet6 fe80::bcda:bdff:fe7e:f437/64 scope link valid_lft forever preferred_lft forever 5: ext1@ext0: <BROADCAST,MULTICAST,NOARP,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN group default link/ether be:da:bd:7e:f4:37 brd ff:ff:ff:ff:ff:ff inet 10.0.0.1/24 brd 10.0.0.255 scope global ext1 valid_lft forever preferred_lft forever inet6 fe80::beda:bd00:17e:f437/64 scope link valid_lft forever preferred_lft forever This output shows that the network interface ext1 is associated with the physical interface ext0 . 24.3. About virtual routing and forwarding 24.3.1. About virtual routing and forwarding Virtual routing and forwarding (VRF) devices combined with IP rules provide the ability to create virtual routing and forwarding domains. VRF reduces the number of permissions needed by CNF, and provides increased visibility of the network topology of secondary networks. VRF is used to provide multi-tenancy functionality, for example, where each tenant has its own unique routing tables and requires different default gateways. Processes can bind a socket to the VRF device. Packets through the binded socket use the routing table associated with the VRF device. An important feature of VRF is that it impacts only OSI model layer 3 traffic and above so L2 tools, such as LLDP, are not affected. This allows higher priority IP rules such as policy based routing to take precedence over the VRF device rules directing specific traffic. 24.3.1.1. Benefits of secondary networks for pods for telecommunications operators In telecommunications use cases, each CNF can potentially be connected to multiple different networks sharing the same address space. These secondary networks can potentially conflict with the cluster's main network CIDR. Using the CNI VRF plugin, network functions can be connected to different customers' infrastructure using the same IP address, keeping different customers isolated. IP addresses are overlapped with OpenShift Container Platform IP space. The CNI VRF plugin also reduces the number of permissions needed by CNF and increases the visibility of network topologies of secondary networks. 24.4. 
Configuring multi-network policy As a cluster administrator, you can configure a multi-network policy for a Single-Root I/O Virtualization (SR-IOV), MAC Virtual Local Area Network (MacVLAN), or OVN-Kubernetes additional networks. MacVLAN additional networks are fully supported. Other types of additional networks, such as IP Virtual Local Area Network (IPVLAN), are not supported. Note Support for configuring multi-network policies for SR-IOV additional networks is only supported with kernel network interface controllers (NICs). SR-IOV is not supported for Data Plane Development Kit (DPDK) applications. 24.4.1. Differences between multi-network policy and network policy Although the MultiNetworkPolicy API implements the NetworkPolicy API, there are several important differences: You must use the MultiNetworkPolicy API: apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy You must use the multi-networkpolicy resource name when using the CLI to interact with multi-network policies. For example, you can view a multi-network policy object with the oc get multi-networkpolicy <name> command where <name> is the name of a multi-network policy. You must specify an annotation with the name of the network attachment definition that defines the macvlan or SR-IOV additional network: apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: annotations: k8s.v1.cni.cncf.io/policy-for: <network_name> where: <network_name> Specifies the name of a network attachment definition. 24.4.2. Enabling multi-network policy for the cluster As a cluster administrator, you can enable multi-network policy support on your cluster. Prerequisites Install the OpenShift CLI ( oc ). Log in to the cluster with a user with cluster-admin privileges. Procedure Create the multinetwork-enable-patch.yaml file with the following YAML: apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: useMultiNetworkPolicy: true Configure the cluster to enable multi-network policy: USD oc patch network.operator.openshift.io cluster --type=merge --patch-file=multinetwork-enable-patch.yaml Example output network.operator.openshift.io/cluster patched 24.4.3. Supporting multi-network policies in IPv6 networks The ICMPv6 Neighbor Discovery Protocol (NDP) is a set of messages and processes that enable devices to discover and maintain information about neighboring nodes. NDP plays a crucial role in IPv6 networks, facilitating the interaction between devices on the same link. The Cluster Network Operator (CNO) deploys the iptables implementation of multi-network policy when the useMultiNetworkPolicy parameter is set to true . To support multi-network policies in IPv6 networks the Cluster Network Operator deploys the following set of rules in every pod affected by a multi-network policy: Multi-network policy custom rules kind: ConfigMap apiVersion: v1 metadata: name: multi-networkpolicy-custom-rules namespace: openshift-multus data: custom-v6-rules.txt: | # accept NDP -p icmpv6 --icmpv6-type neighbor-solicitation -j ACCEPT 1 -p icmpv6 --icmpv6-type neighbor-advertisement -j ACCEPT 2 # accept RA/RS -p icmpv6 --icmpv6-type router-solicitation -j ACCEPT 3 -p icmpv6 --icmpv6-type router-advertisement -j ACCEPT 4 1 This rule allows incoming ICMPv6 neighbor solicitation messages, which are part of the neighbor discovery protocol (NDP). These messages help determine the link-layer addresses of neighboring nodes. 
2 This rule allows incoming ICMPv6 neighbor advertisement messages, which are part of NDP and provide information about the link-layer address of the sender. 3 This rule permits incoming ICMPv6 router solicitation messages. Hosts use these messages to request router configuration information. 4 This rule allows incoming ICMPv6 router advertisement messages, which give configuration information to hosts. Note You cannot edit these predefined rules. These rules collectively enable essential ICMPv6 traffic for correct network functioning, including address resolution and router communication in an IPv6 environment. With these rules in place and a multi-network policy denying traffic, applications are not expected to experience connectivity issues. 24.4.4. Working with multi-network policy As a cluster administrator, you can create, edit, view, and delete multi-network policies. 24.4.4.1. Prerequisites You have enabled multi-network policy support for your cluster. 24.4.4.2. Creating a multi-network policy using the CLI To define granular rules describing ingress or egress network traffic allowed for namespaces in your cluster, you can create a multi-network policy. Prerequisites Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin or the OpenShift SDN network plugin with mode: NetworkPolicy set. This mode is the default for OpenShift SDN. You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with cluster-admin privileges. You are working in the namespace that the multi-network policy applies to. Procedure Create a policy rule: Create a <policy_name>.yaml file: USD touch <policy_name>.yaml where: <policy_name> Specifies the multi-network policy file name. Define a multi-network policy in the file that you just created, such as in the following examples: Deny ingress from all pods in all namespaces This is a fundamental policy, blocking all cross-pod networking other than cross-pod traffic allowed by the configuration of other Network Policies. apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: deny-by-default annotations: k8s.v1.cni.cncf.io/policy-for:<namespace_name>/<network_name> spec: podSelector: {} policyTypes: - Ingress ingress: [] where: <network_name> Specifies the name of a network attachment definition. Allow ingress from all pods in the same namespace apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: allow-same-namespace annotations: k8s.v1.cni.cncf.io/policy-for: <network_name> spec: podSelector: ingress: - from: - podSelector: {} where: <network_name> Specifies the name of a network attachment definition. Allow ingress traffic to one pod from a particular namespace This policy allows traffic to pods labelled pod-a from pods running in namespace-y . apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: allow-traffic-pod annotations: k8s.v1.cni.cncf.io/policy-for: <network_name> spec: podSelector: matchLabels: pod: pod-a policyTypes: - Ingress ingress: - from: - namespaceSelector: matchLabels: kubernetes.io/metadata.name: namespace-y where: <network_name> Specifies the name of a network attachment definition. Restrict traffic to a service This policy when applied ensures every pod with both labels app=bookstore and role=api can only be accessed by pods with label app=bookstore . In this example the application could be a REST API server, marked with labels app=bookstore and role=api . 
This example addresses the following use cases: Restricting the traffic to a service to only the other microservices that need to use it. Restricting the connections to a database to only permit the application using it. apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: api-allow annotations: k8s.v1.cni.cncf.io/policy-for: <network_name> spec: podSelector: matchLabels: app: bookstore role: api ingress: - from: - podSelector: matchLabels: app: bookstore where: <network_name> Specifies the name of a network attachment definition. To create the multi-network policy object, enter the following command: USD oc apply -f <policy_name>.yaml -n <namespace> where: <policy_name> Specifies the multi-network policy file name. <namespace> Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace. Example output multinetworkpolicy.k8s.cni.cncf.io/deny-by-default created Note If you log in to the web console with cluster-admin privileges, you have a choice of creating a network policy in any namespace in the cluster directly in YAML or from a form in the web console. 24.4.4.3. Editing a multi-network policy You can edit a multi-network policy in a namespace. Prerequisites Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin or the OpenShift SDN network plugin with mode: NetworkPolicy set. This mode is the default for OpenShift SDN. You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with cluster-admin privileges. You are working in the namespace where the multi-network policy exists. Procedure Optional: To list the multi-network policy objects in a namespace, enter the following command: USD oc get multi-networkpolicy where: <namespace> Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace. Edit the multi-network policy object. If you saved the multi-network policy definition in a file, edit the file and make any necessary changes, and then enter the following command. USD oc apply -n <namespace> -f <policy_file>.yaml where: <namespace> Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace. <policy_file> Specifies the name of the file containing the network policy. If you need to update the multi-network policy object directly, enter the following command: USD oc edit multi-networkpolicy <policy_name> -n <namespace> where: <policy_name> Specifies the name of the network policy. <namespace> Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace. Confirm that the multi-network policy object is updated. USD oc describe multi-networkpolicy <policy_name> -n <namespace> where: <policy_name> Specifies the name of the multi-network policy. <namespace> Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace. Note If you log in to the web console with cluster-admin privileges, you have a choice of editing a network policy in any namespace in the cluster directly in YAML or from the policy in the web console through the Actions menu. 24.4.4.4. Viewing multi-network policies using the CLI You can examine the multi-network policies in a namespace. Prerequisites You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with cluster-admin privileges. 
You are working in the namespace where the multi-network policy exists. Procedure List multi-network policies in a namespace: To view multi-network policy objects defined in a namespace, enter the following command: USD oc get multi-networkpolicy Optional: To examine a specific multi-network policy, enter the following command: USD oc describe multi-networkpolicy <policy_name> -n <namespace> where: <policy_name> Specifies the name of the multi-network policy to inspect. <namespace> Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace. Note If you log in to the web console with cluster-admin privileges, you have a choice of viewing a network policy in any namespace in the cluster directly in YAML or from a form in the web console. 24.4.4.5. Deleting a multi-network policy using the CLI You can delete a multi-network policy in a namespace. Prerequisites Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin or the OpenShift SDN network plugin with mode: NetworkPolicy set. This mode is the default for OpenShift SDN. You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with cluster-admin privileges. You are working in the namespace where the multi-network policy exists. Procedure To delete a multi-network policy object, enter the following command: USD oc delete multi-networkpolicy <policy_name> -n <namespace> where: <policy_name> Specifies the name of the multi-network policy. <namespace> Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace. Example output multinetworkpolicy.k8s.cni.cncf.io/default-deny deleted Note If you log in to the web console with cluster-admin privileges, you have a choice of deleting a network policy in any namespace in the cluster directly in YAML or from the policy in the web console through the Actions menu. 24.4.4.6. Creating a default deny all multi-network policy This is a fundamental policy, blocking all cross-pod networking other than network traffic allowed by the configuration of other deployed network policies. This procedure enforces a default deny-by-default policy. Note If you log in with a user with the cluster-admin role, then you can create a network policy in any namespace in the cluster. Prerequisites Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin or the OpenShift SDN network plugin with mode: NetworkPolicy set. This mode is the default for OpenShift SDN. You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with cluster-admin privileges. You are working in the namespace that the multi-network policy applies to. Procedure Create the following YAML that defines a deny-by-default policy to deny ingress from all pods in all namespaces. Save the YAML in the deny-by-default.yaml file: apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: deny-by-default namespace: default 1 annotations: k8s.v1.cni.cncf.io/policy-for: <namespace_name>/<network_name> 2 spec: podSelector: {} 3 policyTypes: 4 - Ingress 5 ingress: [] 6 1 namespace: default deploys this policy to the default namespace. 2 network_name : specifies the name of a network attachment definition. 3 podSelector: is empty, this means it matches all the pods. Therefore, the policy applies to all pods in the default namespace. 
4 policyTypes: a list of the rule types that the policy relates to. 5 Specifies Ingress as the only policyType. 6 There are no ingress rules specified. This causes all incoming traffic to be dropped. Apply the policy by entering the following command: USD oc apply -f deny-by-default.yaml Example output multinetworkpolicy.k8s.cni.cncf.io/deny-by-default created 24.4.4.7. Creating a multi-network policy to allow traffic from external clients With the deny-by-default policy in place, you can proceed to configure a policy that allows traffic from external clients to a pod with the label app=web . Note If you log in with a user with the cluster-admin role, then you can create a network policy in any namespace in the cluster. Follow this procedure to configure a policy that allows an external service, either from the public Internet directly or by using a load balancer, to access the pod. Traffic is only allowed to a pod with the label app=web . Prerequisites Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin or the OpenShift SDN network plugin with mode: NetworkPolicy set. This mode is the default for OpenShift SDN. You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with cluster-admin privileges. You are working in the namespace that the multi-network policy applies to. Procedure Create a policy that allows traffic from the public Internet directly or by using a load balancer to access the pod. Save the YAML in the web-allow-external.yaml file: apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: web-allow-external namespace: default annotations: k8s.v1.cni.cncf.io/policy-for: <network_name> spec: policyTypes: - Ingress podSelector: matchLabels: app: web ingress: - {} Apply the policy by entering the following command: USD oc apply -f web-allow-external.yaml Example output multinetworkpolicy.k8s.cni.cncf.io/web-allow-external created This policy allows traffic from all resources, including external traffic. 24.4.4.8. Creating a multi-network policy allowing traffic to an application from all namespaces Note If you log in with a user with the cluster-admin role, then you can create a network policy in any namespace in the cluster. Follow this procedure to configure a policy that allows traffic from all pods in all namespaces to a particular application. Prerequisites Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin or the OpenShift SDN network plugin with mode: NetworkPolicy set. This mode is the default for OpenShift SDN. You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with cluster-admin privileges. You are working in the namespace that the multi-network policy applies to. Procedure Create a policy that allows traffic from all pods in all namespaces to a particular application. Save the YAML in the web-allow-all-namespaces.yaml file: apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: web-allow-all-namespaces namespace: default annotations: k8s.v1.cni.cncf.io/policy-for: <network_name> spec: podSelector: matchLabels: app: web 1 policyTypes: - Ingress ingress: - from: - namespaceSelector: {} 2 1 Applies the policy only to app:web pods in the default namespace. 2 Selects all pods in all namespaces.
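For contrast, here is a minimal sketch of the stricter variant that the following note describes: a policy that allows ingress on the secondary network only from pods in its own namespace. The policy name web-allow-same-namespace is a placeholder, and <network_name> again stands for your network attachment definition:

apiVersion: k8s.cni.cncf.io/v1beta1
kind: MultiNetworkPolicy
metadata:
  name: web-allow-same-namespace
  namespace: default
  annotations:
    k8s.v1.cni.cncf.io/policy-for: <network_name>
spec:
  podSelector:
    matchLabels:
      app: web
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector: {} # an empty podSelector with no namespaceSelector matches all pods, but only in the policy's own namespace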
Note By default, if you omit specifying a namespaceSelector it does not select any namespaces, which means the policy allows traffic only from the namespace the network policy is deployed to. Apply the policy by entering the following command: USD oc apply -f web-allow-all-namespaces.yaml Example output multinetworkpolicy.k8s.cni.cncf.io/web-allow-all-namespaces created Verification Start a web service in the default namespace by entering the following command: USD oc run web --namespace=default --image=nginx --labels="app=web" --expose --port=80 Run the following command to deploy an alpine image in the secondary namespace and to start a shell: USD oc run test-USDRANDOM --namespace=secondary --rm -i -t --image=alpine -- sh Run the following command in the shell and observe that the request is allowed: # wget -qO- --timeout=2 http://web.default Expected output <!DOCTYPE html> <html> <head> <title>Welcome to nginx!</title> <style> html { color-scheme: light dark; } body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; } </style> </head> <body> <h1>Welcome to nginx!</h1> <p>If you see this page, the nginx web server is successfully installed and working. Further configuration is required.</p> <p>For online documentation and support please refer to <a href="http://nginx.org/">nginx.org</a>.<br/> Commercial support is available at <a href="http://nginx.com/">nginx.com</a>.</p> <p><em>Thank you for using nginx.</em></p> </body> </html> 24.4.4.9. Creating a multi-network policy allowing traffic to an application from a namespace Note If you log in with a user with the cluster-admin role, then you can create a network policy in any namespace in the cluster. Follow this procedure to configure a policy that allows traffic to a pod with the label app=web from a particular namespace. You might want to do this to: Restrict traffic to a production database only to namespaces where production workloads are deployed. Enable monitoring tools deployed to a particular namespace to scrape metrics from the current namespace. Prerequisites Your cluster uses a network plugin that supports NetworkPolicy objects, such as the OVN-Kubernetes network plugin or the OpenShift SDN network plugin with mode: NetworkPolicy set. This mode is the default for OpenShift SDN. You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with cluster-admin privileges. You are working in the namespace that the multi-network policy applies to. Procedure Create a policy that allows traffic from all pods in a particular namespaces with a label purpose=production . Save the YAML in the web-allow-prod.yaml file: apiVersion: k8s.cni.cncf.io/v1beta1 kind: MultiNetworkPolicy metadata: name: web-allow-prod namespace: default annotations: k8s.v1.cni.cncf.io/policy-for: <network_name> spec: podSelector: matchLabels: app: web 1 policyTypes: - Ingress ingress: - from: - namespaceSelector: matchLabels: purpose: production 2 1 Applies the policy only to app:web pods in the default namespace. 2 Restricts traffic to only pods in namespaces that have the label purpose=production . 
Apply the policy by entering the following command: USD oc apply -f web-allow-prod.yaml Example output multinetworkpolicy.k8s.cni.cncf.io/web-allow-prod created Verification Start a web service in the default namespace by entering the following command: USD oc run web --namespace=default --image=nginx --labels="app=web" --expose --port=80 Run the following command to create the prod namespace: USD oc create namespace prod Run the following command to label the prod namespace: USD oc label namespace/prod purpose=production Run the following command to create the dev namespace: USD oc create namespace dev Run the following command to label the dev namespace: USD oc label namespace/dev purpose=testing Run the following command to deploy an alpine image in the dev namespace and to start a shell: USD oc run test-USDRANDOM --namespace=dev --rm -i -t --image=alpine -- sh Run the following command in the shell and observe that the request is blocked: # wget -qO- --timeout=2 http://web.default Expected output wget: download timed out Run the following command to deploy an alpine image in the prod namespace and start a shell: USD oc run test-USDRANDOM --namespace=prod --rm -i -t --image=alpine -- sh Run the following command in the shell and observe that the request is allowed: # wget -qO- --timeout=2 http://web.default Expected output <!DOCTYPE html> <html> <head> <title>Welcome to nginx!</title> <style> html { color-scheme: light dark; } body { width: 35em; margin: 0 auto; font-family: Tahoma, Verdana, Arial, sans-serif; } </style> </head> <body> <h1>Welcome to nginx!</h1> <p>If you see this page, the nginx web server is successfully installed and working. Further configuration is required.</p> <p>For online documentation and support please refer to <a href="http://nginx.org/">nginx.org</a>.<br/> Commercial support is available at <a href="http://nginx.com/">nginx.com</a>.</p> <p><em>Thank you for using nginx.</em></p> </body> </html> 24.4.5. Additional resources About network policy Understanding multiple networks Configuring a macvlan network Configuring an SR-IOV network device 24.5. Attaching a pod to an additional network As a cluster user you can attach a pod to an additional network. 24.5.1. Adding a pod to an additional network You can add a pod to an additional network. The pod continues to send normal cluster-related network traffic over the default network. When a pod is created additional networks are attached to it. However, if a pod already exists, you cannot attach additional networks to it. The pod must be in the same namespace as the additional network. Prerequisites Install the OpenShift CLI ( oc ). Log in to the cluster. Procedure Add an annotation to the Pod object. Only one of the following annotation formats can be used: To attach an additional network without any customization, add an annotation with the following format. Replace <network> with the name of the additional network to associate with the pod: metadata: annotations: k8s.v1.cni.cncf.io/networks: <network>[,<network>,...] 1 1 To specify more than one additional network, separate each network with a comma. Do not include whitespace between the comma. If you specify the same additional network multiple times, that pod will have multiple network interfaces attached to that network. 
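For example, a minimal Pod manifest that uses this plain-text annotation form might look like the following sketch. The names net1 and net2 are placeholders for NetworkAttachmentDefinition objects that already exist in the pod's namespace:

apiVersion: v1
kind: Pod
metadata:
  name: example-pod
  annotations:
    k8s.v1.cni.cncf.io/networks: net1,net2 # two additional networks, with no whitespace around the comma
spec:
  containers:
  - name: example-pod
    command: ["/bin/bash", "-c", "sleep 2000000000000"]
    image: centos/tools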
To attach an additional network with customizations, add an annotation with the following format: metadata: annotations: k8s.v1.cni.cncf.io/networks: |- [ { "name": "<network>", 1 "namespace": "<namespace>", 2 "default-route": ["<default-route>"] 3 } ] 1 Specify the name of the additional network defined by a NetworkAttachmentDefinition object. 2 Specify the namespace where the NetworkAttachmentDefinition object is defined. 3 Optional: Specify an override for the default route, such as 192.168.17.1 . To create the pod, enter the following command. Replace <name> with the name of the pod. USD oc create -f <name>.yaml Optional: To Confirm that the annotation exists in the Pod CR, enter the following command, replacing <name> with the name of the pod. USD oc get pod <name> -o yaml In the following example, the example-pod pod is attached to the net1 additional network: USD oc get pod example-pod -o yaml apiVersion: v1 kind: Pod metadata: annotations: k8s.v1.cni.cncf.io/networks: macvlan-bridge k8s.v1.cni.cncf.io/network-status: |- 1 [{ "name": "openshift-sdn", "interface": "eth0", "ips": [ "10.128.2.14" ], "default": true, "dns": {} },{ "name": "macvlan-bridge", "interface": "net1", "ips": [ "20.2.2.100" ], "mac": "22:2f:60:a5:f8:00", "dns": {} }] name: example-pod namespace: default spec: ... status: ... 1 The k8s.v1.cni.cncf.io/network-status parameter is a JSON array of objects. Each object describes the status of an additional network attached to the pod. The annotation value is stored as a plain text value. 24.5.1.1. Specifying pod-specific addressing and routing options When attaching a pod to an additional network, you may want to specify further properties about that network in a particular pod. This allows you to change some aspects of routing, as well as specify static IP addresses and MAC addresses. To accomplish this, you can use the JSON formatted annotations. Prerequisites The pod must be in the same namespace as the additional network. Install the OpenShift CLI ( oc ). You must log in to the cluster. Procedure To add a pod to an additional network while specifying addressing and/or routing options, complete the following steps: Edit the Pod resource definition. If you are editing an existing Pod resource, run the following command to edit its definition in the default editor. Replace <name> with the name of the Pod resource to edit. USD oc edit pod <name> In the Pod resource definition, add the k8s.v1.cni.cncf.io/networks parameter to the pod metadata mapping. The k8s.v1.cni.cncf.io/networks accepts a JSON string of a list of objects that reference the name of NetworkAttachmentDefinition custom resource (CR) names in addition to specifying additional properties. metadata: annotations: k8s.v1.cni.cncf.io/networks: '[<network>[,<network>,...]]' 1 1 Replace <network> with a JSON object as shown in the following examples. The single quotes are required. In the following example the annotation specifies which network attachment will have the default route, using the default-route parameter. apiVersion: v1 kind: Pod metadata: name: example-pod annotations: k8s.v1.cni.cncf.io/networks: '[ { "name": "net1" }, { "name": "net2", 1 "default-route": ["192.0.2.1"] 2 }]' spec: containers: - name: example-pod command: ["/bin/bash", "-c", "sleep 2000000000000"] image: centos/tools 1 The name key is the name of the additional network to associate with the pod. 
2 The default-route key specifies a value of a gateway for traffic to be routed over if no other routing entry is present in the routing table. If more than one default-route key is specified, this will cause the pod to fail to become active. The default route will cause any traffic that is not specified in other routes to be routed to the gateway. Important Setting the default route to an interface other than the default network interface for OpenShift Container Platform may cause traffic that is anticipated for pod-to-pod traffic to be routed over another interface. To verify the routing properties of a pod, the oc command may be used to execute the ip command within a pod. USD oc exec -it <pod_name> -- ip route Note You may also reference the pod's k8s.v1.cni.cncf.io/network-status to see which additional network has been assigned the default route, by the presence of the default-route key in the JSON-formatted list of objects. To set a static IP address or MAC address for a pod you can use the JSON formatted annotations. This requires you create networks that specifically allow for this functionality. This can be specified in a rawCNIConfig for the CNO. Edit the CNO CR by running the following command: USD oc edit networks.operator.openshift.io cluster The following YAML describes the configuration parameters for the CNO: Cluster Network Operator YAML configuration name: <name> 1 namespace: <namespace> 2 rawCNIConfig: '{ 3 ... }' type: Raw 1 Specify a name for the additional network attachment that you are creating. The name must be unique within the specified namespace . 2 Specify the namespace to create the network attachment in. If you do not specify a value, then the default namespace is used. 3 Specify the CNI plugin configuration in JSON format, which is based on the following template. The following object describes the configuration parameters for utilizing static MAC address and IP address using the macvlan CNI plugin: macvlan CNI plugin JSON configuration object using static IP and MAC address { "cniVersion": "0.3.1", "name": "<name>", 1 "plugins": [{ 2 "type": "macvlan", "capabilities": { "ips": true }, 3 "master": "eth0", 4 "mode": "bridge", "ipam": { "type": "static" } }, { "capabilities": { "mac": true }, 5 "type": "tuning" }] } 1 Specifies the name for the additional network attachment to create. The name must be unique within the specified namespace . 2 Specifies an array of CNI plugin configurations. The first object specifies a macvlan plugin configuration and the second object specifies a tuning plugin configuration. 3 Specifies that a request is made to enable the static IP address functionality of the CNI plugin runtime configuration capabilities. 4 Specifies the interface that the macvlan plugin uses. 5 Specifies that a request is made to enable the static MAC address functionality of a CNI plugin. The above network attachment can be referenced in a JSON formatted annotation, along with keys to specify which static IP and MAC address will be assigned to a given pod. Edit the pod with: USD oc edit pod <name> macvlan CNI plugin JSON configuration object using static IP and MAC address apiVersion: v1 kind: Pod metadata: name: example-pod annotations: k8s.v1.cni.cncf.io/networks: '[ { "name": "<name>", 1 "ips": [ "192.0.2.205/24" ], 2 "mac": "CA:FE:C0:FF:EE:00" 3 } ]' 1 Use the <name> as provided when creating the rawCNIConfig above. 2 Provide an IP address including the subnet mask. 3 Provide the MAC address. 
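You can also read back the pod's network-status annotation to see the addresses that were actually assigned. This is a minimal sketch that assumes the pod is named example-pod; adjust the name and add -n <namespace> as needed:

$ oc get pod example-pod -o jsonpath='{.metadata.annotations.k8s\.v1\.cni\.cncf\.io/network-status}'

The command prints the JSON array of attached networks, including the ips and mac fields for each interface.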
Note Static IP addresses and MAC addresses do not have to be used at the same time; you can use them individually or together. To verify the IP address and MAC properties of a pod with additional networks, use the oc command to execute the ip command within a pod. USD oc exec -it <pod_name> -- ip a 24.6. Removing a pod from an additional network As a cluster user, you can remove a pod from an additional network. 24.6.1. Removing a pod from an additional network You can remove a pod from an additional network only by deleting the pod. Prerequisites An additional network is attached to the pod. Install the OpenShift CLI ( oc ). Log in to the cluster. Procedure To delete the pod, enter the following command: USD oc delete pod <name> -n <namespace> <name> is the name of the pod. <namespace> is the namespace that contains the pod. 24.7. Editing an additional network As a cluster administrator, you can modify the configuration for an existing additional network. 24.7.1. Modifying an additional network attachment definition As a cluster administrator, you can make changes to an existing additional network. Any existing pods attached to the additional network are not updated. Prerequisites You have configured an additional network for your cluster. Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure To edit an additional network for your cluster, complete the following steps: Run the following command to edit the Cluster Network Operator (CNO) CR in your default text editor: USD oc edit networks.operator.openshift.io cluster In the additionalNetworks collection, update the additional network with your changes. Save your changes and quit the text editor to commit your changes. Optional: Confirm that the CNO updated the NetworkAttachmentDefinition object by running the following command. Replace <network-name> with the name of the additional network to display. There might be a delay before the CNO updates the NetworkAttachmentDefinition object to reflect your changes. USD oc get network-attachment-definitions <network-name> -o yaml For example, the following console output displays a NetworkAttachmentDefinition object that is named net1 : USD oc get network-attachment-definitions net1 -o go-template='{{printf "%s\n" .spec.config}}' { "cniVersion": "0.3.1", "type": "macvlan", "master": "ens5", "mode": "bridge", "ipam": {"type":"static","routes":[{"dst":"0.0.0.0/0","gw":"10.128.2.1"}],"addresses":[{"address":"10.128.2.100/23","gateway":"10.128.2.1"}],"dns":{"nameservers":["172.30.0.10"],"domain":"us-west-2.compute.internal","search":["us-west-2.compute.internal"]}} } 24.8. Removing an additional network As a cluster administrator, you can remove an additional network attachment. 24.8.1. Removing an additional network attachment definition As a cluster administrator, you can remove an additional network from your OpenShift Container Platform cluster. The additional network is not removed from any pods it is attached to. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure To remove an additional network from your cluster, complete the following steps: Edit the Cluster Network Operator (CNO) CR in your default text editor by running the following command: USD oc edit networks.operator.openshift.io cluster Modify the CR by removing the configuration from the additionalNetworks collection for the network attachment definition you are removing.
apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: additionalNetworks: [] 1 1 If you are removing the configuration mapping for the only additional network attachment definition in the additionalNetworks collection, you must specify an empty collection. Save your changes and quit the text editor to commit your changes. Optional: Confirm that the additional network CR was deleted by running the following command: USD oc get network-attachment-definition --all-namespaces 24.9. Assigning a secondary network to a VRF As a cluster administrator, you can configure an additional network for a virtual routing and forwarding (VRF) domain by using the CNI VRF plugin. The virtual network that this plugin creates is associated with the physical interface that you specify. Using a secondary network with a VRF instance has the following advantages: Workload isolation Isolate workload traffic by configuring a VRF instance for the additional network. Improved security Enable improved security through isolated network paths in the VRF domain. Multi-tenancy support Support multi-tenancy through network segmentation with a unique routing table in the VRF domain for each tenant. Note Applications that use VRFs must bind to a specific device. The common usage is to use the SO_BINDTODEVICE option for a socket. The SO_BINDTODEVICE option binds the socket to the device that is specified in the passed interface name, for example, eth1 . To use the SO_BINDTODEVICE option, the application must have CAP_NET_RAW capabilities. Using a VRF through the ip vrf exec command is not supported in OpenShift Container Platform pods. To use VRF, bind applications directly to the VRF interface. Additional resources About virtual routing and forwarding 24.9.1. Creating an additional network attachment with the CNI VRF plugin The Cluster Network Operator (CNO) manages additional network definitions. When you specify an additional network to create, the CNO creates the NetworkAttachmentDefinition custom resource (CR) automatically. Note Do not edit the NetworkAttachmentDefinition CRs that the Cluster Network Operator manages. Doing so might disrupt network traffic on your additional network. To create an additional network attachment with the CNI VRF plugin, perform the following procedure. Prerequisites Install the OpenShift Container Platform CLI (oc). Log in to the OpenShift cluster as a user with cluster-admin privileges. Procedure Create the Network custom resource (CR) for the additional network attachment and insert the rawCNIConfig configuration for the additional network, as in the following example CR. Save the YAML as the file additional-network-attachment.yaml . apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: additionalNetworks: - name: test-network-1 namespace: additional-network-1 type: Raw rawCNIConfig: '{ "cniVersion": "0.3.1", "name": "macvlan-vrf", "plugins": [ 1 { "type": "macvlan", "master": "eth1", "ipam": { "type": "static", "addresses": [ { "address": "191.168.1.23/24" } ] } }, { "type": "vrf", 2 "vrfname": "vrf-1", 3 "table": 1001 4 }] }' 1 plugins must be a list. The first item in the list must be the secondary network underpinning the VRF network. The second item in the list is the VRF plugin configuration. 2 type must be set to vrf . 3 vrfname is the name of the VRF that the interface is assigned to. If it does not exist in the pod, it is created. 4 Optional. table is the routing table ID. By default, the tableid parameter is used. 
If it is not specified, the CNI assigns a free routing table ID to the VRF. Note VRF functions correctly only when the resource is of type netdevice . Create the Network resource: USD oc create -f additional-network-attachment.yaml Confirm that the CNO created the NetworkAttachmentDefinition CR by running the following command. Replace <namespace> with the namespace that you specified when configuring the network attachment, for example, additional-network-1 . USD oc get network-attachment-definitions -n <namespace> Example output NAME AGE additional-network-1 14m Note There might be a delay before the CNO creates the CR. Verification Create a pod and assign it to the additional network with the VRF instance: Create a YAML file that defines the Pod resource: Example pod-additional-net.yaml file apiVersion: v1 kind: Pod metadata: name: pod-additional-net annotations: k8s.v1.cni.cncf.io/networks: '[ { "name": "test-network-1" 1 } ]' spec: containers: - name: example-pod-1 command: ["/bin/bash", "-c", "sleep 9000000"] image: centos:8 1 Specify the name of the additional network with the VRF instance. Create the Pod resource by running the following command: USD oc create -f pod-additional-net.yaml Example output pod/test-pod created Verify that the pod network attachment is connected to the VRF additional network. Start a remote session with the pod and run the following command: USD ip vrf show Example output Name Table ----------------------- vrf-1 1001 Confirm that the VRF interface is the controller for the additional interface: USD ip link Example output 5: net1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master red state UP mode
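As a further check, you can confirm that the VRF routing table is populated. The following sketch reuses the vrf-1 name and table ID 1001 from the example configuration; run it in the same remote session inside the pod:

$ ip route show vrf vrf-1
$ ip route show table 1001 # equivalent view, selected by table ID instead of VRF name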
Chapter 5. Installing a cluster on OpenStack in a restricted network
Chapter 5. Installing a cluster on OpenStack in a restricted network In OpenShift Container Platform 4.16, you can install a cluster on Red Hat OpenStack Platform (RHOSP) in a restricted network by creating an internal mirror of the installation release content. 5.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You verified that OpenShift Container Platform 4.16 is compatible with your RHOSP version by using the Supported platforms for OpenShift clusters section. You can also compare platform support across different versions by viewing the OpenShift Container Platform on RHOSP support matrix . You created a registry on your mirror host and obtained the imageContentSources data for your version of OpenShift Container Platform. Important Because the installation media is on the mirror host, you can use that computer to complete all installation steps. You understand performance and scalability practices for cluster scaling, control plane sizing, and etcd. For more information, see Recommended practices for scaling the cluster . You have the metadata service enabled in RHOSP. 5.2. About installations in restricted networks In OpenShift Container Platform 4.16, you can perform an installation that does not require an active connection to the internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster. If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Service's Route 53 DNS and IAM services, require internet access. Depending on your network, you might require less internet access for an installation on bare metal hardware, Nutanix, or on VMware vSphere. To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift image registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your closed network, or by using other methods that meet your restrictions. 5.2.1. Additional limits Clusters in restricted networks have the following additional limitations and restrictions: The ClusterVersion status includes an Unable to retrieve available updates error. By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags. 5.3. Resource guidelines for installing OpenShift Container Platform on RHOSP To support an OpenShift Container Platform installation, your Red Hat OpenStack Platform (RHOSP) quota must meet the following requirements: Table 5.1. Recommended resources for a default OpenShift Container Platform cluster on RHOSP Resource Value Floating IP addresses 3 Ports 15 Routers 1 Subnets 1 RAM 88 GB vCPUs 22 Volume storage 275 GB Instances 7 Security groups 3 Security group rules 60 Server groups 2 - plus 1 for each additional availability zone in each machine pool A cluster might function with fewer than recommended resources, but its performance is not guaranteed. Important If RHOSP object storage (Swift) is available and operated by a user account with the swiftoperator role, it is used as the default backend for the OpenShift Container Platform image registry. 
In this case, the volume storage requirement is 175 GB. Swift space requirements vary depending on the size of the image registry. Note By default, your security group and security group rule quotas might be low. If you encounter problems, run openstack quota set --secgroups 3 --secgroup-rules 60 <project> as an administrator to increase them. An OpenShift Container Platform deployment comprises control plane machines, compute machines, and a bootstrap machine. 5.3.1. Control plane machines By default, the OpenShift Container Platform installation process creates three control plane machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory and 4 vCPUs At least 100 GB storage space from the RHOSP quota 5.3.2. Compute machines By default, the OpenShift Container Platform installation process creates three compute machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 8 GB memory and 2 vCPUs At least 100 GB storage space from the RHOSP quota Tip Compute machines host the applications that you run on OpenShift Container Platform; aim to run as many as you can. 5.3.3. Bootstrap machine During installation, a bootstrap machine is temporarily provisioned to stand up the control plane. After the production control plane is ready, the bootstrap machine is deprovisioned. The bootstrap machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory and 4 vCPUs At least 100 GB storage space from the RHOSP quota 5.4. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.16, you require access to the internet to obtain the images that are necessary to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. 5.5. Enabling Swift on RHOSP Swift is operated by a user account with the swiftoperator role. Add the role to an account before you run the installation program. Important If the Red Hat OpenStack Platform (RHOSP) object storage service , commonly known as Swift, is available, OpenShift Container Platform uses it as the image registry storage. If it is unavailable, the installation program relies on the RHOSP block storage service, commonly known as Cinder. If Swift is present and you want to use it, you must enable access to it. If it is not present, or if you do not want to use it, skip this section. Important RHOSP 17 sets the rgw_max_attr_size parameter of Ceph RGW to 256 characters. This setting causes issues with uploading container images to the OpenShift Container Platform registry. You must set the value of rgw_max_attr_size to at least 1024 characters. Before installation, check if your RHOSP deployment is affected by this problem. If it is, reconfigure Ceph RGW. Prerequisites You have a RHOSP administrator account on the target environment. The Swift service is installed. On Ceph RGW , the account in url option is enabled. 
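If you are not sure whether Swift is deployed at all, a quick check from the RHOSP CLI can save time before you assign the role. This is a hedged sketch; it assumes that your clouds.yaml or OS_* environment variables already point at the target cloud:

$ openstack catalog list # look for an object-store service entry
$ openstack container list # succeeds only if an object-store (Swift) endpoint is reachable with your credentials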
Procedure To enable Swift on RHOSP: As an administrator in the RHOSP CLI, add the swiftoperator role to the account that will access Swift: USD openstack role add --user <user> --project <project> swiftoperator Your RHOSP deployment can now use Swift for the image registry. 5.6. Defining parameters for the installation program The OpenShift Container Platform installation program relies on a file that is called clouds.yaml . The file describes Red Hat OpenStack Platform (RHOSP) configuration parameters, including the project name, log in information, and authorization service URLs. Procedure Create the clouds.yaml file: If your RHOSP distribution includes the Horizon web UI, generate a clouds.yaml file in it. Important Remember to add a password to the auth field. You can also keep secrets in a separate file from clouds.yaml . If your RHOSP distribution does not include the Horizon web UI, or you do not want to use Horizon, create the file yourself. For detailed information about clouds.yaml , see Config files in the RHOSP documentation. clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: <username> password: <password> user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: <username> password: <password> project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0' If your RHOSP installation uses self-signed certificate authority (CA) certificates for endpoint authentication: Copy the certificate authority file to your machine. Add the cacerts key to the clouds.yaml file. The value must be an absolute, non-root-accessible path to the CA certificate: clouds: shiftstack: ... cacert: "/etc/pki/ca-trust/source/anchors/ca.crt.pem" Tip After you run the installer with a custom CA certificate, you can update the certificate by editing the value of the ca-cert.pem key in the cloud-provider-config keymap. On a command line, run: USD oc edit configmap -n openshift-config cloud-provider-config Place the clouds.yaml file in one of the following locations: The value of the OS_CLIENT_CONFIG_FILE environment variable The current directory A Unix-specific user configuration directory, for example ~/.config/openstack/clouds.yaml A Unix-specific site configuration directory, for example /etc/openstack/clouds.yaml The installation program searches for clouds.yaml in that order. 5.7. Setting OpenStack Cloud Controller Manager options Optionally, you can edit the OpenStack Cloud Controller Manager (CCM) configuration for your cluster. This configuration controls how OpenShift Container Platform interacts with Red Hat OpenStack Platform (RHOSP). For a complete list of configuration parameters, see the "OpenStack Cloud Controller Manager reference guide" page in the "Installing on OpenStack" documentation. Procedure If you have not already generated manifest files for your cluster, generate them by running the following command: USD openshift-install --dir <destination_directory> create manifests In a text editor, open the cloud-provider configuration manifest file. For example: USD vi openshift/manifests/cloud-provider-config.yaml Modify the options according to the CCM reference guide. Configuring Octavia for load balancing is a common case. For example: #... [LoadBalancer] lb-provider = "amphora" 1 floating-network-id="d3deb660-4190-40a3-91f1-37326fe6ec4a" 2 create-monitor = True 3 monitor-delay = 10s 4 monitor-timeout = 10s 5 monitor-max-retries = 1 6 #... 
1 This property sets the Octavia provider that your load balancer uses. It accepts "ovn" or "amphora" as values. If you choose to use OVN, you must also set lb-method to SOURCE_IP_PORT . 2 This property is required if you want to use multiple external networks with your cluster. The cloud provider creates floating IP addresses on the network that is specified here. 3 This property controls whether the cloud provider creates health monitors for Octavia load balancers. Set the value to True to create health monitors. As of RHOSP 16.2, this feature is only available for the Amphora provider. 4 This property sets the frequency with which endpoints are monitored. The value must be in the time.ParseDuration() format. This property is required if the value of the create-monitor property is True . 5 This property sets the time that monitoring requests are open before timing out. The value must be in the time.ParseDuration() format. This property is required if the value of the create-monitor property is True . 6 This property defines how many successful monitoring requests are required before a load balancer is marked as online. The value must be an integer. This property is required if the value of the create-monitor property is True . Important Prior to saving your changes, verify that the file is structured correctly. Clusters might fail if properties are not placed in the appropriate section. Important You must set the value of the create-monitor property to True if you use services that have the value of the .spec.externalTrafficPolicy property set to Local . The OVN Octavia provider in RHOSP 16.2 does not support health monitors. Therefore, services that have ETP parameter values set to Local might not respond when the lb-provider value is set to "ovn" . Save the changes to the file and proceed with installation. Tip You can update your cloud provider configuration after you run the installer. On a command line, run: USD oc edit configmap -n openshift-config cloud-provider-config After you save your changes, your cluster will take some time to reconfigure itself. The process is complete if none of your nodes have a SchedulingDisabled status. 5.8. Creating the RHCOS image for restricted network installations Download the Red Hat Enterprise Linux CoreOS (RHCOS) image to install OpenShift Container Platform on a restricted network Red Hat OpenStack Platform (RHOSP) environment. Prerequisites Obtain the OpenShift Container Platform installation program. For a restricted network installation, the program is on your mirror registry host. Procedure Log in to the Red Hat Customer Portal's Product Downloads page . Under Version , select the most recent release of OpenShift Container Platform 4.16 for RHEL 8. Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image versions that match your OpenShift Container Platform version if they are available. Download the Red Hat Enterprise Linux CoreOS (RHCOS) - OpenStack Image (QCOW) image. Decompress the image. Note You must decompress the image before the cluster can use it. The name of the downloaded file might not contain a compression extension, like .gz or .tgz . To find out if or how the file is compressed, in a command line, enter: Upload the image that you decompressed to a location that is accessible from the bastion server, like Glance. 
For example: Important Depending on your RHOSP environment, you might be able to upload the image in either .raw or .qcow2 formats . If you use Ceph, you must use the .raw format. Warning If the installation program finds multiple images with the same name, it chooses one of them at random. To avoid this behavior, create unique names for resources in RHOSP. The image is now available for a restricted installation. Note the image name or location for use in OpenShift Container Platform deployment. 5.9. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Red Hat OpenStack Platform (RHOSP). Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. For a restricted network installation, these files are on your mirror host. You have the imageContentSources values that were generated during mirror registry creation. You have obtained the contents of the certificate for your mirror registry. You have retrieved a Red Hat Enterprise Linux CoreOS (RHCOS) image and uploaded it to an accessible location. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select openstack as the platform to target. Specify the Red Hat OpenStack Platform (RHOSP) external network name to use for installing the cluster. Specify the floating IP address to use for external access to the OpenShift API. Specify a RHOSP flavor with at least 16 GB RAM to use for control plane nodes and 8 GB RAM for compute nodes. Select the base domain to deploy the cluster to. All DNS records will be sub-domains of this base and will also include the cluster name. Enter a name for your cluster. The name must be 14 or fewer characters long. In the install-config.yaml file, set the value of platform.openstack.clusterOSImage to the image location or name. For example: platform: openstack: clusterOSImage: http://mirror.example.com/images/rhcos-43.81.201912131630.0-openstack.x86_64.qcow2.gz?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d Edit the install-config.yaml file to give the additional information that is required for an installation in a restricted network. 
Update the pullSecret value to contain the authentication information for your registry: pullSecret: '{"auths":{"<mirror_host_name>:5000": {"auth": "<credentials>","email": "[email protected]"}}}' For <mirror_host_name> , specify the registry domain name that you specified in the certificate for your mirror registry, and for <credentials> , specify the base64-encoded user name and password for your mirror registry. Add the additionalTrustBundle parameter and value. additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- The value must be the contents of the certificate file that you used for your mirror registry. The certificate file can be an existing, trusted certificate authority, or the self-signed certificate that you generated for the mirror registry. Add the image content resources, which resemble the following YAML excerpt: imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release For these values, use the imageContentSources that you recorded during mirror registry creation. Optional: Set the publishing strategy to Internal : publish: Internal By setting this option, you create an internal Ingress Controller and a private load balancer. Make any other modifications to the install-config.yaml file that you require. For more information about the parameters, see "Installation configuration parameters". Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources Installation configuration parameters for OpenStack 5.9.1. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. 
The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 5.9.2. Sample customized install-config.yaml file for restricted OpenStack installations This sample install-config.yaml demonstrates all of the possible Red Hat OpenStack Platform (RHOSP) customization options. Important This sample file is provided for reference only. You must obtain your install-config.yaml file by using the installation program. apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 networkType: OVNKubernetes platform: openstack: region: region1 cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 fips: false pullSecret: '{"auths": ...}' sshKey: ssh-ed25519 AAAA... additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: - mirrors: - <mirror_registry>/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_registry>/<repo_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev 5.10. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. 
The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 5.11. Enabling access to the environment At deployment, all OpenShift Container Platform machines are created in a Red Hat OpenStack Platform (RHOSP)-tenant network. Therefore, they are not accessible directly in most RHOSP deployments. You can configure OpenShift Container Platform API and application access by using floating IP addresses (FIPs) during installation. You can also complete an installation without configuring FIPs, but the installer will not configure a way to reach the API or applications externally. 
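Before you create floating IP addresses in the next section, you can confirm which external networks are available and whether unused floating IPs already exist in the project. This is an optional pre-check sketch based on standard OpenStack client commands, not a step from the documented procedure.

# List networks flagged as external; one of these is the value for platform.openstack.externalNetwork
openstack network list --external

# Show floating IPs already allocated to the project, including any that are unattached
openstack floating ip list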
5.11.1. Enabling access with floating IP addresses Create floating IP (FIP) addresses for external access to the OpenShift Container Platform API and cluster applications. Procedure Using the Red Hat OpenStack Platform (RHOSP) CLI, create the API FIP: USD openstack floating ip create --description "API <cluster_name>.<base_domain>" <external_network> Using the Red Hat OpenStack Platform (RHOSP) CLI, create the apps, or Ingress, FIP: USD openstack floating ip create --description "Ingress <cluster_name>.<base_domain>" <external_network> Add records that follow these patterns to your DNS server for the API and Ingress FIPs: api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP> Note If you do not control the DNS server, you can access the cluster by adding the cluster domain names such as the following to your /etc/hosts file: <api_floating_ip> api.<cluster_name>.<base_domain> <application_floating_ip> grafana-openshift-monitoring.apps.<cluster_name>.<base_domain> <application_floating_ip> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<base_domain> <application_floating_ip> oauth-openshift.apps.<cluster_name>.<base_domain> <application_floating_ip> console-openshift-console.apps.<cluster_name>.<base_domain> application_floating_ip integrated-oauth-server-openshift-authentication.apps.<cluster_name>.<base_domain> The cluster domain names in the /etc/hosts file grant access to the web console and the monitoring interface of your cluster locally. You can also use the kubectl or oc . You can access the user applications by using the additional entries pointing to the <application_floating_ip>. This action makes the API and applications accessible to only you, which is not suitable for production deployment, but does allow installation for development and testing. Add the FIPs to the install-config.yaml file as the values of the following parameters: platform.openstack.ingressFloatingIP platform.openstack.apiFloatingIP If you use these values, you must also enter an external network as the value of the platform.openstack.externalNetwork parameter in the install-config.yaml file. Tip You can make OpenShift Container Platform resources available outside of the cluster by assigning a floating IP address and updating your firewall configuration. 5.11.2. Completing installation without floating IP addresses You can install OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) without providing floating IP addresses. In the install-config.yaml file, do not define the following parameters: platform.openstack.ingressFloatingIP platform.openstack.apiFloatingIP If you cannot provide an external network, you can also leave platform.openstack.externalNetwork blank. If you do not provide a value for platform.openstack.externalNetwork , a router is not created for you, and, without additional action, the installer will fail to retrieve an image from Glance. You must configure external connectivity on your own. If you run the installer from a system that cannot reach the cluster API due to a lack of floating IP addresses or name resolution, installation fails. To prevent installation failure in these cases, you can use a proxy network or run the installer from a system that is on the same network as your machines. Note You can enable name resolution by creating DNS records for the API and Ingress ports. For example: api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. 
IN A <ingress_port_IP> If you do not control the DNS server, you can add the record to your /etc/hosts file. This action makes the API accessible to only you, which is not suitable for production deployment but does allow installation for development and testing. 5.12. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 5.13. Verifying cluster status You can verify your OpenShift Container Platform cluster's status during or after installation. Procedure In the cluster environment, export the administrator's kubeconfig file: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. 
View the control plane and compute machines created after a deployment: USD oc get nodes View your cluster's version: USD oc get clusterversion View your Operators' status: USD oc get clusteroperator View all running pods in the cluster: USD oc get pods -A 5.14. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 5.15. Disabling the default OperatorHub catalog sources Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator. Procedure Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object: USD oc patch OperatorHub cluster --type json \ -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]' Tip Alternatively, you can use the web console to manage catalog sources. From the Administration Cluster Settings Configuration OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources. 5.16. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.16, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 5.17. steps Customize your cluster . If the mirror registry that you used to install your cluster has a trusted CA, add it to the cluster by configuring additional trust stores . If necessary, you can opt out of remote health reporting . If necessary, see Registering your disconnected cluster Configure image streams for the Cluster Samples Operator and the must-gather tool. Learn how to use Operator Lifecycle Manager (OLM) on restricted networks . If you did not configure RHOSP to accept application traffic over floating IP addresses, configure RHOSP access with floating IP addresses .
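The verification and catalog steps in this chapter can be collected into a single pass once the cluster is reachable. The sketch below simply strings together commands shown earlier, plus one extra check that the default catalog sources are gone after you disable them; the KUBECONFIG path is taken from the example output and might differ in your installation.

export KUBECONFIG=<installation_directory>/auth/kubeconfig

# Basic health checks from "Verifying cluster status"
oc get nodes
oc get clusterversion
oc get clusteroperator

# After disabling the default OperatorHub sources, no default catalog sources should be listed
oc get catalogsource -n openshift-marketplace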
[ "openstack role add --user <user> --project <project> swiftoperator", "clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: <username> password: <password> user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: <username> password: <password> project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0'", "clouds: shiftstack: cacert: \"/etc/pki/ca-trust/source/anchors/ca.crt.pem\"", "oc edit configmap -n openshift-config cloud-provider-config", "openshift-install --dir <destination_directory> create manifests", "vi openshift/manifests/cloud-provider-config.yaml", "# [LoadBalancer] lb-provider = \"amphora\" 1 floating-network-id=\"d3deb660-4190-40a3-91f1-37326fe6ec4a\" 2 create-monitor = True 3 monitor-delay = 10s 4 monitor-timeout = 10s 5 monitor-max-retries = 1 6 #", "oc edit configmap -n openshift-config cloud-provider-config", "file <name_of_downloaded_file>", "openstack image create --file rhcos-44.81.202003110027-0-openstack.x86_64.qcow2 --disk-format qcow2 rhcos-USD{RHCOS_VERSION}", "./openshift-install create install-config --dir <installation_directory> 1", "platform: openstack: clusterOSImage: http://mirror.example.com/images/rhcos-43.81.201912131630.0-openstack.x86_64.qcow2.gz?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d", "pullSecret: '{\"auths\":{\"<mirror_host_name>:5000\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}'", "additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----", "imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release", "publish: Internal", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 networkType: OVNKubernetes platform: openstack: region: region1 cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 fips: false pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: - mirrors: - <mirror_registry>/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_registry>/<repo_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "openstack floating ip create --description \"API 
<cluster_name>.<base_domain>\" <external_network>", "openstack floating ip create --description \"Ingress <cluster_name>.<base_domain>\" <external_network>", "api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP>", "api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP>", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc get nodes", "oc get clusterversion", "oc get clusteroperator", "oc get pods -A", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'" ]
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.16/html/installing_on_openstack/installing-openstack-installer-restricted
Chapter 43. Json Serialize Action
Chapter 43. Json Serialize Action Serialize payload to JSON 43.1. Configuration Options The json-serialize-action Kamelet does not specify any configuration option. 43.2. Dependencies At runtime, the json-serialize-action Kamelet relies upon the presence of the following dependencies: camel:kamelet camel:core camel:jackson 43.3. Usage This section describes how you can use the json-serialize-action . 43.3.1. Knative Action You can use the json-serialize-action Kamelet as an intermediate step in a Knative binding. json-serialize-action-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: json-serialize-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: "Hello" steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: json-serialize-action sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel 43.3.1.1. Prerequisite Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 43.3.1.2. Procedure for using the cluster CLI Save the json-serialize-action-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the action by using the following command: oc apply -f json-serialize-action-binding.yaml 43.3.1.3. Procedure for using the Kamel CLI Configure and run the action by using the following command: kamel bind timer-source?message=Hello --step json-serialize-action channel:mychannel This command creates the KameletBinding in the current namespace on the cluster. 43.3.2. Kafka Action You can use the json-serialize-action Kamelet as an intermediate step in a Kafka binding. json-serialize-action-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: json-serialize-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: "Hello" steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: json-serialize-action sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic 43.3.2.1. Prerequisites Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Make also sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 43.3.2.2. Procedure for using the cluster CLI Save the json-serialize-action-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the action by using the following command: oc apply -f json-serialize-action-binding.yaml 43.3.2.3. Procedure for using the Kamel CLI Configure and run the action by using the following command: kamel bind timer-source?message=Hello --step json-serialize-action kafka.strimzi.io/v1beta1:KafkaTopic:my-topic This command creates the KameletBinding in the current namespace on the cluster. 43.4. Kamelet source file https://github.com/openshift-integration/kamelet-catalog/json-serialize-action.kamelet.yaml
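After you apply either binding, you can confirm that the KameletBinding was created and watch the generated integration. The commands below are a sketch that assumes the Camel K operator names the integration after the binding, which is the default behavior but is not stated in this chapter.

# Check that the binding resource exists in the current namespace
oc get kameletbinding json-serialize-action-binding

# Follow the logs of the integration generated from the binding
kamel logs json-serialize-action-binding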
[ "apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: json-serialize-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: \"Hello\" steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: json-serialize-action sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel", "apply -f json-serialize-action-binding.yaml", "kamel bind timer-source?message=Hello --step json-serialize-action channel:mychannel", "apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: json-serialize-action-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: timer-source properties: message: \"Hello\" steps: - ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: json-serialize-action sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic", "apply -f json-serialize-action-binding.yaml", "kamel bind timer-source?message=Hello --step json-serialize-action kafka.strimzi.io/v1beta1:KafkaTopic:my-topic" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.5/html/kamelets_reference/json-serialize-action
Chapter 3. Unsupported features and deprecated features
Chapter 3. Unsupported features and deprecated features 3.1. Unsupported features Support for some technologies is removed due to the high maintenance cost, low community interest, and better alternative solutions. The following features are not supported in JBoss EAP XP 4.0.0: Platforms and features Oracle Solaris JBoss EAP deprecated the following platforms in version 7.1. These platforms are not tested in JBoss EAP 7.4. Oracle Solaris on x86_64 Oracle Solaris on SPARCv9 JBoss EAP 7.4 does not include the WildFly SSL natives for these platforms. As a result, SSL operations in Oracle Solaris platforms might be slower than they were on versions of JBoss EAP. Java Development Kits Since JBoss EAP XP 4.0.0, Java Development Kit 8 (JDK 8) is now unsupported. Note JBoss EAP XP 3.0.0 will be supported for 3 months or 2 cumulative patches after JBoss EAP XP 4.0.0 is released. RESTEasy parameters RESTEasy provides a Servlet 3.0 ServletContainerInitializer integration interface that performs an automatic scan of resources and providers for a servlet. Containers can use this integration interface to start an application. Therefore, use of the following RESTEasy parameters is no longer supported: resteasy.scan resteasy.scan.providers resteasy.scan.resources Red Hat JBoss Operations Network Using Red Hat JBoss Operations Network (JON) for JBoss EAP management is deprecated since JBoss EAP version 7.2. For JBoss EAP 7.4, support for Red Hat JON for JBoss EAP management is deprecated. MS SQL Server 2017 MS SQL Server 2017 is not supported in JBoss EAP 7.4. For a complete list of unsupported features in JBoss EAP 7.4, see the Unsupported features section in JBoss EAP 7.4 Release Notes. 3.2. Deprecated features Some features have been deprecated with this release. This means that no enhancements are made to these features, and they might be removed in the future, usually the major release. Red Hat continues to provide full support and bug fixes under our standard support terms and conditions. For more information about the Red Hat support policy for JBoss EAP XP, see the Red Hat JBoss Enterprise Application Platform expansion pack life cycle and support policies located on the Red Hat Customer Portal. Keycloak OIDC client adapter The keycloak-client-oidc layer is deprecated and has been replaced with the new elytron-oidc-client subsystem. MicroProfile MicroProfile Metrics MicroProfile OpenTracing Note MicroProfile Metrics and OpenTracing are being deprecated because it might be removed or updated by the Eclipse MicroProfile community. Galleon layers The jms-activemq decorator layer is deprecated, and this layer has been replaced with the messaging-activemq layer. Operating systems Microsoft Windows Server on i686 Red Hat Enterprise Linux (RHEL) 6 on i686 Databases and database connectors IBM DB2 11.1 PostgreSQL / EnterpriseDB 11 MariaDB 10.1 MS SQL 2017 Server Side JavaScript JBoss EAP Server Side JavaScript support, which was provided as a Technology Preview functionality, is deprecated. Lightweight Directory Access Protocol (LDAP) servers Red Hat Directory Server 10.0 Red Hat Directory Server 10.1 Spring BOM The following Spring BOM that is located in the Red Hat Maven repository is now deprecated: jboss-eap-jakartaee8-with-spring4 Although Red Hat tests that Spring applications run on JBoss EAP XP 4.0.0, you must use the latest version of the Spring Framework and its BOMs (for example, x.y.z.RELEASE ) for developing your applications on JBoss EAP XP 4.0.0. 
For more information about versions of the Spring Framework, see Spring Framework Versions on GitHub . Java Development Kits Java Development Kit 11 (JDK 11) Note In future major JBoss EAP releases, Java SE requirements will be reevaluated based on the industry (for example, Jakarta EE, MicroProfile and so on) and market needs. JBoss EAP OpenShift templates JBoss EAP templates for OpenShift are deprecated. .json templates The eap-xp2-third-party-db-s2i.json template is deprecated and removed in JBoss EAP XP 4.0.0. The eap74-beta-starter-s2i.json and eap74-beta-third-party-db-s2i.json templates are deprecated and are removed in JBoss EAP 7.4.0. Legacy security subsystem The org.jboss.as.security extension and the legacy security subsystem it supports are now deprecated. Migrate your security implementations from the security subsystem to the elytron subsystem. PicketLink The org.wildfly.extension.picketlink extension, and the picketlink-federation and picketlink-identity-management subsystems this extension supports, are now deprecated. Migrate your single sign-on implementation to Red Hat Single Sign-On. PicketBox-based security vault PicketBox-based security vault, both through the legacy security subsystem and the core-service=vault kernel management resources is deprecated. Managed domain support for versions of JBoss EAP Support for hosts running JBoss EAP 7.3 and earlier versions in a JBoss EAP 7.4 managed domain is deprecated. Migrate the hosts in your managed domains to JBoss EAP 7.4. Server configuration files using namespaces from JBoss EAP 7.3 and earlier Using server configuration files ( standalone.xml , host.xml , and domain.xml ) that include namespaces from JBoss EAP 7.3 and earlier is deprecated in this release. Update your server configuration files to use JBoss EAP 7.4 namespaces. Agroal subsystem The Agroal subsystem is deprecated. application-security-domain resources The application-security-domain resources in ejb3 and undertow subsystems are deprecated. Resources in the clustering subsystems The following resources in the clustering subsystems are deprecated: The infinispan subsystem /subsystem=infinispan /remote-cache-container=*/component=transaction /subsystem=infinispan /remote-cache-container= /near-cache= The jgroups subsystem /subsystem=jgroups /stack=*/protocol=S3_PING /subsystem=jgroups /stack=*/protocol=GOOGLE_PING The modcluster subsystem Codehaus Jackson The Codehaus Jackson 1.x module, which is currently unsupported, is deprecated in JBoss EAP 7.4. SCRAM mechanisms The following SCRAM mechanisms and their channel-binding variants are deprecated: SCRAM-SHA-512 SCRAM-SHA-384 Hibernate ORM 5.1 The Hibernate ORM 5.1 native API bytecode transformer has always been deprecated since it was originally introduced. HornetQ client The HornetQ client module is deprecated. For a complete list of functionalities deprecated in JBoss EAP 7.4, see the Deprecated features section in JBoss EAP 7.4 Release Notes. Legacy patching for bootable jar The legacy patching feature for bootable jar is deprecated in JBoss EAP XP 4.0.0.
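If you want a quick way to spot some of these deprecated items in an existing installation, the commands below are one possible sketch. The EAP_HOME path and the use of the management CLI here are assumptions about your layout, not steps from this chapter.

# List the subsystem namespace versions used in the server configuration
grep -o 'urn:jboss:domain[^"]*' EAP_HOME/standalone/configuration/standalone.xml | sort -u

# Check whether the deprecated legacy security subsystem is still configured
EAP_HOME/bin/jboss-cli.sh --connect --command="/subsystem=security:read-resource"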
null
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/red_hat_jboss_eap_xp_4.0.0_release_notes/unsupported_features_and_deprecated_features
Chapter 4. Customizing the Block Storage backup service
Chapter 4. Customizing the Block Storage backup service When you have deployed the backup service for the Block Storage service (cinder), you can change the default parameters. Prerequisites You have the oc command line tool installed on your workstation. You are logged on to a workstation that has access to the RHOSO control plane as a user with cluster-admin privileges. 4.1. Authenticating volume owners for access to volume backups Administrators can back up any volume belonging to the project. To ensure that the volume owner can also access the volume backup, administrators must provide arguments to authenticate the volume owner when backing up the volume. Procedure Provide the following arguments to authenticate a volume owner for access to volume backups: Replace <projectname> with the name of the project (tenant) of the owner of the volume. Replace <username> and <password> with the username and password credentials of the user that is the owner of the volume within this project. Note [--name <backup_name>] <volume> are the typical arguments when creating a volume backup. 4.2. Viewing and modifying project backup quotas You can change or view the limits of the following resource quotas that apply to Block Storage volume backups that users can create for each project (tenant): backups , specify the maximum number of Block Storage volume backups that users of this project can create. By default, this limit is set to 10. backup-gigabytes , specify the total size, in gigabytes, of all the Block Storage volume backups that users of this project can create. By default, this limit is set to 1000. You can also view the usage of these Block Storage backup resource quotas for each project. Procedure Access the remote shell for the OpenStackClient pod from your workstation: Note If the cloudrc file does not exist, then type in the exit command and create this file. For more information, see Creating the cloudrc file . Optional: List the projects to obtain the ID or name of the required project: View the current limits of the backup quotas for a specific project: Replace <project> with the ID or name of the required project. This provides the limits of all the resource quotas for the specified project. Take note of the Limit column for the backup-gigabytes and backups fields in the table. For example: Optional: Modify the total size of all the Block Storage volume backups that users can create for a project: Replace <totalgb> with the total size, in gigabytes, of the backups that users can create for this project. Optional: Modify the maximum number of the Block Storage volume backups that users can create for a project: Replace <maxnum> with the maximum number of backups that users can create for this project. Optional: View the usage of these Block Storage volume backup quotas and, if necessary, review any changes to their limits for a specific project: Replace <project_id> with the ID of the project. The first two rows in this table specify the backup quotas for the specified project. Look at the following columns in this table: The In_use column indicates how much of each resource has been used. The Limit column indicates whether the quota limits have been adjusted from their default settings, in this example both have been adjusted. Exit the openstackclient pod: USD exit
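Both backup quota limits can be adjusted in a single call and then checked immediately. The sketch below reuses the commands from this chapter; the limits shown are example values, not recommendations, and <project> and <project_id> must be replaced with your own identifiers.

# Raise both backup quotas for one project in a single command
openstack quota set --backups 20 --backup-gigabytes 2000 <project>

# Confirm the new limits and the current usage
openstack quota show <project>
cinder quota-usage <project_id>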
[ "openstack --os-project-name <projectname> --os-username <username> --os-password <password> volume backup create [--name <backup_name>] <volume>", "oc rsh -n openstack openstackclient source ./cloudrc", "openstack project list", "openstack quota show <project>", "openstack quota show c2c1da89ed1648fc8b4f35a045f8d34c +-----------------------+-------+ | Resource | Limit | +-----------------------+-------+ | backups | 10 | | backup-gigabytes | 1000 | +-----------------------+-------+", "openstack quota set --backup-gigabytes <totalgb> <project>", "openstack quota set --backups <maxnum> <project>", "cinder quota-usage <project_id>", "cinder quota-usage c2c1da89ed1648fc8b4f35a045f8d34c +-----------------------+--------+----------+-------+-----------+ | Type | In_use | Reserved | Limit | Allocated | +-----------------------+--------+----------+-------+-----------+ | backup_gigabytes | 235 | 0 | 500 | | | backups | 7 | 0 | 12 | |", "exit" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/customizing_persistent_storage/assembly_backing-up-cinder_customizing-cinder
22.6. The Engine Vacuum Tool
22.6. The Engine Vacuum Tool 22.6.1. The Engine Vacuum Tool The Engine Vacuum tool maintains PostgreSQL databases by updating tables and removing dead rows, allowing disk space to be reused. See the PostgreSQL documentation for information about the VACUUM command and its parameters. The Engine Vacuum command is engine-vacuum . You must log in as the root user and provide the administration credentials for the Red Hat Virtualization environment. Alternatively, the Engine Vacuum tool can be run while using the engine-setup command to customize an existing installation: The Yes option runs the Engine Vacuum tool in full vacuum verbose mode. 22.6.2. Engine Vacuum Modes Engine Vacuum has two modes: Standard Vacuum Frequent standard vacuuming is recommended. Standard vacuum removes dead row versions in tables and indexes and marks the space as available for future reuse. Frequently updated tables should be vacuumed on a regular basis. However, standard vacuum does not return the space to the operating system. Standard vacuum, with no parameters, processes every table in the current database. Full Vacuum Full vacuum is not recommended for routine use, but should only be run when a significant amount of space needs to be reclaimed from within the table. Full vacuum compacts the tables by writing a new copy of the table file with no dead space, thereby enabling the operating system to reclaim the space. Full vacuum can take a long time. Full vacuum requires extra disk space for the new copy of the table, until the operation completes and the old copy is deleted. Because full vacuum requires an exclusive lock on the table, it cannot be run in parallel with other uses of the table. 22.6.3. Syntax for the engine-vacuum Command The basic syntax for the engine-vacuum command is: Running the engine-vacuum command with no options performs a standard vacuum. There are several parameters to further refine the engine-vacuum command. General Options -h --help Displays information on how to use the engine-vacuum command. -a Runs a standard vacuum, analyzes the database, and updates the optimizer statistics. -A Analyzes the database and updates the optimizer statistics, without vacuuming. -f Runs a full vacuum. -v Runs in verbose mode, providing more console output. -t table_name Vacuums a specific table or tables.
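For routine maintenance, the options above are typically combined as shown below. This is a usage sketch based only on the options documented in this section; run it as the root user on the Manager machine, preferably during a quiet period.

# Standard vacuum of every table, with analyze and verbose output
engine-vacuum -a -v

# Update the optimizer statistics only, without vacuuming
engine-vacuum -A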
[ "engine-setup [ INFO ] Stage: Environment customization Perform full vacuum on the engine database engine@localhost? This operation may take a while depending on this setup health and the configuration of the db vacuum process. See https://www.postgresql.org/docs/10/static/sql-vacuum.html (Yes, No) [No]:", "engine-vacuum", "engine-vacuum option", "engine-vacuum -f -v -t vm_dynamic -t vds_dynamic" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/administration_guide/sect-the_engine_vacuum_tool
Chapter 12. Configuring Policy-based Routing to Define Alternative Routes
Chapter 12. Configuring Policy-based Routing to Define Alternative Routes By default, the kernel in RHEL decides where to forward network packets based on the destination address using a routing table. Policy-based routing enables you to configure complex routing scenarios. For example, you can route packets based on various criteria, such as the source address, packet metadata, or protocol. This section describes how to configure policy-based routing using NetworkManager. Note On systems that use NetworkManager, only the nmcli utility supports setting routing rules and assigning routes to specific tables. 12.1. Routing Traffic from a Specific Subnet to a Different Default Gateway This section describes how to configure RHEL as a router that, by default, routes all traffic to internet provider A using the default route. Using policy-based routing, RHEL routes traffic received from the internal workstations subnet to provider B. The procedure assumes the following network topology: Figure 12.1. Activate a Connection Prerequisites The RHEL router you want to set up in the procedure has four network interfaces: The enp7s0 interface is connected to the network of provider A. The gateway IP in the provider's network is 198.51.100.2 , and the network uses a /30 network mask. The enp1s0 interface is connected to the network of provider B. The gateway IP in the provider's network is 192.0.2.2 , and the network uses a /30 network mask. The enp8s0 interface is connected to the 10.0.0.0/24 subnet with internal workstations. The enp9s0 interface is connected to the 203.0.113.0/24 subnet with the company's servers. Hosts in the internal workstations subnet use 10.0.0.1 as default gateway. In the procedure, you assign this IP address to the enp8s0 network interface of the router. Hosts in the server subnet use 203.0.113.1 as default gateway. In the procedure, you assign this IP address to the enp9s0 network interface of the router. The firewalld service is enabled and active, which is the default. Procedure Configure the network interface to provider A: The nmcli connection add command creates a NetworkManager connection profile. The following list describes the options of the command: type ethernet : Defines that the connection type is Ethernet. con-name connection_name : Sets the name of the profile. Use a meaningful name to avoid confusion. ifname network_device : Sets the network interface. ipv4.method manual : Enables you to configure a static IP address. ipv4.addresses IP_address / subnet_mask : Sets the IPv4 addresses and subnet mask. ipv4.gateway IP_address : Sets the default gateway address. ipv4.dns IP_of_DNS_server : Sets the IPv4 address of the DNS server. connection.zone firewalld_zone : Assigns the network interface to the defined firewalld zone. Note that firewalld automatically enables masquerading on interfaces assigned to the external zone. Configure the network interface to provider B: This command uses the ipv4.routes parameter instead of ipv4.gateway to set the default gateway. This is required to assign the default gateway for this connection to a different routing table ( 5000 ) than the default. NetworkManager automatically creates this new routing table when the connection is activated. Note The nmcli utility does not support using 0.0.0.0/0 for the default gateway in ipv4.gateway . To work around this problem, the command creates separate routes for both the 0.0.0.0/1 and 128.0.0.0/1 subnets, which together cover the full IPv4 address space.
Configure the network interface to the internal workstations subnet: This command uses the ipv4.routes parameter to add a static route to the routing table with ID 5000 . This static route for the 10.0.0.0/24 subnet uses the IP of the local network interface to provider B ( 192.0.2.1 ) as the next hop. Additionally, the command uses the ipv4.routing-rules parameter to add a routing rule with priority 5 that routes traffic from the 10.0.0.0/24 subnet to table 5000 . Lower values have a higher priority. Note that the syntax in the ipv4.routing-rules parameter is the same as in an ip rule add command, except that ipv4.routing-rules always requires specifying a priority. Configure the network interface to the server subnet: Verification Steps On a RHEL host in the internal workstation subnet: Install the traceroute package: Use the traceroute utility to display the route to a host on the internet: The output of the command displays that the router sends packets over 192.0.2.1 , which is the network of provider B. On a RHEL host in the server subnet: Install the traceroute package: Use the traceroute utility to display the route to a host on the internet: The output of the command displays that the router sends packets over 198.51.100.2 , which is the network of provider A. Troubleshooting Steps On the RHEL router: Display the rule list: Display the routes in table 5000 : Display which interfaces are assigned to which firewall zones: Verify that the external zone has masquerading enabled: Additional Resources For further details about the ipv4.* parameters you can set in the nmcli connection add command, see the IPv4 settings section in the nm-settings (5) man page. For further details about the connection.* parameters you can set in the nmcli connection add command, see the Connection settings section in the nm-settings (5) man page. For further details about managing NetworkManager connections using nmcli , see the Connection management commands section in the nmcli (1) man page.
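If you troubleshoot policy-based routing regularly, giving table 5000 a symbolic name can make the ip rule and ip route output easier to read. This is an optional sketch; the name provider_b is an arbitrary example and has no effect on how NetworkManager handles the table.
# Map table ID 5000 to a human-readable name
echo "5000 provider_b" >> /etc/iproute2/rt_tables
# The ip utility then accepts and displays the name in place of the numeric ID
ip route show table provider_b
ip rule list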
[ "nmcli connection add type ethernet con-name Provider-A ifname enp7s0 ipv4.method manual ipv4.addresses 198.51.100.1/30 ipv4.gateway 198.51.100.2 ipv4.dns 198.51.100.200 connection.zone external", "nmcli connection add type ethernet con-name Provider-B ifname enp1s0 ipv4.method manual ipv4.addresses 192.0.2.1/30 ipv4.routes \"0.0.0.0/1 192.0.2.2 table=5000, 128.0.0.0/1 192.0.2.2 table=5000\" connection.zone external", "nmcli connection add type ethernet con-name Internal-Workstations ifname enp8s0 ipv4.method manual ipv4.addresses 10.0.0.1/24 ipv4.routes \"10.0.0.0/24 src=192.0.2.1 table=5000\" ipv4.routing-rules \"priority 5 from 10.0.0.0/24 table 5000\" connection.zone trusted", "nmcli connection add type ethernet con-name Servers ifname enp9s0 ipv4.method manual ipv4.addresses 203.0.113.1/24 connection.zone trusted", "yum install traceroute", "traceroute redhat.com traceroute to redhat.com (209.132.183.105), 30 hops max, 60 byte packets 1 10.0.0.1 (10.0.0.1) 0.337 ms 0.260 ms 0.223 ms 2 192.0.2.1 (192.0.2.1) 0.884 ms 1.066 ms 1.248 ms", "yum install traceroute", "traceroute redhat.com traceroute to redhat.com (209.132.183.105), 30 hops max, 60 byte packets 1 203.0.113.1 (203.0.113.1) 2.179 ms 2.073 ms 1.944 ms 2 198.51.100.2 (198.51.100.2) 1.868 ms 1.798 ms 1.549 ms", "ip rule list 0: from all lookup local 5: from 10.0.0.0/24 lookup 5000 32766: from all lookup main 32767: from all lookup default", "ip route list table 5000 0.0.0.0/1 via 192.0.2.2 dev enp1s0 proto static metric 100 10.0.0.0/24 dev enp8s0 proto static scope link src 192.0.2.1 metric 102 128.0.0.0/1 via 192.0.2.2 dev enp1s0 proto static metric 100", "firewall-cmd --get-active-zones external interfaces: enp1s0 enp7s0 trusted interfaces: enp8s0 enp9s0", "firewall-cmd --info-zone=external external (active) target: default icmp-block-inversion: no interfaces: enp1s0 enp7s0 sources: services: ssh ports: protocols: masquerade: yes" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/networking_guide/configuring-policy-based-routing-to-define-alternative-routes
Chapter 9. Federal Information Processing Standard on Red Hat OpenStack Platform
Chapter 9. Federal Information Processing Standard on Red Hat OpenStack Platform The Federal Information Processing Standards (FIPS) is a set of security requirements developed by the National Institute of Standards and Technology (NIST). In Red Hat Enterprise Linux 9, the supported standard is FIPS publication 140-3: Security Requirements for Cryptographic Modules . For details about the supported standard, see the Federal Information Processing Standards Publication 140-3 . These security requirements define acceptable cryptographic algorithms and the use of those cryptographic algorithms, including security modules. FIPS 140-3 validation is achieved by using only those cryptographic algorithms approved through FIPS, in the manner prescribed, and through validated modules. FIPS 140-3 compatibility is achieved by using only those cryptographic algorithms approved through FIPS. Red Hat OpenStack Platform 17 is FIPS 140-3 compatible . You can take advantage of FIPS compatibility by using images provided by Red Hat to deploy your overcloud. Note OpenStack 17.1 is based on Red Hat Enterprise Linux (RHEL) 9.2. RHEL 9.2 has not yet been submitted for FIPS validation. Red Hat expects, though cannot commit to a specific timeframe, to obtain FIPS validation for RHEL 9.0 and RHEL 9.2 modules, and later even minor releases of RHEL 9.x. Updates will be available in Compliance Activities and Government Standards . 9.1. Enabling FIPS When you enable FIPS, you must complete a series of steps during the installation of the undercloud and overcloud. Prerequisites You have installed Red Hat Enterprise Linux and are prepared to begin the installation of Red Hat OpenStack Platform director. Red Hat Ceph Storage 6 or later deployed, if you are using Red Hat Ceph Storage as the storage backend. Procedure Enable FIPS on the undercloud: Enable FIPS on the system on which you plan to install the undercloud: Note This step will add the fips=1 kernel parameter to your GRUB configuration file. As a result, only cryptographic algorithms modules used by Red Hat Enterprise Linux are in FIPS mode and only cryptographic algorithms approved by the standard are used. Reboot the system. Verify that FIPS is enabled: Install and configure Red Hat OpenStack Platform director. For more information see: Installing director on the undercloud . Prepare FIPS-enabled images for the overcloud. Install images for the overcloud: Create the images directory in the home directory of the stack user: Extract the images to your home directory: You must create symlinks before uploading the images: Upload the FIPS-enabled overcloud images to the Image service: Note You must use the --update-existing flag even if there are no images currently in the OpenStack Image service. Enable FIPS on the overcloud. Configure templates for an overcloud deployment specific to your environment. Include all configuration templates in the deployment command, including fips.yaml:
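For example, a deployment command that combines the FIPS environment file with your other templates might look like the following sketch. The network and storage environment files are placeholders for your own templates, and the verification commands afterwards only confirm that FIPS mode is active on the host on which you run them.
# Hypothetical deployment command; replace the additional environment files with your own
openstack overcloud deploy --templates \
  -e /usr/share/openstack-tripleo-heat-templates/environments/fips.yaml \
  -e /home/stack/templates/network-environment.yaml \
  -e /home/stack/templates/storage-environment.yaml
# Verify that FIPS mode is enabled on a host after it reboots
fips-mode-setup --check
cat /proc/sys/crypto/fips_enabled    # prints 1 when FIPS mode is active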
[ "fips-mode-setup --enable", "fips-mode-setup --check", "sudo dnf -y install rhosp-director-images-uefi-fips-x86_64", "mkdir /home/stack/images cd /home/stack/images", "for i in /usr/share/rhosp-director-images/*fips*.tar; do tar -xvf USDi; done", "ln -s ironic-python-agent-fips.initramfs ironic-python-agent.initramfs ln -s ironic-python-agent-fips.kernel ironic-python-agent.kernel ln -s overcloud-hardened-uefi-full-fips.qcow2 overcloud-hardened-uefi-full.qcow2", "openstack overcloud image upload --update-existing --whole-disk", "openstack overcloud deploy -e /usr/share/openstack-tripleo-heat-templates/environments/fips.yaml" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/hardening_red_hat_openstack_platform/assembly-fips_security_and_hardening
Chapter 6. Configuring smart card authentication with local certificates
Chapter 6. Configuring smart card authentication with local certificates To configure smart card authentication with local certificates: The host is not connected to a domain. You want to authenticate with a smart card on this host. You want to configure SSH access using smart card authentication. You want to configure the smart card with authselect . Use the following configuration to accomplish this scenario: Obtain a user certificate for the user who wants to authenticate with a smart card. The certificate should be generated by a trustworthy Certification Authority used in the domain. If you cannot get the certificate, you can generate a user certificate signed by a local certificate authority for testing purposes, Store the certificate and private key in a smart card. Configure the smart card authentication for SSH access. Important If a host can be part of the domain, add the host to the domain and use certificates generated by Active Directory or Identity Management Certification Authority. For details about how to create IdM certificates for a smart card, see Configuring Identity Management for smart card authentication . Prerequisites Authselect installed The authselect tool configures user authentication on Linux hosts and you can use it to configure smart card authentication parameters. For details about authselect, see Explaining authselect . Smart Card or USB devices supported by RHEL 9 For details, see Smart Card support in RHEL9 . 6.1. Creating local certificates Follow this procedure to perform the following tasks: Generate the OpenSSL certificate authority Create a certificate signing request Warning The following steps are intended for testing purposes only. Certificates generated by a local self-signed Certificate Authority are not as secure as using AD, IdM, or RHCS Certification Authority. You should use a certificate generated by your enterprise Certification Authority even if the host is not part of the domain. Procedure Create a directory where you can generate the certificate, for example: Set up the certificate (copy this text to your command line in the ca directory): Create the following directories: Create the following files: Write the number 01 in the serial file: This command writes a number 01 in the serial file. It is a serial number of the certificate. With each new certificate released by this CA the number increases by one. Create an OpenSSL root CA key: Create a self-signed root Certification Authority certificate: Create the key for your username: This key is generated in the local system which is not secure, therefore, remove the key from the system when the key is stored in the card. You can create a key directly in the smart card as well. For doing this, follow instructions created by the manufacturer of your smart card. Create the certificate signing request configuration file (copy this text to your command line in the ca directory): Create a certificate signing request for your example.user certificate: Configure the new certificate. Expiration period is set to 1 year: At this point, the certification authority and certificates are successfully generated and prepared for import into a smart card. 6.2. Copying certificates to the SSSD directory GNOME Desktop Manager (GDM) requires SSSD. If you use GDM, you need to copy the PEM certificate to the /etc/sssd/pki directory. Prerequisites The local CA authority and certificates have been generated Procedure Ensure that you have SSSD installed on the system. 
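Before you copy any files in the next steps, you can optionally confirm that the user certificate created in the previous section chains to the local CA. This is a hedged verification sketch that reuses the file names and the /tmp/ca working directory from the earlier procedure.
# Confirm that the user certificate was signed by the local CA
openssl verify -CAfile /tmp/ca/rootCA.crt /tmp/ca/example.user.crt
# Inspect the subject, issuer, and validity period of the user certificate
openssl x509 -in /tmp/ca/example.user.crt -noout -subject -issuer -dates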
Create a /etc/sssd/pki directory: Copy the rootCA.crt as a PEM file to the /etc/sssd/pki/ directory: Now you have successfully generated the certificate authority and certificates, and you have saved them in the /etc/sssd/pki directory. Note If you want to share the Certificate Authority certificates with another application, you can change the location in sssd.conf: SSSD PAM responder: pam_cert_db_path in the [pam] section SSSD ssh responder: ca_db in the [ssh] section For details, see the sssd.conf man page. Red Hat recommends keeping the default path and using a dedicated Certificate Authority certificate file for SSSD to make sure that only Certificate Authorities trusted for authentication are listed here. 6.3. Installing tools for managing and using smart cards Prerequisites The gnutls-utils package is installed. The opensc package is installed. The pcscd service is running. Before you can configure your smart card, you must install the corresponding tools, which can generate certificates and start the pcscd service. Procedure Install the opensc and gnutls-utils packages: Start the pcscd service. Verification Verify that the pcscd service is up and running 6.4. Preparing your smart card and uploading your certificates and keys to your smart card Follow this procedure to configure your smart card with the pkcs15-init tool, which helps you to configure: Erasing your smart card Setting new PINs and optional PIN Unblocking Keys (PUKs) Creating a new slot on the smart card Storing the certificate, private key, and public key in the slot If required, locking the smart card settings as certain smart cards require this type of finalization Note The pkcs15-init tool may not work with all smart cards. You must use the tools that work with the smart card you are using. Prerequisites The opensc package, which includes the pkcs15-init tool, is installed. For more details, see Installing tools for managing and using smart cards . The card is inserted in the reader and connected to the computer. You have a private key, a public key, and a certificate to store on the smart card. In this procedure, testuser.key , testuserpublic.key , and testuser.crt are the names used for the private key, public key, and the certificate. You have your current smart card user PIN and Security Officer PIN (SO-PIN). Procedure Erase your smart card and authenticate yourself with your PIN: The card has been erased. Initialize your smart card, set your user PIN and PUK, and your Security Officer PIN and PUK: The pkcs15-init tool creates a new slot on the smart card. Set a label and the authentication ID for the slot: The label is set to a human-readable value, in this case, testuser . The auth-id must be two hexadecimal values, in this case it is set to 01 . Store and label the private key in the new slot on the smart card: Note The value you specify for --id must be the same when storing your private key and storing your certificate in the next step. Specifying your own value for --id is recommended as otherwise a more complicated value is calculated by the tool. Store and label the certificate in the new slot on the smart card: Optional: Store and label the public key in the new slot on the smart card: Note If the public key corresponds to a private key or certificate, specify the same ID as the ID of the private key or certificate. 
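Before you finalize the card in the next step, you can list the objects stored on it to confirm that the PIN, private key, and certificate were created as expected. This is an optional sketch; the exact output depends on your card and reader.
# List the PINs, keys, and certificates stored on the smart card
pkcs15-tool --list-pins
pkcs15-tool --list-keys
pkcs15-tool --list-certificates
# Read the stored certificate back for inspection (ID 01 from the previous steps)
pkcs15-tool --read-certificate 01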
Optional: Certain smart cards require you to finalize the card by locking the settings: At this stage, your smart card includes the certificate, private key, and public key in the newly created slot. You have also created your user PIN and PUK and the Security Officer PIN and PUK. 6.5. Configuring SSH access using smart card authentication SSH connections require authentication. You can use a password or a certificate. Follow this procedure to enable authentication using a certificate stored on a smart card. For details about configuring smart cards with authselect , see Configuring smart cards using authselect . Prerequisites The smart card contains your certificate and private key. The card is inserted in the reader and connected to the computer. The pcscd service is running on your local machine. For details, see Installing tools for managing and using smart cards . Procedure Create a new directory for SSH keys in the home directory of the user who uses smart card authentication: Run the ssh-keygen -D command with the opensc library to retrieve the existing public key paired with the private key on the smart card, and add it to the authorized_keys list of the user's SSH keys directory to enable SSH access with smart card authentication. SSH requires access right configuration for the /.ssh directory and the authorized_keys file. To set or change the access rights, enter: Verification Display the keys: The terminal displays the keys. You can verify the SSH access with the following command: If the configuration is successful, you are prompted to enter the smart card PIN. The configuration works now locally. Now you can copy the public key and distribute it to authorized_keys files located on all servers on which you want to use SSH. 6.6. Creating certificate mapping rules when using smart cards You need to create certificate mapping rules in order to log in using the certificate stored on a smart card. Prerequisites The smart card contains your certificate and private key. The card is inserted in the reader and connected to the computer. The pcscd service is running on your local machine. Procedure Create a certificate mapping configuration file, such as /etc/sssd/conf.d/sssd_certmap.conf . Add certificate mapping rules to the sssd_certmap.conf file: Note that you must define each certificate mapping rule in separate sections. Define each section as follows: If SSSD is configured to use the proxy provider to allow smart card authentication for local users instead of AD, IPA, or LDAP, the <RULE_NAME> can simply be the username of the user with the card matching the data provided in the matchrule . Verification Note that to verify SSH access with a smart card, SSH access must be configured. For more information, see Configuring SSH access using smart card authentication . You can verify the SSH access with the following command: If the configuration is successful, you are prompted to enter the smart card PIN.
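To avoid passing the -I /usr/lib64/opensc-pkcs11.so option on every invocation, you can point the SSH client at the PKCS#11 module in its per-user configuration. The host alias below is an arbitrary example; only the PKCS11Provider line is required for smart card authentication.
# Append a host entry that loads the OpenSC PKCS#11 module
cat >> ~/.ssh/config <<'EOF'
Host workstation-local
    HostName localhost
    User example.user
    PKCS11Provider /usr/lib64/opensc-pkcs11.so
EOF
# The alias can then be used without the -I option
ssh workstation-local hostname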
[ "mkdir /tmp/ca cd /tmp/ca", "cat > ca.cnf <<EOF [ ca ] default_ca = CA_default [ CA_default ] dir = . database = \\USDdir/index.txt new_certs_dir = \\USDdir/newcerts certificate = \\USDdir/rootCA.crt serial = \\USDdir/serial private_key = \\USDdir/rootCA.key RANDFILE = \\USDdir/rand default_days = 365 default_crl_days = 30 default_md = sha256 policy = policy_any email_in_dn = no name_opt = ca_default cert_opt = ca_default copy_extensions = copy [ usr_cert ] authorityKeyIdentifier = keyid, issuer [ v3_ca ] subjectKeyIdentifier = hash authorityKeyIdentifier = keyid:always,issuer:always basicConstraints = CA:true keyUsage = critical, digitalSignature, cRLSign, keyCertSign [ policy_any ] organizationName = supplied organizationalUnitName = supplied commonName = supplied emailAddress = optional [ req ] distinguished_name = req_distinguished_name prompt = no [ req_distinguished_name ] O = Example OU = Example Test CN = Example Test CA EOF", "mkdir certs crl newcerts", "touch index.txt crlnumber index.txt.attr", "echo 01 > serial", "openssl genrsa -out rootCA.key 2048", "openssl req -batch -config ca.cnf -x509 -new -nodes -key rootCA.key -sha256 -days 10000 -set_serial 0 -extensions v3_ca -out rootCA.crt", "openssl genrsa -out example.user.key 2048", "cat > req.cnf <<EOF [ req ] distinguished_name = req_distinguished_name prompt = no [ req_distinguished_name ] O = Example OU = Example Test CN = testuser [ req_exts ] basicConstraints = CA:FALSE nsCertType = client, email nsComment = \"testuser\" subjectKeyIdentifier = hash keyUsage = critical, nonRepudiation, digitalSignature, keyEncipherment extendedKeyUsage = clientAuth, emailProtection, msSmartcardLogin subjectAltName = otherName:msUPN;UTF8:[email protected], email:[email protected] EOF", "openssl req -new -nodes -key example.user.key -reqexts req_exts -config req.cnf -out example.user.csr", "openssl ca -config ca.cnf -batch -notext -keyfile rootCA.key -in example.user.csr -days 365 -extensions usr_cert -out example.user.crt", "rpm -q sssd sssd-2.0.0.43.el8_0.3.x86_64", "file /etc/sssd/pki /etc/sssd/pki/: directory", "cp /tmp/ca/rootCA.crt /etc/sssd/pki/sssd_auth_ca_db.pem", "dnf -y install opensc gnutls-utils", "systemctl start pcscd", "systemctl status pcscd", "pkcs15-init --erase-card --use-default-transport-keys Using reader with a card: Reader name PIN [Security Officer PIN] required. 
Please enter PIN [Security Officer PIN]:", "pkcs15-init --create-pkcs15 --use-default-transport-keys --pin 963214 --puk 321478 --so-pin 65498714 --so-puk 784123 Using reader with a card: Reader name", "pkcs15-init --store-pin --label testuser --auth-id 01 --so-pin 65498714 --pin 963214 --puk 321478 Using reader with a card: Reader name", "pkcs15-init --store-private-key testuser.key --label testuser_key --auth-id 01 --id 01 --pin 963214 Using reader with a card: Reader name", "pkcs15-init --store-certificate testuser.crt --label testuser_crt --auth-id 01 --id 01 --format pem --pin 963214 Using reader with a card: Reader name", "pkcs15-init --store-public-key testuserpublic.key --label testuserpublic_key --auth-id 01 --id 01 --pin 963214 Using reader with a card: Reader name", "pkcs15-init -F", "mkdir /home/example.user/.ssh", "ssh-keygen -D /usr/lib64/pkcs11/opensc-pkcs11.so >> ~example.user/.ssh/authorized_keys", "chown -R example.user:example.user ~example.user/.ssh/ chmod 700 ~example.user/.ssh/ chmod 600 ~example.user/.ssh/authorized_keys", "cat ~example.user/.ssh/authorized_keys", "ssh -I /usr/lib64/opensc-pkcs11.so -l example.user localhost hostname", "[certmap/shadowutils/otheruser] matchrule = <SUBJECT>.*CN=certificate_user.*<ISSUER>^CN=Example Test CA,OU=Example Test,O=EXAMPLEUSD", "[certmap/<DOMAIN_NAME>/<RULE_NAME>]", "ssh -I /usr/lib64/opensc-pkcs11.so -l otheruser localhost hostname" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_smart_card_authentication/configuring-and-importing-local-certificates-to-a-smart-card_managing-smart-card-authentication
Chapter 9. Configuring an Ingress Controller for manual DNS Management
Chapter 9. Configuring an Ingress Controller for manual DNS Management As a cluster administrator, when you create an Ingress Controller, the Operator manages the DNS records automatically. This has some limitations when the required DNS zone is different from the cluster DNS zone or when the DNS zone is hosted outside the cloud provider. As a cluster administrator, you can configure an Ingress Controller to stop automatic DNS management and start manual DNS management. Set dnsManagementPolicy to specify when it should be automatically or manually managed. When you change an Ingress Controller from Managed to Unmanaged DNS management policy, the Operator does not clean up the wildcard DNS record provisioned on the cloud. When you change an Ingress Controller from Unmanaged to Managed DNS management policy, the Operator attempts to create the DNS record on the cloud provider if it does not exist or updates the DNS record if it already exists. Important When you set dnsManagementPolicy to unmanaged , you have to manually manage the lifecycle of the wildcard DNS record on the cloud provider. 9.1. Managed DNS management policy The Managed DNS management policy for Ingress Controllers ensures that the lifecycle of the wildcard DNS record on the cloud provider is automatically managed by the Operator. 9.2. Unmanaged DNS management policy The Unmanaged DNS management policy for Ingress Controllers ensures that the lifecycle of the wildcard DNS record on the cloud provider is not automatically managed, instead it becomes the responsibility of the cluster administrator. Note On the AWS cloud platform, if the domain on the Ingress Controller does not match with dnsConfig.Spec.BaseDomain then the DNS management policy is automatically set to Unmanaged . 9.3. Creating a custom Ingress Controller with the Unmanaged DNS management policy As a cluster administrator, you can create a new custom Ingress Controller with the Unmanaged DNS management policy. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create a custom resource (CR) file named sample-ingress.yaml containing the following: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: namespace: openshift-ingress-operator name: <name> 1 spec: domain: <domain> 2 endpointPublishingStrategy: type: LoadBalancerService loadBalancer: scope: External 3 dnsManagementPolicy: Unmanaged 4 1 Specify the <name> with a name for the IngressController object. 2 Specify the domain based on the DNS record that was created as a prerequisite. 3 Specify the scope as External to expose the load balancer externally. 4 dnsManagementPolicy indicates if the Ingress Controller is managing the lifecycle of the wildcard DNS record associated with the load balancer. The valid values are Managed and Unmanaged . The default value is Managed . Save the file to apply the changes. oc apply -f <name>.yaml 1 9.4. Modifying an existing Ingress Controller As a cluster administrator, you can modify an existing Ingress Controller to manually manage the DNS record lifecycle. Prerequisites Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. 
Procedure Modify the chosen IngressController to set dnsManagementPolicy : SCOPE=USD(oc -n openshift-ingress-operator get ingresscontroller <name> -o=jsonpath="{.status.endpointPublishingStrategy.loadBalancer.scope}") oc -n openshift-ingress-operator patch ingresscontrollers/<name> --type=merge --patch='{"spec":{"endpointPublishingStrategy":{"type":"LoadBalancerService","loadBalancer":{"dnsManagementPolicy":"Unmanaged", "scope":"USD{SCOPE}"}}}}' Optional: You can delete the associated DNS record in the cloud provider. 9.5. Additional resources Ingress Controller configuration parameters
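Because the record is no longer managed for you, you must create and maintain the wildcard DNS entry yourself. The following sketch shows one way to look up the load balancer address that the record should resolve to; <name> and <domain> are the same placeholders used earlier in this chapter.
# Retrieve the load balancer hostname created for the Ingress Controller
oc -n openshift-ingress get service router-<name> -o jsonpath='{.status.loadBalancer.ingress[0].hostname}{"\n"}'
# After creating a wildcard record such as *.<domain> at your DNS provider,
# confirm that a test name under the domain resolves to the load balancer
dig +short test.<domain>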
[ "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: namespace: openshift-ingress-operator name: <name> 1 spec: domain: <domain> 2 endpointPublishingStrategy: type: LoadBalancerService loadBalancer: scope: External 3 dnsManagementPolicy: Unmanaged 4", "apply -f <name>.yaml 1", "SCOPE=USD(oc -n openshift-ingress-operator get ingresscontroller <name> -o=jsonpath=\"{.status.endpointPublishingStrategy.loadBalancer.scope}\") -n openshift-ingress-operator patch ingresscontrollers/<name> --type=merge --patch='{\"spec\":{\"endpointPublishingStrategy\":{\"type\":\"LoadBalancerService\",\"loadBalancer\":{\"dnsManagementPolicy\":\"Unmanaged\", \"scope\":\"USD{SCOPE}\"}}}}'" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/networking/ingress-controller-dnsmgt
Chapter 3. Major differences between Red Hat build of OpenJDK 11 and Red Hat build of OpenJDK 17
Chapter 3. Major differences between Red Hat build of OpenJDK 11 and Red Hat build of OpenJDK 17 If you are migrating your Java applications from Red Hat build of OpenJDK 11 or earlier, first ensure that you familiarize yourself with the changes that were introduced in Red Hat build of OpenJDK 17. These changes might require that you reconfigure your existing Red Hat build of OpenJDK installation before you migrate to Red Hat build of OpenJDK 21. Note This chapter is relevant only if you currently use Red Hat build of OpenJDK 11 or earlier. You can ignore this chapter if you already use Red Hat build of OpenJDK 17. 3.1. Removal of Concurrent Mark Sweep garbage collector Red Hat build of OpenJDK 17 no longer includes the Concurrent Mark Sweep (CMS) garbage collector, which was commonly used in earlier releases for workloads sensitive to pause times and latency. If you have been using the CMS collector, switch to one of the following collectors based on your workload before migrating to Red Hat build of OpenJDK 17 or later. The Garbage-First (G1) collector balances performance and latency. G1 is a generational collector that offers a high ephemeral object allocation rate with typical pause times of a few hundred milliseconds. G1 is enabled by default, but you can manually enable this collector by setting the -XX:+UseG1GC JVM option. The Shenandoah collector is a low-latency collector with typical pause times of a few milliseconds. Shenandoah is not a generational collector and might exhibit worse ephemeral object allocation rates than the G1 collector. If you want to enable the Shenandoah collector, set the -XX:+UseShenandoahGC JVM option. The Z Garbage Collector (ZGC) is another low-latency collector. Unlike the Shenandoah collector, ZGC does not support compressed ordinary object pointers (OOPs) (that is, heap references). Compressed OOPs help to save heap memory and improve performance for heap sizes up to 32 GB. This means that ZGC might exhibit worse resident memory sizes than the Shenandoah collector, especially on small heap sizes. If you want to enable the ZGC collector, set the -XX:+UseZGC JVM option. For more information, see JEP 363: Remove the Concurrent Mark Sweep (CMS) Garbage Collector . 3.2. Removal of pack200 tools and API Red Hat build of OpenJDK 17 no longer includes any of the following features: The pack200 tool The unpack200 tool The java.util.jar.Pack200 API The java.util.jar.Pack200.Packer API The java.util.jar.Pack200.Unpacker API The use of these tools and APIs has been limited since the introduction of the JMOD module format in OpenJDK 9. For more information, see JEP 367: Remove the Pack200 Tools and API . 3.3. Removal of Nashorn JavaScript engine Red Hat build of OpenJDK 17 no longer includes any of the following features: The Nashorn JavaScript engine The jjs command-line tool The jdk.scripting.nashorn module The jdk.scripting.nashorn.shell module The scripting API, javax.script , is still available in Red Hat build of OpenJDK 17 or later. Similar to releases before OpenJDK 8, you can use the javax.script API with a JavaScript engine of your choice, such as Rhino or the now externally maintained Nashorn JavaScript engine. For more information, see JEP 372: Remove the Nashorn JavaScript Engine . 3.4. Strong encapsulation of JDK internal elements Red Hat build of OpenJDK 17 introduces strong encapsulation of all internal elements of the JDK, apart from critical internal APIs such as sun.misc.Unsafe . 
From Red Hat build of OpenJDK 17 onward, you cannot relax the strong encapsulation of internal elements by using a single command-line option. This means that Red Hat build of OpenJDK 17 and later versions prevent reflective access to JDK internal types apart from critical internal APIs. For more information, see JEP 403: Strongly Encapsulate JDK Internals . 3.5. Biased locking disabled by default Red Hat build of OpenJDK 17 disables biased locking by default. In Red Hat build of OpenJDK 17, you can enable biased locking by setting the -XX:+UseBiasedLocking JVM option at startup. However, the -XX:+UseBiasedLocking option is deprecated in Red Hat build of OpenJDK 17 and planned for removal in OpenJDK 18. For more information, see JEP 374: Deprecate and Disable Biased Locking . 3.6. Removal of RMI activation Red Hat build of OpenJDK 17 removes the java.rmi.activation package and its associated rmid activation daemon for Java remote method invocation (RMI). Other RMI features are still available in Red Hat build of OpenJDK 17 and later versions. For more information, see JEP 407: Remove RMI Activation . 3.7. Removal of the Graal compiler Red Hat build of OpenJDK 17 removes the Graal compiler, which comprises the jaotc tool and the jdk.internal.vm.compiler and jdk.internal.vm.compiler.management modules. From Red Hat build of OpenJDK 17 onward, if you want to use ahead-of-time (AOT) compilation, you can use GraalVM. For more information, see JEP 410: Remove the Experimental AOT and JIT Compiler . 3.8. Additional resources (or steps) OpenJDK: JEPs in JDK 17 integrated since JDK 11 Major differences between Red Hat build of OpenJDK 17 and Red Hat build of OpenJDK 21
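If you are unsure which collector a JVM selects by default, or an application still reflects into JDK internals during migration, the following command-line sketches may help. The module and package in the --add-opens example are illustrative; replace them with the ones named in your own error messages, and treat the option as a temporary workaround rather than a fix.
# Show which garbage collector is enabled by default on this JVM
java -XX:+PrintFlagsFinal -version | grep -E 'UseG1GC|UseShenandoahGC|UseZGC'
# Start an application with the Shenandoah collector and GC logging enabled
java -XX:+UseShenandoahGC -Xlog:gc -jar application.jar
# Temporarily open an internal package to unnamed modules while the code is being fixed
java --add-opens java.base/java.lang=ALL-UNNAMED -jar application.jar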
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/21/html/migrating_to_red_hat_build_of_openjdk_21_from_earlier_versions/differences_11_17
Chapter 26. Exchange Property
Chapter 26. Exchange Property Overview The exchange property language provides a convenient way of accessing exchange properties . When you supply a key that matches one of the exchange property names, the exchange property language returns the corresponding value. The exchange property language is part of camel-core . XML example For example, to implement the recipient list pattern when the listOfEndpoints exchange property contains the recipient list, you could define a route as follows: Java example The same recipient list example can be implemented in Java as follows:
[ "<camelContext> <route> <from uri=\"direct:a\"/> <recipientList> <exchangeProperty>listOfEndpoints</exchangeProperty> </recipientList> </route> </camelContext>", "from(\"direct:a\").recipientList(exchangeProperty(\"listOfEndpoints\"));" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_development_guide/property
Machine APIs
Machine APIs OpenShift Container Platform 4.16 Reference guide for machine APIs Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html-single/machine_apis/index
Chapter 34. federation
Chapter 34. federation This chapter describes the commands under the federation command. 34.1. federation domain list List accessible domains Usage: Table 34.1. Optional Arguments Value Summary -h, --help Show this help message and exit Table 34.2. Output Formatters Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 34.3. CSV Formatter Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 34.4. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 34.5. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 34.2. federation project list List accessible projects Usage: Table 34.6. Optional Arguments Value Summary -h, --help Show this help message and exit Table 34.7. Output Formatters Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 34.8. CSV Formatter Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 34.9. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 34.10. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 34.3. federation protocol create Create new federation protocol Usage: Table 34.11. Positional Arguments Value Summary <name> New federation protocol name (must be unique per identity provider) Table 34.12. Optional Arguments Value Summary -h, --help Show this help message and exit --identity-provider <identity-provider> Identity provider that will support the new federation protocol (name or ID) (required) --mapping <mapping> Mapping that is to be used (name or id) (required) Table 34.13. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 34.14. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 34.15. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 34.16. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. 
--fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 34.4. federation protocol delete Delete federation protocol(s) Usage: Table 34.17. Positional Arguments Value Summary <federation-protocol> Federation protocol(s) to delete (name or id) Table 34.18. Optional Arguments Value Summary -h, --help Show this help message and exit --identity-provider <identity-provider> Identity provider that supports <federation-protocol> (name or ID) (required) 34.5. federation protocol list List federation protocols Usage: Table 34.19. Optional Arguments Value Summary -h, --help Show this help message and exit --identity-provider <identity-provider> Identity provider to list (name or id) (required) Table 34.20. Output Formatters Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 34.21. CSV Formatter Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 34.22. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 34.23. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 34.6. federation protocol set Set federation protocol properties Usage: Table 34.24. Positional Arguments Value Summary <name> Federation protocol to modify (name or id) Table 34.25. Optional Arguments Value Summary -h, --help Show this help message and exit --identity-provider <identity-provider> Identity provider that supports <federation-protocol> (name or ID) (required) --mapping <mapping> Mapping that is to be used (name or id) 34.7. federation protocol show Display federation protocol details Usage: Table 34.26. Positional Arguments Value Summary <federation-protocol> Federation protocol to display (name or id) Table 34.27. Optional Arguments Value Summary -h, --help Show this help message and exit --identity-provider <identity-provider> Identity provider that supports <federation-protocol> (name or ID) (required) Table 34.28. Output Formatters Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 34.29. JSON Formatter Value Summary --noindent Whether to disable indenting the json Table 34.30. Shell Formatter Value Summary --prefix PREFIX Add a prefix to all variable names Table 34.31. Table Formatter Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show.
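The following sketch combines several of these subcommands into a typical workflow. The identity provider and mapping names are placeholders that must already exist in Keystone before the protocol is created.
# Create a protocol named mapped for an existing identity provider and mapping
openstack federation protocol create mapped --identity-provider myidp --mapping myidp-mapping
# List the protocols for the identity provider and display the new one
openstack federation protocol list --identity-provider myidp
openstack federation protocol show mapped --identity-provider myidp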
[ "openstack federation domain list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN]", "openstack federation project list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN]", "openstack federation protocol create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] --identity-provider <identity-provider> --mapping <mapping> <name>", "openstack federation protocol delete [-h] --identity-provider <identity-provider> <federation-protocol> [<federation-protocol> ...]", "openstack federation protocol list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] --identity-provider <identity-provider>", "openstack federation protocol set [-h] --identity-provider <identity-provider> [--mapping <mapping>] <name>", "openstack federation protocol show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] --identity-provider <identity-provider> <federation-protocol>" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/command_line_interface_reference/federation
3.3. ext4
3.3. ext4 3.3.1. Migration from ext3 Moving to ext4 must be done with a freshly formatted ext4 file system. Migrating in place from ext3 to ext4 is not supported and will not produce many of the benefits ext4 offers, since the data currently residing on the partition will not make use of the extents features and other changes. Red Hat recommends that customers who cannot migrate to a cleanly formatted ext4 file system remain on their existing ext3 file system.
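A minimal sketch of the supported approach, formatting a new device with ext4 and copying the data across, is shown below. The device name, mount point, and source path are placeholders, and you should have a verified backup before you begin.
# Format the new partition with ext4 (this destroys any existing data on the device)
mkfs.ext4 /dev/sdb1
# Mount the new file system and copy the data from the existing ext3 file system
mkdir -p /mnt/newext4
mount /dev/sdb1 /mnt/newext4
rsync -aHAX /data/ /mnt/newext4/
# After verifying the copy, update /etc/fstab so the new file system is mounted in place of the old one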
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/migration_planning_guide/sect-migration_guide-file_systems-ext4
Chapter 7. Configure storage for OpenShift Container Platform services
Chapter 7. Configure storage for OpenShift Container Platform services You can use OpenShift Data Foundation to provide storage for OpenShift Container Platform services such as image registry, monitoring, and logging. The process for configuring storage for these services depends on the infrastructure used in your OpenShift Data Foundation deployment. Warning Always ensure that you have plenty of storage capacity for these services. If the storage for these critical services runs out of space, the cluster becomes inoperable and very difficult to recover. Red Hat recommends configuring shorter curation and retention intervals for these services. See Configuring the Curator schedule and the Modifying retention time for Prometheus metrics data sub section of Configuring persistent storage in the OpenShift Container Platform documentation for details. If you do run out of storage space for these services, contact Red Hat Customer Support. 7.1. Configuring Image Registry to use OpenShift Data Foundation OpenShift Container Platform provides a built in Container Image Registry which runs as a standard workload on the cluster. A registry is typically used as a publication target for images built on the cluster as well as a source of images for workloads running on the cluster. Warning This process does not migrate data from an existing image registry to the new image registry. If you already have container images in your existing registry, back up your registry before you complete this process, and re-register your images when this process is complete. Prerequisites Administrative access to OpenShift Web Console. OpenShift Data Foundation Operator is installed and running in the openshift-storage namespace. In OpenShift Web Console, click Operators Installed Operators to view installed operators. Image Registry Operator is installed and running in the openshift-image-registry namespace. In OpenShift Web Console, click Administration Cluster Settings Cluster Operators to view cluster operators. A storage class with provisioner openshift-storage.cephfs.csi.ceph.com is available. In OpenShift Web Console, click Storage StorageClasses to view available storage classes. Procedure Create a Persistent Volume Claim for the Image Registry to use. In the OpenShift Web Console, click Storage Persistent Volume Claims . Set the Project to openshift-image-registry . Click Create Persistent Volume Claim . From the list of available storage classes retrieved above, specify the Storage Class with the provisioner openshift-storage.cephfs.csi.ceph.com . Specify the Persistent Volume Claim Name , for example, ocs4registry . Specify an Access Mode of Shared Access (RWX) . Specify a Size of at least 100 GB. Click Create . Wait until the status of the new Persistent Volume Claim is listed as Bound . Configure the cluster's Image Registry to use the new Persistent Volume Claim. Click Administration Custom Resource Definitions . Click the Config custom resource definition associated with the imageregistry.operator.openshift.io group. Click the Instances tab. Beside the cluster instance, click the Action Menu (...) Edit Config . Add the new Persistent Volume Claim as persistent storage for the Image Registry. Add the following under spec: , replacing the existing storage: section if necessary. For example: Click Save . Verify that the new configuration is being used. Click Workloads Pods . Set the Project to openshift-image-registry . 
Verify that the new image-registry-* pod appears with a status of Running , and that the previous image-registry-* pod terminates. Click the new image-registry-* pod to view pod details. Scroll down to Volumes and verify that the registry-storage volume has a Type that matches your new Persistent Volume Claim, for example, ocs4registry . 7.2. Configuring monitoring to use OpenShift Data Foundation OpenShift Data Foundation provides a monitoring stack that consists of Prometheus and Alert Manager. Follow the instructions in this section to configure OpenShift Data Foundation as storage for the monitoring stack. Important Monitoring will not function if it runs out of storage space. Always ensure that you have plenty of storage capacity for monitoring. Red Hat recommends configuring a short retention interval for this service. See the Modifying retention time for Prometheus metrics data section of the Monitoring guide in the OpenShift Container Platform documentation for details. Prerequisites Administrative access to OpenShift Web Console. OpenShift Data Foundation Operator is installed and running in the openshift-storage namespace. In the OpenShift Web Console, click Operators Installed Operators to view installed operators. Monitoring Operator is installed and running in the openshift-monitoring namespace. In the OpenShift Web Console, click Administration Cluster Settings Cluster Operators to view cluster operators. A storage class with provisioner openshift-storage.rbd.csi.ceph.com is available. In the OpenShift Web Console, click Storage StorageClasses to view available storage classes. Procedure In the OpenShift Web Console, go to Workloads Config Maps . Set the Project dropdown to openshift-monitoring . Click Create Config Map . Define a new cluster-monitoring-config Config Map using the following example. Replace the content in angle brackets ( < , > ) with your own values, for example, retention: 24h or storage: 40Gi . Replace the storageClassName with the storageclass that uses the provisioner openshift-storage.rbd.csi.ceph.com . In the example given below the name of the storageclass is ocs-storagecluster-ceph-rbd . Example cluster-monitoring-config Config Map Click Create to save and create the Config Map. Verification steps Verify that the Persistent Volume Claims are bound to the pods. Go to Storage Persistent Volume Claims . Set the Project dropdown to openshift-monitoring . Verify that 5 Persistent Volume Claims are visible with a state of Bound , attached to three alertmanager-main-* pods, and two prometheus-k8s-* pods. Figure 7.1. Monitoring storage created and bound Verify that the new alertmanager-main-* pods appear with a state of Running . Go to Workloads Pods . Click the new alertmanager-main-* pods to view the pod details. Scroll down to Volumes and verify that the volume has a Type , ocs-alertmanager-claim that matches one of your new Persistent Volume Claims, for example, ocs-alertmanager-claim-alertmanager-main-0 . Figure 7.2. Persistent Volume Claims attached to alertmanager-main-* pod Verify that the new prometheus-k8s-* pods appear with a state of Running . Click the new prometheus-k8s-* pods to view the pod details. Scroll down to Volumes and verify that the volume has a Type , ocs-prometheus-claim that matches one of your new Persistent Volume Claims, for example, ocs-prometheus-claim-prometheus-k8s-0 . Figure 7.3. Persistent Volume Claims attached to prometheus-k8s-* pod 7.3. 
Cluster logging for OpenShift Data Foundation You can deploy cluster logging to aggregate logs for a range of OpenShift Container Platform services. For information about how to deploy cluster logging, see Deploying cluster logging . Upon initial OpenShift Container Platform deployment, OpenShift Data Foundation is not configured by default and the OpenShift Container Platform cluster will solely rely on default storage available from the nodes. You can edit the default configuration of OpenShift logging (ElasticSearch) to be backed by OpenShift Data Foundation to have OpenShift Data Foundation backed logging (Elasticsearch). Important Always ensure that you have plenty of storage capacity for these services. If you run out of storage space for these critical services, the logging application becomes inoperable and very difficult to recover. Red Hat recommends configuring shorter curation and retention intervals for these services. See Cluster logging curator in the OpenShift Container Platform documentation for details. If you run out of storage space for these services, contact Red Hat Customer Support. 7.3.1. Configuring persistent storage You can configure a persistent storage class and size for the Elasticsearch cluster using the storage class name and size parameters. The Cluster Logging Operator creates a Persistent Volume Claim for each data node in the Elasticsearch cluster based on these parameters. For example: This example specifies that each data node in the cluster will be bound to a Persistent Volume Claim that requests 200GiB of ocs-storagecluster-ceph-rbd storage. Each primary shard will be backed by a single replica. A copy of the shard is replicated across all the nodes and are always available and the copy can be recovered if at least two nodes exist due to the single redundancy policy. For information about Elasticsearch replication policies, see Elasticsearch replication policy in About deploying and configuring cluster logging . Note Omission of the storage block will result in a deployment backed by default storage. For example: For more information, see Configuring cluster logging . 7.3.2. Configuring cluster logging to use OpenShift data Foundation Follow the instructions in this section to configure OpenShift Data Foundation as storage for the OpenShift cluster logging. Note You can obtain all the logs when you configure logging for the first time in OpenShift Data Foundation. However, after you uninstall and reinstall logging, the old logs are removed and only the new logs are processed. Prerequisites Administrative access to OpenShift Web Console. OpenShift Data Foundation Operator is installed and running in the openshift-storage namespace. Cluster logging Operator is installed and running in the openshift-logging namespace. Procedure Click Administration Custom Resource Definitions from the left pane of the OpenShift Web Console. On the Custom Resource Definitions page, click ClusterLogging . On the Custom Resource Definition Overview page, select View Instances from the Actions menu or click the Instances Tab. On the Cluster Logging page, click Create Cluster Logging . You might have to refresh the page to load the data. In the YAML, replace the storageClassName with the storageclass that uses the provisioner openshift-storage.rbd.csi.ceph.com . 
In the example given below the name of the storageclass is ocs-storagecluster-ceph-rbd : If you have tainted the OpenShift Data Foundation nodes, you must add toleration to enable scheduling of the daemonset pods for logging. Click Save . Verification steps Verify that the Persistent Volume Claims are bound to the elasticsearch pods. Go to Storage Persistent Volume Claims . Set the Project dropdown to openshift-logging . Verify that Persistent Volume Claims are visible with a state of Bound , attached to elasticsearch- * pods. Figure 7.4. Cluster logging created and bound Verify that the new cluster logging is being used. Click Workload Pods . Set the Project to openshift-logging . Verify that the new elasticsearch- * pods appear with a state of Running . Click the new elasticsearch- * pod to view pod details. Scroll down to Volumes and verify that the elasticsearch volume has a Type that matches your new Persistent Volume Claim, for example, elasticsearch-elasticsearch-cdm-9r624biv-3 . Click the Persistent Volume Claim name and verify the storage class name in the PersistentVolumeClaim Overview page. Note Make sure to use a shorter curator time to avoid PV full scenario on PVs attached to Elasticsearch pods. You can configure Curator to delete Elasticsearch data based on retention settings. It is recommended that you set the following default index data retention of 5 days as a default. For more details, see Curation of Elasticsearch Data . Note To uninstall the cluster logging backed by Persistent Volume Claim, use the procedure removing the cluster logging operator from OpenShift Data Foundation in the uninstall chapter of the respective deployment guide.
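You can also run the verification from the command line instead of the web console. The following sketch assumes the default namespaces and resource names used in this chapter; adjust them if your deployment differs.
# Check that the logging, monitoring, and registry claims are bound
oc -n openshift-logging get pvc
oc -n openshift-monitoring get pvc
oc -n openshift-image-registry get pvc
# Confirm which storage class backs one of the Elasticsearch claims
oc -n openshift-logging get pvc -o jsonpath='{.items[0].spec.storageClassName}{"\n"}'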
[ "storage: pvc: claim: <new-pvc-name>", "storage: pvc: claim: ocs4registry", "apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: retention: <time to retain monitoring files, for example 24h> volumeClaimTemplate: metadata: name: ocs-prometheus-claim spec: storageClassName: ocs-storagecluster-ceph-rbd resources: requests: storage: <size of claim, e.g. 40Gi> alertmanagerMain: volumeClaimTemplate: metadata: name: ocs-alertmanager-claim spec: storageClassName: ocs-storagecluster-ceph-rbd resources: requests: storage: <size of claim, e.g. 40Gi>", "spec: logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 storage: storageClassName: \"ocs-storagecluster-ceph-rbd\" size: \"200G\"", "spec: logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 storage: {}", "apiVersion: \"logging.openshift.io/v1\" kind: \"ClusterLogging\" metadata: name: \"instance\" namespace: \"openshift-logging\" spec: managementState: \"Managed\" logStore: type: \"elasticsearch\" elasticsearch: nodeCount: 3 storage: storageClassName: ocs-storagecluster-ceph-rbd size: 200G # Change as per your requirement redundancyPolicy: \"SingleRedundancy\" visualization: type: \"kibana\" kibana: replicas: 1 curation: type: \"curator\" curator: schedule: \"30 3 * * *\" collection: logs: type: \"fluentd\" fluentd: {}", "spec: [...] collection: logs: fluentd: tolerations: - effect: NoSchedule key: node.ocs.openshift.io/storage value: 'true' type: fluentd", "config.yaml: | openshift-storage: delete: days: 5" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/deploying_and_managing_openshift_data_foundation_using_red_hat_openstack_platform/configure_storage_for_openshift_container_platform_services
Chapter 6. Launching clustered JBoss EAP
Chapter 6. Launching clustered JBoss EAP 6.1. Launch clustered JBoss EAP AMIs without mod_cluster and VPC This topic lists the steps to launch clustered JBoss EAP AMIs without mod_cluster and VPC. Note You can use the example configuration scripts that are provided with the image. To start clustered JBoss EAP AMI on a standalone server instance, you can use the example /opt/rh/eap8/root/usr/share/wildfly/docs/examples/configs/standalone-ec2-ha.xml file that contains a preconfigured S3_PING JGroups stack. For more information, see S3_PING in the Reliable group communication with JGroups document. This standalone-ec2-ha.xml profile file must be copied from /opt/rh/eap/root/usr/share/wildfly/docs/examples/configs/ to the JBoss EAP configuration directory /opt/rh/eap8/root/usr/share/wildfly/standalone/configuration/ . Then, you have to add the following line to the JBoss EAP service configuration file: A unique instance-id needs to be set for each standalone server instance in the undertow subsystem. A value for the instance-id can be set manually by editing the standalone-ec2-ha.xml file or by using the management CLI. For example, you can set the instance-id using the management CLI as follows: A value for jboss.jvmRoute can then be specified in standalone.conf using the JAVA_OPTS variable. The jgroups subsystem in the EC2 configuration file requires some S3_PING specific properties to discover cluster members. You must specify access key to S3, secret access key, and the S3 bucket to use for discovery. These properties can either be specified as Java options or put directly into the XML file by editing it or using CLI. You need to create an S3 bucket for discovery. See Amazon Simple Storage Service Documentation for more information. You may also have to configure the required permissions. The JGroups stack needs to be bound to an IP address, which is used to communicate with other nodes. This can be done by adding Java options, along with S3 Java options to the /opt/rh/eap8/root/usr/share/wildfly/bin/standalone.conf file. For example, if the private IP address was 10.10.10.10 , then you would add the following line to the standalone.conf file: You can deploy a sample application: /opt/rh/eap8/root/usr/share/java/eap8-jboss-ec2-eap-samples/cluster-demo.war and observe the logs in /opt/rh/eap8/root/usr/share/wildfly/standalone/log/server.log to see that the JBoss EAP servers have created a cluster. 6.1.1. Launching clustered AMIs without mod_cluster and VPC for domain controller instance Procedure Copy the domain-ec2.xml file from /opt/rh/eap8/root/usr/share/wildfly/docs/examples/configs to the JBoss EAP configuration directory. Set the following variables in the appropriate service configuration file: Add S3 domain controller discovery configuration to the host-master.xml file: Configure users and add the secret values for users to the host controller instances. For more information, see Create a Managed Domain on Two Machines in the JBoss EAP Configuration Guide . 6.1.2. Launching clustered AMIs without mod_cluster and VPC for host controller Procedure Set the following variable in the appropriate service configuration file: Add S3 domain controller discovery configuration to the host-slave.xml file: Note For information about S3 domain controller discovery, see Launch One or More Instances to Serve as Host Controllers . 
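As an illustration only, the service configuration and standalone.conf additions described in this section might look like the following. The S3 system property names are assumptions based on a typical S3_PING setup; check the jgroups subsystem in your standalone-ec2-ha.xml for the exact property names it expects, and replace the paths, bucket, credentials, and addresses with your own values.
# Service configuration file (path may differ on your image)
WILDFLY_SERVER_CONFIG=standalone-ec2-ha.xml
# Additions to /opt/rh/eap8/root/usr/share/wildfly/bin/standalone.conf
JAVA_OPTS="$JAVA_OPTS -Djboss.bind.address.private=10.10.10.10"           # address JGroups binds to
JAVA_OPTS="$JAVA_OPTS -Djboss.jgroups.s3.access_key=<access-key>"          # assumed property name
JAVA_OPTS="$JAVA_OPTS -Djboss.jgroups.s3.secret_access_key=<secret-key>"   # assumed property name
JAVA_OPTS="$JAVA_OPTS -Djboss.jgroups.s3.bucket=<discovery-bucket>"        # assumed property name
JAVA_OPTS="$JAVA_OPTS -Djboss.jvmRoute=node1"                              # unique per instance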
Warning Running a JBoss EAP cluster in a subnet with a network mask smaller than 24 bits or spanning multiple subnets complicates acquiring a unique server peer ID for each cluster member. Important The auto-scaling Amazon EC2 feature can be used with JBoss EAP cluster nodes. However, ensure that it is tested before deployment. You should verify that your particular workloads scale to the required number of nodes and that the performance meets your needs for the instance type you are planning to use, because different instance types receive different shares of the EC2 cloud resources. Furthermore, instance locality and current network/storage/host machine/RDS utilization may affect cluster performance. Test with your expected real-life loads and try to account for unexpected conditions. Warning The Amazon EC2 scale-down action terminates the nodes without any chance to shut down gracefully, and because some transactions might be interrupted, other cluster nodes and load balancers need time to fail over. This is likely to impact your application users' experience. It is recommended that you either scale down the application cluster manually by disabling the server from the mod_cluster management interface until processed sessions are completed, or shut down the JBoss EAP instance gracefully using SSH access to the instance or Red Hat JBoss Operations Network. Test that your scale-down procedure does not lead to adverse effects on your users' experience. Additional measures might be required for particular workloads, load balancers, and setups. 6.2. Launch clustered JBoss EAP AMIs with mod_cluster and VPC This topic lists the steps to launch an Apache HTTP server instance to serve as a mod_cluster proxy and a NAT instance for the Virtual Private Cloud (VPC). Note You can use the example configuration scripts that are provided with the image. An Amazon Virtual Private Cloud (Amazon VPC) is a feature of Amazon Web Services (AWS) that allows you to isolate a set of AWS resources in a private network. The topology and configuration of this private network can be customized to your needs. See Amazon Virtual Private Cloud for more information about Amazon VPC. Note If you start a cluster with a mod_cluster load balancer inside a VPC, the JBoss EAP servers are inaccessible to the public. The mod_cluster load balancer can be the only endpoint that is connected to the Internet. See Launch an Instance to Serve as a Domain Controller for setting up a domain controller instance. See Launch One or More Instances to Serve as Host Controllers for setting up a host controller instance. See Launch One or More Instances to Serve as Host Controllers for information about S3 domain controller discovery. To launch clustered AMIs with VPC and mod_cluster Configuring the VPC is optional. See the Detecting Your Supported Platforms and Whether You Have a Default VPC section in the Amazon VPC user guide for more information. Install the jbcs-httpd24-mod_cluster-native package and all of its dependencies. The mod_cluster configuration file is installed in /opt/rh/jbcs-httpd24/root/etc/httpd/conf.d/mod_cluster.conf . See the Apache HTTP Server Installation Guide for more information about installing Red Hat JBoss Core Services Apache HTTP Server. Disable advertising for mod_cluster . Add the following to VirtualHost in the /opt/rh/jbcs-httpd24/root/etc/httpd/conf.d/mod_cluster.conf configuration file. Allow ports in SELinux . If required, configure iptables .
Ports can be allowed in SELinux by using the semanage port -a -t http_port_t -p tcp $PORT_NR command. Configure JBoss EAP to look for the mod_cluster proxy on the address that mod_cluster listens on. Note An example configuration file is provided in /opt/rh/eap8/root/usr/share/wildfly/docs/examples/configs/standalone-ec2-ha.xml . You need to configure a list of proxies in the modcluster subsystem. You can define the list of proxies using one of the following methods: Define an outbound-socket-binding called mod-cluster-proxy1 with an appropriate host and port: <outbound-socket-binding name="mod-cluster-proxy1"> <remote-destination host="${jboss.modcluster.proxy1.host}" port="${jboss.modcluster.proxy1.port}"/> </outbound-socket-binding> Set the proxies attribute in the modcluster subsystem to mod-cluster-proxy1 with an appropriate host and port:
[ "WILDFLY_SERVER_CONFIG=standalone-ec2-ha.xml", "/subsystem=undertow:write-attribute(name=instance-id,value={USD{jboss.jvmRoute}})", "JAVA_OPTS=\"USDJAVA_OPTS -Djboss.bind.address.private=10.10.10.10 -Djboss.jgroups.aws.s3_ping.region_name= <S3_REGION_NAME> -Djboss.jgroups.aws.s3_ping.bucket_name= <S3_BUCKET_NAME> \"", "WILDFLY_SERVER_CONFIG=domain-ec2.xml WILDFLY_HOST_CONFIG=host-master.xml", "<local> <discovery-options> <discovery-option name=\"s3-discovery\" module=\"org.jboss.as.host-controller\" code=\"org.jboss.as.host.controller.discovery.S3Discovery\"> <property name=\"access-key\" value=\"S3_ACCESS_KEY\"/> <property name=\"secret-access-key\" value=\"S3_SECRET_ACCESS_KEY\"/> <property name=\"location\" value=\"S3_BUCKET_NAME\"/> </discovery-option> </discovery-options> </local>", "WILDFLY_HOST_CONFIG=host-slave.xml", "<remote security-realm=\"ManagementRealm\"> <discovery-options> <discovery-option name=\"s3-discovery\" module=\"org.jboss.as.host-controller\" code=\"org.jboss.as.host.controller.discovery.S3Discovery\"> <property name=\"access-key\" value=\"S3_ACCESS_KEY\"/> <property name=\"secret-access-key\" value=\"S3_SECRET_ACCESS_KEY\"/> <property name=\"location\" value=\"S3_BUCKET_NAME\"/> </discovery-option> </discovery-options> </remote>", "ServerAdvertise Off EnableMCPMReceive AdvertiseFrequency # comment out AdvertiseFrequency if present", "<outbound-socket-binding name=\"mod-cluster-proxy1\"> <remote-destination host=\"USD{jboss.modcluster.proxy1.host}\" port=\"USD{jboss.modcluster.proxy1.port}\"/> </outbound-socket-binding>", "/socket-binding-group=standard-sockets/remote-destination-outbound-socket-binding=mod-cluster-proxy1:add(host={USD{jboss.modcluster.proxy1.host}}, port={USD{jboss.modcluster.proxy1.port}})" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/deploying_jboss_eap_on_amazon_web_services/assembly-launching-clustered-eap_default
Appendix A. Red Hat Trusted Profile Analyzer with AWS values file template
Appendix A. Red Hat Trusted Profile Analyzer with AWS values file template Red Hat's Trusted Profile Analyzer (RHTPA) with Amazon Web Services (AWS) values file template for use by the RHTPA Helm chart. Template appDomain: USDAPP_DOMAIN_URL tracing: {} ingress: className: openshift-default storage: region: REGIONAL_ENDPOINT accessKey: valueFrom: secretKeyRef: name: storage-credentials key: aws_access_key_id secretKey: valueFrom: secretKeyRef: name: storage-credentials key: aws_secret_access_key eventBus: type: sqs region: REGIONAL_ENDPOINT accessKey: valueFrom: secretKeyRef: name: event-bus-credentials key: aws_access_key_id secretKey: valueFrom: secretKeyRef: name: event-bus-credentials key: aws_secret_access_key authenticator: type: cognito cognitoDomainUrl: COGNITO_DOMAIN_URL oidc: issuerUrl: https://cognito-idp. REGION .amazonaws.com/ USER_POOL_ID clients: frontend: clientId: FRONTEND_CLIENT_ID walker: clientId: WALKER_CLIENT_ID clientSecret: valueFrom: secretKeyRef: name: oidc-walker key: client-secret bombastic: bucket: bombastic- UNIQUE_ID topics: failed: bombastic-failed-default indexed: bombastic-indexed-default stored: bombastic-stored-default vexination: bucket: vexination- UNIQUE_ID topics: failed: vexination-failed-default indexed: vexination-indexed-default stored: vexination-stored-default v11y: bucket: v11y- UNIQUE_ID topics: failed: v11y-failed-default indexed: v11y-indexed-default stored: v11y-stored-default guac: database: name: valueFrom: secretKeyRef: name: postgresql-credentials key: db.name host: valueFrom: secretKeyRef: name: postgresql-credentials key: db.host port: valueFrom: secretKeyRef: name: postgresql-credentials key: db.port username: valueFrom: secretKeyRef: name: postgresql-credentials key: db.user password: valueFrom: secretKeyRef: name: postgresql-credentials key: db.password initDatabase: name: valueFrom: secretKeyRef: name: postgresql-admin-credentials key: db.name host: valueFrom: secretKeyRef: name: postgresql-admin-credentials key: db.host port: valueFrom: secretKeyRef: name: postgresql-admin-credentials key: db.port username: valueFrom: secretKeyRef: name: postgresql-admin-credentials key: db.user password: valueFrom: secretKeyRef: name: postgresql-admin-credentials key: db.password
[ "appDomain: USDAPP_DOMAIN_URL tracing: {} ingress: className: openshift-default storage: region: REGIONAL_ENDPOINT accessKey: valueFrom: secretKeyRef: name: storage-credentials key: aws_access_key_id secretKey: valueFrom: secretKeyRef: name: storage-credentials key: aws_secret_access_key eventBus: type: sqs region: REGIONAL_ENDPOINT accessKey: valueFrom: secretKeyRef: name: event-bus-credentials key: aws_access_key_id secretKey: valueFrom: secretKeyRef: name: event-bus-credentials key: aws_secret_access_key authenticator: type: cognito cognitoDomainUrl: COGNITO_DOMAIN_URL oidc: issuerUrl: https://cognito-idp. REGION .amazonaws.com/ USER_POOL_ID clients: frontend: clientId: FRONTEND_CLIENT_ID walker: clientId: WALKER_CLIENT_ID clientSecret: valueFrom: secretKeyRef: name: oidc-walker key: client-secret bombastic: bucket: bombastic- UNIQUE_ID topics: failed: bombastic-failed-default indexed: bombastic-indexed-default stored: bombastic-stored-default vexination: bucket: vexination- UNIQUE_ID topics: failed: vexination-failed-default indexed: vexination-indexed-default stored: vexination-stored-default v11y: bucket: v11y- UNIQUE_ID topics: failed: v11y-failed-default indexed: v11y-indexed-default stored: v11y-stored-default guac: database: name: valueFrom: secretKeyRef: name: postgresql-credentials key: db.name host: valueFrom: secretKeyRef: name: postgresql-credentials key: db.host port: valueFrom: secretKeyRef: name: postgresql-credentials key: db.port username: valueFrom: secretKeyRef: name: postgresql-credentials key: db.user password: valueFrom: secretKeyRef: name: postgresql-credentials key: db.password initDatabase: name: valueFrom: secretKeyRef: name: postgresql-admin-credentials key: db.name host: valueFrom: secretKeyRef: name: postgresql-admin-credentials key: db.host port: valueFrom: secretKeyRef: name: postgresql-admin-credentials key: db.port username: valueFrom: secretKeyRef: name: postgresql-admin-credentials key: db.user password: valueFrom: secretKeyRef: name: postgresql-admin-credentials key: db.password" ]
https://docs.redhat.com/en/documentation/red_hat_trusted_profile_analyzer/1/html/deployment_guide/rhtpa-with-aws-values-file-template_deploy
6.16. Improving Uptime with Virtual Machine High Availability
6.16. Improving Uptime with Virtual Machine High Availability 6.16.1. What is High Availability? High availability is recommended for virtual machines running critical workloads. A highly available virtual machine is automatically restarted, either on its original host or another host in the cluster, if its process is interrupted, such as in the following scenarios: A host becomes non-operational due to hardware failure. A host is put into maintenance mode for scheduled downtime. A host becomes unavailable because it has lost communication with an external storage resource. A highly available virtual machine is not restarted if it is shut down cleanly, such as in the following scenarios: The virtual machine is shut down from within the guest. The virtual machine is shut down from the Manager. The host is shut down by an administrator without being put in maintenance mode first. With storage domains V4 or later, virtual machines have the additional capability to acquire a lease on a special volume on the storage, enabling a virtual machine to start on another host even if the original host loses power. The functionality also prevents the virtual machine from being started on two different hosts, which may lead to corruption of the virtual machine disks. With high availability, interruption to service is minimal because virtual machines are restarted within seconds with no user intervention required. High availability keeps your resources balanced by restarting guests on a host with low current resource utilization, or based on any workload balancing or power saving policies that you configure. This ensures that there is sufficient capacity to restart virtual machines at all times. High Availability and Storage I/O Errors If a storage I/O error occurs, the virtual machine is paused. You can define how the host handles highly available virtual machines after the connection with the storage domain is reestablished; they can either be resumed, ungracefully shut down, or remain paused. For more information about these options, see Section A.1.6, "Virtual Machine High Availability Settings Explained" . 6.16.2. High Availability Considerations A highly available host requires a power management device and fencing parameters. In addition, for a virtual machine to be highly available when its host becomes non-operational, it needs to be started on another available host in the cluster. To enable the migration of highly available virtual machines: Power management must be configured for the hosts running the highly available virtual machines. The host running the highly available virtual machine must be part of a cluster which has other available hosts. The destination host must be running. The source and destination host must have access to the data domain on which the virtual machine resides. The source and destination host must have access to the same virtual networks and VLANs. There must be enough CPUs on the destination host that are not in use to support the virtual machine's requirements. There must be enough RAM on the destination host that is not in use to support the virtual machine's requirements. 6.16.3. Configuring a Highly Available Virtual Machine High availability must be configured individually for each virtual machine. Configuring a Highly Available Virtual Machine Click Compute Virtual Machines and select a virtual machine. Click Edit . Click the High Availability tab. Select the Highly Available check box to enable high availability for the virtual machine. 
Select the storage domain to hold the virtual machine lease, or select No VM Lease to disable the functionality, from the Target Storage Domain for VM Lease drop-down list. See Section 6.16.1, "What is High Availability?" for more information about virtual machine leases. Important This functionality is only available on storage domains that are V4 or later. Select AUTO_RESUME , LEAVE_PAUSED , or KILL from the Resume Behavior drop-down list. If you defined a virtual machine lease, KILL is the only option available. For more information see Section A.1.6, "Virtual Machine High Availability Settings Explained" . Select Low , Medium , or High from the Priority drop-down list. When migration is triggered, a queue is created in which the high priority virtual machines are migrated first. If a cluster is running low on resources, only the high priority virtual machines are migrated. Click OK .
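The same settings can also be applied through the REST API instead of the Administration Portal. The following is a minimal sketch, assuming the Manager is reachable at example-manager.example.com and <VM_ID> is the ID of the virtual machine; the priority value shown is illustrative, because the API expects an integer rather than the Low / Medium / High labels used in the Administration Portal:

$ curl -k -u admin@internal:<password> -X PUT \
    -H "Content-Type: application/xml" \
    -d '<vm><high_availability><enabled>true</enabled><priority>100</priority></high_availability></vm>' \
    https://example-manager.example.com/ovirt-engine/api/vms/<VM_ID>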
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/virtual_machine_management_guide/sect-improving_uptime_with_virtual_machine_high_availability
Chapter 3. Distributed tracing platform (Tempo)
Chapter 3. Distributed tracing platform (Tempo) 3.1. Installing Installing the distributed tracing platform (Tempo) requires the Tempo Operator and choosing which type of deployment is best for your use case: For microservices mode, deploy a TempoStack instance in a dedicated OpenShift project. For monolithic mode, deploy a TempoMonolithic instance in a dedicated OpenShift project. Important Using object storage requires setting up a supported object store and creating a secret for the object store credentials before deploying a TempoStack or TempoMonolithic instance. 3.1.1. Installing the Tempo Operator You can install the Tempo Operator by using the web console or the command line. 3.1.1.1. Installing the Tempo Operator by using the web console You can install the Tempo Operator from the Administrator view of the web console. Prerequisites You are logged in to the OpenShift Container Platform web console as a cluster administrator with the cluster-admin role. For Red Hat OpenShift Dedicated, you must be logged in using an account with the dedicated-admin role. You have completed setting up the required object storage by a supported provider: Red Hat OpenShift Data Foundation , MinIO , Amazon S3 , Azure Blob Storage , Google Cloud Storage . For more information, see "Object storage setup". Warning Object storage is required and not included with the distributed tracing platform (Tempo). You must choose and set up object storage by a supported provider before installing the distributed tracing platform (Tempo). Procedure Go to Operators OperatorHub and search for Tempo Operator . Select the Tempo Operator that is provided by Red Hat . Important The following selections are the default presets for this Operator: Update channel stable Installation mode All namespaces on the cluster Installed Namespace openshift-tempo-operator Update approval Automatic Select the Enable Operator recommended cluster monitoring on this Namespace checkbox. Select Install Install View Operator . Verification In the Details tab of the page of the installed Operator, under ClusterServiceVersion details , verify that the installation Status is Succeeded . 3.1.1.2. Installing the Tempo Operator by using the CLI You can install the Tempo Operator from the command line. Prerequisites An active OpenShift CLI ( oc ) session by a cluster administrator with the cluster-admin role. Tip Ensure that your OpenShift CLI ( oc ) version is up to date and matches your OpenShift Container Platform version. Run oc login : USD oc login --username=<your_username> You have completed setting up the required object storage by a supported provider: Red Hat OpenShift Data Foundation , MinIO , Amazon S3 , Azure Blob Storage , Google Cloud Storage . For more information, see "Object storage setup". Warning Object storage is required and not included with the distributed tracing platform (Tempo). You must choose and set up object storage by a supported provider before installing the distributed tracing platform (Tempo). 
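Before starting the procedure below, you can confirm that the client and server versions match, as recommended in the Tip above:

$ oc version

The output lists the client version and the OpenShift Container Platform server version; update the OpenShift CLI ( oc ) first if they differ.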
Procedure Create a project for the Tempo Operator by running the following command: USD oc apply -f - << EOF apiVersion: project.openshift.io/v1 kind: Project metadata: labels: kubernetes.io/metadata.name: openshift-tempo-operator openshift.io/cluster-monitoring: "true" name: openshift-tempo-operator EOF Create an Operator group by running the following command: USD oc apply -f - << EOF apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-tempo-operator namespace: openshift-tempo-operator spec: upgradeStrategy: Default EOF Create a subscription by running the following command: USD oc apply -f - << EOF apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: tempo-product namespace: openshift-tempo-operator spec: channel: stable installPlanApproval: Automatic name: tempo-product source: redhat-operators sourceNamespace: openshift-marketplace EOF Verification Check the Operator status by running the following command: USD oc get csv -n openshift-tempo-operator 3.1.2. Installing a TempoStack instance You can install a TempoStack instance by using the web console or the command line. 3.1.2.1. Installing a TempoStack instance by using the web console You can install a TempoStack instance from the Administrator view of the web console. Prerequisites You are logged in to the OpenShift Container Platform web console as a cluster administrator with the cluster-admin role. For Red Hat OpenShift Dedicated, you must be logged in using an account with the dedicated-admin role. You have completed setting up the required object storage by a supported provider: Red Hat OpenShift Data Foundation , MinIO , Amazon S3 , Azure Blob Storage , Google Cloud Storage . For more information, see "Object storage setup". Warning Object storage is required and not included with the distributed tracing platform (Tempo). You must choose and set up object storage by a supported provider before installing the distributed tracing platform (Tempo). Procedure Go to Home Projects Create Project to create a project of your choice for the TempoStack instance that you will create in a subsequent step. Go to Workloads Secrets Create From YAML to create a secret for your object storage bucket in the project that you created for the TempoStack instance. For more information, see "Object storage setup". Example secret for Amazon S3 and MinIO storage apiVersion: v1 kind: Secret metadata: name: minio-test stringData: endpoint: http://minio.minio.svc:9000 bucket: tempo access_key_id: tempo access_key_secret: <secret> type: Opaque Create a TempoStack instance. Note You can create multiple TempoStack instances in separate projects on the same cluster. Go to Operators Installed Operators . Select TempoStack Create TempoStack YAML view . In the YAML view , customize the TempoStack custom resource (CR): apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: sample namespace: <project_of_tempostack_instance> spec: storageSize: <value>Gi 1 storage: secret: 2 name: <secret_name> 3 type: <secret_provider> 4 tls: 5 enabled: true caName: <ca_certificate_configmap_name> 6 template: queryFrontend: jaegerQuery: enabled: true ingress: route: termination: edge type: route resources: 7 total: limits: memory: <value>Gi cpu: <value>m 1 Size of the persistent volume claim for the Tempo WAL. The default is 10Gi . 2 Secret you created in step 2 for the object storage that had been set up as one of the prerequisites. 3 Value of the name in the metadata of the secret. 
4 Accepted values are azure for Azure Blob Storage; gcs for Google Cloud Storage; and s3 for Amazon S3, MinIO, or Red Hat OpenShift Data Foundation. 5 Optional. 6 Optional: Name of a ConfigMap object that contains a CA certificate. 7 Optional. Example of a TempoStack CR for AWS S3 and MinIO storage apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: simplest namespace: <project_of_tempostack_instance> spec: storageSize: 1Gi storage: 1 secret: name: minio-test type: s3 resources: total: limits: memory: 2Gi cpu: 2000m template: queryFrontend: jaegerQuery: 2 enabled: true ingress: route: termination: edge type: route 1 In this example, the object storage was set up as one of the prerequisites, and the object storage secret was created in step 2. 2 The stack deployed in this example is configured to receive Jaeger Thrift over HTTP and OpenTelemetry Protocol (OTLP), which permits visualizing the data with the Jaeger UI. Select Create . Verification Use the Project: dropdown list to select the project of the TempoStack instance. Go to Operators Installed Operators to verify that the Status of the TempoStack instance is Condition: Ready . Go to Workloads Pods to verify that all the component pods of the TempoStack instance are running. Access the Tempo console: Go to Networking Routes and Ctrl + F to search for tempo . In the Location column, open the URL to access the Tempo console. Note The Tempo console initially shows no trace data following the Tempo console installation. 3.1.2.2. Installing a TempoStack instance by using the CLI You can install a TempoStack instance from the command line. Prerequisites An active OpenShift CLI ( oc ) session by a cluster administrator with the cluster-admin role. Tip Ensure that your OpenShift CLI ( oc ) version is up to date and matches your OpenShift Container Platform version. Run the oc login command: USD oc login --username=<your_username> You have completed setting up the required object storage by a supported provider: Red Hat OpenShift Data Foundation , MinIO , Amazon S3 , Azure Blob Storage , Google Cloud Storage . For more information, see "Object storage setup". Warning Object storage is required and not included with the distributed tracing platform (Tempo). You must choose and set up object storage by a supported provider before installing the distributed tracing platform (Tempo). Procedure Run the following command to create a project of your choice for the TempoStack instance that you will create in a subsequent step: USD oc apply -f - << EOF apiVersion: project.openshift.io/v1 kind: Project metadata: name: <project_of_tempostack_instance> EOF In the project that you created for the TempoStack instance, create a secret for your object storage bucket by running the following command: USD oc apply -f - << EOF <object_storage_secret> EOF For more information, see "Object storage setup". Example secret for Amazon S3 and MinIO storage apiVersion: v1 kind: Secret metadata: name: minio-test stringData: endpoint: http://minio.minio.svc:9000 bucket: tempo access_key_id: tempo access_key_secret: <secret> type: Opaque Create a TempoStack instance in the project that you created for it: Note You can create multiple TempoStack instances in separate projects on the same cluster. 
Customize the TempoStack custom resource (CR): apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: sample namespace: <project_of_tempostack_instance> spec: storageSize: <value>Gi 1 storage: secret: 2 name: <secret_name> 3 type: <secret_provider> 4 tls: 5 enabled: true caName: <ca_certificate_configmap_name> 6 template: queryFrontend: jaegerQuery: enabled: true ingress: route: termination: edge type: route resources: 7 total: limits: memory: <value>Gi cpu: <value>m 1 Size of the persistent volume claim for the Tempo WAL. The default is 10Gi . 2 Secret you created in step 2 for the object storage that had been set up as one of the prerequisites. 3 Value of the name in the metadata of the secret. 4 Accepted values are azure for Azure Blob Storage; gcs for Google Cloud Storage; and s3 for Amazon S3, MinIO, or Red Hat OpenShift Data Foundation. 5 Optional. 6 Optional: Name of a ConfigMap object that contains a CA certificate. 7 Optional. Example of a TempoStack CR for AWS S3 and MinIO storage apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: simplest namespace: <project_of_tempostack_instance> spec: storageSize: 1Gi storage: 1 secret: name: minio-test type: s3 resources: total: limits: memory: 2Gi cpu: 2000m template: queryFrontend: jaegerQuery: 2 enabled: true ingress: route: termination: edge type: route 1 In this example, the object storage was set up as one of the prerequisites, and the object storage secret was created in step 2. 2 The stack deployed in this example is configured to receive Jaeger Thrift over HTTP and OpenTelemetry Protocol (OTLP), which permits visualizing the data with the Jaeger UI. Apply the customized CR by running the following command: USD oc apply -f - << EOF <tempostack_cr> EOF Verification Verify that the status of all TempoStack components is Running and the conditions are type: Ready by running the following command: USD oc get tempostacks.tempo.grafana.com simplest -o yaml Verify that all the TempoStack component pods are running by running the following command: USD oc get pods Access the Tempo console: Query the route details by running the following command: USD oc get route Open https://<route_from_previous_step> in a web browser. Note The Tempo console initially shows no trace data following the Tempo console installation. 3.1.3. Installing a TempoMonolithic instance Important The TempoMonolithic instance is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . You can install a TempoMonolithic instance by using the web console or the command line. The TempoMonolithic custom resource (CR) creates a Tempo deployment in monolithic mode. All components of the Tempo deployment, such as the compactor, distributor, ingester, querier, and query frontend, are contained in a single container. A TempoMonolithic instance supports storing traces in in-memory storage, a persistent volume, or object storage. 
Tempo deployment in monolithic mode is preferred for a small deployment, demonstration, testing, and as a migration path of the Red Hat OpenShift distributed tracing platform (Jaeger) all-in-one deployment. Note The monolithic deployment of Tempo does not scale horizontally. If you require horizontal scaling, use the TempoStack CR for a Tempo deployment in microservices mode. 3.1.3.1. Installing a TempoMonolithic instance by using the web console Important The TempoMonolithic instance is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . You can install a TempoMonolithic instance from the Administrator view of the web console. Prerequisites You are logged in to the OpenShift Container Platform web console as a cluster administrator with the cluster-admin role. For Red Hat OpenShift Dedicated, you must be logged in using an account with the dedicated-admin role. Procedure Go to Home Projects Create Project to create a project of your choice for the TempoMonolithic instance that you will create in a subsequent step. Decide which type of supported storage to use for storing traces: in-memory storage, a persistent volume, or object storage. Important Object storage is not included with the distributed tracing platform (Tempo) and requires setting up an object store by a supported provider: Red Hat OpenShift Data Foundation , MinIO , Amazon S3 , Azure Blob Storage , or Google Cloud Storage . Additionally, opting for object storage requires creating a secret for your object storage bucket in the project that you created for the TempoMonolithic instance. You can do this in Workloads Secrets Create From YAML . For more information, see "Object storage setup". Example secret for Amazon S3 and MinIO storage apiVersion: v1 kind: Secret metadata: name: minio-test stringData: endpoint: http://minio.minio.svc:9000 bucket: tempo access_key_id: tempo access_key_secret: <secret> type: Opaque Create a TempoMonolithic instance: Note You can create multiple TempoMonolithic instances in separate projects on the same cluster. Go to Operators Installed Operators . Select TempoMonolithic Create TempoMonolithic YAML view . In the YAML view , customize the TempoMonolithic custom resource (CR). The following TempoMonolithic CR creates a TempoMonolithic deployment with trace ingestion over OTLP/gRPC and OTLP/HTTP, storing traces in a supported type of storage and exposing Jaeger UI via a route: apiVersion: tempo.grafana.com/v1alpha1 kind: TempoMonolithic metadata: name: <metadata_name> namespace: <project_of_tempomonolithic_instance> spec: storage: traces: backend: <supported_storage_type> 1 size: <value>Gi 2 s3: 3 secret: <secret_name> 4 tls: 5 enabled: true caName: <ca_certificate_configmap_name> 6 jaegerui: enabled: true 7 route: enabled: true 8 resources: 9 total: limits: memory: <value>Gi cpu: <value>m 1 Type of storage for storing traces: in-memory storage, a persistent volume, or object storage. The value for a persistent volume is pv . The accepted values for object storage are s3 , gcs , or azure , depending on the used object store type. 
The default value is memory for the tmpfs in-memory storage, which is only appropriate for development, testing, demonstrations, and proof-of-concept environments because the data does not persist when the pod is shut down. 2 Memory size: For in-memory storage, this means the size of the tmpfs volume, where the default is 2Gi . For a persistent volume, this means the size of the persistent volume claim, where the default is 10Gi . For object storage, this means the size of the persistent volume claim for the Tempo WAL, where the default is 10Gi . 3 Optional: For object storage, the type of object storage. The accepted values are s3 , gcs , and azure , depending on the used object store type. 4 Optional: For object storage, the value of the name in the metadata of the storage secret. The storage secret must be in the same namespace as the TempoMonolithic instance and contain the fields specified in "Table 1. Required secret parameters" in the section "Object storage setup". 5 Optional. 6 Optional: Name of a ConfigMap object that contains a CA certificate. 7 Enables the Jaeger UI. 8 Enables creation of a route for the Jaeger UI. 9 Optional. Select Create . Verification Use the Project: dropdown list to select the project of the TempoMonolithic instance. Go to Operators Installed Operators to verify that the Status of the TempoMonolithic instance is Condition: Ready . Go to Workloads Pods to verify that the pod of the TempoMonolithic instance is running. Access the Jaeger UI: Go to Networking Routes and Ctrl + F to search for jaegerui . Note The Jaeger UI uses the tempo-<metadata_name_of_TempoMonolithic_CR>-jaegerui route. In the Location column, open the URL to access the Jaeger UI. When the pod of the TempoMonolithic instance is ready, you can send traces to the tempo-<metadata_name_of_TempoMonolithic_CR>:4317 (OTLP/gRPC) and tempo-<metadata_name_of_TempoMonolithic_CR>:4318 (OTLP/HTTP) endpoints inside the cluster. The Tempo API is available at the tempo-<metadata_name_of_TempoMonolithic_CR>:3200 endpoint inside the cluster. 3.1.3.2. Installing a TempoMonolithic instance by using the CLI Important The TempoMonolithic instance is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . You can install a TempoMonolithic instance from the command line. Prerequisites An active OpenShift CLI ( oc ) session by a cluster administrator with the cluster-admin role. Tip Ensure that your OpenShift CLI ( oc ) version is up to date and matches your OpenShift Container Platform version. Run the oc login command: USD oc login --username=<your_username> Procedure Run the following command to create a project of your choice for the TempoMonolithic instance that you will create in a subsequent step: USD oc apply -f - << EOF apiVersion: project.openshift.io/v1 kind: Project metadata: name: <project_of_tempomonolithic_instance> EOF Decide which type of supported storage to use for storing traces: in-memory storage, a persistent volume, or object storage. 
Important Object storage is not included with the distributed tracing platform (Tempo) and requires setting up an object store by a supported provider: Red Hat OpenShift Data Foundation , MinIO , Amazon S3 , Azure Blob Storage , or Google Cloud Storage . Additionally, opting for object storage requires creating a secret for your object storage bucket in the project that you created for the TempoMonolithic instance. You can do this by running the following command: USD oc apply -f - << EOF <object_storage_secret> EOF For more information, see "Object storage setup". Example secret for Amazon S3 and MinIO storage apiVersion: v1 kind: Secret metadata: name: minio-test stringData: endpoint: http://minio.minio.svc:9000 bucket: tempo access_key_id: tempo access_key_secret: <secret> type: Opaque Create a TempoMonolithic instance in the project that you created for it. Tip You can create multiple TempoMonolithic instances in separate projects on the same cluster. Customize the TempoMonolithic custom resource (CR). The following TempoMonolithic CR creates a TempoMonolithic deployment with trace ingestion over OTLP/gRPC and OTLP/HTTP, storing traces in a supported type of storage and exposing Jaeger UI via a route: apiVersion: tempo.grafana.com/v1alpha1 kind: TempoMonolithic metadata: name: <metadata_name> namespace: <project_of_tempomonolithic_instance> spec: storage: traces: backend: <supported_storage_type> 1 size: <value>Gi 2 s3: 3 secret: <secret_name> 4 tls: 5 enabled: true caName: <ca_certificate_configmap_name> 6 jaegerui: enabled: true 7 route: enabled: true 8 resources: 9 total: limits: memory: <value>Gi cpu: <value>m 1 Type of storage for storing traces: in-memory storage, a persistent volume, or object storage. The value for a persistent volume is pv . The accepted values for object storage are s3 , gcs , or azure , depending on the used object store type. The default value is memory for the tmpfs in-memory storage, which is only appropriate for development, testing, demonstrations, and proof-of-concept environments because the data does not persist when the pod is shut down. 2 Memory size: For in-memory storage, this means the size of the tmpfs volume, where the default is 2Gi . For a persistent volume, this means the size of the persistent volume claim, where the default is 10Gi . For object storage, this means the size of the persistent volume claim for the Tempo WAL, where the default is 10Gi . 3 Optional: For object storage, the type of object storage. The accepted values are s3 , gcs , and azure , depending on the used object store type. 4 Optional: For object storage, the value of the name in the metadata of the storage secret. The storage secret must be in the same namespace as the TempoMonolithic instance and contain the fields specified in "Table 1. Required secret parameters" in the section "Object storage setup". 5 Optional. 6 Optional: Name of a ConfigMap object that contains a CA certificate. 7 Enables the Jaeger UI. 8 Enables creation of a route for the Jaeger UI. 9 Optional. 
Apply the customized CR by running the following command: USD oc apply -f - << EOF <tempomonolithic_cr> EOF Verification Verify that the status of all TempoMonolithic components is Running and the conditions are type: Ready by running the following command: USD oc get tempomonolithic.tempo.grafana.com <metadata_name_of_tempomonolithic_cr> -o yaml Run the following command to verify that the pod of the TempoMonolithic instance is running: USD oc get pods Access the Jaeger UI: Query the route details for the tempo-<metadata_name_of_tempomonolithic_cr>-jaegerui route by running the following command: USD oc get route Open https://<route_from_previous_step> in a web browser. When the pod of the TempoMonolithic instance is ready, you can send traces to the tempo-<metadata_name_of_tempomonolithic_cr>:4317 (OTLP/gRPC) and tempo-<metadata_name_of_tempomonolithic_cr>:4318 (OTLP/HTTP) endpoints inside the cluster. The Tempo API is available at the tempo-<metadata_name_of_tempomonolithic_cr>:3200 endpoint inside the cluster. 3.1.4. Object storage setup You can use the following configuration parameters when setting up a supported object storage. Table 3.1. Required secret parameters Storage provider Secret parameters Red Hat OpenShift Data Foundation name: tempostack-dev-odf # example bucket: <bucket_name> # requires an ObjectBucketClaim endpoint: https://s3.openshift-storage.svc access_key_id: <data_foundation_access_key_id> access_key_secret: <data_foundation_access_key_secret> MinIO See MinIO Operator . name: tempostack-dev-minio # example bucket: <minio_bucket_name> # MinIO documentation endpoint: <minio_bucket_endpoint> access_key_id: <minio_access_key_id> access_key_secret: <minio_access_key_secret> Amazon S3 name: tempostack-dev-s3 # example bucket: <s3_bucket_name> # Amazon S3 documentation endpoint: <s3_bucket_endpoint> access_key_id: <s3_access_key_id> access_key_secret: <s3_access_key_secret> Amazon S3 with Security Token Service (STS) name: tempostack-dev-s3 # example bucket: <s3_bucket_name> # Amazon S3 documentation region: <s3_region> role_arn: <s3_role_arn> Microsoft Azure Blob Storage name: tempostack-dev-azure # example container: <azure_blob_storage_container_name> # Microsoft Azure documentation account_name: <azure_blob_storage_account_name> account_key: <azure_blob_storage_account_key> Google Cloud Storage on Google Cloud Platform (GCP) name: tempostack-dev-gcs # example bucketname: <google_cloud_storage_bucket_name> # requires a bucket created in a GCP project key.json: <path/to/key.json> # requires a service account in the bucket's GCP project for GCP authentication 3.1.4.1. Setting up the Amazon S3 storage with the Security Token Service You can set up the Amazon S3 storage with the Security Token Service (STS) by using the AWS Command Line Interface (AWS CLI). Important The Amazon S3 storage with the Security Token Service is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . Prerequisites You have installed the latest version of the AWS CLI. Procedure Create an AWS S3 bucket. 
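For reference, the bucket from the preceding step can be created with the AWS CLI. This is a minimal sketch in which the bucket name and region are placeholders; the --create-bucket-configuration option is required for every region except us-east-1 :

$ aws s3api create-bucket --bucket <s3_bucket_name> --region <s3_region> \
    --create-bucket-configuration LocationConstraint=<s3_region>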
Create the following trust.json file for the AWS IAM policy that will set up a trust relationship for the AWS IAM role, created in the step, with the service account of the TempoStack instance: { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Federated": "arn:aws:iam::USD{<aws_account_id>}:oidc-provider/USD{<oidc_provider>}" 1 }, "Action": "sts:AssumeRoleWithWebIdentity", "Condition": { "StringEquals": { "USD{OIDC_PROVIDER}:sub": [ "system:serviceaccount:USD{<openshift_project_for_tempostack>}:tempo-USD{<tempostack_cr_name>}" 2 "system:serviceaccount:USD{<openshift_project_for_tempostack>}:tempo-USD{<tempostack_cr_name>}-query-frontend" ] } } } ] } 1 OIDC provider that you have configured on the OpenShift Container Platform. You can get the configured OIDC provider value also by running the following command: USD oc get authentication cluster -o json | jq -r '.spec.serviceAccountIssuer' | sed 's http[s]*:// ~g' . 2 Namespace in which you intend to create the TempoStack instance. Create an AWS IAM role by attaching the trust.json policy file that you created: USD aws iam create-role \ --role-name "tempo-s3-access" \ --assume-role-policy-document "file:///tmp/trust.json" \ --query Role.Arn \ --output text Attach an AWS IAM policy to the created role: USD aws iam attach-role-policy \ --role-name "tempo-s3-access" \ --policy-arn "arn:aws:iam::aws:policy/AmazonS3FullAccess" In the OpenShift Container Platform, create an object storage secret with keys as follows: apiVersion: v1 kind: Secret metadata: name: minio-test stringData: bucket: <s3_bucket_name> region: <s3_region> role_arn: <s3_role_arn> type: Opaque Additional resources AWS Identity and Access Management Documentation AWS Command Line Interface Documentation Configuring an OpenID Connect identity provider Identify AWS resources with Amazon Resource Names (ARNs) 3.1.4.2. Setting up IBM Cloud Object Storage You can set up IBM Cloud Object Storage by using the OpenShift CLI ( oc ). Prerequisites You have installed the latest version of OpenShift CLI ( oc ). For more information, see "Getting started with the OpenShift CLI" in Configure: CLI tools . You have installed the latest version of IBM Cloud Command Line Interface ( ibmcloud ). For more information, see "Getting started with the IBM Cloud CLI" in IBM Cloud Docs . You have configured IBM Cloud Object Storage. For more information, see "Choosing a plan and creating an instance" in IBM Cloud Docs . You have an IBM Cloud Platform account. You have ordered an IBM Cloud Object Storage plan. You have created an instance of IBM Cloud Object Storage. Procedure On IBM Cloud, create an object store bucket. 
On IBM Cloud, create a service key for connecting to the object store bucket by running the following command: USD ibmcloud resource service-key-create <tempo_bucket> Writer \ --instance-name <tempo_bucket> --parameters '{"HMAC":true}' On IBM Cloud, create a secret with the bucket credentials by running the following command: USD oc -n <namespace> create secret generic <ibm_cos_secret> \ --from-literal=bucket="<tempo_bucket>" \ --from-literal=endpoint="<ibm_bucket_endpoint>" \ --from-literal=access_key_id="<ibm_bucket_access_key>" \ --from-literal=access_key_secret="<ibm_bucket_secret_key>" On OpenShift Container Platform, create an object storage secret with keys as follows: apiVersion: v1 kind: Secret metadata: name: <ibm_cos_secret> stringData: bucket: <tempo_bucket> endpoint: <ibm_bucket_endpoint> access_key_id: <ibm_bucket_access_key> access_key_secret: <ibm_bucket_secret_key> type: Opaque On OpenShift Container Platform, set the storage section in the TempoStack custom resource as follows: apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack # ... spec: # ... storage: secret: name: <ibm_cos_secret> 1 type: s3 # ... 1 Name of the secret that contains the IBM Cloud Storage access and secret keys. Additional resources Getting started with the OpenShift CLI Getting started with the IBM Cloud CLI (IBM Cloud Docs) Choosing a plan and creating an instance (IBM Cloud Docs) Getting started with IBM Cloud Object Storage: Before you begin (IBM Cloud Docs) 3.1.5. Additional resources Creating a cluster admin OperatorHub.io Accessing the web console Installing from OperatorHub using the web console Creating applications from installed Operators Getting started with the OpenShift CLI 3.2. Configuring The Tempo Operator uses a custom resource definition (CRD) file that defines the architecture and configuration settings for creating and deploying the distributed tracing platform (Tempo) resources. You can install the default configuration or modify the file. 3.2.1. Configuring back-end storage For information about configuring the back-end storage, see Understanding persistent storage and the relevant configuration section for your chosen storage option. 3.2.2. Introduction to TempoStack configuration parameters The TempoStack custom resource (CR) defines the architecture and settings for creating the distributed tracing platform (Tempo) resources. You can modify these parameters to customize your implementation to your business needs. Example TempoStack CR apiVersion: tempo.grafana.com/v1alpha1 1 kind: TempoStack 2 metadata: 3 name: <name> 4 spec: 5 storage: {} 6 resources: {} 7 replicationFactor: 1 8 retention: {} 9 template: distributor: {} 10 ingester: {} 11 compactor: {} 12 querier: {} 13 queryFrontend: {} 14 gateway: {} 15 limits: 16 global: ingestion: {} 17 query: {} 18 observability: 19 grafana: {} metrics: {} tracing: {} search: {} 20 managementState: managed 21 1 API version to use when creating the object. 2 Defines the kind of Kubernetes object to create. 3 Data that uniquely identifies the object, including a name string, UID , and optional namespace . OpenShift Container Platform automatically generates the UID and completes the namespace with the name of the project where the object is created. 4 Name of the TempoStack instance. 5 Contains all of the configuration parameters of the TempoStack instance. When a common definition for all Tempo components is required, define it in the spec section. 
When the definition relates to an individual component, place it in the spec.template.<component> section. 6 Storage is specified at instance deployment. See the installation page for information about storage options for the instance. 7 Defines the compute resources for the Tempo container. 8 Integer value for the number of ingesters that must acknowledge the data from the distributors before accepting a span. 9 Configuration options for retention of traces. 10 Configuration options for the Tempo distributor component. 11 Configuration options for the Tempo ingester component. 12 Configuration options for the Tempo compactor component. 13 Configuration options for the Tempo querier component. 14 Configuration options for the Tempo query-frontend component. 15 Configuration options for the Tempo gateway component. 16 Limits ingestion and query rates. 17 Defines ingestion rate limits. 18 Defines query rate limits. 19 Configures operands to handle telemetry data. 20 Configures search capabilities. 21 Defines whether or not this CR is managed by the Operator. The default value is managed . Additional resources Installing a TempoStack instance Installing a TempoMonolithic instance 3.2.3. Query configuration options Two components of the distributed tracing platform (Tempo), the querier and query frontend, manage queries. You can configure both of these components. The querier component finds the requested trace ID in the ingesters or back-end storage. Depending on the set parameters, the querier component can query both the ingesters and pull bloom or indexes from the back end to search blocks in object storage. The querier component exposes an HTTP endpoint at GET /querier/api/traces/<trace_id> , but it is not expected to be used directly. Queries must be sent to the query frontend. Table 3.2. Configuration parameters for the querier component Parameter Description Values nodeSelector The simple form of the node-selection constraint. type: object replicas The number of replicas to be created for the component. type: integer; format: int32 tolerations Component-specific pod tolerations. type: array The query frontend component is responsible for sharding the search space for an incoming query. The query frontend exposes traces via a simple HTTP endpoint: GET /api/traces/<trace_id> . Internally, the query frontend component splits the blockID space into a configurable number of shards and then queues these requests. The querier component connects to the query frontend component via a streaming gRPC connection to process these sharded queries. Table 3.3. Configuration parameters for the query frontend component Parameter Description Values component Configuration of the query frontend component. type: object component.nodeSelector The simple form of the node selection constraint. type: object component.replicas The number of replicas to be created for the query frontend component. type: integer; format: int32 component.tolerations Pod tolerations specific to the query frontend component. type: array jaegerQuery The options specific to the Jaeger Query component. type: object jaegerQuery.enabled When enabled , creates the Jaeger Query component, jaegerQuery . type: boolean jaegerQuery.ingress The options for the Jaeger Query ingress. type: object jaegerQuery.ingress.annotations The annotations of the ingress object. type: object jaegerQuery.ingress.host The hostname of the ingress object. type: string jaegerQuery.ingress.ingressClassName The name of an IngressClass cluster resource. 
Defines which ingress controller serves this ingress resource. type: string jaegerQuery.ingress.route The options for the OpenShift route. type: object jaegerQuery.ingress.route.termination The termination type. The default is edge . type: string (enum: insecure, edge, passthrough, reencrypt) jaegerQuery.ingress.type The type of ingress for the Jaeger Query UI. The supported types are ingress , route , and none . type: string (enum: ingress, route) jaegerQuery.monitorTab The monitor tab configuration. type: object jaegerQuery.monitorTab.enabled Enables the monitor tab in the Jaeger console. The PrometheusEndpoint must be configured. type: boolean jaegerQuery.monitorTab.prometheusEndpoint The endpoint to the Prometheus instance that contains the span rate, error, and duration (RED) metrics. For example, https://thanos-querier.openshift-monitoring.svc.cluster.local:9091 . type: string Example configuration of the query frontend component in a TempoStack CR apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: simplest spec: storage: secret: name: minio type: s3 storageSize: 200M resources: total: limits: memory: 2Gi cpu: 2000m template: queryFrontend: jaegerQuery: enabled: true ingress: route: termination: edge type: route Additional resources Understanding taints and tolerations 3.2.4. Configuration of the monitor tab in Jaeger UI Trace data contains rich information, and the data is normalized across instrumented languages and frameworks. Therefore, request rate, error, and duration (RED) metrics can be extracted from traces. The metrics can be visualized in Jaeger console in the Monitor tab. The metrics are derived from spans in the OpenTelemetry Collector that are scraped from the Collector by the Prometheus deployed in the user-workload monitoring stack. The Jaeger UI queries these metrics from the Prometheus endpoint and visualizes them. 3.2.4.1. OpenTelemetry Collector configuration The OpenTelemetry Collector requires configuration of the spanmetrics connector that derives metrics from traces and exports the metrics in the Prometheus format. OpenTelemetry Collector custom resource for span RED kind: OpenTelemetryCollector apiVersion: opentelemetry.io/v1alpha1 metadata: name: otel spec: mode: deployment observability: metrics: enableMetrics: true 1 config: | connectors: spanmetrics: 2 metrics_flush_interval: 15s receivers: otlp: 3 protocols: grpc: http: exporters: prometheus: 4 endpoint: 0.0.0.0:8889 add_metric_suffixes: false resource_to_telemetry_conversion: enabled: true # by default resource attributes are dropped otlp: endpoint: "tempo-simplest-distributor:4317" tls: insecure: true service: pipelines: traces: receivers: [otlp] exporters: [otlp, spanmetrics] 5 metrics: receivers: [spanmetrics] 6 exporters: [prometheus] 1 Creates the ServiceMonitor custom resource to enable scraping of the Prometheus exporter. 2 The Spanmetrics connector receives traces and exports metrics. 3 The OTLP receiver to receive spans in the OpenTelemetry protocol. 4 The Prometheus exporter is used to export metrics in the Prometheus format. 5 The Spanmetrics connector is configured as exporter in traces pipeline. 6 The Spanmetrics connector is configured as receiver in metrics pipeline. 3.2.4.2. Tempo configuration The TempoStack custom resource must specify the following: the Monitor tab is enabled, and the Prometheus endpoint is set to the Thanos querier service to query the data from the user-defined monitoring stack. 
TempoStack custom resource with the enabled Monitor tab apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: redmetrics spec: storage: secret: name: minio-test type: s3 storageSize: 1Gi template: gateway: enabled: false queryFrontend: jaegerQuery: enabled: true monitorTab: enabled: true 1 prometheusEndpoint: https://thanos-querier.openshift-monitoring.svc.cluster.local:9091 2 redMetricsNamespace: "" 3 ingress: type: route 1 Enables the monitoring tab in the Jaeger console. 2 The service name for Thanos Querier from user-workload monitoring. 3 Optional: The metrics namespace on which the Jaeger query retrieves the Prometheus metrics. Include this line only if you are using an OpenTelemetry Collector version earlier than 0.109.0. If you are using an OpenTelemetry Collector version 0.109.0 or later, omit this line. 3.2.4.3. Span RED metrics and alerting rules The metrics generated by the spanmetrics connector are usable with alerting rules. For example, for alerts about a slow service or to define service level objectives (SLOs), the connector creates a duration_bucket histogram and the calls counter metric. These metrics have labels that identify the service, API name, operation type, and other attributes. Table 3.4. Labels of the metrics created in the spanmetrics connector Label Description Values service_name Service name set by the otel_service_name environment variable. frontend span_name Name of the operation. / /customer span_kind Identifies the server, client, messaging, or internal operation. SPAN_KIND_SERVER SPAN_KIND_CLIENT SPAN_KIND_PRODUCER SPAN_KIND_CONSUMER SPAN_KIND_INTERNAL Example PrometheusRule CR that defines an alerting rule for SLO when not serving 95% of requests within 2000ms on the front-end service apiVersion: monitoring.coreos.com/v1 kind: PrometheusRule metadata: name: span-red spec: groups: - name: server-side-latency rules: - alert: SpanREDFrontendAPIRequestLatency expr: histogram_quantile(0.95, sum(rate(duration_bucket{service_name="frontend", span_kind="SPAN_KIND_SERVER"}[5m])) by (le, service_name, span_name)) > 2000 1 labels: severity: Warning annotations: summary: "High request latency on {{USDlabels.service_name}} and {{USDlabels.span_name}}" description: "{{USDlabels.instance}} has 95th request latency above 2s (current value: {{USDvalue}}s)" 1 The expression for checking if 95% of the front-end server response time values are below 2000 ms. The time range ( [5m] ) must be at least four times the scrape interval and long enough to accommodate a change in the metric. 3.2.5. Configuring the receiver TLS The custom resource of your TempoStack or TempoMonolithic instance supports configuring the TLS for receivers by using user-provided certificates or OpenShift's service serving certificates. 3.2.5.1. Receiver TLS configuration for a TempoStack instance You can provide a TLS certificate in a secret or use the service serving certificates that are generated by OpenShift Container Platform. To provide a TLS certificate in a secret, configure it in the TempoStack custom resource. Note This feature is not supported with the enabled Tempo Gateway. TLS for receivers and using a user-provided certificate in a secret apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack # ... spec: # ... template: distributor: tls: enabled: true 1 certName: <tls_secret> 2 caName: <ca_name> 3 # ... 1 TLS enabled at the Tempo Distributor. 2 Secret containing a tls.key key and tls.crt certificate that you apply in advance. 
3 Optional: CA in a config map to enable mutual TLS authentication (mTLS). Alternatively, you can use the service serving certificates that are generated by OpenShift Container Platform. Note Mutual TLS authentication (mTLS) is not supported with this feature. TLS for receivers and using the service serving certificates that are generated by OpenShift Container Platform apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack # ... spec: # ... template: distributor: tls: enabled: true 1 # ... 1 Sufficient configuration for the TLS at the Tempo Distributor. Additional resources Understanding service serving certificates Service CA certificates 3.2.5.2. Receiver TLS configuration for a TempoMonolithic instance You can provide a TLS certificate in a secret or use the service serving certificates that are generated by OpenShift Container Platform. To provide a TLS certificate in a secret, configure it in the TempoMonolithic custom resource. Note This feature is not supported with the enabled Tempo Gateway. TLS for receivers and using a user-provided certificate in a secret apiVersion: tempo.grafana.com/v1alpha1 kind: TempoMonolithic # ... spec: # ... ingestion: otlp: grpc: tls: enabled: true 1 certName: <tls_secret> 2 caName: <ca_name> 3 # ... 1 TLS enabled at the Tempo Distributor. 2 Secret containing a tls.key key and tls.crt certificate that you apply in advance. 3 Optional: CA in a config map to enable mutual TLS authentication (mTLS). Alternatively, you can use the service serving certificates that are generated by OpenShift Container Platform. Note Mutual TLS authentication (mTLS) is not supported with this feature. TLS for receivers and using the service serving certificates that are generated by OpenShift Container Platform apiVersion: tempo.grafana.com/v1alpha1 kind: TempoMonolithic # ... spec: # ... ingestion: otlp: grpc: tls: enabled: true http: tls: enabled: true 1 # ... 1 Minimal configuration for the TLS at the Tempo Distributor. Additional resources Understanding service serving certificates Service CA certificates 3.2.6. Multitenancy Multitenancy with authentication and authorization is provided in the Tempo Gateway service. The authentication uses OpenShift OAuth and the Kubernetes TokenReview API. The authorization uses the Kubernetes SubjectAccessReview API. Note The Tempo Gateway service supports ingestion of traces only via the OTLP/gRPC. The OTLP/HTTP is not supported. Sample Tempo CR with two tenants, dev and prod apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: simplest namespace: chainsaw-multitenancy spec: storage: secret: name: minio type: s3 storageSize: 1Gi resources: total: limits: memory: 2Gi cpu: 2000m tenants: mode: openshift 1 authentication: 2 - tenantName: dev 3 tenantId: "1610b0c3-c509-4592-a256-a1871353dbfa" 4 - tenantName: prod tenantId: "1610b0c3-c509-4592-a256-a1871353dbfb" template: gateway: enabled: true 5 queryFrontend: jaegerQuery: enabled: true 1 Must be set to openshift . 2 The list of tenants. 3 The tenant name. Must be provided in the X-Scope-OrgId header when ingesting the data. 4 A unique tenant ID. 5 Enables a gateway that performs authentication and authorization. The Jaeger UI is exposed at http://<gateway-ingress>/api/traces/v1/<tenant-name>/search . The authorization configuration uses the ClusterRole and ClusterRoleBinding of the Kubernetes Role-Based Access Control (RBAC). By default, no users have read or write permissions. 
Sample of the read RBAC configuration that allows authenticated users to read the trace data of the dev and prod tenants apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: tempostack-traces-reader rules: - apiGroups: - 'tempo.grafana.com' resources: 1 - dev - prod resourceNames: - traces verbs: - 'get' 2 --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: tempostack-traces-reader roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: tempostack-traces-reader subjects: - kind: Group apiGroup: rbac.authorization.k8s.io name: system:authenticated 3 1 Lists the tenants. 2 The get value enables the read operation. 3 Grants all authenticated users the read permissions for trace data. Sample of the write RBAC configuration that allows the otel-collector service account to write the trace data for the dev tenant apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector 1 namespace: otel --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: tempostack-traces-write rules: - apiGroups: - 'tempo.grafana.com' resources: 2 - dev resourceNames: - traces verbs: - 'create' 3 --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: tempostack-traces roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: tempostack-traces-write subjects: - kind: ServiceAccount name: otel-collector namespace: otel 1 The service account name for the client to use when exporting trace data. The client must send the service account token, /var/run/secrets/kubernetes.io/serviceaccount/token , as the bearer token header. 2 Lists the tenants. 3 The create value enables the write operation. Trace data can be sent to the Tempo instance from the OpenTelemetry Collector that uses the service account with RBAC for writing the data. Sample OpenTelemetry CR configuration apiVersion: opentelemetry.io/v1alpha1 kind: OpenTelemetryCollector metadata: name: cluster-collector namespace: tracing-system spec: mode: deployment serviceAccount: otel-collector config: | extensions: bearertokenauth: filename: "/var/run/secrets/kubernetes.io/serviceaccount/token" exporters: otlp/dev: 1 endpoint: tempo-simplest-gateway.tempo.svc.cluster.local:8090 tls: insecure: false ca_file: "/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt" auth: authenticator: bearertokenauth headers: X-Scope-OrgID: "dev" otlphttp/dev: 2 endpoint: https://tempo-simplest-gateway.chainsaw-multitenancy.svc.cluster.local:8080/api/traces/v1/dev tls: insecure: false ca_file: "/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt" auth: authenticator: bearertokenauth headers: X-Scope-OrgID: "dev" service: extensions: [bearertokenauth] pipelines: traces: exporters: [otlp/dev] 3 1 OTLP gRPC Exporter. 2 OTLP HTTP Exporter. 3 You can specify otlp/dev for the OTLP gRPC Exporter or otlphttp/dev for the OTLP HTTP Exporter. 3.2.7. Using taints and tolerations To schedule the TempoStack pods on dedicated nodes, see How to deploy the different TempoStack components on infra nodes using nodeSelector and tolerations in OpenShift 4 . 3.2.8. Configuring monitoring and alerts The Tempo Operator supports monitoring and alerts about each TempoStack component such as distributor, ingester, and so on, and exposes upgrade and operational metrics about the Operator itself. 3.2.8.1. Configuring the TempoStack metrics and alerts You can enable metrics and alerts of TempoStack instances. 
Prerequisites Monitoring for user-defined projects is enabled in the cluster. Procedure To enable metrics of a TempoStack instance, set the spec.observability.metrics.createServiceMonitors field to true : apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: <name> spec: observability: metrics: createServiceMonitors: true To enable alerts for a TempoStack instance, set the spec.observability.metrics.createPrometheusRules field to true : apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: <name> spec: observability: metrics: createPrometheusRules: true Verification You can use the Administrator view of the web console to verify successful configuration: Go to Observe Targets , filter for Source: User , and check that ServiceMonitors in the format tempo-<instance_name>-<component> have the Up status. To verify that alerts are set up correctly, go to Observe Alerting Alerting rules , filter for Source: User , and check that the Alert rules for the TempoStack instance components are available. Additional resources Enabling monitoring for user-defined projects 3.2.8.2. Configuring the Tempo Operator metrics and alerts When installing the Tempo Operator from the web console, you can select the Enable Operator recommended cluster monitoring on this Namespace checkbox, which enables creating metrics and alerts of the Tempo Operator. If the checkbox was not selected during installation, you can manually enable metrics and alerts even after installing the Tempo Operator. Procedure Add the openshift.io/cluster-monitoring: "true" label in the project where the Tempo Operator is installed, which is openshift-tempo-operator by default. Verification You can use the Administrator view of the web console to verify successful configuration: Go to Observe Targets , filter for Source: Platform , and search for tempo-operator , which must have the Up status. To verify that alerts are set up correctly, go to Observe Alerting Alerting rules , filter for Source: Platform , and locate the Alert rules for the Tempo Operator . 3.3. Troubleshooting You can diagnose and fix issues in TempoStack or TempoMonolithic instances by using various troubleshooting methods. 3.3.1. Collecting diagnostic data from the command line When submitting a support case, it is helpful to include diagnostic information about your cluster to Red Hat Support. You can use the oc adm must-gather tool to gather diagnostic data for resources of various types, such as TempoStack or TempoMonolithic , and the created resources like Deployment , Pod , or ConfigMap . The oc adm must-gather tool creates a new pod that collects this data. Procedure From the directory where you want to save the collected data, run the oc adm must-gather command to collect the data: USD oc adm must-gather --image=ghcr.io/grafana/tempo-operator/must-gather -- \ /usr/bin/must-gather --operator-namespace <operator_namespace> 1 1 The default namespace where the Operator is installed is openshift-tempo-operator . Verification Verify that the new directory is created and contains the collected data. 3.4. Upgrading For version upgrades, the Tempo Operator uses the Operator Lifecycle Manager (OLM), which controls installation, upgrade, and role-based access control (RBAC) of Operators in a cluster. The OLM runs in the OpenShift Container Platform by default. The OLM queries for available Operators as well as upgrades for installed Operators. 
When the Tempo Operator is upgraded to the new version, it scans for running TempoStack instances that it manages and upgrades them to the version corresponding to the Operator's new version. 3.4.1. Additional resources Operator Lifecycle Manager concepts and resources Updating installed Operators 3.5. Removing The steps for removing the Red Hat OpenShift distributed tracing platform (Tempo) from an OpenShift Container Platform cluster are as follows: Shut down all distributed tracing platform (Tempo) pods. Remove any TempoStack instances. Remove the Tempo Operator. 3.5.1. Removing by using the web console You can remove a TempoStack instance in the Administrator view of the web console. Prerequisites You are logged in to the OpenShift Container Platform web console as a cluster administrator with the cluster-admin role. For Red Hat OpenShift Dedicated, you must be logged in using an account with the dedicated-admin role. Procedure Go to Operators Installed Operators Tempo Operator TempoStack . To remove the TempoStack instance, select Delete TempoStack Delete . Optional: Remove the Tempo Operator. 3.5.2. Removing by using the CLI You can remove a TempoStack instance on the command line. Prerequisites An active OpenShift CLI ( oc ) session by a cluster administrator with the cluster-admin role. Tip Ensure that your OpenShift CLI ( oc ) version is up to date and matches your OpenShift Container Platform version. Run oc login : USD oc login --username=<your_username> Procedure Get the name of the TempoStack instance by running the following command: USD oc get deployments -n <project_of_tempostack_instance> Remove the TempoStack instance by running the following command: USD oc delete tempo <tempostack_instance_name> -n <project_of_tempostack_instance> Optional: Remove the Tempo Operator. Verification Run the following command to verify that the TempoStack instance is not found in the output, which indicates its successful removal: USD oc get deployments -n <project_of_tempostack_instance> 3.5.3. Additional resources Deleting Operators from a cluster Getting started with the OpenShift CLI
[ "oc login --username=<your_username>", "oc apply -f - << EOF apiVersion: project.openshift.io/v1 kind: Project metadata: labels: kubernetes.io/metadata.name: openshift-tempo-operator openshift.io/cluster-monitoring: \"true\" name: openshift-tempo-operator EOF", "oc apply -f - << EOF apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-tempo-operator namespace: openshift-tempo-operator spec: upgradeStrategy: Default EOF", "oc apply -f - << EOF apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: tempo-product namespace: openshift-tempo-operator spec: channel: stable installPlanApproval: Automatic name: tempo-product source: redhat-operators sourceNamespace: openshift-marketplace EOF", "oc get csv -n openshift-tempo-operator", "apiVersion: v1 kind: Secret metadata: name: minio-test stringData: endpoint: http://minio.minio.svc:9000 bucket: tempo access_key_id: tempo access_key_secret: <secret> type: Opaque", "apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: sample namespace: <project_of_tempostack_instance> spec: storageSize: <value>Gi 1 storage: secret: 2 name: <secret_name> 3 type: <secret_provider> 4 tls: 5 enabled: true caName: <ca_certificate_configmap_name> 6 template: queryFrontend: jaegerQuery: enabled: true ingress: route: termination: edge type: route resources: 7 total: limits: memory: <value>Gi cpu: <value>m", "apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: simplest namespace: <project_of_tempostack_instance> spec: storageSize: 1Gi storage: 1 secret: name: minio-test type: s3 resources: total: limits: memory: 2Gi cpu: 2000m template: queryFrontend: jaegerQuery: 2 enabled: true ingress: route: termination: edge type: route", "oc login --username=<your_username>", "oc apply -f - << EOF apiVersion: project.openshift.io/v1 kind: Project metadata: name: <project_of_tempostack_instance> EOF", "oc apply -f - << EOF <object_storage_secret> EOF", "apiVersion: v1 kind: Secret metadata: name: minio-test stringData: endpoint: http://minio.minio.svc:9000 bucket: tempo access_key_id: tempo access_key_secret: <secret> type: Opaque", "apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: sample namespace: <project_of_tempostack_instance> spec: storageSize: <value>Gi 1 storage: secret: 2 name: <secret_name> 3 type: <secret_provider> 4 tls: 5 enabled: true caName: <ca_certificate_configmap_name> 6 template: queryFrontend: jaegerQuery: enabled: true ingress: route: termination: edge type: route resources: 7 total: limits: memory: <value>Gi cpu: <value>m", "apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: simplest namespace: <project_of_tempostack_instance> spec: storageSize: 1Gi storage: 1 secret: name: minio-test type: s3 resources: total: limits: memory: 2Gi cpu: 2000m template: queryFrontend: jaegerQuery: 2 enabled: true ingress: route: termination: edge type: route", "oc apply -f - << EOF <tempostack_cr> EOF", "oc get tempostacks.tempo.grafana.com simplest -o yaml", "oc get pods", "oc get route", "apiVersion: v1 kind: Secret metadata: name: minio-test stringData: endpoint: http://minio.minio.svc:9000 bucket: tempo access_key_id: tempo access_key_secret: <secret> type: Opaque", "apiVersion: tempo.grafana.com/v1alpha1 kind: TempoMonolithic metadata: name: <metadata_name> namespace: <project_of_tempomonolithic_instance> spec: storage: traces: backend: <supported_storage_type> 1 size: <value>Gi 2 s3: 3 secret: <secret_name> 4 tls: 5 enabled: true caName: 
<ca_certificate_configmap_name> 6 jaegerui: enabled: true 7 route: enabled: true 8 resources: 9 total: limits: memory: <value>Gi cpu: <value>m", "oc login --username=<your_username>", "oc apply -f - << EOF apiVersion: project.openshift.io/v1 kind: Project metadata: name: <project_of_tempomonolithic_instance> EOF", "oc apply -f - << EOF <object_storage_secret> EOF", "apiVersion: v1 kind: Secret metadata: name: minio-test stringData: endpoint: http://minio.minio.svc:9000 bucket: tempo access_key_id: tempo access_key_secret: <secret> type: Opaque", "apiVersion: tempo.grafana.com/v1alpha1 kind: TempoMonolithic metadata: name: <metadata_name> namespace: <project_of_tempomonolithic_instance> spec: storage: traces: backend: <supported_storage_type> 1 size: <value>Gi 2 s3: 3 secret: <secret_name> 4 tls: 5 enabled: true caName: <ca_certificate_configmap_name> 6 jaegerui: enabled: true 7 route: enabled: true 8 resources: 9 total: limits: memory: <value>Gi cpu: <value>m", "oc apply -f - << EOF <tempomonolithic_cr> EOF", "oc get tempomonolithic.tempo.grafana.com <metadata_name_of_tempomonolithic_cr> -o yaml", "oc get pods", "oc get route", "{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"Federated\": \"arn:aws:iam::USD{<aws_account_id>}:oidc-provider/USD{<oidc_provider>}\" 1 }, \"Action\": \"sts:AssumeRoleWithWebIdentity\", \"Condition\": { \"StringEquals\": { \"USD{OIDC_PROVIDER}:sub\": [ \"system:serviceaccount:USD{<openshift_project_for_tempostack>}:tempo-USD{<tempostack_cr_name>}\" 2 \"system:serviceaccount:USD{<openshift_project_for_tempostack>}:tempo-USD{<tempostack_cr_name>}-query-frontend\" ] } } } ] }", "aws iam create-role --role-name \"tempo-s3-access\" --assume-role-policy-document \"file:///tmp/trust.json\" --query Role.Arn --output text", "aws iam attach-role-policy --role-name \"tempo-s3-access\" --policy-arn \"arn:aws:iam::aws:policy/AmazonS3FullAccess\"", "apiVersion: v1 kind: Secret metadata: name: minio-test stringData: bucket: <s3_bucket_name> region: <s3_region> role_arn: <s3_role_arn> type: Opaque", "ibmcloud resource service-key-create <tempo_bucket> Writer --instance-name <tempo_bucket> --parameters '{\"HMAC\":true}'", "oc -n <namespace> create secret generic <ibm_cos_secret> --from-literal=bucket=\"<tempo_bucket>\" --from-literal=endpoint=\"<ibm_bucket_endpoint>\" --from-literal=access_key_id=\"<ibm_bucket_access_key>\" --from-literal=access_key_secret=\"<ibm_bucket_secret_key>\"", "apiVersion: v1 kind: Secret metadata: name: <ibm_cos_secret> stringData: bucket: <tempo_bucket> endpoint: <ibm_bucket_endpoint> access_key_id: <ibm_bucket_access_key> access_key_secret: <ibm_bucket_secret_key> type: Opaque", "apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack spec: storage: secret: name: <ibm_cos_secret> 1 type: s3", "apiVersion: tempo.grafana.com/v1alpha1 1 kind: TempoStack 2 metadata: 3 name: <name> 4 spec: 5 storage: {} 6 resources: {} 7 replicationFactor: 1 8 retention: {} 9 template: distributor: {} 10 ingester: {} 11 compactor: {} 12 querier: {} 13 queryFrontend: {} 14 gateway: {} 15 limits: 16 global: ingestion: {} 17 query: {} 18 observability: 19 grafana: {} metrics: {} tracing: {} search: {} 20 managementState: managed 21", "apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: simplest spec: storage: secret: name: minio type: s3 storageSize: 200M resources: total: limits: memory: 2Gi cpu: 2000m template: queryFrontend: jaegerQuery: enabled: true ingress: route: termination: edge type: route", "kind: 
OpenTelemetryCollector apiVersion: opentelemetry.io/v1alpha1 metadata: name: otel spec: mode: deployment observability: metrics: enableMetrics: true 1 config: | connectors: spanmetrics: 2 metrics_flush_interval: 15s receivers: otlp: 3 protocols: grpc: http: exporters: prometheus: 4 endpoint: 0.0.0.0:8889 add_metric_suffixes: false resource_to_telemetry_conversion: enabled: true # by default resource attributes are dropped otlp: endpoint: \"tempo-simplest-distributor:4317\" tls: insecure: true service: pipelines: traces: receivers: [otlp] exporters: [otlp, spanmetrics] 5 metrics: receivers: [spanmetrics] 6 exporters: [prometheus]", "apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: redmetrics spec: storage: secret: name: minio-test type: s3 storageSize: 1Gi template: gateway: enabled: false queryFrontend: jaegerQuery: enabled: true monitorTab: enabled: true 1 prometheusEndpoint: https://thanos-querier.openshift-monitoring.svc.cluster.local:9091 2 redMetricsNamespace: \"\" 3 ingress: type: route", "apiVersion: monitoring.coreos.com/v1 kind: PrometheusRule metadata: name: span-red spec: groups: - name: server-side-latency rules: - alert: SpanREDFrontendAPIRequestLatency expr: histogram_quantile(0.95, sum(rate(duration_bucket{service_name=\"frontend\", span_kind=\"SPAN_KIND_SERVER\"}[5m])) by (le, service_name, span_name)) > 2000 1 labels: severity: Warning annotations: summary: \"High request latency on {{USDlabels.service_name}} and {{USDlabels.span_name}}\" description: \"{{USDlabels.instance}} has 95th request latency above 2s (current value: {{USDvalue}}s)\"", "apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack spec: template: distributor: tls: enabled: true 1 certName: <tls_secret> 2 caName: <ca_name> 3", "apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack spec: template: distributor: tls: enabled: true 1", "apiVersion: tempo.grafana.com/v1alpha1 kind: TempoMonolithic spec: ingestion: otlp: grpc: tls: enabled: true 1 certName: <tls_secret> 2 caName: <ca_name> 3", "apiVersion: tempo.grafana.com/v1alpha1 kind: TempoMonolithic spec: ingestion: otlp: grpc: tls: enabled: true http: tls: enabled: true 1", "apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: simplest namespace: chainsaw-multitenancy spec: storage: secret: name: minio type: s3 storageSize: 1Gi resources: total: limits: memory: 2Gi cpu: 2000m tenants: mode: openshift 1 authentication: 2 - tenantName: dev 3 tenantId: \"1610b0c3-c509-4592-a256-a1871353dbfa\" 4 - tenantName: prod tenantId: \"1610b0c3-c509-4592-a256-a1871353dbfb\" template: gateway: enabled: true 5 queryFrontend: jaegerQuery: enabled: true", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: tempostack-traces-reader rules: - apiGroups: - 'tempo.grafana.com' resources: 1 - dev - prod resourceNames: - traces verbs: - 'get' 2 --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: tempostack-traces-reader roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: tempostack-traces-reader subjects: - kind: Group apiGroup: rbac.authorization.k8s.io name: system:authenticated 3", "apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector 1 namespace: otel --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: tempostack-traces-write rules: - apiGroups: - 'tempo.grafana.com' resources: 2 - dev resourceNames: - traces verbs: - 'create' 3 --- apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: 
tempostack-traces roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: tempostack-traces-write subjects: - kind: ServiceAccount name: otel-collector namespace: otel", "apiVersion: opentelemetry.io/v1alpha1 kind: OpenTelemetryCollector metadata: name: cluster-collector namespace: tracing-system spec: mode: deployment serviceAccount: otel-collector config: | extensions: bearertokenauth: filename: \"/var/run/secrets/kubernetes.io/serviceaccount/token\" exporters: otlp/dev: 1 endpoint: tempo-simplest-gateway.tempo.svc.cluster.local:8090 tls: insecure: false ca_file: \"/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt\" auth: authenticator: bearertokenauth headers: X-Scope-OrgID: \"dev\" otlphttp/dev: 2 endpoint: https://tempo-simplest-gateway.chainsaw-multitenancy.svc.cluster.local:8080/api/traces/v1/dev tls: insecure: false ca_file: \"/var/run/secrets/kubernetes.io/serviceaccount/service-ca.crt\" auth: authenticator: bearertokenauth headers: X-Scope-OrgID: \"dev\" service: extensions: [bearertokenauth] pipelines: traces: exporters: [otlp/dev] 3", "apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: <name> spec: observability: metrics: createServiceMonitors: true", "apiVersion: tempo.grafana.com/v1alpha1 kind: TempoStack metadata: name: <name> spec: observability: metrics: createPrometheusRules: true", "oc adm must-gather --image=ghcr.io/grafana/tempo-operator/must-gather -- /usr/bin/must-gather --operator-namespace <operator_namespace> 1", "oc login --username=<your_username>", "oc get deployments -n <project_of_tempostack_instance>", "oc delete tempo <tempostack_instance_name> -n <project_of_tempostack_instance>", "oc get deployments -n <project_of_tempostack_instance>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/distributed_tracing/distributed-tracing-platform-tempo
6.2. Translator Deployment Overview
6.2. Translator Deployment Overview A translator JAR file can be deployed either as a JBoss module or by direct JAR deployment.
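The later sections in this chapter cover each option in detail. As a rough sketch only, the two approaches look something like the following when driven from the management CLI ( jboss-cli.sh ); the module name, JAR path, and dependency list here are placeholders and assumptions rather than values taken from this guide, so confirm the exact dependencies your translator requires before using them.

# Option 1 (illustrative): register the translator JAR as a JBoss module.
# Module name, JAR path, and dependencies are placeholders.
module add --name=org.example.translator.mytranslator --resources=/path/to/mytranslator.jar --dependencies=org.jboss.teiid.common-core,org.jboss.teiid.api,javax.api

# Option 2 (illustrative): deploy the translator JAR directly to the running server.
deploy /path/to/mytranslator.jar

Registering a module makes the translator available server-wide, while deploying the JAR directly is generally the quicker route when iterating during development.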
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_4_server_development/translator_deployment_overview
Chapter 8. alarming
Chapter 8. alarming This chapter describes the commands under the alarming command. 8.1. alarming capabilities list List capabilities of alarming service Usage: Table 8.1. Command arguments Value Summary -h, --help Show this help message and exit Table 8.2. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 8.3. JSON formatter options Value Summary --noindent Whether to disable indenting the JSON Table 8.4. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 8.5. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. You can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. Implied if --max-width is greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable. --print-empty Print empty table if there is no data to show.
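As an illustration, the options above can be combined with the command as follows; the option values are example choices, not requirements:

# Print the capability list as JSON instead of the default table
openstack alarming capabilities list -f json

# Keep the default table output but fit it to the terminal width
openstack alarming capabilities list --fit-width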
[ "openstack alarming capabilities list [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty]" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/command_line_interface_reference/alarming
Chapter 1. Web Console Overview
Chapter 1. Web Console Overview The Red Hat OpenShift Container Platform web console provides a graphical user interface to visualize your project data and perform administrative, management, and troubleshooting tasks. The web console runs as pods on the control plane nodes in the openshift-console project. It is managed by a console-operator pod. Both Administrator and Developer perspectives are supported. Both Administrator and Developer perspectives enable you to create quick start tutorials for OpenShift Container Platform. A quick start is a guided tutorial with user tasks and is useful for getting oriented with an application, Operator, or other product offering. 1.1. About the Administrator perspective in the web console The Administrator perspective enables you to view the cluster inventory, capacity, general and specific utilization information, and the stream of important events, all of which help you to simplify planning and troubleshooting tasks. Both project administrators and cluster administrators can view the Administrator perspective. Cluster administrators can also open an embedded command line terminal instance with the web terminal Operator in OpenShift Container Platform 4.7 and later. Note The default web console perspective that is shown depends on the role of the user. The Administrator perspective is displayed by default if the user is recognized as an administrator. The Administrator perspective provides workflows specific to administrator use cases, such as the ability to: Manage workload, storage, networking, and cluster settings. Install and manage Operators using the Operator Hub. Add identity providers that allow users to log in and manage user access through roles and role bindings. View and manage a variety of advanced settings such as cluster updates, partial cluster updates, cluster Operators, custom resource definitions (CRDs), role bindings, and resource quotas. Access and manage monitoring features such as metrics, alerts, and monitoring dashboards. View and manage logging, metrics, and high-status information about the cluster. Visually interact with applications, components, and services associated with the Administrator perspective in OpenShift Container Platform. 1.2. About the Developer perspective in the web console The Developer perspective offers several built-in ways to deploy applications, services, and databases. In the Developer perspective, you can: View real-time visualization of rolling and recreating rollouts on the component. View the application status, resource utilization, project event streaming, and quota consumption. Share your project with others. Troubleshoot problems with your applications by running Prometheus Query Language (PromQL) queries on your project and examining the metrics visualized on a plot. The metrics provide information about the state of a cluster and any user-defined workloads that you are monitoring. Cluster administrators can also open an embedded command line terminal instance in the web console in OpenShift Container Platform 4.7 and later. Note The default web console perspective that is shown depends on the role of the user. The Developer perspective is displayed by default if the user is recognised as a developer. The Developer perspective provides workflows specific to developer use cases, such as the ability to: Create and deploy applications on OpenShift Container Platform by importing existing codebases, images, and container files. 
Visually interact with applications, components, and services associated with them within a project and monitor their deployment and build status. Group components within an application and connect the components within and across applications. Integrate serverless capabilities (Technology Preview). Create workspaces to edit your application code using Eclipse Che. You can use the Topology view to display applications, components, and workloads of your project. If you have no workloads in the project, the Topology view displays links to create or import them. You can also use the Quick Search to import components directly. Additional resources See Viewing application composition using the Topology view for more information on using the Topology view in the Developer perspective. 1.3. Accessing the Perspectives You can access the Administrator and Developer perspectives from the web console as follows: Prerequisites To access a perspective, ensure that you have logged in to the web console. Your default perspective is automatically determined by your user permissions. The Administrator perspective is selected for users with access to all projects, while the Developer perspective is selected for users with limited access to their own projects. Additional resources See Adding User Preferences for more information on changing perspectives. Procedure Use the perspective switcher to switch to the Administrator or Developer perspective. Select an existing project from the Project drop-down list. You can also create a new project from this drop-down list. Note You can use the perspective switcher only as cluster-admin . Additional resources Learn more about Cluster Administrator Overview of the Administrator perspective Creating and deploying applications on OpenShift Container Platform using the Developer perspective Viewing the applications in your project, verifying their deployment status, and interacting with them in the Topology view Viewing cluster information Configuring the web console Customizing the web console About the web console Using the web terminal Creating quick start tutorials Disabling the web console
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/web_console/web-console-overview
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your input on our documentation. Tell us how we can make it better. Providing documentation feedback in Jira Use the Create Issue form to provide feedback on the documentation for Red Hat OpenStack Services on OpenShift (RHOSO) or earlier releases of Red Hat OpenStack Platform (RHOSP). When you create an issue for RHOSO or RHOSP documents, the issue is recorded in the RHOSO Jira project, where you can track the progress of your feedback. To complete the Create Issue form, ensure that you are logged in to Jira. If you do not have a Red Hat Jira account, you can create an account at https://issues.redhat.com . Click the following link to open a Create Issue page: Create Issue Complete the Summary and Description fields. In the Description field, include the documentation URL, chapter or section number, and a detailed description of the issue. Do not modify any other fields in the form. Click Create .
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/integrating_the_overcloud_with_an_existing_red_hat_ceph_storage_cluster/proc_providing-feedback-on-red-hat-documentation
2.5.5. OProfile
2.5.5. OProfile The OProfile system-wide profiler is a low-overhead monitoring tool. OProfile makes use of the processor's performance monitoring hardware [8] to determine the nature of performance-related problems. Performance monitoring hardware is part of the processor itself. It takes the form of a special counter, incremented each time a certain event (such as the processor not being idle or the requested data not being in cache) occurs. Some processors have more than one such counter and allow the selection of different event types for each counter. The counters can be loaded with an initial value and produce an interrupt whenever the counter overflows. By loading a counter with different initial values, it is possible to vary the rate at which interrupts are produced. In this way it is possible to control the sample rate and, therefore, the level of detail obtained from the data being collected. At one extreme, setting the counter so that it generates an overflow interrupt with every event provides extremely detailed performance data (but with massive overhead). At the other extreme, setting the counter so that it generates as few interrupts as possible provides only the most general overview of system performance (with practically no overhead). The secret to effective monitoring is the selection of a sample rate sufficiently high to capture the required data, but not so high as to overload the system with performance monitoring overhead. Warning You can configure OProfile so that it produces sufficient overhead to render the system unusable. Therefore, you must exercise care when selecting counter values. For this reason, the opcontrol command supports the --list-events option, which displays the event types available for the currently-installed processor, along with suggested minimum counter values for each. It is important to keep the tradeoff between sample rate and overhead in mind when using OProfile. 2.5.5.1. OProfile Components OProfile consists of the following components: Data collection software Data analysis software Administrative interface software The data collection software consists of the oprofile.o kernel module and the oprofiled daemon. The data analysis software includes the following programs: op_time Displays the number and relative percentages of samples taken for each executable file oprofpp Displays the number and relative percentage of samples taken by function, by individual instruction, or in gprof -style output op_to_source Displays annotated source code and/or assembly listings op_visualise Graphically displays collected data These programs make it possible to display the collected data in a variety of ways. The administrative interface software controls all aspects of data collection, from specifying which events are to be monitored to starting and stopping the collection itself. This is done using the opcontrol command. [8] OProfile can also use a fallback mechanism (known as TIMER_INT) for those system architectures that lack performance monitoring hardware.
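To make the administrative workflow more concrete, the following shell sketch strings the pieces together: listing the available events, preparing and starting collection, and reviewing the results with op_time . The kernel image path assumes the kernel debuginfo package is installed, and the exact option syntax (especially for selecting events and counter values) differs between OProfile releases, so treat this as an illustration rather than a verified recipe.

# Show the event types and suggested minimum counter values for this processor
opcontrol --list-events

# Point OProfile at the running kernel; the path and any event/counter options
# depend on the installed kernel and OProfile release
opcontrol --setup --vmlinux=/usr/lib/debug/lib/modules/`uname -r`/vmlinux

# Start collecting samples, run the workload of interest, then shut down the daemon
opcontrol --start
#   ... run the application or workload to be profiled ...
opcontrol --shutdown

# Summarize the collected samples per executable file
op_time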
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/introduction_to_system_administration/s2-resource-tools-oprofile
Chapter 12. Installation configuration parameters for OpenStack
Chapter 12. Installation configuration parameters for OpenStack Before you deploy an OpenShift Container Platform cluster on Red Hat OpenStack Platform (RHOSP), you provide parameters to customize your cluster and the platform that hosts it. When you create the install-config.yaml file, you provide values for the required parameters through the command line. You can then modify the install-config.yaml file to customize your cluster further. 12.1. Available installation configuration parameters for OpenStack The following tables specify the required, optional, and OpenStack-specific installation configuration parameters that you can set as part of the installation process. Note After installation, you cannot modify these parameters in the install-config.yaml file. 12.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 12.1. Required parameters Parameter Description Values The API version for the install-config.yaml content. The current version is v1 . The installation program may also support older API versions. String The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . The string must be 14 characters or fewer long. The configuration for the specific platform upon which to perform the installation: alibabacloud , aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object Get a pull secret from Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 12.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Note Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster. Table 12.2. Network parameters Parameter Description Values The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. The Red Hat OpenShift Networking network plugin to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI plugin for all-Linux networks. OVNKubernetes is a CNI plugin for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OVNKubernetes . 
The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt and IBM Power(R) Virtual Server. For libvirt, the default value is 192.168.126.0/24 . For IBM Power(R) Virtual Server, the default value is 192.168.0.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 12.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 12.3. Optional parameters Parameter Description Values A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. For more information, see the "Cluster capabilities" page in Installing . String array Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 , v4.12 and vCurrent . The default value is vCurrent . String Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . You may specify multiple capabilities in this parameter. String array Enables workload partitioning, which isolates OpenShift Container Platform services, cluster management workloads, and infrastructure pods to run on a reserved set of CPUs. Workload partitioning can only be enabled during installation and cannot be disabled after installation. While this field enables workload partitioning, it does not configure workloads to use specific CPUs. For more information, see the Workload partitioning page in the Scalability and Performance section. None or AllNodes . None is the default value. The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 and arm64 . Not all installation options support the 64-bit ARM architecture. 
To verify if your installation option is supported on your platform, see Supported installation methods for different platforms in Selecting a cluster installation method and preparing it for users . String Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled Required if you use compute . The name of the machine pool. worker Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . Enables the cluster for a feature set. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates". String. The name of the feature set to enable, such as TechPreviewNoUpgrade . The configuration for the machines that comprise the control plane. Array of MachinePool objects. Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 and arm64 . Not all installation options support the 64-bit ARM architecture. To verify if your installation option is supported on your platform, see Supported installation methods for different platforms in Selecting a cluster installation method and preparing it for users . String Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled Required if you use controlPlane . The name of the machine pool. master Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} The number of control plane machines to provision. Supported values are 3 , or 1 when deploying single-node OpenShift. The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Mint , Passthrough , Manual or an empty string ( "" ). [1] Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. 
Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. Note If you are using Azure File storage, you cannot enable FIPS mode. false or true Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String Specify one or more repositories that may also contain the same images. Array of strings Required to set the NLB load balancer type in AWS. Valid values are Classic or NLB . If no value is specified, the installation program defaults to Classic . The installation program sets the value provided here in the ingress cluster configuration object. If you do not specify a load balancer type for other Ingress Controllers, they use the type set in this parameter. Classic or NLB . The default value is Classic . How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . To deploy a private cluster, which cannot be accessed from the internet, set publish to Internal . The default value is External . The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the "Managing cloud provider credentials" entry in the Authentication and authorization content. Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough , or Manual . If you are installing on GCP into a shared virtual private cloud (VPC), credentialsMode must be set to Passthrough or Manual . Important Setting this parameter to Manual enables alternatives to storing administrator-level secrets in the kube-system project, which require additional configuration steps. For more information, see "Alternatives to storing administrator-level secrets in the kube-system project". 12.1.4. Optional AWS configuration parameters Optional AWS configuration parameters are described in the following table: Table 12.4. Optional AWS parameters Parameter Description Values The AWS AMI used to boot compute machines for the cluster. This is required for regions that require a custom RHCOS AMI. Any published or custom RHCOS AMI that belongs to the set AWS region. See RHCOS AMIs for AWS infrastructure for available AMI IDs. A pre-existing AWS IAM role applied to the compute machine pool instance profiles. You can use these fields to match naming schemes and include predefined permissions boundaries for your IAM roles. If undefined, the installation program creates a new IAM role. The name of a valid AWS IAM role. 
The Input/Output Operations Per Second (IOPS) that is reserved for the root volume. Integer, for example 4000 . The size in GiB of the root volume. Integer, for example 500 . The type of the root volume. Valid AWS EBS volume type , such as io1 . The Amazon Resource Name (key ARN) of a KMS key. This is required to encrypt operating system volumes of worker nodes with a specific KMS key. Valid key ID or the key ARN . The EC2 instance type for the compute machines. Valid AWS instance type, such as m4.2xlarge . See the Supported AWS machine types table that follows. The availability zones where the installation program creates machines for the compute machine pool. If you provide your own VPC, you must provide a subnet in that availability zone. A list of valid AWS availability zones, such as us-east-1c , in a YAML sequence . The AWS region that the installation program creates compute resources in. Any valid AWS region , such as us-east-1 . You can use the AWS CLI to access the regions available based on your selected instance type. For example: aws ec2 describe-instance-type-offerings --filters Name=instance-type,Values=c7g.xlarge Important When running on ARM based AWS instances, ensure that you enter a region where AWS Graviton processors are available. See Global availability map in the AWS documentation. Currently, AWS Graviton3 processors are only available in some regions. The AWS AMI used to boot control plane machines for the cluster. This is required for regions that require a custom RHCOS AMI. Any published or custom RHCOS AMI that belongs to the set AWS region. See RHCOS AMIs for AWS infrastructure for available AMI IDs. A pre-existing AWS IAM role applied to the control plane machine pool instance profiles. You can use these fields to match naming schemes and include predefined permissions boundaries for your IAM roles. If undefined, the installation program creates a new IAM role. The name of a valid AWS IAM role. The Input/Output Operations Per Second (IOPS) that is reserved for the root volume on control plane machines. Integer, for example 4000 . The size in GiB of the root volume for control plane machines. Integer, for example 500 . The type of the root volume for control plane machines. Valid AWS EBS volume type , such as io1 . The Amazon Resource Name (key ARN) of a KMS key. This is required to encrypt operating system volumes of control plane nodes with a specific KMS key. Valid key ID and the key ARN . The EC2 instance type for the control plane machines. Valid AWS instance type, such as m6i.xlarge . See the Supported AWS machine types table that follows. The availability zones where the installation program creates machines for the control plane machine pool. A list of valid AWS availability zones, such as us-east-1c , in a YAML sequence . The AWS region that the installation program creates control plane resources in. Valid AWS region , such as us-east-1 . The AWS AMI used to boot all machines for the cluster. If set, the AMI must belong to the same region as the cluster. This is required for regions that require a custom RHCOS AMI. Any published or custom RHCOS AMI that belongs to the set AWS region. See RHCOS AMIs for AWS infrastructure for available AMI IDs. An existing Route 53 private hosted zone for the cluster. You can only use a pre-existing hosted zone when also supplying your own VPC. The hosted zone must already be associated with the user-provided VPC before installation. 
Also, the domain of the hosted zone must be the cluster domain or a parent of the cluster domain. If undefined, the installation program creates a new hosted zone. String, for example Z3URY6TWQ91KVV . An Amazon Resource Name (ARN) for an existing IAM role in the account containing the specified hosted zone. The installation program and cluster operators will assume this role when performing operations on the hosted zone. This parameter should only be used if you are installing a cluster into a shared VPC. String, for example arn:aws:iam::1234567890:role/shared-vpc-role . The AWS service endpoint name and URL. Custom endpoints are only required for cases where alternative AWS endpoints, like FIPS, must be used. Custom API endpoints can be specified for EC2, S3, IAM, Elastic Load Balancing, Tagging, Route 53, and STS AWS services. Valid AWS service endpoint name and valid AWS service endpoint URL. A map of keys and values that the installation program adds as tags to all resources that it creates. Any valid YAML map, such as key value pairs in the <key>: <value> format. For more information about AWS tags, see Tagging Your Amazon EC2 Resources in the AWS documentation. Note You can add up to 25 user defined tags during installation. The remaining 25 tags are reserved for OpenShift Container Platform. A flag that directs in-cluster Operators to include the specified user tags in the tags of the AWS resources that the Operators create. Boolean values, for example true or false . If you provide the VPC instead of allowing the installation program to create the VPC for you, specify the subnet for the cluster to use. The subnet must be part of the same machineNetwork[].cidr ranges that you specify. For a standard cluster, specify a public and a private subnet for each availability zone. For a private cluster, specify a private subnet for each availability zone. For clusters that use AWS Local Zones, you must add AWS Local Zone subnets to this list to ensure edge machine pool creation. Valid subnet IDs. Prevents the S3 bucket from being deleted after completion of bootstrapping. true or false . The default value is false , which results in the S3 bucket being deleted. 12.1.5. Additional Red Hat OpenStack Platform (RHOSP) configuration parameters Additional RHOSP configuration parameters are described in the following table: Table 12.5. Additional RHOSP parameters Parameter Description Values For compute machines, the size in gigabytes of the root volume. If you do not set this value, machines use ephemeral storage. Integer, for example 30 . For compute machines, the root volume types. A list of strings, for example, { performance-host1 , performance-host2 , performance-host3 }. [1] For compute machines, the root volume's type. This property is deprecated and is replaced by compute.platform.openstack.rootVolume.types . String, for example, performance . [2] For compute machines, the Cinder availability zone to install root volumes on. If you do not set a value for this parameter, the installation program selects the default availability zone. This parameter is mandatory when compute.platform.openstack.zones is defined. A list of strings, for example ["zone-1", "zone-2"] . For control plane machines, the size in gigabytes of the root volume. If you do not set this value, machines use ephemeral storage. Integer, for example 30 . For control plane machines, the root volume types. A list of strings, for example, { performance-host1 , performance-host2 , performance-host3 }. 
[1] For control plane machines, the root volume's type. This property is deprecated and is replaced by compute.platform.openstack.rootVolume.types . String, for example, performance . [2] For control plane machines, the Cinder availability zone to install root volumes on. If you do not set this value, the installation program selects the default availability zone. This parameter is mandatory when controlPlane.platform.openstack.zones is defined. A list of strings, for example ["zone-1", "zone-2"] . The name of the RHOSP cloud to use from the list of clouds in the clouds.yaml file. In the cloud configuration in the clouds.yaml file, if possible, use application credentials rather than a user name and password combination. Using application credentials avoids disruptions from secret propogation that follow user name and password rotation. String, for example MyCloud . The RHOSP external network name to be used for installation. String, for example external . The RHOSP flavor to use for control plane and compute machines. This property is deprecated. To use a flavor as the default for all machine pools, add it as the value of the type key in the platform.openstack.defaultMachinePlatform property. You can also set a flavor value for each machine pool individually. String, for example m1.xlarge . If the machine pool defines zones , the count of types can either be a single item or match the number of items in zones . For example, the count of types cannot be 2 if there are 3 items in zones . If you have any existing reference to this property, the installer populates the corresponding value in the controlPlane.platform.openstack.rootVolume.types field. 12.1.6. Optional RHOSP configuration parameters Optional RHOSP configuration parameters are described in the following table: Table 12.6. Optional RHOSP parameters Parameter Description Values Additional networks that are associated with compute machines. Allowed address pairs are not created for additional networks. A list of one or more UUIDs as strings. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf . Additional security groups that are associated with compute machines. A list of one or more UUIDs as strings. For example, 7ee219f3-d2e9-48a1-96c2-e7429f1b0da7 . RHOSP Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installation program relies on the default settings for Nova that the RHOSP administrator configured. On clusters that use Kuryr, RHOSP Octavia does not support availability zones. Load balancers and, if you are using the Amphora provider driver, OpenShift Container Platform services that rely on Amphora VMs, are not created according to the value of this property. A list of strings. For example, ["zone-1", "zone-2"] . Server group policy to apply to the group that will contain the compute machines in the pool. You cannot change server group policies or affiliations after creation. Supported options include anti-affinity , soft-affinity , and soft-anti-affinity . The default value is soft-anti-affinity . An affinity policy prevents migrations and therefore affects RHOSP upgrades. The affinity policy is not supported. If you use a strict anti-affinity policy, an additional RHOSP host is required during instance migration. A server group policy to apply to the machine pool. For example, soft-affinity . Additional networks that are associated with control plane machines. Allowed address pairs are not created for additional networks. 
Additional networks that are attached to a control plane machine are also attached to the bootstrap node. A list of one or more UUIDs as strings. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf . Additional security groups that are associated with control plane machines. A list of one or more UUIDs as strings. For example, 7ee219f3-d2e9-48a1-96c2-e7429f1b0da7 . RHOSP Compute (Nova) availability zones (AZs) to install machines on. If this parameter is not set, the installation program relies on the default settings for Nova that the RHOSP administrator configured. On clusters that use Kuryr, RHOSP Octavia does not support availability zones. Load balancers and, if you are using the Amphora provider driver, OpenShift Container Platform services that rely on Amphora VMs, are not created according to the value of this property. A list of strings. For example, ["zone-1", "zone-2"] . Server group policy to apply to the group that will contain the control plane machines in the pool. You cannot change server group policies or affiliations after creation. Supported options include anti-affinity , soft-affinity , and soft-anti-affinity . The default value is soft-anti-affinity . An affinity policy prevents migrations, and therefore affects RHOSP upgrades. The affinity policy is not supported. If you use a strict anti-affinity policy, an additional RHOSP host is required during instance migration. A server group policy to apply to the machine pool. For example, soft-affinity . The location from which the installation program downloads the RHCOS image. You must set this parameter to perform an installation in a restricted network. An HTTP or HTTPS URL, optionally with an SHA-256 checksum. For example, http://mirror.example.com/images/rhcos-43.81.201912131630.0-openstack.x86_64.qcow2.gz?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d . The value can also be the name of an existing Glance image, for example my-rhcos . Properties to add to the installer-uploaded ClusterOSImage in Glance. This property is ignored if platform.openstack.clusterOSImage is set to an existing Glance image. You can use this property to exceed the default persistent volume (PV) limit for RHOSP of 26 PVs per node. To exceed the limit, set the hw_scsi_model property value to virtio-scsi and the hw_disk_bus value to scsi . You can also use this property to enable the QEMU guest agent by including the hw_qemu_guest_agent property with a value of yes . A list of key-value string pairs. For example, ["hw_scsi_model": "virtio-scsi", "hw_disk_bus": "scsi"] . The default machine pool platform configuration. { "type": "ml.large", "rootVolume": { "size": 30, "type": "performance" } } An existing floating IP address to associate with the Ingress port. To use this property, you must also define the platform.openstack.externalNetwork property. An IP address, for example 128.0.0.1 . An existing floating IP address to associate with the API load balancer. To use this property, you must also define the platform.openstack.externalNetwork property. An IP address, for example 128.0.0.1 . IP addresses for external DNS servers that cluster instances use for DNS resolution. A list of IP addresses as strings. For example, ["8.8.8.8", "192.168.1.12"] . Whether or not to use the default, internal load balancer. If the value is set to UserManaged , this default load balancer is disabled so that you can deploy a cluster that uses an external, user-managed load balancer. 
If the parameter is not set, or if the value is OpenShiftManagedDefault , the cluster uses the default load balancer. UserManaged or OpenShiftManagedDefault . The UUID of a RHOSP subnet that the cluster's nodes use. Nodes and virtual IP (VIP) ports are created on this subnet. The first item in networking.machineNetwork must match the value of machinesSubnet . If you deploy to a custom subnet, you cannot specify an external DNS server to the OpenShift Container Platform installer. Instead, add DNS to the subnet in RHOSP . A UUID as a string. For example, fa806b2f-ac49-4bce-b9db-124bc64209bf . 12.1.7. Additional Google Cloud Platform (GCP) configuration parameters Additional GCP configuration parameters are described in the following table: Table 12.7. Additional GCP parameters Parameter Description Values Optional. By default, the installation program downloads and installs the Red Hat Enterprise Linux CoreOS (RHCOS) image that is used to boot control plane machines. You can override the default behavior by specifying the location of a custom RHCOS image that the installation program is to use for control plane machines only. String. The name of GCP project where the image is located. The name of the custom RHCOS image that the installation program is to use to boot control plane machines. If you use controlPlane.platform.gcp.osImage.project , this field is required. String. The name of the RHCOS image. Optional. By default, the installation program downloads and installs the RHCOS image that is used to boot compute machines. You can override the default behavior by specifying the location of a custom RHCOS image that the installation program is to use for compute machines only. String. The name of GCP project where the image is located. The name of the custom RHCOS image that the installation program is to use to boot compute machines. If you use compute.platform.gcp.osImage.project , this field is required. String. The name of the RHCOS image. The name of the existing Virtual Private Cloud (VPC) where you want to deploy your cluster. If you want to deploy your cluster into a shared VPC, you must set platform.gcp.networkProjectID with the name of the GCP project that contains the shared VPC. String. Optional. The name of the GCP project that contains the shared VPC where you want to deploy your cluster. String. The name of the GCP project where the installation program installs the cluster. String. The name of the GCP region that hosts your cluster. Any valid region name, such as us-central1 . The name of the existing subnet where you want to deploy your control plane machines. The subnet name. The name of the existing subnet where you want to deploy your compute machines. The subnet name. The availability zones where the installation program creates machines. A list of valid GCP availability zones , such as us-central1-a , in a YAML sequence . Important When running your cluster on GCP 64-bit ARM infrastructures, ensure that you use a zone where Ampere Altra Arm CPU's are available. You can find which zones are compatible with 64-bit ARM processors in the "GCP availability zones" link. The size of the disk in gigabytes (GB). Any size between 16 GB and 65536 GB. The GCP disk type . The default disk type for all machines. Control plane nodes must use the pd-ssd disk type. Compute nodes can use the pd-ssd , pd-balanced , or pd-standard disk types. Optional. By default, the installation program downloads and installs the RHCOS image that is used to boot control plane and compute machines. 
You can override the default behavior by specifying the location of a custom RHCOS image that the installation program is to use for both types of machines. String. The name of GCP project where the image is located. The name of the custom RHCOS image that the installation program is to use to boot control plane and compute machines. If you use platform.gcp.defaultMachinePlatform.osImage.project , this field is required. String. The name of the RHCOS image. Optional. Additional network tags to add to the control plane and compute machines. One or more strings, for example network-tag1 . The GCP machine type for control plane and compute machines. The GCP machine type, for example n1-standard-4 . The name of the customer managed encryption key to be used for machine disk encryption. The encryption key name. The name of the Key Management Service (KMS) key ring to which the KMS key belongs. The KMS key ring name. The GCP location in which the KMS key ring exists. The GCP location. The ID of the project in which the KMS key ring exists. This value defaults to the value of the platform.gcp.projectID parameter if it is not set. The GCP project ID. The GCP service account used for the encryption request for control plane and compute machines. If absent, the Compute Engine default service account is used. For more information about GCP service accounts, see Google's documentation on service accounts . The GCP service account email, for example <service_account_name>@<project_id>.iam.gserviceaccount.com . Whether to enable Shielded VM secure boot for all machines in the cluster. Shielded VMs have additional security protocols such as secure boot, firmware and integrity monitoring, and rootkit protection. For more information on Shielded VMs, see Google's documentation on Shielded VMs . Enabled or Disabled . The default value is Disabled . Whether to use Confidential VMs for all machines in the cluster. Confidential VMs provide encryption for data during processing. For more information on Confidential computing, see Google's documentation on Confidential computing . Enabled or Disabled . The default value is Disabled . Specifies the behavior of all VMs during a host maintenance event, such as a software or hardware update. For Confidential VMs, this parameter must be set to Terminate . Confidential VMs do not support live VM migration. Terminate or Migrate . The default value is Migrate . The name of the customer managed encryption key to be used for control plane machine disk encryption. The encryption key name. For control plane machines, the name of the KMS key ring to which the KMS key belongs. The KMS key ring name. For control plane machines, the GCP location in which the key ring exists. For more information about KMS locations, see Google's documentation on Cloud KMS locations . The GCP location for the key ring. For control plane machines, the ID of the project in which the KMS key ring exists. This value defaults to the VM project ID if not set. The GCP project ID. The GCP service account used for the encryption request for control plane machines. If absent, the Compute Engine default service account is used. For more information about GCP service accounts, see Google's documentation on service accounts . The GCP service account email, for example <service_account_name>@<project_id>.iam.gserviceaccount.com . The size of the disk in gigabytes (GB). This value applies to control plane machines. Any integer between 16 and 65536. The GCP disk type for control plane machines. 
Control plane machines must use the pd-ssd disk type, which is the default. Optional. Additional network tags to add to the control plane machines. If set, this parameter overrides the platform.gcp.defaultMachinePlatform.tags parameter for control plane machines. One or more strings, for example control-plane-tag1 . The GCP machine type for control plane machines. If set, this parameter overrides the platform.gcp.defaultMachinePlatform.type parameter. The GCP machine type, for example n1-standard-4 . The availability zones where the installation program creates control plane machines. A list of valid GCP availability zones , such as us-central1-a , in a YAML sequence . Important When running your cluster on GCP 64-bit ARM infrastructures, ensure that you use a zone where Ampere Altra Arm CPU's are available. You can find which zones are compatible with 64-bit ARM processors in the "GCP availability zones" link. Whether to enable Shielded VM secure boot for control plane machines. Shielded VMs have additional security protocols such as secure boot, firmware and integrity monitoring, and rootkit protection. For more information on Shielded VMs, see Google's documentation on Shielded VMs . Enabled or Disabled . The default value is Disabled . Whether to enable Confidential VMs for control plane machines. Confidential VMs provide encryption for data while it is being processed. For more information on Confidential VMs, see Google's documentation on Confidential Computing . Enabled or Disabled . The default value is Disabled . Specifies the behavior of control plane VMs during a host maintenance event, such as a software or hardware update. For Confidential VMs, this parameter must be set to Terminate . Confidential VMs do not support live VM migration. Terminate or Migrate . The default value is Migrate . The name of the customer managed encryption key to be used for compute machine disk encryption. The encryption key name. For compute machines, the name of the KMS key ring to which the KMS key belongs. The KMS key ring name. For compute machines, the GCP location in which the key ring exists. For more information about KMS locations, see Google's documentation on Cloud KMS locations . The GCP location for the key ring. For compute machines, the ID of the project in which the KMS key ring exists. This value defaults to the VM project ID if not set. The GCP project ID. The GCP service account used for the encryption request for compute machines. If this value is not set, the Compute Engine default service account is used. For more information about GCP service accounts, see Google's documentation on service accounts . The GCP service account email, for example <service_account_name>@<project_id>.iam.gserviceaccount.com . The size of the disk in gigabytes (GB). This value applies to compute machines. Any integer between 16 and 65536. The GCP disk type for compute machines. pd-ssd , pd-standard , or pd-balanced . The default is pd-ssd . Optional. Additional network tags to add to the compute machines. If set, this parameter overrides the platform.gcp.defaultMachinePlatform.tags parameter for compute machines. One or more strings, for example compute-network-tag1 . The GCP machine type for compute machines. If set, this parameter overrides the platform.gcp.defaultMachinePlatform.type parameter. The GCP machine type, for example n1-standard-4 . The availability zones where the installation program creates compute machines. 
A list of valid GCP availability zones , such as us-central1-a , in a YAML sequence . Important When running your cluster on GCP 64-bit ARM infrastructures, ensure that you use a zone where Ampere Altra Arm CPU's are available. You can find which zones are compatible with 64-bit ARM processors in the "GCP availability zones" link. Whether to enable Shielded VM secure boot for compute machines. Shielded VMs have additional security protocols such as secure boot, firmware and integrity monitoring, and rootkit protection. For more information on Shielded VMs, see Google's documentation on Shielded VMs . Enabled or Disabled . The default value is Disabled . Whether to enable Confidential VMs for compute machines. Confidential VMs provide encryption for data while it is being processed. For more information on Confidential VMs, see Google's documentation on Confidential Computing . Enabled or Disabled . The default value is Disabled . Specifies the behavior of compute VMs during a host maintenance event, such as a software or hardware update. For Confidential VMs, this parameter must be set to Terminate . Confidential VMs do not support live VM migration. Terminate or Migrate . The default value is Migrate .
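Before you commit to specific zones in the install-config.yaml file, it can help to confirm that the machine type you plan to use is actually offered in those zones. The following is a minimal sketch using the gcloud CLI; the n1-standard-4 machine type and us-central1 zones are illustrative only, so substitute the values you intend to use:
gcloud compute zones list
gcloud compute machine-types list --zones us-central1-a,us-central1-b --filter="name=n1-standard-4"
If a zone does not appear in the output of the second command, choose a different zone or machine type before running the installation program.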
[ "apiVersion:", "baseDomain:", "metadata:", "metadata: name:", "platform:", "pullSecret:", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking:", "networking: networkType:", "networking: clusterNetwork:", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: clusterNetwork: cidr:", "networking: clusterNetwork: hostPrefix:", "networking: serviceNetwork:", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork:", "networking: machineNetwork: - cidr: 10.0.0.0/16", "networking: machineNetwork: cidr:", "additionalTrustBundle:", "capabilities:", "capabilities: baselineCapabilitySet:", "capabilities: additionalEnabledCapabilities:", "cpuPartitioningMode:", "compute:", "compute: architecture:", "compute: hyperthreading:", "compute: name:", "compute: platform:", "compute: replicas:", "featureSet:", "controlPlane:", "controlPlane: architecture:", "controlPlane: hyperthreading:", "controlPlane: name:", "controlPlane: platform:", "controlPlane: replicas:", "credentialsMode:", "fips:", "imageContentSources:", "imageContentSources: source:", "imageContentSources: mirrors:", "platform: aws: lbType:", "publish:", "sshKey:", "compute: platform: aws: amiID:", "compute: platform: aws: iamRole:", "compute: platform: aws: rootVolume: iops:", "compute: platform: aws: rootVolume: size:", "compute: platform: aws: rootVolume: type:", "compute: platform: aws: rootVolume: kmsKeyARN:", "compute: platform: aws: type:", "compute: platform: aws: zones:", "compute: aws: region:", "aws ec2 describe-instance-type-offerings --filters Name=instance-type,Values=c7g.xlarge", "controlPlane: platform: aws: amiID:", "controlPlane: platform: aws: iamRole:", "controlPlane: platform: aws: rootVolume: iops:", "controlPlane: platform: aws: rootVolume: size:", "controlPlane: platform: aws: rootVolume: type:", "controlPlane: platform: aws: rootVolume: kmsKeyARN:", "controlPlane: platform: aws: type:", "controlPlane: platform: aws: zones:", "controlPlane: aws: region:", "platform: aws: amiID:", "platform: aws: hostedZone:", "platform: aws: hostedZoneRole:", "platform: aws: serviceEndpoints: - name: url:", "platform: aws: userTags:", "platform: aws: propagateUserTags:", "platform: aws: subnets:", "platform: aws: preserveBootstrapIgnition:", "compute: platform: openstack: rootVolume: size:", "compute: platform: openstack: rootVolume: types:", "compute: platform: openstack: rootVolume: type:", "compute: platform: openstack: rootVolume: zones:", "controlPlane: platform: openstack: rootVolume: size:", "controlPlane: platform: openstack: rootVolume: types:", "controlPlane: platform: openstack: rootVolume: type:", "controlPlane: platform: openstack: rootVolume: zones:", "platform: openstack: cloud:", "platform: openstack: externalNetwork:", "platform: openstack: computeFlavor:", "compute: platform: openstack: additionalNetworkIDs:", "compute: platform: openstack: additionalSecurityGroupIDs:", "compute: platform: openstack: zones:", "compute: platform: openstack: serverGroupPolicy:", "controlPlane: platform: openstack: additionalNetworkIDs:", "controlPlane: platform: openstack: additionalSecurityGroupIDs:", "controlPlane: platform: openstack: zones:", "controlPlane: platform: openstack: serverGroupPolicy:", "platform: openstack: clusterOSImage:", "platform: openstack: clusterOSImageProperties:", "platform: openstack: defaultMachinePlatform:", "{ \"type\": \"ml.large\", 
\"rootVolume\": { \"size\": 30, \"type\": \"performance\" } }", "platform: openstack: ingressFloatingIP:", "platform: openstack: apiFloatingIP:", "platform: openstack: externalDNS:", "platform: openstack: loadbalancer:", "platform: openstack: machinesSubnet:", "controlPlane: platform: gcp: osImage: project:", "controlPlane: platform: gcp: osImage: name:", "compute: platform: gcp: osImage: project:", "compute: platform: gcp: osImage: name:", "platform: gcp: network:", "platform: gcp: networkProjectID:", "platform: gcp: projectID:", "platform: gcp: region:", "platform: gcp: controlPlaneSubnet:", "platform: gcp: computeSubnet:", "platform: gcp: defaultMachinePlatform: zones:", "platform: gcp: defaultMachinePlatform: osDisk: diskSizeGB:", "platform: gcp: defaultMachinePlatform: osDisk: diskType:", "platform: gcp: defaultMachinePlatform: osImage: project:", "platform: gcp: defaultMachinePlatform: osImage: name:", "platform: gcp: defaultMachinePlatform: tags:", "platform: gcp: defaultMachinePlatform: type:", "platform: gcp: defaultMachinePlatform: osDisk: encryptionKey: kmsKey: name:", "platform: gcp: defaultMachinePlatform: osDisk: encryptionKey: kmsKey: keyRing:", "platform: gcp: defaultMachinePlatform: osDisk: encryptionKey: kmsKey: location:", "platform: gcp: defaultMachinePlatform: osDisk: encryptionKey: kmsKey: projectID:", "platform: gcp: defaultMachinePlatform: osDisk: encryptionKey: kmsKeyServiceAccount:", "platform: gcp: defaultMachinePlatform: secureBoot:", "platform: gcp: defaultMachinePlatform: confidentialCompute:", "platform: gcp: defaultMachinePlatform: onHostMaintenance:", "controlPlane: platform: gcp: osDisk: encryptionKey: kmsKey: name:", "controlPlane: platform: gcp: osDisk: encryptionKey: kmsKey: keyRing:", "controlPlane: platform: gcp: osDisk: encryptionKey: kmsKey: location:", "controlPlane: platform: gcp: osDisk: encryptionKey: kmsKey: projectID:", "controlPlane: platform: gcp: osDisk: encryptionKey: kmsKeyServiceAccount:", "controlPlane: platform: gcp: osDisk: diskSizeGB:", "controlPlane: platform: gcp: osDisk: diskType:", "controlPlane: platform: gcp: tags:", "controlPlane: platform: gcp: type:", "controlPlane: platform: gcp: zones:", "controlPlane: platform: gcp: secureBoot:", "controlPlane: platform: gcp: confidentialCompute:", "controlPlane: platform: gcp: onHostMaintenance:", "compute: platform: gcp: osDisk: encryptionKey: kmsKey: name:", "compute: platform: gcp: osDisk: encryptionKey: kmsKey: keyRing:", "compute: platform: gcp: osDisk: encryptionKey: kmsKey: location:", "compute: platform: gcp: osDisk: encryptionKey: kmsKey: projectID:", "compute: platform: gcp: osDisk: encryptionKey: kmsKeyServiceAccount:", "compute: platform: gcp: osDisk: diskSizeGB:", "compute: platform: gcp: osDisk: diskType:", "compute: platform: gcp: tags:", "compute: platform: gcp: type:", "compute: platform: gcp: zones:", "compute: platform: gcp: secureBoot:", "compute: platform: gcp: confidentialCompute:", "compute: platform: gcp: onHostMaintenance:" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/installing_on_openstack/installation-config-parameters-openstack
Chapter 21. KafkaAuthorizationSimple schema reference
Chapter 21. KafkaAuthorizationSimple schema reference Used in: KafkaClusterSpec Full list of KafkaAuthorizationSimple schema properties Simple authorization in AMQ Streams uses the AclAuthorizer plugin, the default Access Control Lists (ACLs) authorization plugin provided with Apache Kafka. ACLs allow you to define which users have access to which resources at a granular level. Configure the Kafka custom resource to use simple authorization. Set the type property in the authorization section to the value simple , and configure a list of super users. Access rules are configured for the KafkaUser , as described in the ACLRule schema reference . 21.1. superUsers A list of user principals treated as super users, so that they are always allowed without querying ACL rules. An example of simple authorization configuration apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster namespace: myproject spec: kafka: # ... authorization: type: simple superUsers: - CN=client_1 - user_2 - CN=client_3 # ... Note The super.user configuration option in the config property in Kafka.spec.kafka is ignored. Designate super users in the authorization property instead. For more information, see Kafka broker configuration . 21.2. KafkaAuthorizationSimple schema properties The type property is a discriminator that distinguishes use of the KafkaAuthorizationSimple type from KafkaAuthorizationOpa , KafkaAuthorizationKeycloak , KafkaAuthorizationCustom . It must have the value simple for the type KafkaAuthorizationSimple . Property Description type Must be simple . string superUsers List of super users. Should contain list of user principals which should get unlimited access rights. string array
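After the Kafka resource is applied, you can confirm which authorization type the cluster is using by reading the field back with the OpenShift CLI. This is a quick check only; the cluster name my-cluster and namespace myproject are taken from the example above:
oc get kafka my-cluster -n myproject -o jsonpath='{.spec.kafka.authorization.type}'
The command prints simple when simple authorization is configured.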
[ "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster namespace: myproject spec: kafka: # authorization: type: simple superUsers: - CN=client_1 - user_2 - CN=client_3 #" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-KafkaAuthorizationSimple-reference
Chapter 4. Troubleshooting Ceph Monitors
Chapter 4. Troubleshooting Ceph Monitors This chapter contains information on how to fix the most common errors related to the Ceph Monitors. Prerequisites Verify the network connection. 4.1. Most common Ceph Monitor errors The following tables list the most common error messages that are returned by the ceph health detail command, or included in the Ceph logs. The tables provide links to corresponding sections that explain the errors and point to specific procedures to fix the problems. Prerequisites A running Red Hat Ceph Storage cluster. 4.1.1. Ceph Monitor error messages A table of common Ceph Monitor error messages, and a potential fix. Error message See HEALTH_WARN mon.X is down (out of quorum) Ceph Monitor is out of quorum clock skew Clock skew store is getting too big! The Ceph Monitor store is getting too big 4.1.2. Common Ceph Monitor error messages in the Ceph logs A table of common Ceph Monitor error messages found in the Ceph logs, and a link to a potential fix. Error message Log file See clock skew Main cluster log Clock skew clocks not synchronized Main cluster log Clock skew Corruption: error in middle of record Monitor log Ceph Monitor is out of quorum Recovering the Ceph Monitor store Corruption: 1 missing files Monitor log Ceph Monitor is out of quorum Recovering the Ceph Monitor store Caught signal (Bus error) Monitor log Ceph Monitor is out of quorum 4.1.3. Ceph Monitor is out of quorum One or more Ceph Monitors are marked as down but the other Ceph Monitors are still able to form a quorum. In addition, the ceph health detail command returns an error message similar to the following one: What This Means Ceph marks a Ceph Monitor as down due to various reasons. If the ceph-mon daemon is not running, it might have a corrupted store or some other error is preventing the daemon from starting. Also, the /var/ partition might be full. As a consequence, ceph-mon is not able to perform any operations to the store located by default at /var/lib/ceph/mon- SHORT_HOST_NAME /store.db and terminates. If the ceph-mon daemon is running but the Ceph Monitor is out of quorum and marked as down , the cause of the problem depends on the Ceph Monitor state: If the Ceph Monitor is in the probing state longer than expected, it cannot find the other Ceph Monitors. This problem can be caused by networking issues, or the Ceph Monitor can have an outdated Ceph Monitor map ( monmap ) and be trying to reach the other Ceph Monitors on incorrect IP addresses. Alternatively, if the monmap is up-to-date, the Ceph Monitor's clock might not be synchronized. If the Ceph Monitor is in the electing state longer than expected, the Ceph Monitor's clock might not be synchronized. If the Ceph Monitor changes its state from synchronizing to electing and back, the cluster state is advancing. This means that it is generating new maps faster than the synchronization process can handle. If the Ceph Monitor marks itself as the leader or a peon , then it believes it is in a quorum, while the remaining cluster is sure that it is not. This problem can be caused by failed clock synchronization. To Troubleshoot This Problem Verify that the ceph-mon daemon is running. If not, start it: Syntax Example If you are not able to start ceph-mon , follow the steps in The ceph-mon daemon cannot start . If you are able to start the ceph-mon daemon but it is marked as down , follow the steps in The ceph-mon daemon is running, but marked as `down` . 
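Because a full /var/ partition is a common reason for the ceph-mon daemon terminating, checking disk usage on the Monitor node is a quick first step before following the sections below. This is a minimal sketch; the wildcard path assumes the default store location under /var/lib/ceph and might need adjusting for your deployment:
df -h /var
du -sh /var/lib/ceph/*/mon.*/store.db
If the partition is close to full, free up space before restarting the ceph-mon daemon.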
The ceph-mon Daemon Cannot Start Check the corresponding Ceph Monitor log located at /var/log/ceph/ CLUSTER_FSID /ceph-mon. HOST_NAME .log by default. Note By default, the monitor logs are not present in the log folder. You need to enable logging to files for the logs to appear in the folder. See the Ceph daemon logs to enable logging to files. If the log contains error messages similar to the following ones, the Ceph Monitor might have a corrupted store. To fix this problem, replace the Ceph Monitor. See Replacing a failed monitor . If the log contains an error message similar to the following one, the /var/ partition might be full. Delete any unnecessary data from /var/ . Important Do not delete any data from the Monitor directory manually.. Instead, use the ceph-monstore-tool to compact it. See Compacting the Ceph Monitor store for details. If you see any other error messages, open a support ticket. See Contacting Red Hat Support for service for details. The ceph-mon Daemon Is Running, but Still Marked as down From the Ceph Monitor host that is out of the quorum, use the mon_status command to check its state: Replace ID with the ID of the Ceph Monitor, for example: If the status is probing , verify the locations of the other Ceph Monitors in the mon_status output. If the addresses are incorrect, the Ceph Monitor has incorrect Ceph Monitor map ( monmap ). To fix this problem, see Injecting a Ceph Monitor map . If the addresses are correct, verify that the Ceph Monitor clocks are synchronized. See Clock skew for details. If the status is electing , verify that the Ceph Monitor clocks are synchronized. See Clock skew for details. If the status changes from electing to synchronizing , open a support ticket. See Contacting Red Hat Support for service for details. If the Ceph Monitor is the leader or a peon , verify that the Ceph Monitor clocks are synchronized. See Clock skew for details. Open a support ticket if synchronizing the clocks does not solve the problem. See Contacting Red Hat Support for service for details. Additional Resources See Understanding Ceph Monitor status The Starting, Stopping, Restarting the Ceph daemons section in the Red Hat Ceph Storage Administration Guide . The Using the Ceph Administration Socket section in the Red Hat Ceph Storage Administration Guide . 4.1.4. Clock skew A Ceph Monitor is out of quorum, and the ceph health detail command output contains error messages similar to these: In addition, Ceph logs contain error messages similar to these: What This Means The clock skew error message indicates that Ceph Monitors' clocks are not synchronized. Clock synchronization is important because Ceph Monitors depend on time precision and behave unpredictably if their clocks are not synchronized. The mon_clock_drift_allowed parameter determines what disparity between the clocks is tolerated. By default, this parameter is set to 0.05 seconds. Important Do not change the default value of mon_clock_drift_allowed without testing. Changing this value might affect the stability of the Ceph Monitors and the Ceph Storage Cluster in general. Possible causes of the clock skew error include network problems or problems with chrony Network Time Protocol (NTP) synchronization if that is configured. In addition, time synchronization does not work properly on Ceph Monitors deployed on virtual machines. To Troubleshoot This Problem Verify that your network works correctly. If you use a remote NTP server, consider deploying your own chrony NTP server on your network. 
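Before changing anything, you can confirm directly on each Ceph Monitor node whether the clock is synchronized. This quick check assumes the nodes use chronyd for time synchronization:
chronyc tracking
chronyc sources -v
A large System time offset or unreachable time sources in this output typically explains the clock skew warning.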
For details, see the Using the Chrony Suite to Configure NTP chapter within the Configuring basic system settings guide within the Product Documentation for your operating system version, on the Red Hat Customer Portal. Note Ceph evaluates time synchronization every five minutes only so there will be a delay between fixing the problem and clearing the clock skew messages. Additional Resources Understanding Ceph Monitor status Ceph Monitor is out of quorum 4.1.5. The Ceph Monitor store is getting too big The ceph health command returns an error message similar to the following one: What This Means The Ceph Monitor store is in fact a RocksDB database that stores entries as key-value pairs. The database includes a cluster map and is located by default at /var/lib/ceph/ CLUSTER_FSID /mon. HOST_NAME /store.db . Querying a large Monitor store can take time. As a consequence, the Ceph Monitor can be delayed in responding to client queries. In addition, if the /var/ partition is full, the Ceph Monitor cannot perform any write operations to the store and terminates. See Ceph Monitor is out of quorum for details on troubleshooting this issue. To Troubleshoot This Problem Check the size of the database: Syntax Specify the name of the cluster and the short host name of the host where the ceph-mon is running. Example Compact the Ceph Monitor store. For details, see Compacting the Ceph Monitor Store . Additional Resources Ceph Monitor is out of quorum 4.1.6. Understanding Ceph Monitor status The mon_status command returns information about a Ceph Monitor, such as: State Rank Elections epoch Monitor map ( monmap ) If Ceph Monitors are able to form a quorum, use mon_status with the ceph command-line utility. If Ceph Monitors are not able to form a quorum, but the ceph-mon daemon is running, use the administration socket to execute mon_status . An example output of mon_status Ceph Monitor States Leader During the electing phase, Ceph Monitors are electing a leader. The leader is the Ceph Monitor with the highest rank, that is the rank with the lowest value. In the example above, the leader is mon.1 . Peon Peons are the Ceph Monitors in the quorum that are not leaders. If the leader fails, the peon with the highest rank becomes a new leader. Probing A Ceph Monitor is in the probing state if it is looking for other Ceph Monitors. For example, after you start the Ceph Monitors, they are probing until they find enough Ceph Monitors specified in the Ceph Monitor map ( monmap ) to form a quorum. Electing A Ceph Monitor is in the electing state if it is in the process of electing the leader. Usually, this status changes quickly. Synchronizing A Ceph Monitor is in the synchronizing state if it is synchronizing with the other Ceph Monitors to join the quorum. The smaller the Ceph Monitor store is, the faster the synchronization process. Therefore, if you have a large store, synchronization takes a longer time. Additional Resources For details, see the Using the Ceph Administration Socket section in the Administration Guide for Red Hat Ceph Storage 6. See the Section 4.1.1, "Ceph Monitor error messages" in the Red Hat Ceph Storage Troubleshooting Guide . See the Section 4.1.2, "Common Ceph Monitor error messages in the Ceph logs" in the Red Hat Ceph Storage Troubleshooting Guide . 4.2. Injecting a monmap If a Ceph Monitor has an outdated or corrupted Ceph Monitor map ( monmap ), it cannot join a quorum because it is trying to reach the other Ceph Monitors on incorrect IP addresses. 
The safest way to fix this problem is to obtain and inject the actual Ceph Monitor map from other Ceph Monitors. Note This action overwrites the existing Ceph Monitor map kept by the Ceph Monitor. This procedure shows how to inject the Ceph Monitor map when the other Ceph Monitors are able to form a quorum, or when at least one Ceph Monitor has a correct Ceph Monitor map. If all Ceph Monitors have corrupted store and therefore also the Ceph Monitor map, see Recovering the Ceph Monitor store . Prerequisites Access to the Ceph Monitor Map. Root-level access to the Ceph Monitor node. Procedure If the remaining Ceph Monitors are able to form a quorum, get the Ceph Monitor map by using the ceph mon getmap command: Example If the remaining Ceph Monitors are not able to form the quorum and you have at least one Ceph Monitor with a correct Ceph Monitor map, copy it from that Ceph Monitor: Stop the Ceph Monitor which you want to copy the Ceph Monitor map from: Syntax Example Copy the Ceph Monitor map: Syntax Replace ID with the ID of the Ceph Monitor which you want to copy the Ceph Monitor map from: Example Stop the Ceph Monitor with the corrupted or outdated Ceph Monitor map: Syntax Example Inject the Ceph Monitor map: Syntax Replace ID with the ID of the Ceph Monitor with the corrupted or outdated Ceph Monitor map: Example Start the Ceph Monitor: Syntax Example If you copied the Ceph Monitor map from another Ceph Monitor, start that Ceph Monitor, too: Syntax Example Additional Resources See the Ceph Monitor is out of quorum See the Recovering the Ceph Monitor store 4.3. Replacing a failed Monitor When a Ceph Monitor has a corrupted store, you can replace the monitor in the storage cluster. Prerequisites A running Red Hat Ceph Storage cluster. Able to form a quorum. Root-level access to Ceph Monitor node. Procedure From the Monitor host, remove the Monitor store by default located at /var/lib/ceph/mon/ CLUSTER_NAME - SHORT_HOST_NAME : Specify the short host name of the Monitor host and the cluster name. For example, to remove the Monitor store of a Monitor running on host1 from a cluster called remote : Remove the Monitor from the Monitor map ( monmap ): Specify the short host name of the Monitor host and the cluster name. For example, to remove the Monitor running on host1 from a cluster called remote : Troubleshoot and fix any problems related to the underlying file system or hardware of the Monitor host. Additional Resources See the Ceph Monitor is out of quorum for details. 4.4. Compacting the monitor store When the Monitor store has grown big in size, you can compact it: Dynamically by using the ceph tell command. Upon the start of the ceph-mon daemon. By using the ceph-monstore-tool when the ceph-mon daemon is not running. Use this method when the previously mentioned methods fail to compact the Monitor store or when the Monitor is out of quorum and its log contains the Caught signal (Bus error) error message. Important Monitor store size changes when the cluster is not in the active+clean state or during the rebalancing process. For this reason, compact the Monitor store when rebalancing is completed. Also, ensure that the placement groups are in the active+clean state. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the Ceph Monitor node. Procedure To compact the Monitor store when the ceph-mon daemon is running: Syntax Replace HOST_NAME with the short host name of the host where the ceph-mon is running. Use the hostname -s command when unsure. 
Example Add the following parameter to the Ceph configuration under the [mon] section: Restart the ceph-mon daemon: Syntax Example Ensure that Monitors have formed a quorum: Repeat these steps on other Monitors if needed. Note Before you start, ensure that you have the ceph-test package installed. Verify that the ceph-mon daemon with the large store is not running. Stop the daemon if needed. Syntax Example Compact the Monitor store: Syntax Replace HOST_NAME with a short host name of the Monitor host. Example Start ceph-mon again: Syntax Example Additional Resources See The Ceph Monitor store is getting too big See the Ceph Monitor is out of quorum 4.5. Opening port for Ceph manager The ceph-mgr daemons receive placement group information from OSDs on the same range of ports as the ceph-osd daemons. If these ports are not open, a cluster will devolve from HEALTH_OK to HEALTH_WARN and will indicate that PGs are unknown with a percentage count of the PGs unknown. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to Ceph Manager. Procedure To resolve this situation, for each host running ceph-mgr daemons, open ports 6800-7300 . Example Restart the ceph-mgr daemons. 4.6. Recovering the Ceph Monitor store Ceph Monitors store the cluster map in a key-value store such as RocksDB. If the store is corrupted on a Monitor, the Monitor terminates unexpectedly and fails to start again. The Ceph logs might include the following errors: The Red Hat Ceph Storage clusters use at least three Ceph Monitors so that if one fails, it can be replaced with another one. However, under certain circumstances, all Ceph Monitors can have corrupted stores. For example, when the Ceph Monitor nodes have incorrectly configured disk or file system settings, a power outage can corrupt the underlying file system. If there is corruption on all Ceph Monitors, you can recover it with information stored on the OSD nodes by using utilities called ceph-monstore-tool and ceph-objectstore-tool . Important These procedures cannot recover the following information: Metadata Daemon Server (MDS) keyrings and maps Placement Group settings: full ratio set by using the ceph pg set_full_ratio command nearfull ratio set by using the ceph pg set_nearfull_ratio command Important Never restore the Ceph Monitor store from an old backup. Rebuild the Ceph Monitor store from the current cluster state using the following steps and restore from that. 4.6.1. Recovering the Ceph Monitor store when using BlueStore Follow this procedure if the Ceph Monitor store is corrupted on all Ceph Monitors and you use the BlueStore back end. In containerized environments, this method requires attaching Ceph repositories and restoring to a non-containerized Ceph Monitor first. Warning This procedure can cause data loss. If you are unsure about any step in this procedure, contact the Red Hat Technical Support for assistance with the recovering process. Prerequisites All OSDs containers are stopped. Enable Ceph repositories on the Ceph nodes based on their roles. The ceph-test and rsync packages are installed on the OSD and Monitor nodes. The ceph-mon package is installed on the Monitor nodes. The ceph-osd package is installed on the OSD nodes. Procedure Mount all disks with Ceph data to a temporary location. Repeat this step for all OSD nodes. 
List the data partitions using the ceph-volume command: Example Mount the data partitions to a temporary location: Syntax Restore the SELinux context: Syntax Replace OSD_ID with a numeric, space-separated list of Ceph OSD IDs on the OSD node. Change the owner and group to ceph:ceph : Syntax Replace OSD_ID with a numeric, space-separated list of Ceph OSD IDs on the OSD node. Important Due to a bug that causes the update-mon-db command to use additional db and db.slow directories for the Monitor database, you must also copy these directories. To do so: Prepare a temporary location outside the container to mount and access the OSD database and extract the OSD maps needed to restore the Ceph Monitor: Syntax Replace OSD-DATA with the Volume Group (VG) or Logical Volume (LV) path to the OSD data and OSD-ID with the ID of the OSD. Create a symbolic link between the BlueStore database and block.db : Syntax Replace BLUESTORE-DATABASE with the Volume Group (VG) or Logical Volume (LV) path to the BlueStore database and OSD-ID with the ID of the OSD. Use the following commands from the Ceph Monitor node with the corrupted store. Repeat them for all OSDs on all nodes. Collect the cluster map from all OSD nodes: Example Set the appropriate capabilities: Example Move all sst file from the db and db.slow directories to the temporary location: Example Rebuild the Monitor store from the collected map: Example Note After using this command, only keyrings extracted from the OSDs and the keyring specified on the ceph-monstore-tool command line are present in Ceph's authentication database. You have to recreate or import all other keyrings, such as clients, Ceph Manager, Ceph Object Gateway, and others, so those clients can access the cluster. Back up the corrupted store. Repeat this step for all Ceph Monitor nodes: Syntax Replace HOSTNAME with the host name of the Ceph Monitor node. Replace the corrupted store. Repeat this step for all Ceph Monitor nodes: Syntax Replace HOSTNAME with the host name of the Monitor node. Change the owner of the new store. Repeat this step for all Ceph Monitor nodes: Syntax Replace HOSTNAME with the host name of the Ceph Monitor node. Unmount all the temporary mounted OSDs on all nodes: Example Start all the Ceph Monitor daemons: Syntax Example Ensure that the Monitors are able to form a quorum: Syntax Replace HOSTNAME with the host name of the Ceph Monitor node. Import the Ceph Manager keyring and start all Ceph Manager processes: Syntax Example Replace HOSTNAME with the host name of the Ceph Manager node. Start all OSD processes across all OSD nodes. Repeat for all OSDs on the cluster: Syntax Example Ensure that the OSDs are returning to service: Example Additional Resources For details on registering Ceph nodes to the Content Delivery Network (CDN), see Registering the Red Hat Ceph Storage nodes to the CDN and attaching subscriptions section in the Red Hat Ceph Storage Installation Guide . See Troubleshooting networking issues in the Red Hat Ceph Storage Troubleshooting Guide for network-related problems.
[ "HEALTH_WARN 1 mons down, quorum 1,2 mon.b,mon.c mon.a (rank 0) addr 127.0.0.1:6789/0 is down (out of quorum)", "systemctl status ceph- FSID @ DAEMON_NAME systemctl start ceph- FSID @ DAEMON_NAME", "systemctl status [email protected] systemctl start [email protected]", "Corruption: error in middle of record Corruption: 1 missing files; example: /var/lib/ceph/mon/mon.0/store.db/1234567.ldb", "Caught signal (Bus error)", "ceph daemon ID mon_status", "ceph daemon mon.host01 mon_status", "mon.a (rank 0) addr 127.0.0.1:6789/0 is down (out of quorum) mon.a addr 127.0.0.1:6789/0 clock skew 0.08235s > max 0.05s (latency 0.0045s)", "2022-05-04 07:28:32.035795 7f806062e700 0 log [WRN] : mon.a 127.0.0.1:6789/0 clock skew 0.14s > max 0.05s 2022-05-04 04:31:25.773235 7f4997663700 0 log [WRN] : message from mon.1 was stamped 0.186257s in the future, clocks not synchronized", "mon.ceph1 store is getting too big! 48031 MB >= 15360 MB -- 62% avail", "du -sch /var/lib/ceph/ CLUSTER_FSID /mon. HOST_NAME /store.db/", "du -sh /var/lib/ceph/b341e254-b165-11ed-a564-ac1f6bb26e8c/mon.host01/ 109M /var/lib/ceph/b341e254-b165-11ed-a564-ac1f6bb26e8c/mon.host01/ 47G /var/lib/ceph/mon/ceph-ceph1/store.db/ 47G total", "{ \"name\": \"mon.3\", \"rank\": 2, \"state\": \"peon\", \"election_epoch\": 96, \"quorum\": [ 1, 2 ], \"outside_quorum\": [], \"extra_probe_peers\": [], \"sync_provider\": [], \"monmap\": { \"epoch\": 1, \"fsid\": \"d5552d32-9d1d-436c-8db1-ab5fc2c63cd0\", \"modified\": \"0.000000\", \"created\": \"0.000000\", \"mons\": [ { \"rank\": 0, \"name\": \"mon.1\", \"addr\": \"172.25.1.10:6789\\/0\" }, { \"rank\": 1, \"name\": \"mon.2\", \"addr\": \"172.25.1.12:6789\\/0\" }, { \"rank\": 2, \"name\": \"mon.3\", \"addr\": \"172.25.1.13:6789\\/0\" } ] } }", "ceph mon getmap -o /tmp/monmap", "systemctl stop ceph- FSID @ DAEMON_NAME", "systemctl stop [email protected]", "ceph-mon -i ID --extract-monmap /tmp/monmap", "ceph-mon -i mon.a --extract-monmap /tmp/monmap", "systemctl stop ceph- FSID @ DAEMON_NAME", "systemctl stop [email protected]", "ceph-mon -i ID --inject-monmap /tmp/monmap", "ceph-mon -i mon.host01 --inject-monmap /tmp/monmap", "systemctl start ceph- FSID @ DAEMON_NAME", "systemctl start [email protected]", "systemctl start ceph- FSID @ DAEMON_NAME", "systemctl start [email protected]", "rm -rf /var/lib/ceph/mon/ CLUSTER_NAME - SHORT_HOST_NAME", "rm -rf /var/lib/ceph/mon/remote-host1", "ceph mon remove SHORT_HOST_NAME --cluster CLUSTER_NAME", "ceph mon remove host01 --cluster remote", "ceph tell mon. HOST_NAME compact", "ceph tell mon.host01 compact", "[mon] mon_compact_on_start = true", "systemctl restart ceph- FSID @ DAEMON_NAME", "systemctl restart [email protected]", "ceph mon stat", "systemctl status ceph- FSID @ DAEMON_NAME systemctl stop ceph- FSID @ DAEMON_NAME", "systemctl status [email protected] systemctl stop [email protected]", "ceph-monstore-tool /var/lib/ceph/ CLUSTER_FSID /mon. 
HOST_NAME compact", "ceph-monstore-tool /var/lib/ceph/b404c440-9e4c-11ec-a28a-001a4a0001df/mon.host01 compact", "systemctl start ceph- FSID @ DAEMON_NAME", "systemctl start [email protected]", "firewall-cmd --add-port 6800-7300/tcp firewall-cmd --add-port 6800-7300/tcp --permanent", "Corruption: error in middle of record Corruption: 1 missing files; e.g.: /var/lib/ceph/mon/mon.0/store.db/1234567.ldb", "ceph-volume lvm list", "mount -t tmpfs tmpfs /var/lib/ceph/osd/ceph-USDi", "for i in { OSD_ID }; do restorecon /var/lib/ceph/osd/ceph-USDi; done", "for i in { OSD_ID }; do chown -R ceph:ceph /var/lib/ceph/osd/ceph-USDi; done", "ceph-bluestore-tool --cluster=ceph prime-osd-dir --dev OSD-DATA --path /var/lib/ceph/osd/ceph- OSD-ID", "ln -snf BLUESTORE DATABASE /var/lib/ceph/osd/ceph- OSD-ID /block.db", "cd /root/ ms=/tmp/monstore/ db=/root/db/ db_slow=/root/db.slow/ mkdir USDms for host in USDosd_nodes; do echo \"USDhost\" rsync -avz USDms USDhost:USDms rsync -avz USDdb USDhost:USDdb rsync -avz USDdb_slow USDhost:USDdb_slow rm -rf USDms rm -rf USDdb rm -rf USDdb_slow sh -t USDhost <<EOF for osd in /var/lib/ceph/osd/ceph-*; do ceph-objectstore-tool --type bluestore --data-path \\USDosd --op update-mon-db --mon-store-path USDms done EOF rsync -avz USDhost:USDms USDms rsync -avz USDhost:USDdb USDdb rsync -avz USDhost:USDdb_slow USDdb_slow done", "ceph-authtool /etc/ceph/ceph.client.admin.keyring -n mon. --cap mon 'allow *' --gen-key cat /etc/ceph/ceph.client.admin.keyring [mon.] key = AQCleqldWqm5IhAAgZQbEzoShkZV42RiQVffnA== caps mon = \"allow *\" [client.admin] key = AQCmAKld8J05KxAArOWeRAw63gAwwZO5o75ZNQ== auid = 0 caps mds = \"allow *\" caps mgr = \"allow *\" caps mon = \"allow *\" caps osd = \"allow *\"", "mv /root/db/*.sst /root/db.slow/*.sst /tmp/monstore/store.db", "ceph-monstore-tool /tmp/monstore rebuild -- --keyring /etc/ceph/ceph.client.admin", "mv /var/lib/ceph/mon/ceph- HOSTNAME /store.db /var/lib/ceph/mon/ceph- HOSTNAME /store.db.corrupted", "scp -r /tmp/monstore/store.db HOSTNAME :/var/lib/ceph/mon/ceph- HOSTNAME /", "chown -R ceph:ceph /var/lib/ceph/mon/ceph- HOSTNAME /store.db", "umount /var/lib/ceph/osd/ceph-*", "systemctl start ceph- FSID @ DAEMON_NAME", "systemctl start [email protected]", "ceph -s", "ceph auth import -i /etc/ceph/ceph.mgr. HOSTNAME .keyring systemctl start ceph- FSID @ DAEMON_NAME", "systemctl start ceph-b341e254-b165-11ed-a564-ac1f6bb26e8c@mgr.extensa003.exrqql.service", "systemctl start ceph- FSID @osd. OSD_ID", "systemctl start [email protected]", "ceph -s" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/6/html/troubleshooting_guide/troubleshooting-ceph-monitors
Chapter 17. Getting started with ROSA
Chapter 17. Getting started with ROSA 17.1. Tutorial: What is ROSA Red Hat OpenShift Service on AWS (ROSA) is a fully-managed turnkey application platform that allows you to focus on what matters most, delivering value to your customers by building and deploying applications. Red Hat and AWS SRE experts manage the underlying platform so you do not have to worry about infrastructure management. ROSA provides seamless integration with a wide range of AWS compute, database, analytics, machine learning, networking, mobile, and other services to further accelerate the building and delivering of differentiating experiences to your customers. ROSA makes use of AWS Security Token Service (STS) to obtain credentials to manage infrastructure in your AWS account. AWS STS is a global web service that creates temporary credentials for IAM users or federated users. ROSA uses this to assign short-term, limited-privilege, security credentials. These credentials are associated with IAM roles that are specific to each component that makes AWS API calls. This method aligns with the principals of least privilege and secure practices in cloud service resource management. The ROSA command line interface (CLI) tool manages the STS credentials that are assigned for unique tasks and takes action on AWS resources as part of OpenShift functionality. 17.1.1. Key features of ROSA Native AWS service: Access and use Red Hat OpenShift on-demand with a self-service onboarding experience through the AWS management console. Flexible, consumption-based pricing: Scale to your business needs and pay as you go with flexible pricing and an on-demand hourly or annual billing model. Single bill for Red Hat OpenShift and AWS usage: Customers will receive a single bill from AWS for both Red Hat OpenShift and AWS consumption. Fully integrated support experience: Installation, management, maintenance, and upgrades are performed by Red Hat site reliability engineers (SREs) with joint Red Hat and Amazon support and a 99.95% service-level agreement (SLA). AWS service integration: AWS has a robust portfolio of cloud services, such as compute, storage, networking, database, analytics, and machine learning. All of these services are directly accessible through ROSA. This makes it easier to build, operate, and scale globally and on-demand through a familiar management interface. Maximum Availability: Deploy clusters across multiple availability zones in supported regions to maximize availability and maintain high availability for your most demanding mission-critical applications and data. Cluster node scaling: Easily add or remove compute nodes to match resource demand. Optimized clusters: Choose from memory-optimized, compute-optimized, or general purpose EC2 instance types with clusters sized to meet your needs. Global availability: Refer to the product regional availability page to see where ROSA is available globally. 17.1.2. ROSA and Kubernetes In ROSA, everything you need to deploy and manage containers is bundled, including container management, Operators, networking, load balancing, service mesh, CI/CD, firewall, monitoring, registry, authentication, and authorization capabilities. These components are tested together for unified operations as a complete platform. Automated cluster operations, including over-the-air platform upgrades, further enhance your Kubernetes experience. 17.1.3. 
Basic responsibilities In general, cluster deployment and upkeep is Red Hat's or AWS's responsibility, while applications, users, and data are the customer's responsibility. For a more detailed breakdown of responsibilities, see the responsibility matrix . 17.1.4. Roadmap and feature requests Visit the ROSA roadmap to stay up-to-date with the status of features currently in development. Open a new issue if you have any suggestions for the product team. 17.1.5. AWS region availability Refer to the product regional availability page for an up-to-date view of where ROSA is available. 17.1.6. Compliance certifications ROSA is currently compliant with SOC-2 type 2, SOC 3, ISO-27001, ISO 27017, ISO 27018, HIPAA, GDPR, and PCI-DSS. We are also currently working towards FedRAMP High. 17.1.7. Nodes 17.1.7.1. Worker nodes across multiple AWS regions All nodes in a ROSA cluster must be located in the same AWS region. For clusters configured for multiple availability zones, control plane nodes and worker nodes will be distributed across the availability zones. 17.1.7.2. Minimum number of worker nodes For a ROSA cluster, the minimum is 2 worker nodes for a single availability zone and 3 worker nodes for multiple availability zones. 17.1.7.3. Underlying node operating system As with all OpenShift v4.x offerings, the control plane, infra, and worker nodes run Red Hat Enterprise Linux CoreOS (RHCOS). 17.1.7.4. Node hibernation or shut-down At this time, ROSA does not have a hibernation or shut-down feature for nodes. The shutdown and hibernation feature is an OpenShift platform feature that is not yet mature enough for widespread cloud services use. 17.1.7.5. Supported instances for worker nodes For a complete list of supported instances for worker nodes, see AWS instance types . Spot instances are also supported. 17.1.7.6. Node autoscaling Autoscaling allows you to automatically adjust the size of the cluster based on the current workload. See About autoscaling nodes on a cluster for more details. 17.1.7.7. Maximum number of worker nodes The maximum number of worker nodes in ROSA cluster versions 4.14.14 and later is 249. For earlier versions, the limit is 180 nodes. A list of the account-wide and per-cluster roles is provided in the ROSA documentation . 17.1.8. Administrators A ROSA customer's administrator can manage users and quotas in addition to accessing all user-created projects. 17.1.9. OpenShift versions and upgrades ROSA is a managed service that is based on OpenShift Container Platform. You can view the current version and life cycle dates in the ROSA documentation . Customers can upgrade to the newest version of OpenShift and use the features from that version of OpenShift. For more information, see life cycle dates . Not all OpenShift features are available on ROSA. Review the Service Definition for more information. 17.1.10. Support You can open a ticket directly from the OpenShift Cluster Manager . See the ROSA support documentation for more details about obtaining support. You can also visit the Red Hat Customer Portal to search or browse through the Red Hat knowledge base of articles and solutions relating to Red Hat products or submit a support case to Red Hat Support. 17.1.10.1. Limited support If a ROSA cluster is not upgraded before the "end of life" date, the cluster continues to operate in a limited support status. The SLA for that cluster will no longer be applicable, but you can still get support for that cluster. See the limited support status documentation for more details. 
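To stay ahead of the life cycle dates mentioned above and avoid limited support status, you can review and schedule upgrades with the ROSA CLI. The following is a minimal sketch only: the cluster name and target version are placeholders, and flags can vary slightly between ROSA CLI versions, so confirm them with rosa upgrade cluster --help .

rosa list upgrades --cluster=<cluster-name>                                # list the versions this cluster can be upgraded to
rosa upgrade cluster --cluster=<cluster-name> --version=<target-version>  # schedule an upgrade to one of the listed versions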
Additional support resources Red Hat Support AWS Support AWS support customers must have a valid AWS support contract 17.1.11. Service-level agreement (SLA) Refer to the ROSA SLA page for details. 17.1.12. Notifications and communication Red Hat will provide notifications regarding new Red Hat and AWS features, updates, and scheduled maintenance through email and the Hybrid Cloud Console service log. 17.1.13. Open Service Broker for AWS (OSBA) You can use OSBA with ROSA. However, the preferred method is the more recent AWS Controller for Kubernetes . See Open Service Broker for AWS for more information on OSBA. 17.1.14. Offboarding Customers can stop using ROSA at any time and move their applications to on-premises, a private cloud, or other cloud providers. Standard reserved instances (RI) policy applies for unused RI. 17.1.15. Authentication ROSA supports the following authentication mechanisms: OpenID Connect (a profile of OAuth2), Google OAuth, GitHub OAuth, GitLab, and LDAP. 17.1.16. SRE cluster access All SRE cluster access is secured by MFA. See SRE access for more details. 17.1.17. Encryption 17.1.17.1. Encryption keys ROSA uses a key stored in KMS to encrypt EBS volumes. Customers also have the option to provide their own KMS keys at cluster creation. 17.1.17.2. KMS keys If you specify a KMS key, the control plane, infrastructure, and worker node root volumes and the persistent volumes are encrypted with the key. 17.1.17.3. Data encryption By default, there is encryption at rest. The AWS Storage platform automatically encrypts your data before persisting it and decrypts the data before retrieval. See AWS EBS Encryption for more details. You can also encrypt etcd in the cluster, combining it with AWS storage encryption. This results in double encryption, which adds up to a 20% performance hit. For more details, see the etcd encryption documentation. 17.1.17.4. etcd encryption etcd encryption can only be enabled at cluster creation. Note etcd encryption incurs additional overhead with negligible security risk mitigation. 17.1.17.5. etcd encryption configuration etcd encryption is configured the same as in OpenShift Container Platform. The aescbc cipher is used and the setting is patched during cluster deployment. For more details, see the Kubernetes documentation . 17.1.17.6. Multi-region KMS keys for EBS encryption Currently, the ROSA CLI does not accept multi-region KMS keys for EBS encryption. This feature is in our backlog for product updates. The ROSA CLI accepts single-region KMS keys for EBS encryption if it is defined at cluster creation. 17.1.18. Infrastructure ROSA uses several different cloud services, such as virtual machines, storage, and load balancers. You can see a defined list in the AWS prerequisites . 17.1.19. Credential methods There are two credential methods to grant Red Hat the permissions needed to perform the required actions in your AWS account: AWS with STS or an IAM user with admin permissions. AWS with STS is the preferred method, and the IAM user method will eventually be deprecated. AWS with STS better aligns with the principles of least privilege and secure practices in cloud service resource management. 17.1.20. Prerequisite permission or failure errors Check for a newer version of the ROSA CLI. Every release of the ROSA CLI is located in two places: GitHub and the Red Hat signed binary releases . 17.1.21. Storage Refer to the storage section of the service definition. OpenShift includes the CSI driver for AWS EFS. 
For more information, see Setting up AWS EFS for Red Hat OpenShift Service on AWS . 17.1.22. Using a VPC At installation, you can select to deploy to an existing VPC or bring your own VPC. You can then select the required subnets and provide a valid CIDR range that encompasses the subnets for the installation program when using those subnets. ROSA allows multiple clusters to share the same VPC. The number of clusters on one VPC is limited by the remaining AWS resource quota and CIDR ranges that cannot overlap. See CIDR Range Definitions for more information. 17.1.23. Network plugin ROSA uses the OpenShift OVN-Kubernetes default CNI network provider. 17.1.24. Cross-namespace networking Cluster admins can customize, and deny, cross-namespace network traffic on a project basis using NetworkPolicy objects. Refer to Configuring multitenant isolation with network policy for more information. 17.1.25. Using Prometheus and Grafana You can use Prometheus and Grafana to monitor containers and manage capacity using OpenShift User Workload Monitoring. This is a check-box option in the OpenShift Cluster Manager . 17.1.26. Audit logs output from the cluster control-plane If the Cluster Logging Operator Add-on has been added to the cluster, then audit logs are available through CloudWatch. If it has not, you can still request some audit logs through a support request. Small, targeted, and time-boxed logs can be requested for export and sent to a customer. The selection of audit logs available is at the discretion of SRE in the category of platform security and compliance. Requests to export the entirety of a cluster's logs will be rejected. 17.1.27. AWS Permissions Boundary You can use an AWS Permissions Boundary around the policies for your cluster. 17.1.28. AMI ROSA worker nodes use a different AMI from OSD and OpenShift Container Platform. Control Plane and Infra node AMIs are common across products in the same version. 17.1.29. Cluster backups ROSA STS clusters do not have backups. Users must have their own backup policies for applications and data. See our backup policy for more information. 17.1.30. Custom domain You can define a custom domain for your applications. See Configuring custom domains for applications for more information. 17.1.31. ROSA domain certificates Red Hat infrastructure (Hive) manages certificate rotation for default application ingress. 17.1.32. Disconnected environments ROSA does not support an air-gapped, disconnected environment. The ROSA cluster must have egress to the internet to access our registry and S3, and to send metrics. The service requires a number of egress endpoints. Ingress can be limited to a PrivateLink for Red Hat SREs and a VPN for customer access. Additional resources ROSA product pages: Red Hat product page AWS product page Red Hat Customer Portal ROSA specific resources AWS ROSA getting started guide ROSA documentation ROSA service definition ROSA responsibility assignment matrix Understanding Process and Security About Availability Updates Lifecycle ROSA roadmap Learn about OpenShift OpenShift Cluster Manager Red Hat Support 17.2. Tutorial: ROSA with AWS STS explained This tutorial outlines the two options for allowing Red Hat OpenShift Service on AWS (ROSA) to interact with resources in a user's Amazon Web Services (AWS) account. It details the components and processes that ROSA with Security Token Service (STS) uses to obtain the necessary credentials. It also reviews why ROSA with STS is the more secure, preferred method. 
Note This content currently covers ROSA Classic with AWS STS. For ROSA with hosted control planes (HCP) with AWS STS, see AWS STS and ROSA with HCP explained . This tutorial will: Enumerate two of the deployment options: ROSA with IAM Users ROSA with STS Explain the differences between the two options Explain why ROSA with STS is more secure and the preferred option Explain how ROSA with STS works 17.2.1. Different credential methods to deploy ROSA As part of ROSA, Red Hat manages infrastructure resources in your AWS account and must be granted the necessary permissions. There are currently two supported methods for granting those permissions: Using static IAM user credentials with an AdministratorAccess policy This is referred to as "ROSA with IAM Users" in this tutorial. It is not the preferred credential method. Using AWS STS with short-lived, dynamic tokens This is referred to as "ROSA with STS" in this tutorial. It is the preferred credential method. 17.2.1.1. ROSA with IAM Users When ROSA was first released, the only credential method was ROSA with IAM Users. This method grants IAM users with an AdministratorAccess policy full access to create the necessary resources in the AWS account that uses ROSA. The cluster can then create and expand its credentials as needed. 17.2.1.2. ROSA with STS ROSA with STS grants users limited, short-term access to resources in your AWS account. The STS method uses predefined roles and policies to grant temporary, least-privilege permissions to IAM users or authenticated federated users. The credentials typically expire an hour after being requested. Once expired, they are no longer recognized by AWS and no longer grant account access to API requests made with them. For more information, see the AWS documentation . While both ROSA with IAM Users and ROSA with STS are currently enabled, ROSA with STS is the preferred and recommended option. 17.2.2. ROSA with STS security Several crucial components make ROSA with STS more secure than ROSA with IAM Users: An explicit and limited set of roles and policies that the user creates ahead of time. The user knows every requested permission and every role used. The service cannot do anything outside of those permissions. Whenever the service needs to perform an action, it obtains credentials that expire in one hour or less. This means that there is no need to rotate or revoke credentials. Additionally, credential expiration reduces the risks of credentials leaking and being reused. 17.2.3. AWS STS explained ROSA uses AWS STS to grant least-privilege permissions with short-term security credentials to specific and segregated IAM roles. The credentials are associated with IAM roles specific to each component and cluster that makes AWS API calls. This method aligns with principles of least-privilege and secure practices in cloud service resource management. The ROSA command line interface (CLI) tool manages the STS roles and policies that are assigned for unique tasks and takes action upon AWS resources as part of OpenShift functionality. STS roles and policies must be created for each ROSA cluster. To make this easier, the installation tools provide all the commands and files needed to create the roles and policies and an option to allow the CLI to automatically create the roles and policies. See Creating a ROSA cluster with STS using customizations for more information about the different --mode options. 17.2.4. 
Components specific to ROSA with STS AWS infrastructure - This provides the infrastructure required for the cluster. It contains the actual EC2 instances, storage, and networking components. See AWS compute types to see supported instance types for compute nodes and provisioned AWS infrastructure for control plane and infrastructure node configuration. AWS STS - See the credential method section above. OpenID Connect (OIDC) - This provides a mechanism for cluster Operators to authenticate with AWS, assume the cluster roles through a trust policy, and obtain temporary credentials from STS to make the required API calls. Roles and policies - The roles and policies are one of the main differences between ROSA with STS and ROSA with IAM Users. For ROSA with STS, the roles and policies used by ROSA are broken into account-wide roles and policies and Operator roles and policies. The policies determine the allowed actions for each of the roles. See About IAM resources for more details about the individual roles and policies. The following account-wide roles are required: ManagedOpenShift-Installer-Role ManagedOpenShift-ControlPlane-Role ManagedOpenShift-Worker-Role ManagedOpenShift-Support-Role The following account-wide policies are required: ManagedOpenShift-Installer-Role-Policy ManagedOpenShift-ControlPlane-Role-Policy ManagedOpenShift-Worker-Role-Policy ManagedOpenShift-Support-Role-Policy ManagedOpenShift-openshift-ingress-operator-cloud-credentials [1] ManagedOpenShift-openshift-cluster-csi-drivers-ebs-cloud-credent [1] ManagedOpenShift-openshift-cloud-network-config-controller-cloud [1] ManagedOpenShift-openshift-machine-api-aws-cloud-credentials [1] ManagedOpenShift-openshift-cloud-credential-operator-cloud-crede [1] ManagedOpenShift-openshift-image-registry-installer-cloud-creden [1] This policy is used by the cluster Operator roles, listed below. The Operator roles are created in a second step because they are dependent on an existing cluster name and cannot be created at the same time as the account-wide roles. The Operator roles are: <cluster-name>-xxxx-openshift-cluster-csi-drivers-ebs-cloud-credent <cluster-name>-xxxx-openshift-cloud-network-config-controller-cloud <cluster-name>-xxxx-openshift-machine-api-aws-cloud-credentials <cluster-name>-xxxx-openshift-cloud-credential-operator-cloud-crede <cluster-name>-xxxx-openshift-image-registry-installer-cloud-creden <cluster-name>-xxxx-openshift-ingress-operator-cloud-credentials Trust policies are created for each account-wide and Operator role. 17.2.5. Deploying a ROSA STS cluster You are not expected to create the resources listed in the steps below from scratch. The ROSA CLI creates the required JSON files for you and outputs the commands you need. The ROSA CLI can also take this a step further and run the commands for you, if desired. Steps to deploy a ROSA with STS cluster Create the account-wide roles and policies. Assign the permissions policy to the corresponding account-wide role. Create the cluster. Create the Operator roles and policies. Assign the permission policy to the corresponding Operator role. Create the OIDC provider. The roles and policies can be created automatically by the ROSA CLI, or you can create them manually; you choose between these behaviors with the --mode auto or --mode manual flags in the ROSA CLI. For further details about deployment, see Creating a cluster with customizations or the Deploying the cluster tutorial . 17.2.6. ROSA with STS workflow The user creates the required account-wide roles and account-wide policies. 
For more information, see the components section in this tutorial. During role creation, a trust policy, known as a cross-account trust policy, is created, which allows a Red Hat-owned role to assume the roles. Trust policies are also created for the EC2 service, which allows workloads on EC2 instances to assume roles and obtain credentials. The user can then assign a corresponding permissions policy to each role. After the account-wide roles and policies are created, the user can create a cluster. Once cluster creation is initiated, the Operator roles are created so that cluster Operators can make AWS API calls. These roles are then assigned the corresponding permission policies that were created earlier, together with a trust policy that references an OIDC provider. The Operator roles differ from the account-wide roles in that they ultimately represent the pods that need access to AWS resources. Because a user cannot attach IAM roles to pods, they must create a trust policy with an OIDC provider so that the Operator, and therefore the pods, can access the roles they need. Once the user assigns the corresponding permissions policies to the roles, the final step is creating the OIDC provider. When a new role is needed, the workload currently using the Red Hat role will assume the role in the AWS account, obtain temporary credentials from AWS STS, and begin performing the actions using API calls within the customer's AWS account as permitted by the assumed role's permissions policy. The credentials are temporary and have a maximum duration of one hour. The entire workflow is depicted in the following graphic: Operators use the following process to obtain the requisite credentials to perform their tasks. Each Operator is assigned an Operator role, a permissions policy, and a trust policy with an OIDC provider. The Operator will assume the role by passing a JSON web token that contains the role and a token file ( web_identity_token_file ) to the OIDC provider, which then authenticates the signed token with a public key. The public key is created during cluster creation and stored in an S3 bucket. The Operator then confirms that the subject in the signed token file matches the role in the role trust policy, which ensures that the OIDC provider can only obtain the allowed role. The OIDC provider then returns the temporary credentials to the Operator so that the Operator can make AWS API calls. For a visual representation, see below: 17.2.7. ROSA with STS use cases Creating nodes at cluster install The Red Hat installation program uses the RH-Managed-OpenShift-Installer role and a trust policy to assume the Managed-OpenShift-Installer-Role role in the customer's account. This process returns temporary credentials from AWS STS. The installation program begins making the required API calls with the temporary credentials just received from STS. The installation program creates the required infrastructure in AWS. The credentials expire within an hour, and the installation program no longer has access to the customer's account. The same process also applies to support cases. In support cases, a Red Hat site reliability engineer (SRE) replaces the installation program. Scaling the cluster The machine-api-operator uses AssumeRoleWithWebIdentity to assume the machine-api-aws-cloud-credentials role. This launches the sequence for the cluster Operators to receive the credentials. The machine-api-operator role can now make the relevant API calls to add more EC2 instances to the cluster. 17.3. Tutorial: OpenShift concepts 17.3.1. 
Source-to-Image (S2I) Source-to-Image (S2I) is a toolkit and workflow for building reproducible container images from source code. S2I produces ready-to-run images by inserting source code into a container image and letting the container prepare the source code. By creating self-assembling builder images, you can version and control your build environments exactly like you use container images to version your runtime environments. Additional resources Source-to-Image (S2I) upstream project 17.3.1.1. How it works For a dynamic language such as Ruby, the build time and run time environments are typically the same. Assuming that Ruby, Bundler, Rake, Apache, GCC, and all other packages needed to set up and run a Ruby application are already installed, a builder image performs the following steps: The builder image starts a container with the application source injected into a known directory. The container process transforms that source code into the appropriate runnable setup. For example, it installs dependencies with Bundler and moves the source code into a directory where Apache has been preconfigured to look for the Ruby configuration file. It then commits the new container and sets the image entrypoint to be a script that will start Apache to host the Ruby application. For compiled languages such as C, C++, Go, or Java, the necessary dependencies for compilation might outweigh the size of the runtime artifacts. To keep runtime images small, S2I enables a multiple-step build process, where a binary artifact such as an executable file is created in the first builder image, extracted, and injected into a second runtime image that simply places the executable program in the correct location. For example, to create a reproducible build pipeline for Tomcat and Maven: Create a builder image containing OpenJDK and Tomcat that expects to have a WAR file injected. Create a second image that layers on top of the first image Maven and any other standard dependencies, and expects to have a Maven project injected. Start S2I using the Java application source and the Maven image to create the desired application WAR. Start S2I a second time using the WAR file from the earlier step and the initial Tomcat image to create the runtime image. By placing build logic inside of images and combining the images into multiple steps, the runtime environment is close to the build environment without requiring the deployment of build tools to production. 17.3.1.2. S2I benefits Reproducibility Allow build environments to be tightly versioned by encapsulating them within a container image and defining a simple interface of injected source code for callers. Reproducible builds are a key requirement for enabling security updates and continuous integration in containerized infrastructure, and builder images help ensure repeatability and the ability to swap run times. Flexibility Any existing build system that can run on Linux can run inside of a container, and each individual builder can also be part of a larger pipeline. The scripts that process the application source code can be injected into the builder image, allowing authors to adapt existing images to enable source handling. Speed Instead of building multiple layers in a single Dockerfile, S2I encourages authors to represent an application in a single image layer. This saves time during creation and deployment and allows for better control over the output of the final image. Security Dockerfiles are run without many of the normal operational controls of containers. 
They usually run as root and have access to the container network. S2I can control what permissions and privileges are available to the builder image since the build is launched in a single container. In concert with platforms like OpenShift, S2I allows administrators to control what privileges developers have at build time. 17.3.2. Routes An OpenShift route exposes a service at a hostname so that external clients can reach it by name. When a Route object is created on OpenShift, it gets picked up by the built-in HAProxy load balancer to expose the requested service and make it externally available with the given configuration. Similar to the Kubernetes Ingress object, the route concept was created by Red Hat to fill a need, and the design principles behind it were then contributed to the community, which heavily influenced the Ingress design. A route does have some additional features, as can be seen in the following chart:

Feature                                        Ingress on OpenShift    Route on OpenShift
Standard Kubernetes object                     X
External access to services                    X                       X
Persistent (sticky) sessions                   X                       X
Load-balancing strategies (e.g. round robin)   X                       X
Rate-limit and throttling                      X                       X
IP whitelisting                                X                       X
TLS edge termination for improved security     X                       X
TLS re-encryption for improved security                                X
TLS passthrough for improved security                                  X
Multiple weighted backends (split traffic)                             X
Generated pattern-based hostnames                                      X
Wildcard domains                                                       X

Note DNS resolution for a hostname is handled separately from routing. Your administrator might have configured a cloud domain that will always correctly resolve to the router, or, if you are using an unrelated hostname, you might need to modify its DNS records independently to resolve to the router. An individual route can override some defaults by providing specific configurations in its annotations. Additional resources Route-specific annotations 17.3.3. Image streams An image stream stores a mapping of tags to images, metadata overrides that are applied when images are tagged in a stream, and an optional reference to a Docker image repository on a registry. 17.3.3.1. Image stream benefits Using an image stream makes it easier to change a tag for a container image. Otherwise, to manually change a tag, you must download the image, change it locally, then push it all back. Promoting applications by manually changing a tag and then updating the deployment object entails many steps. With image streams, you upload a container image once and then you manage its virtual tags internally in OpenShift. In one project you might use the developer tag and only change a reference to it internally, while in production you might use a production tag and also manage it internally. You do not have to deal with the registry. You can also use image streams in conjunction with deployment configs to set a trigger that will start a deployment as soon as a new image appears or a tag changes its reference. Additional resources Red Hat Blog: How to Simplify Container Image Management in Kubernetes with OpenShift Image Streams 17.3.4. Builds A build is the process of transforming input parameters into a resulting object. Most often, the process is used to transform input parameters or source code into a runnable image. A BuildConfig object is the definition of the entire build process. OpenShift Container Platform leverages Kubernetes by creating Docker-formatted containers from build images and pushing them to a container image registry. 
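As a concrete illustration of the S2I and build concepts described above, the following commands start an S2I build from source and follow its progress. This is a minimal sketch only: it assumes you are logged in with the oc CLI and have a project selected, that the nodejs builder image stream is available (it ships in the openshift namespace by default), and it uses the public sclorg/nodejs-ex sample repository; substitute your own builder image and repository as needed.

oc new-app nodejs~https://github.com/sclorg/nodejs-ex --name=nodejs-sample   # create an image stream, a BuildConfig, and a deployment, then start the first S2I build
oc logs -f buildconfig/nodejs-sample    # follow the build log while the builder image assembles the source
oc get builds                           # list the builds created from the BuildConfig
oc start-build nodejs-sample            # manually trigger another build, for example after new source is pushed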
Build objects share common characteristics: Inputs for a build Requirements to complete a build process Logging the build process Publishing resources from successful builds Publishing the final status of the build Builds take advantage of resource restrictions, specifying limitations on resources such as CPU usage, memory usage, and build or pod execution time. Additional resources Understanding image builds 17.4. Deploying a cluster 17.4.1. Tutorial: Choosing a deployment method This tutorial outlines the different ways to deploy a cluster. Choose the deployment method that best fits your preferences and needs. 17.4.1.1. Deployment options If you want: Only the necessary CLI commands - Simple CLI guide A user interface - Simple UI guide The CLI commands with details - Detailed CLI guide A user interface with details - Detailed UI guide All of the above deployment options work well for this tutorial. If you are doing this tutorial for the first time, the Simple CLI guide is the simplest and recommended method. 17.4.2. Tutorial: Simple CLI guide This page outlines the minimum list of commands to deploy a Red Hat OpenShift Service on AWS (ROSA) cluster using the command line interface (CLI). Note While this simple deployment works well for a tutorial setting, clusters used in production should be deployed with a more detailed method. 17.4.2.1. Prerequisites You have completed the prerequisites in the Setup tutorial. 17.4.2.2. Creating account roles Run the following command once for each AWS account and y-stream OpenShift version: rosa create account-roles --mode auto --yes 17.4.2.3. Deploying the cluster Create the cluster with the default configuration by running the following command, substituting your own cluster name: rosa create cluster --cluster-name <cluster-name> --sts --mode auto --yes Check the status of your cluster by running the following command: rosa list clusters 17.4.3. Tutorial: Detailed CLI guide This tutorial outlines the detailed steps to deploy a ROSA cluster using the ROSA CLI. 17.4.3.1. CLI deployment modes There are two modes with which to deploy a ROSA cluster. One is automatic, which is quicker and performs the manual work for you. The other is manual, which requires you to run extra commands and allows you to inspect the roles and policies being created. This tutorial documents both options. If you want to create a cluster quickly, use the automatic option. If you prefer exploring the roles and policies being created, use the manual option. Choose the deployment mode by using the --mode flag in the relevant commands. Valid options for --mode are: manual : Roles and policies are created and saved in the current directory. You must manually run the provided commands as the next step. This option allows you to review the policies and roles before creating them. auto : Roles and policies are created and applied automatically using the current AWS account. Tip You can use either deployment method for this tutorial. The auto mode is faster and has fewer steps. 17.4.3.2. Deployment workflow The overall deployment workflow follows these steps: rosa create account-roles - This is executed only once for each account. Once created, the account roles do not need to be created again for more clusters of the same y-stream version. rosa create cluster rosa create operator-roles - For manual mode only. rosa create oidc-provider - For manual mode only. For each additional cluster in the same account for the same y-stream version, only step 2 is needed for automatic mode. 
Steps 2 through 4 are needed for manual mode. 17.4.3.3. Automatic mode Use this method if you want the ROSA CLI to automate the creation of the roles and policies to create your cluster quickly. 17.4.3.3.1. Creating account roles If this is the first time you are deploying ROSA in this account and you have not yet created the account roles, then create the account-wide roles and policies, including Operator policies. Run the following command to create the account-wide roles: rosa create account-roles --mode auto --yes Example output I: Creating roles using 'arn:aws:iam::000000000000:user/rosa-user' I: Created role 'ManagedOpenShift-ControlPlane-Role' with ARN 'arn:aws:iam::000000000000:role/ManagedOpenShift-ControlPlane-Role' I: Created role 'ManagedOpenShift-Worker-Role' with ARN 'arn:aws:iam::000000000000:role/ManagedOpenShift-Worker-Role' I: Created role 'ManagedOpenShift-Support-Role' with ARN 'arn:aws:iam::000000000000:role/ManagedOpenShift-Support-Role' I: Created role 'ManagedOpenShift-Installer-Role' with ARN 'arn:aws:iam::000000000000:role/ManagedOpenShift-Installer-Role' I: Created policy with ARN 'arn:aws:iam::000000000000:policy/ManagedOpenShift-openshift-machine-api-aws-cloud-credentials' I: Created policy with ARN 'arn:aws:iam::000000000000:policy/ManagedOpenShift-openshift-cloud-credential-operator-cloud-crede' I: Created policy with ARN 'arn:aws:iam::000000000000:policy/ManagedOpenShift-openshift-image-registry-installer-cloud-creden' I: Created policy with ARN 'arn:aws:iam::000000000000:policy/ManagedOpenShift-openshift-ingress-operator-cloud-credentials' I: Created policy with ARN 'arn:aws:iam::000000000000:policy/ManagedOpenShift-openshift-cluster-csi-drivers-ebs-cloud-credent' I: To create a cluster with these roles, run the following command: rosa create cluster --sts 17.4.3.3.2. Creating a cluster Run the following command to create a cluster with all the default options: rosa create cluster --cluster-name <cluster-name> --sts --mode auto --yes Note This will also create the required Operator roles and OIDC provider. If you want to see all available options for your cluster, use the --help flag or --interactive for interactive mode. Example input $ rosa create cluster --cluster-name my-rosa-cluster --sts --mode auto --yes Example output I: Creating cluster 'my-rosa-cluster' I: To view a list of clusters and their status, run 'rosa list clusters' I: Cluster 'my-rosa-cluster' has been created. I: Once the cluster is installed you will need to add an Identity Provider before you can login into the cluster. See 'rosa create idp --help' for more information. I: To determine when your cluster is Ready, run 'rosa describe cluster -c my-rosa-cluster'. I: To watch your cluster installation logs, run 'rosa logs install -c my-rosa-cluster --watch'. 
Name: my-rosa-cluster ID: 1mlhulb3bo0l54ojd0ji000000000000 External ID: OpenShift Version: Channel Group: stable DNS: my-rosa-cluster.ibhp.p1.openshiftapps.com AWS Account: 000000000000 API URL: Console URL: Region: us-west-2 Multi-AZ: false Nodes: - Master: 3 - Infra: 2 - Compute: 2 Network: - Service CIDR: 172.30.0.0/16 - Machine CIDR: 10.0.0.0/16 - Pod CIDR: 10.128.0.0/14 - Host Prefix: /23 STS Role ARN: arn:aws:iam::000000000000:role/ManagedOpenShift-Installer-Role Support Role ARN: arn:aws:iam::000000000000:role/ManagedOpenShift-Support-Role Instance IAM Roles: - Master: arn:aws:iam::000000000000:role/ManagedOpenShift-ControlPlane-Role - Worker: arn:aws:iam::000000000000:role/ManagedOpenShift-Worker-Role Operator IAM Roles: - arn:aws:iam::000000000000:role/my-rosa-cluster-openshift-image-registry-installer-cloud-credentials - arn:aws:iam::000000000000:role/my-rosa-cluster-openshift-ingress-operator-cloud-credentials - arn:aws:iam::000000000000:role/my-rosa-cluster-openshift-cluster-csi-drivers-ebs-cloud-credentials - arn:aws:iam::000000000000:role/my-rosa-cluster-openshift-machine-api-aws-cloud-credentials - arn:aws:iam::000000000000:role/my-rosa-cluster-openshift-cloud-credential-operator-cloud-credential-oper State: waiting (Waiting for OIDC configuration) Private: No Created: Oct 28 2021 20:28:09 UTC Details Page: https://console.redhat.com/openshift/details/s/1wupmiQy45xr1nN000000000000 OIDC Endpoint URL: https://rh-oidc.s3.us-east-1.amazonaws.com/1mlhulb3bo0l54ojd0ji000000000000 17.4.3.3.2.1. Default configuration The default settings are as follows: Nodes: 3 control plane nodes 2 infrastructure nodes 2 worker nodes No autoscaling See the documentation on ec2 instances for more details. Region: As configured for the aws CLI Networking IP ranges: Machine CIDR: 10.0.0.0/16 Service CIDR: 172.30.0.0/16 Pod CIDR: 10.128.0.0/14 New VPC Default AWS KMS key for encryption The most recent version of OpenShift available to rosa A single availability zone Public cluster 17.4.3.3.3. Checking the installation status Run one of the following commands to check the status of your cluster: For a detailed view of the status, run: rosa describe cluster --cluster <cluster-name> For an abridged view of the status, run: rosa list clusters The cluster state will change from "waiting" to "installing" to "ready". This will take about 40 minutes. Once the state changes to "ready" your cluster is installed. 17.4.3.4. Manual Mode If you want to review the roles and policies before applying them to a cluster, use the manual method. This method requires running a few extra commands to create the roles and policies. This section uses the --interactive mode. See the documentation on interactive mode for a description of the fields in this section. 17.4.3.4.1. Creating account roles If this is the first time you are deploying ROSA in this account and you have not yet created the account roles, create the account-wide roles and policies, including the Operator policies. The command creates the needed JSON files for the required roles and policies for your account in the current directory. It also outputs the aws CLI commands that you need to run to create these objects. 
Run the following command to create the needed files and output the additional commands: rosa create account-roles --mode manual Example output I: All policy files saved to the current directory I: Run the following commands to create the account roles and policies: aws iam create-role \ --role-name ManagedOpenShift-Worker-Role \ --assume-role-policy-document file://sts_instance_worker_trust_policy.json \ --tags Key=rosa_openshift_version,Value=4.8 Key=rosa_role_prefix,Value=ManagedOpenShift Key=rosa_role_type,Value=instance_worker aws iam put-role-policy \ --role-name ManagedOpenShift-Worker-Role \ --policy-name ManagedOpenShift-Worker-Role-Policy \ --policy-document file://sts_instance_worker_permission_policy.json Check the contents of your current directory to see the new files. Use the aws CLI to create each of these objects. Example output $ ls openshift_cloud_credential_operator_cloud_credential_operator_iam_ro_creds_policy.json sts_instance_controlplane_permission_policy.json openshift_cluster_csi_drivers_ebs_cloud_credentials_policy.json sts_instance_controlplane_trust_policy.json openshift_image_registry_installer_cloud_credentials_policy.json sts_instance_worker_permission_policy.json openshift_ingress_operator_cloud_credentials_policy.json sts_instance_worker_trust_policy.json openshift_machine_api_aws_cloud_credentials_policy.json sts_support_permission_policy.json sts_installer_permission_policy.json sts_support_trust_policy.json sts_installer_trust_policy.json Optional: Open the files to review what you will create. For example, opening the sts_installer_permission_policy.json shows: Example output $ cat sts_installer_permission_policy.json { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "autoscaling:DescribeAutoScalingGroups", "ec2:AllocateAddress", "ec2:AssociateAddress", "ec2:AssociateDhcpOptions", "ec2:AssociateRouteTable", "ec2:AttachInternetGateway", "ec2:AttachNetworkInterface", "ec2:AuthorizeSecurityGroupEgress", "ec2:AuthorizeSecurityGroupIngress", [...] You can also see the contents in the About IAM resources for ROSA clusters documentation. Run the aws commands listed in step 1. You can copy and paste if you are in the same directory as the JSON files you created. 17.4.3.4.2. Creating a cluster After the aws commands are executed successfully, run the following command to begin ROSA cluster creation in interactive mode: rosa create cluster --interactive --sts See the ROSA documentation for a description of the fields. 
For the purpose of this tutorial, copy and then input the following values: Cluster name: my-rosa-cluster OpenShift version: <choose version> External ID (optional): <leave blank> Operator roles prefix: <accept default> Multiple availability zones: No AWS region: <choose region> PrivateLink cluster: No Install into an existing VPC: No Enable Customer Managed key: No Compute nodes instance type: m5.xlarge Enable autoscaling: No Compute nodes: 2 Machine CIDR: <accept default> Service CIDR: <accept default> Pod CIDR: <accept default> Host prefix: <accept default> Encrypt etcd data (optional): No Disable Workload monitoring: No Example output I: Creating cluster 'my-rosa-cluster' I: To create this cluster again in the future, you can run: rosa create cluster --cluster-name my-rosa-cluster --role-arn arn:aws:iam::000000000000:role/ManagedOpenShift-Installer-Role --support-role-arn arn:aws:iam::000000000000:role/ManagedOpenShift-Support-Role --master-iam-role arn:aws:iam::000000000000:role/ManagedOpenShift-ControlPlane-Role --worker-iam-role arn:aws:iam::000000000000:role/ManagedOpenShift-Worker-Role --operator-roles-prefix my-rosa-cluster --region us-west-2 --version 4.8.13 --compute-nodes 2 --machine-cidr 10.0.0.0/16 --service-cidr 172.30.0.0/16 --pod-cidr 10.128.0.0/14 --host-prefix 23 I: To view a list of clusters and their status, run 'rosa list clusters' I: Cluster 'my-rosa-cluster' has been created. I: Once the cluster is installed you will need to add an Identity Provider before you can login into the cluster. See 'rosa create idp --help' for more information. Name: my-rosa-cluster ID: 1t6i760dbum4mqltqh6o000000000000 External ID: OpenShift Version: Channel Group: stable DNS: my-rosa-cluster.abcd.p1.openshiftapps.com AWS Account: 000000000000 API URL: Console URL: Region: us-west-2 Multi-AZ: false Nodes: - Control plane: 3 - Infra: 2 - Compute: 2 Network: - Service CIDR: 172.30.0.0/16 - Machine CIDR: 10.0.0.0/16 - Pod CIDR: 10.128.0.0/14 - Host Prefix: /23 STS Role ARN: arn:aws:iam::000000000000:role/ManagedOpenShift-Installer-Role Support Role ARN: arn:aws:iam::000000000000:role/ManagedOpenShift-Support-Role Instance IAM Roles: - Control plane: arn:aws:iam::000000000000:role/ManagedOpenShift-ControlPlane-Role - Worker: arn:aws:iam::000000000000:role/ManagedOpenShift-Worker-Role Operator IAM Roles: - arn:aws:iam::000000000000:role/my-rosa-cluster-w7i6-openshift-ingress-operator-cloud-credentials - arn:aws:iam::000000000000:role/my-rosa-cluster-w7i6-openshift-cluster-csi-drivers-ebs-cloud-credentials - arn:aws:iam::000000000000:role/my-rosa-cluster-w7i6-openshift-cloud-network-config-controller-cloud-cre - arn:aws:iam::000000000000:role/my-rosa-cluster-openshift-machine-api-aws-cloud-credentials - arn:aws:iam::000000000000:role/my-rosa-cluster-openshift-cloud-credential-operator-cloud-credentia - arn:aws:iam::000000000000:role/my-rosa-cluster-openshift-image-registry-installer-cloud-credential State: waiting (Waiting for OIDC configuration) Private: No Created: Jul 1 2022 22:13:50 UTC Details Page: https://console.redhat.com/openshift/details/s/2BMQm8xz8Hq5yEN000000000000 OIDC Endpoint URL: https://rh-oidc.s3.us-east-1.amazonaws.com/1t6i760dbum4mqltqh6o000000000000 I: Run the following commands to continue the cluster creation: rosa create operator-roles --cluster my-rosa-cluster rosa create oidc-provider --cluster my-rosa-cluster I: To determine when your cluster is Ready, run 'rosa describe cluster -c my-rosa-cluster'. 
I: To watch your cluster installation logs, run 'rosa logs install -c my-rosa-cluster --watch'. Note The cluster state will remain as "waiting" until the next two steps are completed. 17.4.3.4.3. Creating Operator roles The above step outputs the commands to run. These roles need to be created once for each cluster. To create the roles, run the following command: rosa create operator-roles --mode manual --cluster <cluster-name> Example output I: Run the following commands to create the operator roles: aws iam create-role \ --role-name my-rosa-cluster-openshift-image-registry-installer-cloud-credentials \ --assume-role-policy-document file://operator_image_registry_installer_cloud_credentials_policy.json \ --tags Key=rosa_cluster_id,Value=1mkesci269png3tck000000000000000 Key=rosa_openshift_version,Value=4.8 Key=rosa_role_prefix,Value= Key=operator_namespace,Value=openshift-image-registry Key=operator_name,Value=installer-cloud-credentials aws iam attach-role-policy \ --role-name my-rosa-cluster-openshift-image-registry-installer-cloud-credentials \ --policy-arn arn:aws:iam::000000000000:policy/ManagedOpenShift-openshift-image-registry-installer-cloud-creden [...] Run each of the aws commands. 17.4.3.4.4. Creating the OIDC provider Run the following command to create the OIDC provider: rosa create oidc-provider --mode manual --cluster <cluster-name> This displays the aws commands that you need to run. Example output I: Run the following commands to create the OIDC provider: $ aws iam create-open-id-connect-provider \ --url https://rh-oidc.s3.us-east-1.amazonaws.com/1mkesci269png3tckknhh0rfs2da5fj9 \ --client-id-list openshift sts.amazonaws.com \ --thumbprint-list a9d53002e97e00e043244f3d170d000000000000 $ aws iam create-open-id-connect-provider \ --url https://rh-oidc.s3.us-east-1.amazonaws.com/1mkesci269png3tckknhh0rfs2da5fj9 \ --client-id-list openshift sts.amazonaws.com \ --thumbprint-list a9d53002e97e00e043244f3d170d000000000000 Your cluster will now continue the installation process. 17.4.3.4.5. Checking the installation status Run one of the following commands to check the status of your cluster: For a detailed view of the status, run: rosa describe cluster --cluster <cluster-name> For an abridged view of the status, run: rosa list clusters The cluster state will change from "waiting" to "installing" to "ready". This will take about 40 minutes. Once the state changes to "ready" your cluster is installed. 17.4.3.5. Obtaining the Red Hat Hybrid Cloud Console URL To obtain the Hybrid Cloud Console URL, run the following command: rosa describe cluster -c <cluster-name> | grep Console The cluster has now been successfully deployed. The next tutorial shows how to create an admin user to be able to use the cluster immediately. 17.4.4. Tutorial: Simple UI guide This page outlines the minimum list of commands to deploy a ROSA cluster using the user interface (UI). Note While this simple deployment works well for a tutorial setting, clusters used in production should be deployed with a more detailed method. 17.4.4.1. Prerequisites You have completed the prerequisites in the Setup tutorial. 17.4.4.2. Creating account roles Run the following command once for each AWS account and y-stream OpenShift version: rosa create account-roles --mode auto --yes 17.4.4.3. 
Creating Red Hat OpenShift Cluster Manager roles Create one OpenShift Cluster Manager role for each AWS account by running the following command: rosa create ocm-role --mode auto --admin --yes Create one OpenShift Cluster Manager user role for each AWS account by running the following command: rosa create user-role --mode auto --yes Use the OpenShift Cluster Manager to select your AWS account and cluster options, and to begin deployment. The OpenShift Cluster Manager UI displays the cluster status. 17.4.5. Tutorial: Detailed UI guide This tutorial outlines the detailed steps to deploy a Red Hat OpenShift Service on AWS (ROSA) cluster using the Red Hat OpenShift Cluster Manager user interface (UI). 17.4.5.1. Deployment workflow The overall deployment workflow follows these steps: Create the account-wide roles and policies. Associate your AWS account with your Red Hat account. Create and link the Red Hat OpenShift Cluster Manager role. Create and link the user role. Create the cluster. Step 1 only needs to be performed the first time you are deploying into an AWS account. Step 2 only needs to be performed the first time you are using the UI. For successive clusters of the same y-stream version, you only need to create the cluster. 17.4.5.2. Creating account-wide roles Note If you already have account roles from an earlier deployment, skip this step. The UI will detect your existing roles after you select an associated AWS account. If this is the first time you are deploying ROSA in this account and you have not yet created the account roles, create the account-wide roles and policies, including the Operator policies. In your terminal, run the following command to create the account-wide roles: $ rosa create account-roles --mode auto --yes Example output I: Creating roles using 'arn:aws:iam::000000000000:user/rosa-user' I: Created role 'ManagedOpenShift-ControlPlane-Role' with ARN 'arn:aws:iam::000000000000:role/ManagedOpenShift-ControlPlane-Role' I: Created role 'ManagedOpenShift-Worker-Role' with ARN 'arn:aws:iam::000000000000:role/ManagedOpenShift-Worker-Role' I: Created role 'ManagedOpenShift-Support-Role' with ARN 'arn:aws:iam::000000000000:role/ManagedOpenShift-Support-Role' I: Created role 'ManagedOpenShift-Installer-Role' with ARN 'arn:aws:iam::000000000000:role/ManagedOpenShift-Installer-Role' I: Created policy with ARN 'arn:aws:iam::000000000000:policy/ManagedOpenShift-openshift-machine-api-aws-cloud-credentials' I: Created policy with ARN 'arn:aws:iam::000000000000:policy/ManagedOpenShift-openshift-cloud-credential-operator-cloud-crede' I: Created policy with ARN 'arn:aws:iam::000000000000:policy/ManagedOpenShift-openshift-image-registry-installer-cloud-creden' I: Created policy with ARN 'arn:aws:iam::000000000000:policy/ManagedOpenShift-openshift-ingress-operator-cloud-credentials' I: Created policy with ARN 'arn:aws:iam::000000000000:policy/ManagedOpenShift-openshift-cluster-csi-drivers-ebs-cloud-credent' I: To create a cluster with these roles, run the following command: rosa create cluster --sts 17.4.5.3. Associating your AWS account with your Red Hat account This step tells the OpenShift Cluster Manager what AWS account you want to use when deploying ROSA. Note If you have already associated your AWS accounts, skip this step. Open the Red Hat Hybrid Cloud Console by visiting the OpenShift Cluster Manager and logging in to your Red Hat account. Click Create Cluster . Scroll down to the Red Hat OpenShift Service on AWS (ROSA) row and click Create Cluster . A dropdown menu appears. 
Click With web interface . Under "Select an AWS control plane type," choose Classic . Then click Next . Click the dropdown menu under Associated AWS infrastructure account . If you have not yet associated any AWS accounts, the dropdown menu may be empty. Click How to associate a new AWS account . A sidebar appears with instructions for associating a new AWS account. 17.4.5.4. Creating and associating an OpenShift Cluster Manager role Run the following command to see if an OpenShift Cluster Manager role exists: $ rosa list ocm-role The UI displays the commands to create an OpenShift Cluster Manager role with two different levels of permissions: Basic OpenShift Cluster Manager role: Allows the OpenShift Cluster Manager to have read-only access to the account to check if the roles and policies that are required by ROSA are present before creating a cluster. You will need to manually create the required roles, policies, and OIDC provider using the CLI. Admin OpenShift Cluster Manager role: Grants the OpenShift Cluster Manager additional permissions to create the required roles, policies, and OIDC provider for ROSA. Using this makes the deployment of a ROSA cluster quicker since the OpenShift Cluster Manager will be able to create the required resources for you. To read more about these roles, see the OpenShift Cluster Manager roles and permissions section of the documentation. For the purposes of this tutorial, use the Admin OpenShift Cluster Manager role for the simplest and quickest approach. Copy the command to create the Admin OpenShift Cluster Manager role from the sidebar or switch to your terminal and enter the following command: $ rosa create ocm-role --mode auto --admin --yes This command creates the OpenShift Cluster Manager role and associates it with your Red Hat account. Example output I: Creating ocm role I: Creating role using 'arn:aws:iam::000000000000:user/rosa-user' I: Created role 'ManagedOpenShift-OCM-Role-12561000' with ARN 'arn:aws:iam::000000000000:role/ManagedOpenShift-OCM-Role-12561000' I: Linking OCM role I: Successfully linked role-arn 'arn:aws:iam::000000000000:role/ManagedOpenShift-OCM-Role-12561000' with organization account '1MpZfntsZeUdjWHg7XRgP000000' Click Step 2: User role . 17.4.5.4.1. Other OpenShift Cluster Manager role creation options Manual mode: If you prefer to run the AWS CLI commands yourself, you can define the mode as manual rather than auto . The CLI will output the AWS commands, and the relevant JSON files are created in the current directory. Use the following command to create the OpenShift Cluster Manager role in manual mode: $ rosa create ocm-role --mode manual --admin --yes Basic OpenShift Cluster Manager role: If you prefer that the OpenShift Cluster Manager has read-only access to the account, create a basic OpenShift Cluster Manager role. You will then need to manually create the required roles, policies, and OIDC provider using the CLI. Use the following command to create a Basic OpenShift Cluster Manager role: $ rosa create ocm-role --mode auto --yes 17.4.5.5. Creating an OpenShift Cluster Manager user role As defined in the user role documentation , the user role needs to be created so that ROSA can verify your AWS identity. This role has no permissions, and it is only used to create a trust relationship between the installation program account and your OpenShift Cluster Manager role resources. 
Check if a user role already exists by running the following command: $ rosa list user-role Run the following command to create the user role and to link it to your Red Hat account: $ rosa create user-role --mode auto --yes Example output I: Creating User role I: Creating ocm user role using 'arn:aws:iam::000000000000:user/rosa-user' I: Created role 'ManagedOpenShift-User-rosa-user-Role' with ARN 'arn:aws:iam::000000000000:role/ManagedOpenShift-User-rosa-user-Role' I: Linking User role I: Successfully linked role ARN 'arn:aws:iam::000000000000:role/ManagedOpenShift-User-rosa-user-Role' with account '1rbOQez0z5j1YolInhcXY000000' Note As before, you can define --mode manual if you'd prefer to run the AWS CLI commands yourself. The CLI outputs the AWS commands, and the relevant JSON files are created in the current directory. Make sure to link the role. Click Step 3: Account roles . 17.4.5.6. Creating account roles Create your account roles by running the following command: $ rosa create account-roles --mode auto Click OK to close the sidebar. 17.4.5.7. Confirming successful account association You should now see your AWS account in the Associated AWS infrastructure account dropdown menu. If you see your account, account association was successful. Select the account. You will see the account role ARNs populated below. Click Next . 17.4.5.8. Creating the cluster For the purposes of this tutorial, make the following selections: Cluster settings Cluster name: <pick a name> Version: <select latest version> Region: <select region> Availability: Single zone Enable user workload monitoring: leave checked Enable additional etcd encryption: leave unchecked Encrypt persistent volumes with customer keys: leave unchecked Click Next . Leave the default settings on for the machine pool: Default machine pool settings Compute node instance type: m5.xlarge - 4 vCPU 16 GiB RAM Enable autoscaling: unchecked Compute node count: 2 Leave node labels blank Click Next . 17.4.5.8.1. Networking Leave all the default values for configuration. Click Next . Leave all the default values for CIDR ranges. Click Next . 17.4.5.8.2. Cluster roles and policies For this tutorial, leave Auto selected. It will make the cluster deployment process simpler and quicker. Note If you selected a Basic OpenShift Cluster Manager role earlier, you can only use manual mode. You must manually create the operator roles and OIDC provider. See the "Basic OpenShift Cluster Manager role" section below after you have completed the "Cluster updates" section and started cluster creation. 17.4.5.8.3. Cluster updates Leave all the options at default in this section. 17.4.5.8.4. Reviewing and creating your cluster Review the content for the cluster configuration. Click Create cluster . 17.4.5.8.5. Monitoring the installation progress Stay on the current page to monitor the installation progress. It should take about 40 minutes. 17.4.5.9. Basic OpenShift Cluster Manager Role Note If you created an Admin OpenShift Cluster Manager role as directed above, ignore this entire section. The OpenShift Cluster Manager will create the resources for you. If you created a Basic OpenShift Cluster Manager role earlier, you will need to manually create two more elements before cluster installation can continue: Operator roles OIDC provider 17.4.5.9.1. Creating Operator roles A pop-up window will show you the commands to run. Run the commands from the window in your terminal to launch interactive mode. 
Or, for simplicity, run the following command to create the Operator roles: USD rosa create operator-roles --mode auto --cluster <cluster-name> --yes Example output I: Creating roles using 'arn:aws:iam::000000000000:user/rosauser' I: Created role 'rosacluster-b736-openshift-ingress-operator-cloud-credentials' with ARN 'arn:aws:iam::000000000000:role/rosacluster-b736-openshift-ingress-operator-cloud-credentials' I: Created role 'rosacluster-b736-openshift-cluster-csi-drivers-ebs-cloud-credent' with ARN 'arn:aws:iam::000000000000:role/rosacluster-b736-openshift-cluster-csi-drivers-ebs-cloud-credent' I: Created role 'rosacluster-b736-openshift-cloud-network-config-controller-cloud' with ARN 'arn:aws:iam::000000000000:role/rosacluster-b736-openshift-cloud-network-config-controller-cloud' I: Created role 'rosacluster-b736-openshift-machine-api-aws-cloud-credentials' with ARN 'arn:aws:iam::000000000000:role/rosacluster-b736-openshift-machine-api-aws-cloud-credentials' I: Created role 'rosacluster-b736-openshift-cloud-credential-operator-cloud-crede' with ARN 'arn:aws:iam::000000000000:role/rosacluster-b736-openshift-cloud-credential-operator-cloud-crede' I: Created role 'rosacluster-b736-openshift-image-registry-installer-cloud-creden' with ARN 'arn:aws:iam::000000000000:role/rosacluster-b736-openshift-image-registry-installer-cloud-creden' 17.4.5.9.2. Creating the OIDC provider In your terminal, run the following command to create the OIDC provider: USD rosa create oidc-provider --mode auto --cluster <cluster-name> --yes Example output I: Creating OIDC provider using 'arn:aws:iam::000000000000:user/rosauser' I: Created OIDC provider with ARN 'arn:aws:iam::000000000000:oidc-provider/rh-oidc.s3.us-east-1.amazonaws.com/1tt4kvrr2kha2rgs8gjfvf0000000000' 17.4.6. Tutorial: Hosted control plane (HCP) guide Follow this workshop to deploy a sample Red Hat OpenShift Service on AWS (ROSA) with hosted control planes (HCP) cluster. You can then use your cluster in the tutorials. Tutorial objectives Learn to create your cluster prerequisites: Create a sample virtual private cloud (VPC) Create sample OpenID Connect (OIDC) resources Create sample environment variables Deploy a sample ROSA cluster Prerequisites ROSA version 1.2.31 or later Amazon Web Service (AWS) command line interface (CLI) ROSA CLI ( rosa ) 17.4.6.1. Creating your cluster prerequisites Before deploying a ROSA with HCP cluster, you must have both a VPC and OIDC resources. We will create these resources first. ROSA uses the bring your own VPC (BYO-VPC) model. 17.4.6.1.1. Creating a VPC Make sure your AWS CLI ( aws ) is configured to use a region where ROSA is available. See the regions supported by the AWS CLI by running the following command: USD rosa list regions --hosted-cp Create the VPC. For this tutorial, the following script creates the VPC and its required components. It uses the region configured in your aws CLI. #!/bin/bash set -e ########## # This script will create the network requirements for a ROSA cluster. This will be # a public cluster. This creates: # - VPC # - Public and private subnets # - Internet Gateway # - Relevant route tables # - NAT Gateway # # This will automatically use the region configured for the aws cli # ########## VPC_CIDR=10.0.0.0/16 PUBLIC_CIDR_SUBNET=10.0.1.0/24 PRIVATE_CIDR_SUBNET=10.0.0.0/24 # Create VPC echo -n "Creating VPC..." 
VPC_ID=USD(aws ec2 create-vpc --cidr-block USDVPC_CIDR --query Vpc.VpcId --output text) # Create tag name aws ec2 create-tags --resources USDVPC_ID --tags Key=Name,Value=USDCLUSTER_NAME # Enable dns hostname aws ec2 modify-vpc-attribute --vpc-id USDVPC_ID --enable-dns-hostnames echo "done." # Create Public Subnet echo -n "Creating public subnet..." PUBLIC_SUBNET_ID=USD(aws ec2 create-subnet --vpc-id USDVPC_ID --cidr-block USDPUBLIC_CIDR_SUBNET --query Subnet.SubnetId --output text) aws ec2 create-tags --resources USDPUBLIC_SUBNET_ID --tags Key=Name,Value=USDCLUSTER_NAME-public echo "done." # Create private subnet echo -n "Creating private subnet..." PRIVATE_SUBNET_ID=USD(aws ec2 create-subnet --vpc-id USDVPC_ID --cidr-block USDPRIVATE_CIDR_SUBNET --query Subnet.SubnetId --output text) aws ec2 create-tags --resources USDPRIVATE_SUBNET_ID --tags Key=Name,Value=USDCLUSTER_NAME-private echo "done." # Create an internet gateway for outbound traffic and attach it to the VPC. echo -n "Creating internet gateway..." IGW_ID=USD(aws ec2 create-internet-gateway --query InternetGateway.InternetGatewayId --output text) echo "done." aws ec2 create-tags --resources USDIGW_ID --tags Key=Name,Value=USDCLUSTER_NAME aws ec2 attach-internet-gateway --vpc-id USDVPC_ID --internet-gateway-id USDIGW_ID > /dev/null 2>&1 echo "Attached IGW to VPC." # Create a route table for outbound traffic and associate it to the public subnet. echo -n "Creating route table for public subnet..." PUBLIC_ROUTE_TABLE_ID=USD(aws ec2 create-route-table --vpc-id USDVPC_ID --query RouteTable.RouteTableId --output text) aws ec2 create-tags --resources USDPUBLIC_ROUTE_TABLE_ID --tags Key=Name,Value=USDCLUSTER_NAME echo "done." aws ec2 create-route --route-table-id USDPUBLIC_ROUTE_TABLE_ID --destination-cidr-block 0.0.0.0/0 --gateway-id USDIGW_ID > /dev/null 2>&1 echo "Created default public route." aws ec2 associate-route-table --subnet-id USDPUBLIC_SUBNET_ID --route-table-id USDPUBLIC_ROUTE_TABLE_ID > /dev/null 2>&1 echo "Public route table associated" # Create a NAT gateway in the public subnet for outgoing traffic from the private network. echo -n "Creating NAT Gateway..." NAT_IP_ADDRESS=USD(aws ec2 allocate-address --domain vpc --query AllocationId --output text) NAT_GATEWAY_ID=USD(aws ec2 create-nat-gateway --subnet-id USDPUBLIC_SUBNET_ID --allocation-id USDNAT_IP_ADDRESS --query NatGateway.NatGatewayId --output text) aws ec2 create-tags --resources USDNAT_IP_ADDRESS --resources USDNAT_GATEWAY_ID --tags Key=Name,Value=USDCLUSTER_NAME sleep 10 echo "done." # Create a route table for the private subnet to the NAT gateway. echo -n "Creating a route table for the private subnet to the NAT gateway..." PRIVATE_ROUTE_TABLE_ID=USD(aws ec2 create-route-table --vpc-id USDVPC_ID --query RouteTable.RouteTableId --output text) aws ec2 create-tags --resources USDPRIVATE_ROUTE_TABLE_ID USDNAT_IP_ADDRESS --tags Key=Name,Value=USDCLUSTER_NAME-private aws ec2 create-route --route-table-id USDPRIVATE_ROUTE_TABLE_ID --destination-cidr-block 0.0.0.0/0 --gateway-id USDNAT_GATEWAY_ID > /dev/null 2>&1 aws ec2 associate-route-table --subnet-id USDPRIVATE_SUBNET_ID --route-table-id USDPRIVATE_ROUTE_TABLE_ID > /dev/null 2>&1 echo "done." 
# echo "***********VARIABLE VALUES*********" # echo "VPC_ID="USDVPC_ID # echo "PUBLIC_SUBNET_ID="USDPUBLIC_SUBNET_ID # echo "PRIVATE_SUBNET_ID="USDPRIVATE_SUBNET_ID # echo "PUBLIC_ROUTE_TABLE_ID="USDPUBLIC_ROUTE_TABLE_ID # echo "PRIVATE_ROUTE_TABLE_ID="USDPRIVATE_ROUTE_TABLE_ID # echo "NAT_GATEWAY_ID="USDNAT_GATEWAY_ID # echo "IGW_ID="USDIGW_ID # echo "NAT_IP_ADDRESS="USDNAT_IP_ADDRESS echo "Setup complete." echo "" echo "To make the cluster create commands easier, please run the following commands to set the environment variables:" echo "export PUBLIC_SUBNET_ID=USDPUBLIC_SUBNET_ID" echo "export PRIVATE_SUBNET_ID=USDPRIVATE_SUBNET_ID" Additional resources For more about VPC requirements, see the VPC documentation . The script outputs commands. Set the commands as environment variables to store the subnet IDs for later use. Copy and run the commands: USD export PUBLIC_SUBNET_ID=USDPUBLIC_SUBNET_ID USD export PRIVATE_SUBNET_ID=USDPRIVATE_SUBNET_ID Confirm your environment variables by running the following command: USD echo "Public Subnet: USDPUBLIC_SUBNET_ID"; echo "Private Subnet: USDPRIVATE_SUBNET_ID" Example output Public Subnet: subnet-0faeeeb0000000000 Private Subnet: subnet-011fe340000000000 17.4.6.1.2. Creating your OIDC configuration In this tutorial, we will use the automatic mode when creating the OIDC configuration. We will also store the OIDC ID as an environment variable for later use. The command uses the ROSA CLI to create your cluster's unique OIDC configuration. Create the OIDC configuration by running the following command: USD export OIDC_ID=USD(rosa create oidc-config --mode auto --managed --yes -o json | jq -r '.id') 17.4.6.2. Creating additional environment variables Run the following command to set up environment variables. These variables make it easier to run the command to create a ROSA cluster: USD export CLUSTER_NAME=<cluster_name> USD export REGION=<VPC_region> Tip Run rosa whoami to find the VPC region. 17.4.6.3. Creating a cluster Optional: Run the following command to create the account-wide roles and policies, including the Operator policies and the AWS IAM roles and policies: Important Only complete this step if this is the first time you are deploying ROSA in this account and you have not yet created your account roles and policies. USD rosa create account-roles --mode auto --yes Run the following command to create the cluster: USD rosa create cluster --cluster-name USDCLUSTER_NAME \ --subnet-ids USD{PUBLIC_SUBNET_ID},USD{PRIVATE_SUBNET_ID} \ --hosted-cp \ --region USDREGION \ --oidc-config-id USDOIDC_ID \ --sts --mode auto --yes The cluster is ready after about 10 minutes. The cluster will have a control plane across three AWS availability zones in your selected region and create two worker nodes in your AWS account. 17.4.6.4. Checking the installation status Run one of the following commands to check the status of the cluster: For a detailed view of the cluster status, run: USD rosa describe cluster --cluster USDCLUSTER_NAME For an abridged view of the cluster status, run: USD rosa list clusters To watch the log as it progresses, run: USD rosa logs install --cluster USDCLUSTER_NAME --watch Once the state changes to "ready" your cluster is installed. It might take a few more minutes for the worker nodes to come online. 17.5. Tutorial: Creating an admin user Creating an administration (admin) user allows you to access your cluster quickly. Follow these steps to create an admin user. Note An admin user works well in this tutorial setting. 
For actual deployment, use a formal identity provider to access the cluster and grant the user admin privileges. Run the following command to create the admin user: rosa create admin --cluster=<cluster-name> Example output W: It is recommended to add an identity provider to login to this cluster. See 'rosa create idp --help' for more information. I: Admin account has been added to cluster 'my-rosa-cluster'. It may take up to a minute for the account to become active. I: To login, run the following command: oc login https://api.my-rosa-cluster.abcd.p1.openshiftapps.com:6443 \ --username cluster-admin \ --password FWGYL-2mkJI-00000-00000 Copy the log in command returned to you in the step and paste it into your terminal. This will log you in to the cluster using the CLI so you can start using the cluster. USD oc login https://api.my-rosa-cluster.abcd.p1.openshiftapps.com:6443 \ > --username cluster-admin \ > --password FWGYL-2mkJI-00000-00000 Example output Login successful. You have access to 79 projects, the list has been suppressed. You can list all projects with ' projects' Using project "default". To check that you are logged in as the admin user, run one of the following commands: Option 1: USD oc whoami Example output cluster-admin Option 2: oc get all -n openshift-apiserver Only an admin user can run this command without errors. You can now use the cluster as an admin user, which will suffice for this tutorial. For actual deployment, it is highly recommended to set up an identity provider, which is explained in the tutorial . 17.6. Tutorial: Setting up an identity provider To log in to your cluster, set up an identity provider (IDP). This tutorial uses GitHub as an example IDP. See the full list of IDPs supported by ROSA . To view all IDP options, run the following command: rosa create idp --help 17.6.1. Setting up an IDP with GitHub Log in to your GitHub account. Create a new GitHub organization where you are an administrator. Tip If you are already an administrator in an existing organization and you want to use that organization, skip to step 9. Click the + icon, then click New Organization . Choose the most applicable plan for your situation or click Join for free . Enter an organization account name, an email, and whether it is a personal or business account. Then, click . Optional: Add the GitHub IDs of other users to grant additional access to your ROSA cluster. You can also add them later. Click Complete Setup . Optional: Enter the requested information on the following page. Click Submit . Go back to the terminal and enter the following command to set up the GitHub IDP: rosa create idp --cluster=<cluster name> --interactive Enter the following values: Type of identity provider: github Identity Provider Name: <IDP-name> Restrict to members of: organizations GitHub organizations: <organization-account-name> The CLI will provide you with a link. Copy and paste the link into a browser and press Enter . This will fill the required information to register this application for OAuth. You do not need to modify any of the information. Click Register application . The page displays a Client ID . Copy the ID and paste it in the terminal where it asks for Client ID . Note Do not close the tab. The CLI will ask for a Client Secret . Go back in your browser and click Generate a new client secret . A secret is generated for you. Copy your secret because it will never be visible again. Paste your secret into the terminal and press Enter . Leave GitHub Enterprise Hostname blank. 
Select claim . Wait approximately 1 minute for the IDP to be created and the configuration to land on your cluster. Copy the returned link and paste it into your browser. The new IDP should be available under your chosen name. Click your IDP and use your GitHub credentials to access the cluster. 17.6.2. Granting other users access to the cluster To grant access to other cluster users, you will need to add their GitHub user IDs to the GitHub organization used for this cluster. In GitHub, go to the Your organizations page. Click your profile icon , then Your organizations . Then click <your-organization-name> . In our example, it is my-rosa-cluster . Click Invite someone . Enter the GitHub ID of the new user, select the correct user, and click Invite . Once the new user accepts the invitation, they will be able to log in to the ROSA cluster using the Hybrid Cloud Console link and their GitHub credentials. 17.7. Tutorial: Granting admin privileges Administration (admin) privileges are not automatically granted to users that you add to your cluster. If you want to grant admin-level privileges to certain users, you will need to manually grant them to each user. You can grant admin privileges from either the ROSA command line interface (CLI) or the Red Hat OpenShift Cluster Manager web user interface (UI). Red Hat offers two types of admin privileges: cluster-admin : cluster-admin privileges give the admin user full privileges within the cluster. dedicated-admin : dedicated-admin privileges allow the admin user to complete most administrative tasks with certain limitations to prevent cluster damage. It is best practice to use dedicated-admin when elevated privileges are needed. For more information on admin privileges, see the administering a cluster documentation. 17.7.1. Using the ROSA CLI Assuming you are the user who created the cluster, run one of the following commands to grant admin privileges: For cluster-admin : USD rosa grant user cluster-admin --user <idp_user_name> --cluster=<cluster-name> For dedicated-admin : USD rosa grant user dedicated-admin --user <idp_user_name> --cluster=<cluster-name> Verify that the admin privileges were added by running the following command: USD rosa list users --cluster=<cluster-name> Example output USD rosa list users --cluster=my-rosa-cluster ID GROUPS <idp_user_name> cluster-admins If you are currently logged into the Red Hat Hybrid Cloud Console, log out of the console and log back in to the cluster to see a new perspective with the "Administrator Panel". You might need an incognito or private window. You can also test that admin privileges were added to your account by running the following command. Only a cluster-admin user can run this command without errors. USD oc get all -n openshift-apiserver 17.7.2. Using the Red Hat OpenShift Cluster Manager UI Log in to the OpenShift Cluster Manager . Select your cluster. Click the Access Control tab. Click the Cluster roles and Access tab in the sidebar. Click Add user . On the pop-up screen, enter the user ID. Select whether you want to grant the user cluster-admins or dedicated-admins privileges. 17.8. Tutorial: Accessing your cluster You can connect to your cluster using the command line interface (CLI) or the Red Hat Hybrid Cloud Console user interface (UI). 17.8.1. Accessing your cluster using the CLI To access the cluster using the CLI, you must have the oc CLI installed. If you are following the tutorials, you already installed the oc CLI. Log in to the OpenShift Cluster Manager . 
Click your username in the top right corner. Click Copy Login Command . This opens a new tab with a choice of identity providers (IDPs). Click the IDP you want to use. For example, "rosa-github". A new tab opens. Click Display token . Run the following command in your terminal: USD oc login --token=sha256~GBAfS4JQ0t1UTKYHbWAK6OUWGUkdMGz000000000000 --server=https://api.my-rosa-cluster.abcd.p1.openshiftapps.com:6443 Example output Logged into "https://api.my-rosa-cluster.abcd.p1.openshiftapps.com:6443" as "rosa-user" using the token provided. You have access to 79 projects, the list has been suppressed. You can list all projects with ' projects' Using project "default". Confirm that you are logged in by running the following command: USD oc whoami Example output rosa-user You can now access your cluster. 17.8.2. Accessing the cluster via the Hybrid Cloud Console Log in to the OpenShift Cluster Manager . To retrieve the Hybrid Cloud Console URL, run: rosa describe cluster -c <cluster-name> | grep Console Click your IDP. For example, "rosa-github". Enter your user credentials. You should be logged in. If you are following the tutorials, you will be a cluster-admin and should see the Hybrid Cloud Console webpage with the Administrator panel visible. 17.9. Tutorial: Managing worker nodes In Red Hat OpenShift Service on AWS (ROSA), you change aspects of your worker nodes through machine pools. A machine pool allows users to manage many machines as a single entity. Every ROSA cluster has a default machine pool that is created when the cluster is created. For more information, see the machine pool documentation. 17.9.1. Creating a machine pool You can create a machine pool with either the command line interface (CLI) or the user interface (UI). 17.9.1.1. Creating a machine pool with the CLI Run the following command: rosa create machinepool --cluster=<cluster-name> --name=<machinepool-name> --replicas=<number-nodes> Example input USD rosa create machinepool --cluster=my-rosa-cluster --name=new-mp --replicas=2 Example output I: Machine pool 'new-mp' created successfully on cluster 'my-rosa-cluster' I: To view all machine pools, run 'rosa list machinepools -c my-rosa-cluster' Optional: Add node labels or taints to specific nodes in a new machine pool by running the following command: rosa create machinepool --cluster=<cluster-name> --name=<machinepool-name> --replicas=<number-nodes> --labels=`<key=pair>` Example input USD rosa create machinepool --cluster=my-rosa-cluster --name=db-nodes-mp --replicas=2 --labels='app=db','tier=backend' Example output I: Machine pool 'db-nodes-mp' created successfully on cluster 'my-rosa-cluster' This creates an additional 2 nodes that can be managed as a unit and also assigns them the labels shown. Run the following command to confirm machine pool creation and the assigned labels: rosa list machinepools --cluster=<cluster-name> Example output ID AUTOSCALING REPLICAS INSTANCE TYPE LABELS TAINTS AVAILABILITY ZONES Default No 2 m5.xlarge us-east-1a 17.9.1.2. Creating a machine pool with the UI Log in to the OpenShift Cluster Manager and click your cluster. Click Machine pools . Click Add machine pool . Enter the desired configuration. Tip You can also expand the Edit node labels and taints section to add node labels and taints to the nodes in the machine pool. You will see the new machine pool you created. 17.9.2. Scaling worker nodes Edit a machine pool to scale the number of worker nodes in that specific machine pool. 
You can use either the CLI or the UI to scale worker nodes. 17.9.2.1. Scaling worker nodes using the CLI Run the following command to see the default machine pool that is created with each cluster: rosa list machinepools --cluster=<cluster-name> Example output ID AUTOSCALING REPLICAS INSTANCE TYPE LABELS TAINTS AVAILABILITY ZONES Default No 2 m5.xlarge us-east-1a To scale the default machine pool out to a different number of nodes, run the following command: rosa edit machinepool --cluster=<cluster-name> --replicas=<number-nodes> <machinepool-name> Example input rosa edit machinepool --cluster=my-rosa-cluster --replicas 3 Default Run the following command to confirm that the machine pool has scaled: rosa describe cluster --cluster=<cluster-name> | grep Compute Example input USD rosa describe cluster --cluster=my-rosa-cluster | grep Compute Example output - Compute: 3 (m5.xlarge) 17.9.2.2. Scaling worker nodes using the UI Click the three dots to the right of the machine pool you want to edit. Click Edit . Enter the desired number of nodes, and click Save . Confirm that the cluster has scaled by selecting the cluster, clicking the Overview tab, and scrolling to Compute listing . The compute listing should equal the scaled nodes. For example, 3/3. 17.9.2.3. Adding node labels Use the following command to add node labels: rosa edit machinepool --cluster=<cluster-name> --replicas=<number-nodes> --labels='key=value' <machinepool-name> Example input rosa edit machinepool --cluster=my-rosa-cluster --replicas=2 --labels 'foo=bar','baz=one' new-mp This adds 2 labels to the new machine pool. Important This command replaces all machine pool configurations with the newly defined configuration. If you want to add another label and keep the old labels, you must state both the new and the preexisting labels. Otherwise, the command will replace all preexisting labels with the one you wanted to add. Similarly, if you want to delete a label, run the command and state the ones you want, excluding the one you want to delete. 17.9.3. Mixing node types You can also mix different worker node machine types in the same cluster by using new machine pools. You cannot change the node type of a machine pool once it is created, but you can create a new machine pool with different nodes by adding the --instance-type flag. For example, to change the database nodes to a different node type, run the following command: rosa create machinepool --cluster=<cluster-name> --name=<mp-name> --replicas=<number-nodes> --labels='<key=pair>' --instance-type=<type> Example input rosa create machinepool --cluster=my-rosa-cluster --name=db-nodes-large-mp --replicas=2 --labels='app=db','tier=backend' --instance-type=m5.2xlarge To see all the available instance types, run the following command: rosa list instance-types To make step-by-step changes, use the --interactive flag: rosa create machinepool -c <cluster-name> --interactive Run the following command to list the machine pools and see the new, larger instance type: rosa list machinepools -c <cluster-name> 17.10. Tutorial: Autoscaling The cluster autoscaler adds or removes worker nodes from a cluster based on pod resources. The cluster autoscaler increases the size of the cluster when: Pods fail to schedule on the current nodes due to insufficient resources. Another node is necessary to meet deployment needs. The cluster autoscaler does not increase the cluster resources beyond the limits that you specify. 
The cluster autoscaler decreases the size of the cluster when: Some nodes are consistently not needed for a significant period. For example, when a node has low resource use and all of its important pods can fit on other nodes. 17.10.1. Enabling autoscaling for an existing machine pool using the CLI Note Cluster autoscaling can be enabled at cluster creation and when creating a new machine pool by using the --enable-autoscaling option. Autoscaling is set based on machine pool availability. To find out which machine pools are available for autoscaling, run the following command: USD rosa list machinepools -c <cluster-name> Example output ID AUTOSCALING REPLICAS INSTANCE TYPE LABELS TAINTS AVAILABILITY ZONES Default No 2 m5.xlarge us-east-1a Run the following command to add autoscaling to an available machine pool: USD rosa edit machinepool -c <cluster-name> --enable-autoscaling <machinepool-name> --min-replicas=<num> --max-replicas=<num> Example input USD rosa edit machinepool -c my-rosa-cluster --enable-autoscaling Default --min-replicas=2 --max-replicas=4 The above command creates an autoscaler for the worker nodes that scales between 2 and 4 nodes depending on the resources. 17.10.2. Enabling autoscaling for an existing machine pool using the UI Note Cluster autoscaling can be enabled at cluster creation by checking the Enable autoscaling checkbox when creating machine pools. Go to the Machine pools tab and click the three dots to the right of the machine pool. Click Scale , then Enable autoscaling . Run the following command to confirm that autoscaling was added: USD rosa list machinepools -c <cluster-name> Example output ID AUTOSCALING REPLICAS INSTANCE TYPE LABELS TAINTS AVAILABILITY ZONES Default Yes 2-4 m5.xlarge us-east-1a 17.11. Tutorial: Upgrading your cluster Red Hat OpenShift Service on AWS (ROSA) executes all cluster upgrades as part of the managed service. You do not need to run any commands or make changes to the cluster. You can schedule the upgrades at a convenient time. Ways to schedule a cluster upgrade include: Manually using the command line interface (CLI) : Start a one-time immediate upgrade or schedule a one-time upgrade for a future date and time. Manually using the Red Hat OpenShift Cluster Manager user interface (UI) : Start a one-time immediate upgrade or schedule a one-time upgrade for a future date and time. Automated upgrades : Set an upgrade window for recurring z-stream upgrades whenever a new version is available, without needing to manually schedule them. Minor (y-stream) versions have to be manually scheduled. For more details about cluster upgrades, run the following command: USD rosa upgrade cluster --help 17.11.1. Manually upgrading your cluster using the CLI Check if there is an upgrade available by running the following command: USD rosa list upgrade -c <cluster-name> Example output USD rosa list upgrade -c <cluster-name> VERSION NOTES 4.14.7 recommended 4.14.6 ... In the above example, versions 4.14.7 and 4.14.6 are both available. Schedule the cluster to upgrade within the hour by running the following command: USD rosa upgrade cluster -c <cluster-name> --version <desired-version> Optional: Schedule the cluster to upgrade at a later date and time by running the following command: USD rosa upgrade cluster -c <cluster-name> --version <desired-version> --schedule-date <future-date-for-update> --schedule-time <future-time-for-update> 17.11.2. Manually upgrading your cluster using the UI Log in to the OpenShift Cluster Manager, and select the cluster you want to upgrade. 
Click Settings . If an upgrade is available, click Update . Select the version to which you want to upgrade in the new window. Schedule a time for the upgrade or begin it immediately. 17.11.3. Setting up automatic recurring upgrades Log in to the OpenShift Cluster Manager, and select the cluster you want to upgrade. Click Settings . Under Update Strategy , click Recurring updates . Set the day and time for the upgrade to occur. Under Node draining , select a grace period to allow the nodes to drain before pod eviction. Click Save . 17.12. Tutorial: Deleting your cluster You can delete your Red Hat OpenShift Service on AWS (ROSA) cluster using either the command line interface (CLI) or the user interface (UI). 17.12.1. Deleting a ROSA cluster using the CLI Optional: List your clusters to make sure you are deleting the correct one by running the following command: USD rosa list clusters Delete a cluster by running the following command: USD rosa delete cluster --cluster <cluster-name> Warning This command is non-recoverable. The CLI prompts you to confirm that you want to delete the cluster. Press y and then Enter . The cluster and all its associated infrastructure will be deleted. Note All AWS STS and IAM roles and policies will remain and must be deleted manually once the cluster deletion is complete by following the steps below. The CLI outputs the commands to delete the OpenID Connect (OIDC) provider and Operator IAM roles resources that were created. Wait until the cluster finishes deleting before deleting these resources. Perform a quick status check by running the following command: USD rosa list clusters Once the cluster is deleted, delete the OIDC provider by running the following command: USD rosa delete oidc-provider -c <clusterID> --mode auto --yes Delete the Operator IAM roles by running the following command: USD rosa delete operator-roles -c <clusterID> --mode auto --yes Note This command requires the cluster ID and not the cluster name. Only remove the remaining account roles if they are no longer needed by other clusters in the same account. If you want to create other ROSA clusters in this account, do not perform this step. To delete the account roles, you need to know the prefix used when creating them. The default is "ManagedOpenShift" unless you specified otherwise. Delete the account roles by running the following command: USD rosa delete account-roles --prefix <prefix> --mode auto --yes 17.12.2. Deleting a ROSA cluster using the UI Log in to the OpenShift Cluster Manager , and locate the cluster you want to delete. Click the three dots to the right of the cluster. In the dropdown menu, click Delete cluster . Enter the name of the cluster to confirm deletion, and click Delete . 17.13. Tutorial: Obtaining support Finding the right help when you need it is important. These are some of the resources at your disposal when you need assistance. 17.13.1. Adding support contacts You can add additional email addresses for communications about your cluster. On the Red Hat OpenShift Cluster Manager user interface (UI), click select cluster . Click the Support tab. Click Add notification contact , and enter the additional email addresses. 17.13.2. Contacting Red Hat for support using the UI On the OpenShift Cluster Manager UI, click the Support tab. Click Open support case . 17.13.3. Contacting Red Hat for support using the support page Go to the Red Hat support page . Click Open a new Case . Log in to your Red Hat account. Select the reason for contacting support. 
Select Red Hat OpenShift Service on AWS . Click Continue . Enter a summary of the issue and the details of your request. Upload any files, logs, and screenshots. The more details you provide, the better Red Hat support can help your case. Note Relevant suggestions that might help with your issue will appear at the bottom of this page. Click Continue . Answer the questions in the new fields. Click Continue . Enter the following information about your case: Support level: Premium Severity: Review the Red Hat Support Severity Level Definitions to choose the correct one. Group: If this case is related to other cases, you can select the corresponding group. Language Send notifications: Add any additional email addresses that you want to keep notified of case activity. Red Hat associates: If you are working with anyone from Red Hat and want to keep them in the loop, you can enter their email address here. Alternate Case ID: If you want to attach your own ID to the case, you can enter it here. Click Continue . On the review screen, make sure you select the correct cluster ID that you are contacting support about. Click Submit . You will be contacted within the response time committed to for the indicated severity level.
[ "rosa create account-roles --mode auto --yes", "rosa create cluster --cluster-name <cluster-name> --sts --mode auto --yes", "rosa list clusters", "rosa create account-roles --mode auto --yes", "I: Creating roles using 'arn:aws:iam::000000000000:user/rosa-user' I: Created role 'ManagedOpenShift-ControlPlane-Role' with ARN 'arn:aws:iam::000000000000:role/ManagedOpenShift-ControlPlane-Role' I: Created role 'ManagedOpenShift-Worker-Role' with ARN 'arn:aws:iam::000000000000:role/ManagedOpenShift-Worker-Role' I: Created role 'ManagedOpenShift-Support-Role' with ARN 'arn:aws:iam::000000000000:role/ManagedOpenShift-Support-Role' I: Created role 'ManagedOpenShift-Installer-Role' with ARN 'arn:aws:iam::000000000000:role/ManagedOpenShift-Installer-Role' I: Created policy with ARN 'arn:aws:iam::000000000000:policy/ManagedOpenShift-openshift-machine-api-aws-cloud-credentials' I: Created policy with ARN 'arn:aws:iam::000000000000:policy/ManagedOpenShift-openshift-cloud-credential-operator-cloud-crede' I: Created policy with ARN 'arn:aws:iam::000000000000:policy/ManagedOpenShift-openshift-image-registry-installer-cloud-creden' I: Created policy with ARN 'arn:aws:iam::000000000000:policy/ManagedOpenShift-openshift-ingress-operator-cloud-credentials' I: Created policy with ARN 'arn:aws:iam::000000000000:policy/ManagedOpenShift-openshift-cluster-csi-drivers-ebs-cloud-credent' I: To create a cluster with these roles, run the following command: rosa create cluster --sts", "rosa create cluster --cluster-name <cluster-name> --sts --mode auto --yes", "rosa create cluster --cluster-name my-rosa-cluster --sts --mode auto --yes", "I: Creating cluster 'my-rosa-cluster' I: To view a list of clusters and their status, run 'rosa list clusters' I: Cluster 'my-rosa-cluster' has been created. I: Once the cluster is installed you will need to add an Identity Provider before you can login into the cluster. See 'rosa create idp --help' for more information. I: To determine when your cluster is Ready, run 'rosa describe cluster -c my-rosa-cluster'. I: To watch your cluster installation logs, run 'rosa logs install -c my-rosa-cluster --watch'. 
Name: my-rosa-cluster ID: 1mlhulb3bo0l54ojd0ji000000000000 External ID: OpenShift Version: Channel Group: stable DNS: my-rosa-cluster.ibhp.p1.openshiftapps.com AWS Account: 000000000000 API URL: Console URL: Region: us-west-2 Multi-AZ: false Nodes: - Master: 3 - Infra: 2 - Compute: 2 Network: - Service CIDR: 172.30.0.0/16 - Machine CIDR: 10.0.0.0/16 - Pod CIDR: 10.128.0.0/14 - Host Prefix: /23 STS Role ARN: arn:aws:iam::000000000000:role/ManagedOpenShift-Installer-Role Support Role ARN: arn:aws:iam::000000000000:role/ManagedOpenShift-Support-Role Instance IAM Roles: - Master: arn:aws:iam::000000000000:role/ManagedOpenShift-ControlPlane-Role - Worker: arn:aws:iam::000000000000:role/ManagedOpenShift-Worker-Role Operator IAM Roles: - arn:aws:iam::000000000000:role/my-rosa-cluster-openshift-image-registry-installer-cloud-credentials - arn:aws:iam::000000000000:role/my-rosa-cluster-openshift-ingress-operator-cloud-credentials - arn:aws:iam::000000000000:role/my-rosa-cluster-openshift-cluster-csi-drivers-ebs-cloud-credentials - arn:aws:iam::000000000000:role/my-rosa-cluster-openshift-machine-api-aws-cloud-credentials - arn:aws:iam::000000000000:role/my-rosa-cluster-openshift-cloud-credential-operator-cloud-credential-oper State: waiting (Waiting for OIDC configuration) Private: No Created: Oct 28 2021 20:28:09 UTC Details Page: https://console.redhat.com/openshift/details/s/1wupmiQy45xr1nN000000000000 OIDC Endpoint URL: https://rh-oidc.s3.us-east-1.amazonaws.com/1mlhulb3bo0l54ojd0ji000000000000", "rosa describe cluster --cluster <cluster-name>", "rosa list clusters", "rosa create account-roles --mode manual", "I: All policy files saved to the current directory I: Run the following commands to create the account roles and policies: aws iam create-role --role-name ManagedOpenShift-Worker-Role --assume-role-policy-document file://sts_instance_worker_trust_policy.json --tags Key=rosa_openshift_version,Value=4.8 Key=rosa_role_prefix,Value=ManagedOpenShift Key=rosa_role_type,Value=instance_worker aws iam put-role-policy --role-name ManagedOpenShift-Worker-Role --policy-name ManagedOpenShift-Worker-Role-Policy --policy-document file://sts_instance_worker_permission_policy.json", "ls openshift_cloud_credential_operator_cloud_credential_operator_iam_ro_creds_policy.json sts_instance_controlplane_permission_policy.json openshift_cluster_csi_drivers_ebs_cloud_credentials_policy.json sts_instance_controlplane_trust_policy.json openshift_image_registry_installer_cloud_credentials_policy.json sts_instance_worker_permission_policy.json openshift_ingress_operator_cloud_credentials_policy.json sts_instance_worker_trust_policy.json openshift_machine_api_aws_cloud_credentials_policy.json sts_support_permission_policy.json sts_installer_permission_policy.json sts_support_trust_policy.json sts_installer_trust_policy.json", "cat sts_installer_permission_policy.json { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Action\": [ \"autoscaling:DescribeAutoScalingGroups\", \"ec2:AllocateAddress\", \"ec2:AssociateAddress\", \"ec2:AssociateDhcpOptions\", \"ec2:AssociateRouteTable\", \"ec2:AttachInternetGateway\", \"ec2:AttachNetworkInterface\", \"ec2:AuthorizeSecurityGroupEgress\", \"ec2:AuthorizeSecurityGroupIngress\", [...]", "rosa create cluster --interactive --sts", "Cluster name: my-rosa-cluster OpenShift version: <choose version> External ID (optional): <leave blank> Operator roles prefix: <accept default> Multiple availability zones: No AWS region: <choose region> PrivateLink cluster: No 
Install into an existing VPC: No Enable Customer Managed key: No Compute nodes instance type: m5.xlarge Enable autoscaling: No Compute nodes: 2 Machine CIDR: <accept default> Service CIDR: <accept default> Pod CIDR: <accept default> Host prefix: <accept default> Encrypt etcd data (optional): No Disable Workload monitoring: No", "I: Creating cluster 'my-rosa-cluster' I: To create this cluster again in the future, you can run: rosa create cluster --cluster-name my-rosa-cluster --role-arn arn:aws:iam::000000000000:role/ManagedOpenShift-Installer-Role --support-role-arn arn:aws:iam::000000000000:role/ManagedOpenShift-Support-Role --master-iam-role arn:aws:iam::000000000000:role/ManagedOpenShift-ControlPlane-Role --worker-iam-role arn:aws:iam::000000000000:role/ManagedOpenShift-Worker-Role --operator-roles-prefix my-rosa-cluster --region us-west-2 --version 4.8.13 --compute-nodes 2 --machine-cidr 10.0.0.0/16 --service-cidr 172.30.0.0/16 --pod-cidr 10.128.0.0/14 --host-prefix 23 I: To view a list of clusters and their status, run 'rosa list clusters' I: Cluster 'my-rosa-cluster' has been created. I: Once the cluster is installed you will need to add an Identity Provider before you can login into the cluster. See 'rosa create idp --help' for more information. Name: my-rosa-cluster ID: 1t6i760dbum4mqltqh6o000000000000 External ID: OpenShift Version: Channel Group: stable DNS: my-rosa-cluster.abcd.p1.openshiftapps.com AWS Account: 000000000000 API URL: Console URL: Region: us-west-2 Multi-AZ: false Nodes: - Control plane: 3 - Infra: 2 - Compute: 2 Network: - Service CIDR: 172.30.0.0/16 - Machine CIDR: 10.0.0.0/16 - Pod CIDR: 10.128.0.0/14 - Host Prefix: /23 STS Role ARN: arn:aws:iam::000000000000:role/ManagedOpenShift-Installer-Role Support Role ARN: arn:aws:iam::000000000000:role/ManagedOpenShift-Support-Role Instance IAM Roles: - Control plane: arn:aws:iam::000000000000:role/ManagedOpenShift-ControlPlane-Role - Worker: arn:aws:iam::000000000000:role/ManagedOpenShift-Worker-Role Operator IAM Roles: - arn:aws:iam::000000000000:role/my-rosa-cluster-w7i6-openshift-ingress-operator-cloud-credentials - arn:aws:iam::000000000000:role/my-rosa-cluster-w7i6-openshift-cluster-csi-drivers-ebs-cloud-credentials - arn:aws:iam::000000000000:role/my-rosa-cluster-w7i6-openshift-cloud-network-config-controller-cloud-cre - arn:aws:iam::000000000000:role/my-rosa-cluster-openshift-machine-api-aws-cloud-credentials - arn:aws:iam::000000000000:role/my-rosa-cluster-openshift-cloud-credential-operator-cloud-credentia - arn:aws:iam::000000000000:role/my-rosa-cluster-openshift-image-registry-installer-cloud-credential State: waiting (Waiting for OIDC configuration) Private: No Created: Jul 1 2022 22:13:50 UTC Details Page: https://console.redhat.com/openshift/details/s/2BMQm8xz8Hq5yEN000000000000 OIDC Endpoint URL: https://rh-oidc.s3.us-east-1.amazonaws.com/1t6i760dbum4mqltqh6o000000000000 I: Run the following commands to continue the cluster creation: rosa create operator-roles --cluster my-rosa-cluster rosa create oidc-provider --cluster my-rosa-cluster I: To determine when your cluster is Ready, run 'rosa describe cluster -c my-rosa-cluster'. 
I: To watch your cluster installation logs, run 'rosa logs install -c my-rosa-cluster --watch'.", "rosa create operator-roles --mode manual --cluster <cluster-name>", "I: Run the following commands to create the operator roles: aws iam create-role --role-name my-rosa-cluster-openshift-image-registry-installer-cloud-credentials --assume-role-policy-document file://operator_image_registry_installer_cloud_credentials_policy.json --tags Key=rosa_cluster_id,Value=1mkesci269png3tck000000000000000 Key=rosa_openshift_version,Value=4.8 Key=rosa_role_prefix,Value= Key=operator_namespace,Value=openshift-image-registry Key=operator_name,Value=installer-cloud-credentials aws iam attach-role-policy --role-name my-rosa-cluster-openshift-image-registry-installer-cloud-credentials --policy-arn arn:aws:iam::000000000000:policy/ManagedOpenShift-openshift-image-registry-installer-cloud-creden [...]", "rosa create oidc-provider --mode manual --cluster <cluster-name>", "I: Run the following commands to create the OIDC provider: aws iam create-open-id-connect-provider --url https://rh-oidc.s3.us-east-1.amazonaws.com/1mkesci269png3tckknhh0rfs2da5fj9 --client-id-list openshift sts.amazonaws.com --thumbprint-list a9d53002e97e00e043244f3d170d000000000000 aws iam create-open-id-connect-provider --url https://rh-oidc.s3.us-east-1.amazonaws.com/1mkesci269png3tckknhh0rfs2da5fj9 --client-id-list openshift sts.amazonaws.com --thumbprint-list a9d53002e97e00e043244f3d170d000000000000", "rosa describe cluster --cluster <cluster-name>", "rosa list clusters", "rosa describe cluster -c <cluster-name> | grep Console", "rosa create account-roles --mode auto --yes", "rosa create ocm-role --mode auto --admin --yes", "rosa create user-role --mode auto --yes", "rosa create account-roles --mode auto --yes", "I: Creating roles using 'arn:aws:iam::000000000000:user/rosa-user' I: Created role 'ManagedOpenShift-ControlPlane-Role' with ARN 'arn:aws:iam::000000000000:role/ManagedOpenShift-ControlPlane-Role' I: Created role 'ManagedOpenShift-Worker-Role' with ARN 'arn:aws:iam::000000000000:role/ManagedOpenShift-Worker-Role' I: Created role 'ManagedOpenShift-Support-Role' with ARN 'arn:aws:iam::000000000000:role/ManagedOpenShift-Support-Role' I: Created role 'ManagedOpenShift-Installer-Role' with ARN 'arn:aws:iam::000000000000:role/ManagedOpenShift-Installer-Role' I: Created policy with ARN 'arn:aws:iam::000000000000:policy/ManagedOpenShift-openshift-machine-api-aws-cloud-credentials' I: Created policy with ARN 'arn:aws:iam::000000000000:policy/ManagedOpenShift-openshift-cloud-credential-operator-cloud-crede' I: Created policy with ARN 'arn:aws:iam::000000000000:policy/ManagedOpenShift-openshift-image-registry-installer-cloud-creden' I: Created policy with ARN 'arn:aws:iam::000000000000:policy/ManagedOpenShift-openshift-ingress-operator-cloud-credentials' I: Created policy with ARN 'arn:aws:iam::000000000000:policy/ManagedOpenShift-openshift-cluster-csi-drivers-ebs-cloud-credent' I: To create a cluster with these roles, run the following command: rosa create cluster --sts", "rosa list ocm-role", "rosa create ocm-role --mode auto --admin --yes", "I: Creating ocm role I: Creating role using 'arn:aws:iam::000000000000:user/rosa-user' I: Created role 'ManagedOpenShift-OCM-Role-12561000' with ARN 'arn:aws:iam::000000000000:role/ManagedOpenShift-OCM-Role-12561000' I: Linking OCM role I: Successfully linked role-arn 'arn:aws:iam::000000000000:role/ManagedOpenShift-OCM-Role-12561000' with organization account '1MpZfntsZeUdjWHg7XRgP000000'", "rosa 
create ocm-role --mode manual --admin --yes", "rosa create ocm-role --mode auto --yes", "rosa list user-role", "rosa create user-role --mode auto --yes", "I: Creating User role I: Creating ocm user role using 'arn:aws:iam::000000000000:user/rosa-user' I: Created role 'ManagedOpenShift-User-rosa-user-Role' with ARN 'arn:aws:iam::000000000000:role/ManagedOpenShift-User-rosa-user-Role' I: Linking User role I: Successfully linked role ARN 'arn:aws:iam::000000000000:role/ManagedOpenShift-User-rosa-user-Role' with account '1rbOQez0z5j1YolInhcXY000000'", "rosa create account-roles --mode auto", "rosa create operator-roles --mode auto --cluster <cluster-name> --yes", "I: Creating roles using 'arn:aws:iam::000000000000:user/rosauser' I: Created role 'rosacluster-b736-openshift-ingress-operator-cloud-credentials' with ARN 'arn:aws:iam::000000000000:role/rosacluster-b736-openshift-ingress-operator-cloud-credentials' I: Created role 'rosacluster-b736-openshift-cluster-csi-drivers-ebs-cloud-credent' with ARN 'arn:aws:iam::000000000000:role/rosacluster-b736-openshift-cluster-csi-drivers-ebs-cloud-credent' I: Created role 'rosacluster-b736-openshift-cloud-network-config-controller-cloud' with ARN 'arn:aws:iam::000000000000:role/rosacluster-b736-openshift-cloud-network-config-controller-cloud' I: Created role 'rosacluster-b736-openshift-machine-api-aws-cloud-credentials' with ARN 'arn:aws:iam::000000000000:role/rosacluster-b736-openshift-machine-api-aws-cloud-credentials' I: Created role 'rosacluster-b736-openshift-cloud-credential-operator-cloud-crede' with ARN 'arn:aws:iam::000000000000:role/rosacluster-b736-openshift-cloud-credential-operator-cloud-crede' I: Created role 'rosacluster-b736-openshift-image-registry-installer-cloud-creden' with ARN 'arn:aws:iam::000000000000:role/rosacluster-b736-openshift-image-registry-installer-cloud-creden'", "rosa create oidc-provider --mode auto --cluster <cluster-name> --yes", "I: Creating OIDC provider using 'arn:aws:iam::000000000000:user/rosauser' I: Created OIDC provider with ARN 'arn:aws:iam::000000000000:oidc-provider/rh-oidc.s3.us-east-1.amazonaws.com/1tt4kvrr2kha2rgs8gjfvf0000000000'", "rosa list regions --hosted-cp", "#!/bin/bash set -e ########## This script will create the network requirements for a ROSA cluster. This will be a public cluster. 
This creates: - VPC - Public and private subnets - Internet Gateway - Relevant route tables - NAT Gateway # This will automatically use the region configured for the aws cli # ########## VPC_CIDR=10.0.0.0/16 PUBLIC_CIDR_SUBNET=10.0.1.0/24 PRIVATE_CIDR_SUBNET=10.0.0.0/24 Create VPC echo -n \"Creating VPC...\" VPC_ID=USD(aws ec2 create-vpc --cidr-block USDVPC_CIDR --query Vpc.VpcId --output text) Create tag name aws ec2 create-tags --resources USDVPC_ID --tags Key=Name,Value=USDCLUSTER_NAME Enable dns hostname aws ec2 modify-vpc-attribute --vpc-id USDVPC_ID --enable-dns-hostnames echo \"done.\" Create Public Subnet echo -n \"Creating public subnet...\" PUBLIC_SUBNET_ID=USD(aws ec2 create-subnet --vpc-id USDVPC_ID --cidr-block USDPUBLIC_CIDR_SUBNET --query Subnet.SubnetId --output text) aws ec2 create-tags --resources USDPUBLIC_SUBNET_ID --tags Key=Name,Value=USDCLUSTER_NAME-public echo \"done.\" Create private subnet echo -n \"Creating private subnet...\" PRIVATE_SUBNET_ID=USD(aws ec2 create-subnet --vpc-id USDVPC_ID --cidr-block USDPRIVATE_CIDR_SUBNET --query Subnet.SubnetId --output text) aws ec2 create-tags --resources USDPRIVATE_SUBNET_ID --tags Key=Name,Value=USDCLUSTER_NAME-private echo \"done.\" Create an internet gateway for outbound traffic and attach it to the VPC. echo -n \"Creating internet gateway...\" IGW_ID=USD(aws ec2 create-internet-gateway --query InternetGateway.InternetGatewayId --output text) echo \"done.\" aws ec2 create-tags --resources USDIGW_ID --tags Key=Name,Value=USDCLUSTER_NAME aws ec2 attach-internet-gateway --vpc-id USDVPC_ID --internet-gateway-id USDIGW_ID > /dev/null 2>&1 echo \"Attached IGW to VPC.\" Create a route table for outbound traffic and associate it to the public subnet. echo -n \"Creating route table for public subnet...\" PUBLIC_ROUTE_TABLE_ID=USD(aws ec2 create-route-table --vpc-id USDVPC_ID --query RouteTable.RouteTableId --output text) aws ec2 create-tags --resources USDPUBLIC_ROUTE_TABLE_ID --tags Key=Name,Value=USDCLUSTER_NAME echo \"done.\" aws ec2 create-route --route-table-id USDPUBLIC_ROUTE_TABLE_ID --destination-cidr-block 0.0.0.0/0 --gateway-id USDIGW_ID > /dev/null 2>&1 echo \"Created default public route.\" aws ec2 associate-route-table --subnet-id USDPUBLIC_SUBNET_ID --route-table-id USDPUBLIC_ROUTE_TABLE_ID > /dev/null 2>&1 echo \"Public route table associated\" Create a NAT gateway in the public subnet for outgoing traffic from the private network. echo -n \"Creating NAT Gateway...\" NAT_IP_ADDRESS=USD(aws ec2 allocate-address --domain vpc --query AllocationId --output text) NAT_GATEWAY_ID=USD(aws ec2 create-nat-gateway --subnet-id USDPUBLIC_SUBNET_ID --allocation-id USDNAT_IP_ADDRESS --query NatGateway.NatGatewayId --output text) aws ec2 create-tags --resources USDNAT_IP_ADDRESS --resources USDNAT_GATEWAY_ID --tags Key=Name,Value=USDCLUSTER_NAME sleep 10 echo \"done.\" Create a route table for the private subnet to the NAT gateway. 
echo -n \"Creating a route table for the private subnet to the NAT gateway...\" PRIVATE_ROUTE_TABLE_ID=USD(aws ec2 create-route-table --vpc-id USDVPC_ID --query RouteTable.RouteTableId --output text) aws ec2 create-tags --resources USDPRIVATE_ROUTE_TABLE_ID USDNAT_IP_ADDRESS --tags Key=Name,Value=USDCLUSTER_NAME-private aws ec2 create-route --route-table-id USDPRIVATE_ROUTE_TABLE_ID --destination-cidr-block 0.0.0.0/0 --gateway-id USDNAT_GATEWAY_ID > /dev/null 2>&1 aws ec2 associate-route-table --subnet-id USDPRIVATE_SUBNET_ID --route-table-id USDPRIVATE_ROUTE_TABLE_ID > /dev/null 2>&1 echo \"done.\" echo \"***********VARIABLE VALUES*********\" echo \"VPC_ID=\"USDVPC_ID echo \"PUBLIC_SUBNET_ID=\"USDPUBLIC_SUBNET_ID echo \"PRIVATE_SUBNET_ID=\"USDPRIVATE_SUBNET_ID echo \"PUBLIC_ROUTE_TABLE_ID=\"USDPUBLIC_ROUTE_TABLE_ID echo \"PRIVATE_ROUTE_TABLE_ID=\"USDPRIVATE_ROUTE_TABLE_ID echo \"NAT_GATEWAY_ID=\"USDNAT_GATEWAY_ID echo \"IGW_ID=\"USDIGW_ID echo \"NAT_IP_ADDRESS=\"USDNAT_IP_ADDRESS echo \"Setup complete.\" echo \"\" echo \"To make the cluster create commands easier, please run the following commands to set the environment variables:\" echo \"export PUBLIC_SUBNET_ID=USDPUBLIC_SUBNET_ID\" echo \"export PRIVATE_SUBNET_ID=USDPRIVATE_SUBNET_ID\"", "export PUBLIC_SUBNET_ID=USDPUBLIC_SUBNET_ID export PRIVATE_SUBNET_ID=USDPRIVATE_SUBNET_ID", "echo \"Public Subnet: USDPUBLIC_SUBNET_ID\"; echo \"Private Subnet: USDPRIVATE_SUBNET_ID\"", "Public Subnet: subnet-0faeeeb0000000000 Private Subnet: subnet-011fe340000000000", "export OIDC_ID=USD(rosa create oidc-config --mode auto --managed --yes -o json | jq -r '.id')", "export CLUSTER_NAME=<cluster_name> export REGION=<VPC_region>", "rosa create account-roles --mode auto --yes", "rosa create cluster --cluster-name USDCLUSTER_NAME --subnet-ids USD{PUBLIC_SUBNET_ID},USD{PRIVATE_SUBNET_ID} --hosted-cp --region USDREGION --oidc-config-id USDOIDC_ID --sts --mode auto --yes", "rosa describe cluster --cluster USDCLUSTER_NAME", "rosa list clusters", "rosa logs install --cluster USDCLUSTER_NAME --watch", "rosa create admin --cluster=<cluster-name>", "W: It is recommended to add an identity provider to login to this cluster. See 'rosa create idp --help' for more information. I: Admin account has been added to cluster 'my-rosa-cluster'. It may take up to a minute for the account to become active. I: To login, run the following command: login https://api.my-rosa-cluster.abcd.p1.openshiftapps.com:6443 --username cluster-admin --password FWGYL-2mkJI-00000-00000", "oc login https://api.my-rosa-cluster.abcd.p1.openshiftapps.com:6443 > --username cluster-admin > --password FWGYL-2mkJI-00000-00000", "Login successful. You have access to 79 projects, the list has been suppressed. 
You can list all projects with ' projects' Using project \"default\".", "oc whoami", "cluster-admin", "get all -n openshift-apiserver", "rosa create idp --help", "rosa create idp --cluster=<cluster name> --interactive", "Type of identity provider: github Identity Provider Name: <IDP-name> Restrict to members of: organizations GitHub organizations: <organization-account-name>", "rosa grant user cluster-admin --user <idp_user_name> --cluster=<cluster-name>", "rosa grant user dedicated-admin --user <idp_user_name> --cluster=<cluster-name>", "rosa list users --cluster=<cluster-name>", "rosa list users --cluster=my-rosa-cluster ID GROUPS <idp_user_name> cluster-admins", "oc get all -n openshift-apiserver", "oc login --token=sha256~GBAfS4JQ0t1UTKYHbWAK6OUWGUkdMGz000000000000 --server=https://api.my-rosa-cluster.abcd.p1.openshiftapps.com:6443", "Logged into \"https://api.my-rosa-cluster.abcd.p1.openshiftapps.com:6443\" as \"rosa-user\" using the token provided. You have access to 79 projects, the list has been suppressed. You can list all projects with ' projects' Using project \"default\".", "oc whoami", "rosa-user", "rosa describe cluster -c <cluster-name> | grep Console", "rosa create machinepool --cluster=<cluster-name> --name=<machinepool-name> --replicas=<number-nodes>", "rosa create machinepool --cluster=my-rosa-cluster --name=new-mp --replicas=2", "I: Machine pool 'new-mp' created successfully on cluster 'my-rosa-cluster' I: To view all machine pools, run 'rosa list machinepools -c my-rosa-cluster'", "rosa create machinepool --cluster=<cluster-name> --name=<machinepool-name> --replicas=<number-nodes> --labels=`<key=pair>`", "rosa create machinepool --cluster=my-rosa-cluster --name=db-nodes-mp --replicas=2 --labels='app=db','tier=backend'", "I: Machine pool 'db-nodes-mp' created successfully on cluster 'my-rosa-cluster'", "rosa list machinepools --cluster=<cluster-name>", "ID AUTOSCALING REPLICAS INSTANCE TYPE LABELS TAINTS AVAILABILITY ZONES Default No 2 m5.xlarge us-east-1a", "rosa list machinepools --cluster=<cluster-name>", "ID AUTOSCALING REPLICAS INSTANCE TYPE LABELS TAINTS AVAILABILITY ZONES Default No 2 m5.xlarge us-east-1a", "rosa edit machinepool --cluster=<cluster-name> --replicas=<number-nodes> <machinepool-name>", "rosa edit machinepool --cluster=my-rosa-cluster --replicas 3 Default", "rosa describe cluster --cluster=<cluster-name> | grep Compute", "rosa describe cluster --cluster=my-rosa-cluster | grep Compute", "- Compute: 3 (m5.xlarge)", "rosa edit machinepool --cluster=<cluster-name> --replicas=<number-nodes> --labels='key=value' <machinepool-name>", "rosa edit machinepool --cluster=my-rosa-cluster --replicas=2 --labels 'foo=bar','baz=one' new-mp", "rosa create machinepool --cluster=<cluster-name> --name=<mp-name> --replicas=<number-nodes> --labels='<key=pair>' --instance-type=<type>", "rosa create machinepool --cluster=my-rosa-cluster --name=db-nodes-large-mp --replicas=2 --labels='app=db','tier=backend' --instance-type=m5.2xlarge", "rosa list instance-types", "rosa create machinepool -c <cluster-name> --interactive", "rosa list machinepools -c <cluster-name>", "rosa list machinepools -c <cluster-name>", "ID AUTOSCALING REPLICAS INSTANCE TYPE LABELS TAINTS AVAILABILITY ZONES Default No 2 m5.xlarge us-east-1a", "rosa edit machinepool -c <cluster-name> --enable-autoscaling <machinepool-name> --min-replicas=<num> --max-replicas=<num>", "rosa edit machinepool -c my-rosa-cluster --enable-autoscaling Default --min-replicas=2 --max-replicas=4", "rosa list machinepools -c 
<cluster-name>", "ID AUTOSCALING REPLICAS INSTANCE TYPE LABELS TAINTS AVAILABILITY ZONES Default Yes 2-4 m5.xlarge us-east-1a", "rosa upgrade cluster --help", "rosa list upgrade -c <cluster-name>", "rosa list upgrade -c <cluster-name> VERSION NOTES 4.14.7 recommended 4.14.6", "rosa upgrade cluster -c <cluster-name> --version <desired-version>", "rosa upgrade cluster -c <cluster-name> --version <desired-version> --schedule-date <future-date-for-update> --schedule-time <future-time-for-update>", "rosa list clusters", "rosa delete cluster --cluster <cluster-name>", "rosa list clusters", "rosa delete oidc-provider -c <clusterID> --mode auto --yes", "rosa delete operator-roles -c <clusterID> --mode auto --yes", "rosa delete account-roles --prefix <prefix> --mode auto --yes" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/tutorials/getting-started-with-rosa
3.7. Hardening TLS Configuration
3.7. Hardening TLS Configuration TLS ( Transport Layer Security ) is a cryptographic protocol used to secure network communications. When hardening system security settings by configuring preferred key-exchange protocols , authentication methods , and encryption algorithms , it is necessary to bear in mind that the broader the range of supported clients, the lower the resulting security. Conversely, strict security settings lead to a limited compatibility with clients, which can result in some users being locked out of the system. Be sure to target the strictest available configuration and only relax it when it is required for compatibility reasons. Note that the default settings provided by libraries included in Red Hat Enterprise Linux are secure enough for most deployments. The TLS implementations use secure algorithms where possible while not preventing connections from or to legacy clients or servers. Apply the hardened settings described in this section in environments with strict security requirements where legacy clients or servers that do not support secure algorithms or protocols are not expected or allowed to connect. 3.7.1. Choosing Algorithms to Enable There are several components that need to be selected and configured. Each of the following directly influences the robustness of the resulting configuration (and, consequently, the level of support in clients) or the computational demands that the solution has on the system. Protocol Versions The latest version of TLS provides the best security mechanism. Unless you have a compelling reason to include support for older versions of TLS (or even SSL ), allow your systems to negotiate connections using only the latest version of TLS . Do not allow negotiation using SSL version 2 or 3. Both of those versions have serious security vulnerabilities. Only allow negotiation using TLS version 1.0 or higher. The current version of TLS , 1.2, should always be preferred. Note Please note that currently, the security of all versions of TLS depends on the use of TLS extensions, specific ciphers (see below), and other workarounds. All TLS connection peers need to implement secure renegotiation indication ( RFC 5746 ), must not support compression, and must implement mitigating measures for timing attacks against CBC -mode ciphers (the Lucky Thirteen attack). TLS v1.0 clients need to additionally implement record splitting (a workaround against the BEAST attack). TLS v1.2 supports Authenticated Encryption with Associated Data ( AEAD ) mode ciphers like AES-GCM , AES-CCM , or Camellia-GCM , which have no known issues. All the mentioned mitigations are implemented in cryptographic libraries included in Red Hat Enterprise Linux. See Table 3.1, "Protocol Versions" for a quick overview of protocol versions and recommended usage. Table 3.1. Protocol Versions Protocol Version Usage Recommendation SSL v2 Do not use. Has serious security vulnerabilities. SSL v3 Do not use. Has serious security vulnerabilities. TLS v1.0 Use for interoperability purposes where needed. Has known issues that cannot be mitigated in a way that guarantees interoperability, and thus mitigations are not enabled by default. Does not support modern cipher suites. TLS v1.1 Use for interoperability purposes where needed. Has no known issues but relies on protocol fixes that are included in all the TLS implementations in Red Hat Enterprise Linux. Does not support modern cipher suites. TLS v1.2 Recommended version. Supports the modern AEAD cipher suites. 
Some components in Red Hat Enterprise Linux are configured to use TLS v1.0 even though they provide support for TLS v1.1 or even v1.2 . This is motivated by an attempt to achieve the highest level of interoperability with external services that may not support the latest versions of TLS . Depending on your interoperability requirements, enable the highest available version of TLS . Important SSL v3 is not recommended for use. However, if you absolutely must leave SSL v3 enabled despite the fact that it is considered insecure and unsuitable for general use, see Section 3.6, "Using stunnel" for instructions on how to use stunnel to securely encrypt communications even when using services that do not support encryption or are only capable of using obsolete and insecure modes of encryption. Cipher Suites While not immediately insecure, cipher suites that offer less than 128 bits of security should not be considered, because of their short useful life. Algorithms that use 128 bits of security or more can be expected to be unbreakable for at least several years, and are thus strongly recommended. Note that while 3DES ciphers advertise the use of 168 bits, they actually offer 112 bits of security. Always give preference to cipher suites that support (perfect) forward secrecy ( PFS ), which ensures the confidentiality of encrypted data even in case the server key is compromised. This rules out the fast RSA key exchange, but allows for the use of ECDHE and DHE . Of the two, ECDHE is the faster and therefore the preferred choice. Note also that when using the ECDHE key exchange with ECDSA certificates, the transaction is even faster than a pure RSA key exchange. To provide support for legacy clients, you can install two pairs of certificates and keys on a server: one with ECDSA keys (for new clients) and one with RSA keys (for legacy ones). Public Key Length When using RSA keys, always prefer key lengths of at least 3072 bits signed by at least SHA-256, which is sufficiently large for true 128 bits of security. Warning Keep in mind that the security of your system is only as strong as the weakest link in the chain. For example, a strong cipher alone does not guarantee good security. The keys and the certificates are just as important, as well as the hash functions and keys used by the Certification Authority ( CA ) to sign your keys.
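To verify in practice which protocol versions and cipher suites a given configuration actually allows, the openssl command-line tool is convenient. The following is a minimal sketch, not a recommended policy: the host name and the cipher string are illustrative assumptions only, and the -tls1_2 option requires an OpenSSL build with TLS v1.2 support.
openssl s_client -connect example.com:443 -tls1_2
openssl ciphers -v 'HIGH:!aNULL:!eNULL:!3DES:!RC4'
The first command attempts a handshake restricted to TLS v1.2 against the server, so you can see which protocol and cipher suite are negotiated; the second expands a cipher string so you can review exactly which suites it would enable before applying it to a service configuration.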
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/sec-Hardening_TLS_Configuration
Monitoring high availability services
Monitoring high availability services Red Hat OpenStack Services on OpenShift 18.0 Monitoring high availability services in a Red Hat OpenStack Services on OpenShift environment OpenStack Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/monitoring_high_availability_services/index
6.4. Backing Up and Restoring the Cluster Database
6.4. Backing Up and Restoring the Cluster Database The Cluster Configuration Tool automatically retains backup copies of the three most recently used configuration files (besides the currently used configuration file). Retaining the backup copies is useful if the cluster does not function correctly because of misconfiguration and you need to return to a working configuration. Each time you save a configuration file, the Cluster Configuration Tool saves backup copies of the three most recently used configuration files as /etc/cluster/cluster.conf.bak.1 , /etc/cluster/cluster.conf.bak.2 , and /etc/cluster/cluster.conf.bak.3 . The backup file /etc/cluster/cluster.conf.bak.1 is the newest backup, /etc/cluster/cluster.conf.bak.2 is the second newest backup, and /etc/cluster/cluster.conf.bak.3 is the third newest backup. If a cluster member becomes inoperable because of misconfiguration, restore the configuration file according to the following steps: At the Cluster Configuration Tool tab of the Red Hat Cluster Suite management GUI, click File => Open . Clicking File => Open causes the system-config-cluster dialog box to be displayed. At the system-config-cluster dialog box, select a backup file (for example, /etc/cluster/cluster.conf.bak.1 ). Verify the file selection in the Selection box and click OK . Click File => Save As . Clicking File => Save As causes the system-config-cluster dialog box to be displayed. At the system-config-cluster dialog box, select /etc/cluster/cluster.conf and click OK . (Verify the file selection in the Selection box.) Clicking OK causes an Information dialog box to be displayed. At that dialog box, click OK . Propagate the updated configuration file throughout the cluster by clicking Send to Cluster . Note The Cluster Configuration Tool does not display the Send to Cluster button if the cluster is new and has not been started yet, or if the node from which you are running the Cluster Configuration Tool is not a member of the cluster. If the Send to Cluster button is not displayed, you can still use the Cluster Configuration Tool ; however, you cannot propagate the configuration. You can still save the configuration file. For information about using the Cluster Configuration Tool for a new cluster configuration, refer to Chapter 5, Configuring Red Hat Cluster With system-config-cluster . Clicking Send to Cluster causes a Warning dialog box to be displayed. Click Yes to propagate the configuration. Click the Cluster Management tab and verify that the changes have been propagated to the cluster members.
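If you want to confirm which backups exist before opening one in the GUI, you can inspect them from a shell on the cluster node. This is a minimal sketch, assuming root access and the default file locations described above; copying a backup only restores the file locally, so you still need the documented Send to Cluster step to propagate the configuration:
ls -l /etc/cluster/cluster.conf /etc/cluster/cluster.conf.bak.*
cp /etc/cluster/cluster.conf.bak.1 /etc/cluster/cluster.conf
The ls command shows the current configuration next to the retained backups and their timestamps; the cp command is only an illustration of falling back to the newest backup and is not a substitute for the GUI procedure.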
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_administration/s1-admin-backup-ca
function::cpuid
function::cpuid Name function::cpuid - Returns the current cpu number Synopsis Arguments None Description This function returns the current cpu number. Deprecated in SystemTap 1.4 and removed in SystemTap 1.5.
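A minimal usage sketch for SystemTap releases older than 1.5, where this function is still available (the probe point and output format are illustrative; on later releases the cpu() context function is the usual way to obtain the current CPU number):
stap -e 'probe begin { printf("current cpu: %d\n", cpuid()); exit() }'
This runs a one-shot script that prints the CPU on which the begin probe handler happens to run and then exits.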
[ "cpuid:long()" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-cpuid
33.4. Resolving Problems in System Recovery Modes
33.4. Resolving Problems in System Recovery Modes This section provides several procedures that explain how to resolve some of the most common problems that need to be addressed in some of the system recovery modes. The following procedure shows how to reset a root password: Procedure 33.4. Resetting a Root Password Boot to single-user mode as described in Procedure 33.2, "Booting into Single-User Mode" . Run the passwd command from the maintenance shell command line. One of the most common causes for an unbootable system is overwriting of the Master Boot Record (MBR) that originally contained the GRUB boot loader. If the boot loader is overwritten, you cannot boot Red Hat Enterprise Linux unless you reconfigure the boot loader in rescue mode . To reinstall GRUB on the MBR of your hard drive, proceed with the following procedure: Procedure 33.5. Reinstalling the GRUB Boot Loader Boot to rescue mode as described in Procedure 33.1, "Booting into Rescue Mode" . Ensure that you mount the system's root partition in read-write mode. Execute the following command to change the root directory to the mounted root partition: Run the following command to reinstall the GRUB boot loader: where boot_part is your boot partition (typically, /dev/sda ). Review the /boot/grub/grub.conf file, as additional entries may be needed for GRUB to control additional operating systems. Reboot the system. Another common problem that would render your system unbootable is a change of your root partition number. This can usually happen when resizing a partition or creating a new partition after installation. If the partition number of your root partition changes, the GRUB boot loader might not be able to find it to mount the partition. To fix this problem, boot into rescue mode and modify the /boot/grub/grub.conf file. A malfunctioning or missing driver can prevent a system from booting normally. You can use the RPM package manager to remove malfunctioning drivers or to add updated or missing drivers in rescue mode . If you cannot remove a malfunctioning driver for some reason, you can instead blacklist the driver so that it does not load at boot time. Note When you install a driver from a driver disc, the driver disc updates all initramfs images on the system to use this driver. If a problem with a driver prevents a system from booting, you cannot rely on booting the system from another initramfs image. To remove a malfunctioning driver that prevents the system from booting, follow this procedure: Procedure 33.6. Remove a Driver in Rescue Mode Boot to rescue mode as described in Procedure 33.1, "Booting into Rescue Mode" . Ensure that you mount the system's root partition in read-write mode. Change the root directory to /mnt/sysimage/ : Run the following command to remove the driver package: Exit the chroot environment: Reboot the system. To install a missing driver that prevents the system from booting, follow this procedure: Procedure 33.7. Installing a Driver in Rescue Mode Boot to rescue mode as described in Procedure 33.1, "Booting into Rescue Mode" . Ensure that you mount the system's root partition in read-write mode. Mount a medium with an RPM package that contains the driver and copy the package to a location of your choice under the /mnt/sysimage/ directory, for example: /mnt/sysimage/root/drivers/ . Change the root directory to /mnt/sysimage/ : Run the following command to install the driver package: Note that /root/drivers/ in this chroot environment is /mnt/sysimage/root/drivers/ in the original rescue environment.
Exit the chroot environment: Reboot the system. To blacklist a driver that prevents the system from booting and to ensure that this driver cannot be loaded after the root device is mounted, follow this procedure: Procedure 33.8. Blacklisting a Driver in Rescue Mode Boot to rescue mode with the command linux rescue rdblacklist= driver_name , where driver_name is the driver that you need to blacklist. Follow the instructions in Procedure 33.1, "Booting into Rescue Mode" and ensure that you mount the system's root partition in read-write mode. Open the /boot/grub/grub.conf file in the vi editor: Identify the default kernel used to boot the system. Each kernel is specified in the grub.conf file with a group of lines that begins with title . The default kernel is specified by the default parameter near the start of the file. A value of 0 refers to the kernel described in the first group of lines, a value of 1 refers to the kernel described in the second group, and higher values refer to subsequent kernels in turn. Edit the kernel line of the group to include the option rdblacklist= driver_name , where driver_name is the driver that you need to blacklist. For example: Save the file and exit the vi editor by typing: :wq Run the following command to create a new file /etc/modprobe.d/ driver_name .conf that will ensure blacklisting of the driver after the root partition is mounted: Reboot the system.
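After rebooting, it can be useful to confirm that the blacklist actually took effect. The following is a minimal sketch, assuming a normally booted system; driver_name is the same placeholder used in the procedures above:
cat /proc/cmdline
lsmod | grep driver_name
The first command shows whether the rdblacklist= option was passed to the kernel at boot; the second prints nothing if the module is not loaded.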
[ "sh-3.00b# chroot /mnt/sysimage", "sh-3.00b# /sbin/grub-install boot_part", "sh-3.00b# chroot /mnt/sysimage", "sh-3.00b# rpm -e driver_name", "sh-3.00b# exit", "sh-3.00b# chroot /mnt/sysimage", "sh-3.00b# rpm -ihv /root/drivers/ package_name", "sh-3.00b# exit", "sh-3.00b# vi /boot/grub/grub.conf", "kernel /vmlinuz-2.6.32-71.18-2.el6.i686 ro root=/dev/sda1 rhgb quiet rdblacklist= driver_name", ":wq", "echo \"install driver_name \" > /mnt/sysimage/etc/modprobe.d/ driver_name .conf" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sec-resolving_problems_in_system_recovery_modes
Chapter 8. Message delivery
Chapter 8. Message delivery 8.1. Sending messages To send a message, create a client, a connection, and a sender. Example: Sending messages Connection connection = client.connect(serverHost, serverPort, options); SenderOptions senderOptions = new SenderOptions(); Sender sender = connection.openSender(address, senderOptions); Message<String> message = Message.create("Hello World!"); Tracker tracker = sender.send(message); For more information, see the Send.java example . 8.2. Tracking sent messages When the message is sent, a Tracker is returned, which can be used to track the message or settle it if you are not using auto settlement. Example: Waiting for the broker to settle the message tracker.awaitSettlement(); Example: Settling the message tracker.settle(); 8.3. Receiving messages To receive a message, create a client, a connection, and a receiver. Example: Receiving messages Connection connection = client.connect(serverHost, serverPort, options); Receiver receiver = connection.openReceiver(address); Delivery delivery = receiver.receive(); Message<Object> received = delivery.message(); Accepting the delivery, for example with delivery.accept(), tells the remote peer that the message was received and processed. For more information, see the Receive.java example . 8.4. Acknowledging received messages The Delivery object can be used to accept, reject, or modify the delivery. Example: Acknowledging received messages delivery.accept() Example: Rejecting received messages delivery.reject() Example: Releasing received messages delivery.release()
[ "Connection connection = client.connect(serverHost, serverPort, options); SenderOptions senderOptions = new SenderOptions(); Sender sender = connection.openSender(address, senderOptions); Message<String> message = Message.create(\"Hello World!\"); Tracker tracker = sender.send(message);", "tracker.awaitSettlement();", "tracker.settle();", "Connection connection = client.connect(serverHost, serverPort, options); Receiver receiver = connection.openReceiver(address); Delivery delivery = receiver.receive(); Message<Object> received = delivery.message();", "delivery.accept()", "delivery.reject()", "delivery.release()" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_qpid_protonj2/1.0/html/using_qpid_protonj2/message_delivery
Chapter 1. Red Hat build of Apache Camel for Spring Boot 4.4 release notes
Chapter 1. Red Hat build of Apache Camel for Spring Boot 4.4 release notes 1.1. Features in Red Hat build of Apache Camel for Spring Boot Red Hat build of Apache Camel for Spring Boot introduces Camel support for Spring Boot, which provides auto-configuration of Camel and starters for many Camel components. The opinionated auto-configuration of the Camel context auto-detects Camel routes available in the Spring context and registers key Camel utilities (like the producer template, consumer template, and type converter) as beans. 1.2. Supported platforms, configurations, databases, and extensions for Red Hat build of Apache Camel for Spring Boot For information about supported platforms, configurations, and databases in Red Hat build of Apache Camel for Spring Boot, see the Supported Configuration page on the Customer Portal (login required). For a list of Red Hat build of Apache Camel for Spring Boot extensions, see the Red Hat build of Apache Camel for Spring Boot Reference (login required). 1.3. The javax to jakarta Package Namespace Change With the move of Java EE to the Eclipse Foundation and the establishment of Jakarta EE, the packages used for all EE APIs have changed from javax.* to jakarta.* as of Jakarta EE 9. Code snippets in the documentation have been updated to use the jakarta.* namespace, but you still need to take care and review your own applications. Note This change does not affect javax packages that are part of Java SE. When migrating applications to EE 10, you need to: Update any import statements or other source code uses of EE API classes from the javax package to jakarta . Change any EE-specified system properties or other configuration properties whose names begin with javax. to begin with jakarta. . Use the META-INF/services/jakarta.[rest_of_name] name format to identify implementation classes in your applications that implement EE interfaces or abstract classes bootstrapped with the java.util.ServiceLoader mechanism. 1.3.1. Migration tools Source code migration: How to use Red Hat Migration Toolkit for Auto-Migration of an Application to the Jakarta EE 10 Namespace Bytecode transforms: For cases where source code migration is not an option, the open source Eclipse Transformer Additional resources Background: Update on Jakarta EE Rights to Java Trademarks Red Hat Customer Portal: Red Hat JBoss EAP Application Migration from Jakarta EE 8 to EE 10 Jakarta EE: Javax to Jakarta Namespace Ecosystem Progress 1.4. Important notes for Red Hat build of Apache Camel for Spring Boot 1.4.1. Support for IBM Power and IBM Z Red Hat build of Camel Spring Boot is now supported on IBM Power and IBM Z. 1.4.2. Support for EIP circuit breaker The Circuit Breaker EIP for Camel Spring Boot supports Resilience4j configuration. This configuration provides integration with Resilience4j to be used as a Circuit Breaker in Camel routes. 1.4.3. Support for Stateful transactions The Red Hat build of Camel Example Spring Boot provides a Camel Spring Boot JTA quickstart . This quickstart demonstrates how to run a Camel Service on Spring Boot that supports JTA transactions on two external transactional resources: a database (MySQL) and a message broker (Artemis). These external resources are provided by OpenShift and must be started before running this quickstart. 1.5. Fixed issues for Red Hat build of Apache Camel for Spring Boot The following sections list the issues that have been resolved in Red Hat build of Apache Camel for Spring Boot.
Section 1.5.1, "Red Hat build of Apache Camel for Spring Boot version 4.4.4 fixed issues" Section 1.5.2, "Red Hat build of Apache Camel for Spring Boot version 4.4.3 fixed issues" Section 1.5.3, "Red Hat build of Apache Camel for Spring Boot version 4.4.2 fixed issues" Section 1.5.4, "Red Hat build of Apache Camel for Spring Boot version 4.4.1 fixed issues" Section 1.5.5, "Red Hat build of Apache Camel for Spring Boot version 4.4.0 Enhancements" Section 1.5.6, "Red Hat build of Apache Camel for Spring Boot version 4.4.0 fixed issues" 1.5.1. Red Hat build of Apache Camel for Spring Boot version 4.4.4 fixed issues The following sections list the issues that have been resolved in Red Hat build of Apache Camel for Spring Boot version 4.4.4. Table 1.1. Red Hat build of Apache Camel for Spring Boot version 4.4.4 resolved issues Issue Description CSB-6003 CVE-2024-51132 ca.uhn.hapi.fhir/org.hl7.fhir.dstu2: arbitrary code execution via specially-crafted request CSB-6004 CVE-2024-51132 ca.uhn.hapi.fhir/org.hl7.fhir.dstu2016may: arbitrary code execution via specially-crafted request CSB-6006 CVE-2024-51132 ca.uhn.hapi.fhir/org.hl7.fhir.dstu3: arbitrary code execution via specially-crafted request CSB-6008 CVE-2024-51132 ca.uhn.hapi.fhir/org.hl7.fhir.r4: arbitrary code execution via specially-crafted request CSB-6010 CVE-2024-51132 ca.uhn.hapi.fhir/org.hl7.fhir.r5: arbitrary code execution via specially-crafted request CSB-6012 CVE-2024-51132 ca.uhn.hapi.fhir/org.hl7.fhir.utilities: arbitrary code execution via specially-crafted request CSB-6015 CVE-2024-52007 ca.uhn.hapi.fhir/org.hl7.fhir.dstu2016may: XXE vulnerability in XSLT parsing in org.hl7.fhir.core CSB-6016 CVE-2024-52007 ca.uhn.hapi.fhir/org.hl7.fhir.dstu3: XXE vulnerability in XSLT parsing in org.hl7.fhir.core CSB-6017 CVE-2024-52007 ca.uhn.hapi.fhir/org.hl7.fhir.r4: XXE vulnerability in XSLT parsing in org.hl7.fhir.core CSB-6018 CVE-2024-52007 ca.uhn.hapi.fhir/org.hl7.fhir.r5: XXE vulnerability in XSLT parsing in org.hl7.fhir.core CSB-6019 CVE-2024-52007 ca.uhn.hapi.fhir/org.hl7.fhir.utilities: XXE vulnerability in XSLT parsing in org.hl7.fhir.core CSB-6091 Upgrade to Spring Boot 3.2.11 1.5.2. Red Hat build of Apache Camel for Spring Boot version 4.4.3 fixed issues The following sections list the issues that have been resolved in Red Hat build of Apache Camel for Spring Boot version 4.4.3. Table 1.2. 
Red Hat build of Apache Camel for Spring Boot version 4.4.3 resolved issues Issue Description CSB-4672 Define Agroal version in CSB platform BOM CSB-5338 [CAMEL-20790]kafka batching consumer polls randomly failing with NPE under load CSB-5388 CVE-2023-52428 com.nimbusds/nimbus-jose-jwt: large JWE p2c header value causes Denial of Service CSB-5416 CVE-2024-45294 ca.uhn.hapi.fhir/org.hl7.fhir.dstu2016may: XXE vulnerability in XSLT transforms in org.hl7.fhir.core CSB-5419 CVE-2024-45294 ca.uhn.hapi.fhir/org.hl7.fhir.dstu3: XXE vulnerability in XSLT transforms in org.hl7.fhir.core CSB-5422 CVE-2024-45294 ca.uhn.hapi.fhir/org.hl7.fhir.r4: XXE vulnerability in XSLT transforms in org.hl7.fhir.core CSB-5425 CVE-2024-45294 ca.uhn.hapi.fhir/org.hl7.fhir.r5: XXE vulnerability in XSLT transforms in org.hl7.fhir.core CSB-5428 CVE-2024-45294 ca.uhn.hapi.fhir/org.hl7.fhir.utilities: XXE vulnerability in XSLT transforms in org.hl7.fhir.core CSB-5492 CVE-2024-38816 org.springframework/spring-webmvc: Path Traversal Vulnerability in Spring Applications Using RouterFunctions and FileSystemResource CSB-5531 Camel route coverage is not working after upgrading Camel from 4.0 to 4.4 CSB-5556 CVE-2024-7254 protobuf: StackOverflow vulnerability in Protocol Buffers CSB-5568 camel-cics: the protocol option has been hardcoded in the CICSConfiguration class CSB-5571 CVE-2024-38809 org.springframework/spring-web: Spring Framework DoS via conditional HTTP request CSB-5584 Excessing locking in camel jaxb under load CSB-5603 CVE-2021-44549 org.eclipse.angus/angus-mail: Enabling Secure Server Identity Checks for Safer SMTPS Communication CSB-5662 CVE-2024-47561 org.apache.avro/avro: Schema parsing may trigger Remote Code Execution (RCE) CSB-5673 Address CXF Async Calls with OpenTelemetry 1.5.3. Red Hat build of Apache Camel for Spring Boot version 4.4.2 fixed issues The following sections list the issues that have been resolved in Red Hat build of Apache Camel for Spring Boot version 4.4.2. Table 1.3. Red Hat build of Apache Camel for Spring Boot version 4.4.2 resolved issues Issue Description CSB-4960 CVE-2024-41172 org.apache.cxf/cxf-rt-transports-http: unrestricted memory consumption in CXF HTTP clients CSB-4981 OOM using RecipientList CSB-5028 CVE-2024-7885 undertow: Improper State Management in Proxy Protocol parsing causes information leakage CSB-5082 CVE-2024-38808 org.springframework/spring-expression: From NVD collector CSB-5094 Upgrade CSB 4.4.x to Spring Boot 3.2.9 CSB-5313 artemis-quorum-api was removed in artemis 2.33+ in favor of artemis-lockmanager CAMEL-21044 azure-servicebus: FQNS not set correctly when credentialType is AZURE_IDENTITY CAMEL-21053 camel-xslt - All exchange properties should be avaiable CAMEL-21057 REST OpenApi fails to resolve host from the URL CAMEL-21101 Camel-Hashicorp-Vault: Get Secret operation doesn't take into account the secretPath configuration parameter 1.5.4. Red Hat build of Apache Camel for Spring Boot version 4.4.1 fixed issues The following sections list the issues that have been resolved in Red Hat build of Apache Camel for Spring Boot version 4.4.1. Table 1.4. 
Red Hat build of Apache Camel for Spring Boot version 4.4.1 resolved issues Issue Description CSB-1950 [CSB Examples] - javax dependency requested for camel-jira example CSB-3055 Camel AWS Kinesis: support checkpoint CSB-3096 CVE-2022-41678 activemq: Apache ActiveMQ: Deserialization vulnerability on Jolokia that allows authenticated users to perform RCE CSB-3222 The camel-spring-boot-bom still references upstream Artemis client libraries and cause error if mixed use them CSB-3319 CVE-2023-51079 mvel: TimeOut error when calling ParseTools.subCompileExpression() function CSB-3455 CVE-2024-1023 vert.x: io.vertx/vertx-core: memory leak due to the use of Netty FastThreadLocal data structures in Vertx CSB-3666 CVE-2024-1300 vertx-core: io.vertx:vertx-core: memory leak when a TCP server is configured with TLS and SNI support CSB-3778 CVE-2024-22201 jetty: stop accepting new connections from valid clients CSB-3841 CVE-2024-1597 pgjdbc: PostgreSQL JDBC Driver allows attacker to inject SQL if using PreferQueryMode=SIMPLE CSB-3844 CVE-2024-1597 pgjdbc: PostgreSQL JDBC Driver allows attacker to inject SQL if using PreferQueryMode=SIMPLE CSB-3945 CVE-2024-22257 spring-security: Broken Access Control With Direct Use of AuthenticatedVoter CSB-4010 CVE-2024-29025 netty-codec-http: Allocation of Resources Without Limits or Throttling CSB-4027 CVE-2024-23081 threetenbp: null pointer exception CSB-4046 Saxon library used by camel-saxon wrongly transform xml node CSB-4105 Include jackson-bom in the list of artifacts that we are overriding in platform bom CSB-4176 CVE-2024-30171 org.bouncycastle-bcprov-jdk18on: bc-java: BouncyCastle vulnerable to a timing variant of Bleichenbacher (Marvin Attack) CSB-4249 Bug on Camel documentation on "Setting up SSL for HTTP Client" CSB-4353 camel-jbang - generated pom.xml with "--camel-spring-boot-version" option includes garbage characters CSB-4356 XPath conversions failing in CSB 4.4 CSB-4525 [camel-cics] reset message body when CICS transaction failed CSB-4533 failed route should be visible in spring-boot actuator/camelroutes CSB-4589 Generated pom.xml file by camel-jbang export command is not suitable for Red Hat products CSB-4596 camel export command with "camel-spring-boot-version" option does not work CSB-4618 Unexpected change of behavior on method Message.getBody(Class) CSB-4624 CVE-2024-5971 undertow: response write hangs in case of Java 17 TLSv1.3 NewSessionTicket CSB-4642 request-reply over JMS example should use replyToConcurrentConsumers instead of concurrentConsumers CSB-4652 CVE-2024-30172 org.bouncycastle:bcprov-jdk18on: Infinite loop in ED25519 verification in the ScalarUtil class CSB-4658 CVE-2024-29857 org.bouncycastle:bcprov-jdk18on: org.bouncycastle: Importing an EC certificate with crafted F2m parameters may lead to Denial of Service CSB-4669 CVE-2024-6162 undertow: url-encoded request path information can be broken on ajp-listener CSB-4676 Missing Jackson Jakarta RS XML provider from Maven repository CSB-4751 CAMEL-20921 - Route configuration is not loaded on a Camel application XML file CSB-4776 Upgrade to boucy castle 1.78 break camel-crypto CSB-4823 Unsupported components show 4.4.0-SNAPSHOT version 1.5.5. Red Hat build of Apache Camel for Spring Boot version 4.4.0 Enhancements The following sections list the issues that have been resolved in Red Hat build of Apache Camel for Spring Boot version 4.4.0. Table 1.5. 
Red Hat build of Apache Camel for Spring Boot version 4.4.0 Enhancements Issue Description CSB-470 Support Hawtio console for Camel for Spring Boot CSB-1246 camel-olingo4 support CSB-1693 Adding a Kafka Batch Consumer CSB-2460 [RFE] Support component camel-smb CSB-2479 Enhancing XML IO DSL to support beans like in YAML DSL CSB-2649 Camel for Spring Boot support for IBM Z/P CSB-2841 Provide support to configure algorithm for camel-ssh component CSB-2968 Add support for camel-flink CSB-2973 Add Azure SAS support for azure blob storage CSB-3025 Create and support a new Camel CICS component CSB-3061 Support component camel-splunk CSB-3236 Offline Maven Builder Script CSB-3244 Support component camel-jasypt CSB-3357 Support component camel-kudu CSB-3331 Support cxf-integration-tracing-opentelemetry CSB-3371 Support component camel-groovy CSB-3462 BeanIO support CSB-4117 camel-cics - support connectivity via channels 1.5.6. Red Hat build of Apache Camel for Spring Boot version 4.4.0 fixed issues Table 1.6. Red Hat build of Apache Camel for Spring Boot version 4.4.0 resolved issues Issue Description CSB-1913 CVE-2023-35116 jackson-databind: denial of service via cylic dependencies CSB-2007 CVE-2023-2976 guava: insecure temporary directory creation CSB-2041 AWS SQS component, OCP probes cause POD error CSB-2139 [Micrometer Observability] Unable to see trace id and span id in MDC CSB-2644 Please provide examples that show Camel AMQP/JMS used with a connection pool CSB-2846 CVE-2023-5632 mosquitto: Possible Denial of Service due to excessive CPE consumption CSB-3042 [camel-mail] java.lang.ClassNotFoundException: org.eclipse.angus.mail.imap.IMAPStore CSB-3294 Dependency convergence error for org.ow2.asm:asm when using CXF and JSON Path CSB-3298 Dependency convergence error for org.bouncycastle:bcprov-jdk18on:jar:1.72 CSB-3302 Add support for findAndModify Operation CSB-3316 CVE-2023-51074 json-path: stack-based buffer overflow in Criteria.parse method CSB-3331 Support cxf-integration-tracing-opentelemetry CSB-3438 CVE-2024-21733 tomcat: Leaking of unrelated request bodies in default error page CSB-3454 camel-bean - Allow to configure bean introspection cache on component CSB-3601 Dependency convergence errors when using cxf-rt-rs-service-description-openapi-v3:4.0.2.fuse-redhat-00046 and camel-openapi-java-starter:4.0.0.redhat-00039 CSB-3713 CVE-2023-45860 Hazelcast: Permission checking in CSV File Source connector CSB-3716 AMQP publisher application is losing messages with local JMS transaction enabled CSB-3722 CVE-2024-26308 commons-compress: OutOfMemoryError unpacking broken Pack200 file CSB-3725 commons-compress: Denial of service caused by an infinite loop for a corrupted DUMP file [rhint-camel-spring-boot-4] CSB-3731 restConfiguration section is ignored when using XML DSL IO CSB-3765 Issue while marshalling/ummarshalling XML to JSON. 
CSB-3837 CVE-2023-5685 xnio: StackOverflowException when the chain of notifier states becomes problematically big CSB-3851 onException handler does not set content in the body response when used with servlet/platform-http CSB-3884 [Camel-sap] Unable to connect to SAP server through CSB configuration properties CSB-3892 camel-file - Can ant filter be optimized when using min/max depth with orphan marker file check CSB-3916 NPE occurs If user uses OpenTelemetryTracingStrategy and opentelemetry.exclude-patterns to exclude "direct*" CSB-3922 OpenTelemetryTracingStrategy separates a trace into 2 branches with opentelemetry.exclude-patterns "process*" or "bean*" CSB-3925 Request to offer connection pooling in camel-cics CSB-4022 Put a max default configurable limit on the Jose P2C parameter & Only explicitly return the stylesheet in WadlGenerator and not other URLs CSB-4092 Type Conversion Error from byte[] to Long in Camel 4 from Kafka Topic for JMS* headers CSB-4095 camel-salesforce - startup error CSB-4102 CVE-2024-22262 springframework: URL Parsing with Host Validation 1.6. Known issues for Red Hat build of Apache Camel for Spring Boot The following sections list known issues for Red Hat build of Apache Camel for Spring Boot. 1.6.1. Red Hat build of Apache Camel for Spring Boot version 4.4 known issues CSB-4318 Fail to deploy on OCP using Openshift Maven Plugin if spring.boot.actuator.autoconfigure is not in the dependencies The JKube maven plugin uses the following condition to check whether the application exposes a health endpoint (using SpringBootHealthCheckEnricher ): both of the following classes are on the classpath: org.springframework.boot.actuate.health.HealthIndicator org.springframework.web.context.support.GenericWebApplicationContext However, the /actuator/health endpoint will not be exposed unless the actuator is configured. This creates a discrepancy between the readiness/liveness probes configured by JKube (they both use the above endpoint) and what the application actually exposes. This misconfiguration causes a failing deployment config on OpenShift Container Platform, because the generated pod never reaches Ready status: the probes call an endpoint that is not configured. Therefore, to make an application that is deployed using JKube (openshift-maven-plugin) work on OpenShift Container Platform, it is necessary to have both the web and actuator autoconfiguration in the dependencies. The following example shows how to configure the web and actuator autoconfiguration. Example <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-web</artifactId> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-actuator</artifactId> </dependency> Update the archetype as shown below. The applications built from the following archetype will be deployed correctly using JKube. <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-web</artifactId> <exclusions> <exclusion> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-tomcat</artifactId> </exclusion> </exclusions> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-undertow</artifactId> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-actuator</artifactId> </dependency> This issue affects custom applications that are missing one of the above dependencies. 1.7.
Additional resources Supported Configurations Getting Started with Red Hat build of Apache Camel for Spring Boot Migrating to Red Hat build of Apache Camel for Spring Boot Red Hat build of Apache Camel for Spring Boot Reference
[ "<dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-web</artifactId> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-actuator</artifactId> </dependency>", "<dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-web</artifactId> <exclusions> <exclusion> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-tomcat</artifactId> </exclusion> </exclusions> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-undertow</artifactId> </dependency> <dependency> <groupId>org.springframework.boot</groupId> <artifactId>spring-boot-starter-actuator</artifactId> </dependency>" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/release_notes_for_red_hat_build_of_apache_camel_for_spring_boot/camel-spring-boot-relnotes_csb
3.3. RHEA-2011:1653 - new package: libunistring
3.3. RHEA-2011:1653 - new package: libunistring A new libunistring package is now available for Red Hat Enterprise Linux 6. This portable C library implements the UTF-8, UTF-16 and UTF-32 Unicode string types, together with functions for character processing (names, classifications, and properties) and functions for string processing (iteration, formatted output, width, word breaks, line breaks, normalization, case folding, and regular expressions). This enhancement update adds the libunistring package to Red Hat Enterprise Linux 6. The libunistring package has been added as a dependency for the System Security Services Daemon (SSSD) in order to process internationalized HBAC rules on FreeIPA servers. (BZ# 726463 ) All users who require libunistring should install this new package.
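For example, on a registered Red Hat Enterprise Linux 6 system the package can be installed with yum (assuming the appropriate repositories are available to your subscription):
yum install libunistring
Because SSSD declares libunistring as a dependency, it is normally pulled in automatically; the explicit command is only needed when installing the library on its own.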
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/libunistring
Chapter 3. Related information
Chapter 3. Related information For further information on using NVIDIA vGPU on RHEL with KVM, see: the NVIDIA GPU Software Release Notes . the NVIDIA Virtual GPU Software Documentation at https://docs.nvidia.com .
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/setting_up_an_nvidia_gpu_for_a_virtual_machine_in_red_hat_virtualization/related-information-general_nvidia_vgpu
Chapter 4. Enabling and configuring Data Grid statistics and JMX monitoring
Chapter 4. Enabling and configuring Data Grid statistics and JMX monitoring Data Grid can provide Cache Manager and cache statistics as well as export JMX MBeans. 4.1. Enabling statistics in embedded caches Configure Data Grid to export statistics for the Cache Manager and embedded caches. Procedure Open your Data Grid configuration for editing. Add the statistics="true" attribute or the .statistics(true) method. Save and close your Data Grid configuration. Embedded cache statistics XML <infinispan> <cache-container statistics="true"> <distributed-cache statistics="true"/> <replicated-cache statistics="true"/> </cache-container> </infinispan> GlobalConfigurationBuilder GlobalConfigurationBuilder global = GlobalConfigurationBuilder.defaultClusteredBuilder().cacheContainer().statistics(true); DefaultCacheManager cacheManager = new DefaultCacheManager(global.build()); Configuration builder = new ConfigurationBuilder(); builder.statistics().enable(); 4.2. Configuring Data Grid metrics Data Grid generates metrics that are compatible with any monitoring system. Gauges provide values such as the average number of nanoseconds for write operations or JVM uptime. Histograms provide details about operation execution times such as read, write, and remove times. By default, Data Grid generates gauges when you enable statistics but you can also configure it to generate histograms. Note Data Grid metrics are provided at the vendor scope. Metrics related to the JVM are provided in the base scope. Prerequisites You must add Micrometer Core and Micrometer Registry Prometheus JARs to your classpath to export Data Grid metrics for embedded caches. Procedure Open your Data Grid configuration for editing. Add the metrics element or object to the cache container. Enable or disable gauges with the gauges attribute or field. Enable or disable histograms with the histograms attribute or field. Save and close your client configuration. Metrics configuration XML <infinispan> <cache-container statistics="true"> <metrics gauges="true" histograms="true" /> </cache-container> </infinispan> JSON { "infinispan" : { "cache-container" : { "statistics" : "true", "metrics" : { "gauges" : "true", "histograms" : "true" } } } } YAML infinispan: cacheContainer: statistics: "true" metrics: gauges: "true" histograms: "true" GlobalConfigurationBuilder GlobalConfiguration globalConfig = new GlobalConfigurationBuilder() //Computes and collects statistics for the Cache Manager. .statistics().enable() //Exports collected statistics as gauge and histogram metrics. .metrics().gauges(true).histograms(true) .build(); Additional resources Micrometer Prometheus 4.3. Registering JMX MBeans Data Grid can register JMX MBeans that you can use to collect statistics and perform administrative operations. You must also enable statistics otherwise Data Grid provides 0 values for all statistic attributes in JMX MBeans. Important Use JMX Mbeans for collecting statistics only when Data Grid is embedded in applications and not with a remote Data Grid server. When you use JMX Mbeans for collecting statistics from a remote Data Grid server, the data received from JMX Mbeans might differ from the data received from other APIs such as REST. In such cases the data received from the other APIs is more accurate. Procedure Open your Data Grid configuration for editing. Add the jmx element or object to the cache container and specify true as the value for the enabled attribute or field. 
Add the domain attribute or field and specify the domain where JMX MBeans are exposed, if required. Save and close your client configuration. JMX configuration XML <infinispan> <cache-container statistics="true"> <jmx enabled="true" domain="example.com"/> </cache-container> </infinispan> JSON { "infinispan" : { "cache-container" : { "statistics" : "true", "jmx" : { "enabled" : "true", "domain" : "example.com" } } } } YAML infinispan: cacheContainer: statistics: "true" jmx: enabled: "true" domain: "example.com" GlobalConfigurationBuilder GlobalConfiguration global = GlobalConfigurationBuilder.defaultClusteredBuilder() .jmx().enable() .domain("org.mydomain"); 4.3.1. Enabling JMX remote ports Provide unique remote JMX ports to expose Data Grid MBeans through connections in JMXServiceURL format. You can enable remote JMX ports using one of the following approaches: Enable remote JMX ports that require authentication to one of the Data Grid Server security realms. Enable remote JMX ports manually using the standard Java management configuration options. Prerequisites For remote JMX with authentication, define JMX specific user roles using the default security realm. Users must have controlRole with read/write access or the monitorRole with read-only access to access any JMX resources. Data Grid automatically maps global ADMIN and MONITOR permissions to the JMX controlRole and monitorRole roles. Procedure Start Data Grid Server with a remote JMX port enabled using one of the following ways: Enable remote JMX through port 9999 . Warning Using remote JMX with SSL disabled is not intended for production environments. Pass the following system properties to Data Grid Server at startup. Warning Enabling remote JMX with no authentication or SSL is not secure and not recommended in any environment. Disabling authentication and SSL allows unauthorized users to connect to your server and access the data hosted there. Additional resources Creating security realms 4.3.2. Data Grid MBeans Data Grid exposes JMX MBeans that represent manageable resources. org.infinispan:type=Cache Attributes and operations available for cache instances. org.infinispan:type=CacheManager Attributes and operations available for Cache Managers, including Data Grid cache and cluster health statistics. For a complete list of available JMX MBeans along with descriptions and available operations and attributes, see the Data Grid JMX Components documentation. Additional resources Data Grid JMX Components 4.3.3. Registering MBeans in custom MBean servers Data Grid includes an MBeanServerLookup interface that you can use to register MBeans in custom MBeanServer instances. Prerequisites Create an implementation of MBeanServerLookup so that the getMBeanServer() method returns the custom MBeanServer instance. Configure Data Grid to register JMX MBeans. Procedure Open your Data Grid configuration for editing. Add the mbean-server-lookup attribute or field to the JMX configuration for the Cache Manager. Specify fully qualified name (FQN) of your MBeanServerLookup implementation. Save and close your client configuration. 
JMX MBean server lookup configuration XML <infinispan> <cache-container statistics="true"> <jmx enabled="true" domain="example.com" mbean-server-lookup="com.example.MyMBeanServerLookup"/> </cache-container> </infinispan> JSON { "infinispan" : { "cache-container" : { "statistics" : "true", "jmx" : { "enabled" : "true", "domain" : "example.com", "mbean-server-lookup" : "com.example.MyMBeanServerLookup" } } } } YAML infinispan: cacheContainer: statistics: "true" jmx: enabled: "true" domain: "example.com" mbeanServerLookup: "com.example.MyMBeanServerLookup" GlobalConfigurationBuilder GlobalConfiguration global = GlobalConfigurationBuilder.defaultClusteredBuilder() .jmx().enable() .domain("org.mydomain") .mBeanServerLookup(new com.acme.MyMBeanServerLookup()); 4.4. Exporting metrics during a state transfer operation You can export time metrics for clustered caches that Data Grid redistributes across nodes. A state transfer operation occurs when a clustered cache topology changes, such as a node joining or leaving a cluster. During a state transfer operation, Data Grid exports metrics from each cache, so that you can determine a cache's status. A state transfer exposes attributes as properties, so that Data Grid can export metrics from each cache. Note You cannot perform a state transfer operation in invalidation mode. Data Grid generates time metrics that are compatible with the REST API and the JMX API. Prerequisites Configure Data Grid metrics. Enable metrics for your cache type, such as embedded cache or remote cache. Initiate a state transfer operation by changing your clustered cache topology. Procedure Choose one of the following methods: Configure Data Grid to use the REST API to collect metrics. Configure Data Grid to use the JMX API to collect metrics. Additional resources Enabling and configuring Data Grid statistics and JMX monitoring (Data Grid caches) StateTransferManager (Data Grid 15.0 API) 4.5. Monitoring the status of cross-site replication Monitor the site status of your backup locations to detect interruptions in the communication between the sites. When a remote site status changes to offline , Data Grid stops replicating your data to the backup location. Your data become out of sync and you must fix the inconsistencies before bringing the clusters back online. Monitoring cross-site events is necessary for early problem detection. Use one of the following monitoring strategies: Monitoring cross-site replication with the REST API Monitoring cross-site replication with the Prometheus metrics or any other monitoring system Monitoring cross-site replication with the REST API Monitor the status of cross-site replication for all caches using the REST endpoint. You can implement a custom script to poll the REST endpoint or use the following example. Prerequisites Enable cross-site replication. Procedure Implement a script to poll the REST endpoint. The following example demonstrates how you can use a Python script to poll the site status every five seconds. 
#!/usr/bin/python3 import time import requests from requests.auth import HTTPDigestAuth class InfinispanConnection: def __init__(self, server: str = 'http://localhost:11222', cache_manager: str = 'default', auth: tuple = ('admin', 'change_me')) -> None: super().__init__() self.__url = f'{server}/rest/v2/container/x-site/backups/' self.__auth = auth self.__headers = { 'accept': 'application/json' } def get_sites_status(self): try: rsp = requests.get(self.__url, headers=self.__headers, auth=HTTPDigestAuth(self.__auth[0], self.__auth[1])) if rsp.status_code != 200: return None return rsp.json() except: return None # Specify credentials for Data Grid user with permission to access the REST endpoint USERNAME = 'admin' PASSWORD = 'change_me' # Set an interval between cross-site status checks POLL_INTERVAL_SEC = 5 # Provide a list of servers SERVERS = [ InfinispanConnection('http://127.0.0.1:11222', auth=(USERNAME, PASSWORD)), InfinispanConnection('http://127.0.0.1:12222', auth=(USERNAME, PASSWORD)) ] #Specify the names of remote sites REMOTE_SITES = [ 'nyc' ] #Provide a list of caches to monitor CACHES = [ 'work', 'sessions' ] def on_event(site: str, cache: str, old_status: str, new_status: str): # TODO implement your handling code here print(f'site={site} cache={cache} Status changed {old_status} -> {new_status}') def __handle_mixed_state(state: dict, site: str, site_status: dict): if site not in state: state[site] = {c: 'online' if c in site_status['online'] else 'offline' for c in CACHES} return for cache in CACHES: __update_cache_state(state, site, cache, 'online' if cache in site_status['online'] else 'offline') def __handle_online_or_offline_state(state: dict, site: str, new_status: str): if site not in state: state[site] = {c: new_status for c in CACHES} return for cache in CACHES: __update_cache_state(state, site, cache, new_status) def __update_cache_state(state: dict, site: str, cache: str, new_status: str): old_status = state[site].get(cache) if old_status != new_status: on_event(site, cache, old_status, new_status) state[site][cache] = new_status def update_state(state: dict): rsp = None for conn in SERVERS: rsp = conn.get_sites_status() if rsp: break if rsp is None: print('Unable to fetch site status from any server') return for site in REMOTE_SITES: site_status = rsp.get(site, {}) new_status = site_status.get('status') if new_status == 'mixed': __handle_mixed_state(state, site, site_status) else: __handle_online_or_offline_state(state, site, new_status) if __name__ == '__main__': _state = {} while True: update_state(_state) time.sleep(POLL_INTERVAL_SEC) When a site status changes from online to offline or vice-versa, the function on_event is invoked. If you want to use this script, you must specify the following variables: USERNAME and PASSWORD : The username and password of Data Grid user with permission to access the REST endpoint. POLL_INTERVAL_SEC : The number of seconds between polls. SERVERS : The list of Data Grid Servers at this site. The script only requires a single valid response but the list is provided to allow fail over. REMOTE_SITES : The list of remote sites to monitor on these servers. CACHES : The list of cache names to monitor. Additional resources REST API: Getting status of backup locations Monitoring cross-site replication with the Prometheus metrics Prometheus, and other monitoring systems, let you configure alerts to detect when a site status changes to offline . Tip Monitoring cross-site latency metrics can help you to discover potential issues. 
Prerequisites Enable cross-site replication. Procedure Configure Data Grid metrics. Configure alerting rules using the Prometheus metrics format. For the site status, use 1 for online and 0 for offline . For the expr field, use the following format: vendor_cache_manager_default_cache_<cache name>_x_site_admin_<site name>_status . In the following example, Prometheus alerts you when the NYC site goes offline for the cache named work or sessions . groups: - name: Cross Site Rules rules: - alert: Cache Work and Site NYC expr: vendor_cache_manager_default_cache_work_x_site_admin_nyc_status == 0 - alert: Cache Sessions and Site NYC expr: vendor_cache_manager_default_cache_sessions_x_site_admin_nyc_status == 0 The following image shows an alert that the NYC site is offline for the cache work . Figure 4.1. Prometheus Alert Additional resources Configuring Data Grid metrics Prometheus Alerting Overview Grafana Alerting Documentation Openshift Managing Alerts
[ "<infinispan> <cache-container statistics=\"true\"> <distributed-cache statistics=\"true\"/> <replicated-cache statistics=\"true\"/> </cache-container> </infinispan>", "GlobalConfigurationBuilder global = GlobalConfigurationBuilder.defaultClusteredBuilder().cacheContainer().statistics(true); DefaultCacheManager cacheManager = new DefaultCacheManager(global.build()); Configuration builder = new ConfigurationBuilder(); builder.statistics().enable();", "<infinispan> <cache-container statistics=\"true\"> <metrics gauges=\"true\" histograms=\"true\" /> </cache-container> </infinispan>", "{ \"infinispan\" : { \"cache-container\" : { \"statistics\" : \"true\", \"metrics\" : { \"gauges\" : \"true\", \"histograms\" : \"true\" } } } }", "infinispan: cacheContainer: statistics: \"true\" metrics: gauges: \"true\" histograms: \"true\"", "GlobalConfiguration globalConfig = new GlobalConfigurationBuilder() //Computes and collects statistics for the Cache Manager. .statistics().enable() //Exports collected statistics as gauge and histogram metrics. .metrics().gauges(true).histograms(true) .build();", "<infinispan> <cache-container statistics=\"true\"> <jmx enabled=\"true\" domain=\"example.com\"/> </cache-container> </infinispan>", "{ \"infinispan\" : { \"cache-container\" : { \"statistics\" : \"true\", \"jmx\" : { \"enabled\" : \"true\", \"domain\" : \"example.com\" } } } }", "infinispan: cacheContainer: statistics: \"true\" jmx: enabled: \"true\" domain: \"example.com\"", "GlobalConfiguration global = GlobalConfigurationBuilder.defaultClusteredBuilder() .jmx().enable() .domain(\"org.mydomain\");", "bin/server.sh --jmx 9999", "bin/server.sh -Dcom.sun.management.jmxremote.port=9999 -Dcom.sun.management.jmxremote.authenticate=false -Dcom.sun.management.jmxremote.ssl=false", "<infinispan> <cache-container statistics=\"true\"> <jmx enabled=\"true\" domain=\"example.com\" mbean-server-lookup=\"com.example.MyMBeanServerLookup\"/> </cache-container> </infinispan>", "{ \"infinispan\" : { \"cache-container\" : { \"statistics\" : \"true\", \"jmx\" : { \"enabled\" : \"true\", \"domain\" : \"example.com\", \"mbean-server-lookup\" : \"com.example.MyMBeanServerLookup\" } } } }", "infinispan: cacheContainer: statistics: \"true\" jmx: enabled: \"true\" domain: \"example.com\" mbeanServerLookup: \"com.example.MyMBeanServerLookup\"", "GlobalConfiguration global = GlobalConfigurationBuilder.defaultClusteredBuilder() .jmx().enable() .domain(\"org.mydomain\") .mBeanServerLookup(new com.acme.MyMBeanServerLookup());", "#!/usr/bin/python3 import time import requests from requests.auth import HTTPDigestAuth class InfinispanConnection: def __init__(self, server: str = 'http://localhost:11222', cache_manager: str = 'default', auth: tuple = ('admin', 'change_me')) -> None: super().__init__() self.__url = f'{server}/rest/v2/container/x-site/backups/' self.__auth = auth self.__headers = { 'accept': 'application/json' } def get_sites_status(self): try: rsp = requests.get(self.__url, headers=self.__headers, auth=HTTPDigestAuth(self.__auth[0], self.__auth[1])) if rsp.status_code != 200: return None return rsp.json() except: return None Specify credentials for Data Grid user with permission to access the REST endpoint USERNAME = 'admin' PASSWORD = 'change_me' Set an interval between cross-site status checks POLL_INTERVAL_SEC = 5 Provide a list of servers SERVERS = [ InfinispanConnection('http://127.0.0.1:11222', auth=(USERNAME, PASSWORD)), InfinispanConnection('http://127.0.0.1:12222', auth=(USERNAME, PASSWORD)) ] #Specify the names of 
remote sites REMOTE_SITES = [ 'nyc' ] #Provide a list of caches to monitor CACHES = [ 'work', 'sessions' ] def on_event(site: str, cache: str, old_status: str, new_status: str): # TODO implement your handling code here print(f'site={site} cache={cache} Status changed {old_status} -> {new_status}') def __handle_mixed_state(state: dict, site: str, site_status: dict): if site not in state: state[site] = {c: 'online' if c in site_status['online'] else 'offline' for c in CACHES} return for cache in CACHES: __update_cache_state(state, site, cache, 'online' if cache in site_status['online'] else 'offline') def __handle_online_or_offline_state(state: dict, site: str, new_status: str): if site not in state: state[site] = {c: new_status for c in CACHES} return for cache in CACHES: __update_cache_state(state, site, cache, new_status) def __update_cache_state(state: dict, site: str, cache: str, new_status: str): old_status = state[site].get(cache) if old_status != new_status: on_event(site, cache, old_status, new_status) state[site][cache] = new_status def update_state(state: dict): rsp = None for conn in SERVERS: rsp = conn.get_sites_status() if rsp: break if rsp is None: print('Unable to fetch site status from any server') return for site in REMOTE_SITES: site_status = rsp.get(site, {}) new_status = site_status.get('status') if new_status == 'mixed': __handle_mixed_state(state, site, site_status) else: __handle_online_or_offline_state(state, site, new_status) if __name__ == '__main__': _state = {} while True: update_state(_state) time.sleep(POLL_INTERVAL_SEC)", "groups: - name: Cross Site Rules rules: - alert: Cache Work and Site NYC expr: vendor_cache_manager_default_cache_work_x_site_admin_nyc_status == 0 - alert: Cache Sessions and Site NYC expr: vendor_cache_manager_default_cache_sessions_x_site_admin_nyc_status == 0" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/embedding_data_grid_in_java_applications/statistics-jmx
Installing GitOps
Installing GitOps Red Hat OpenShift GitOps 1.15 Installing the OpenShift GitOps Operator, logging in to the Argo CD instance, and installing the GitOps CLI Red Hat OpenShift Documentation Team
[ "edit argocd <name of argo cd> -n namespace", "oc get argocd -n openshift-gitops openshift-gitops -o json | jq '.spec.redis.resources'", "{ \"limits\": { 1 \"cpu\": \"500m\", \"memory\": \"256Mi\" }, \"requests\": { 2 \"cpu\": \"250m\", \"memory\": \"128Mi\" } }", "oc patch argocd -n openshift-gitops openshift-gitops --type json -p '[{\"op\": \"replace\", \"path\": \"/spec/redis/resources/limits/memory\", \"value\": \"8Gi\"}, {\"op\": \"replace\", \"path\": \"/spec/redis/resources/requests/memory\", \"value\": \"256Mi\"}]'", "argocd.argoproj.io/openshift-gitops patched", "oc label namespace <namespace> openshift.io/cluster-monitoring=true", "namespace/<namespace> labeled", "oc create ns openshift-gitops-operator", "namespace/openshift-gitops-operator created", "oc label namespace <namespace> openshift.io/cluster-monitoring=true", "namespace/<namespace> labeled", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: name: openshift-gitops-operator namespace: openshift-gitops-operator spec: upgradeStrategy: Default", "oc apply -f gitops-operator-group.yaml", "operatorgroup.operators.coreos.com/openshift-gitops-operator created", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: openshift-gitops-operator namespace: openshift-gitops-operator spec: channel: latest 1 installPlanApproval: Automatic name: openshift-gitops-operator 2 source: redhat-operators 3 sourceNamespace: openshift-marketplace 4", "oc apply -f openshift-gitops-sub.yaml", "subscription.operators.coreos.com/openshift-gitops-operator created", "oc get pods -n openshift-gitops", "NAME READY STATUS RESTARTS AGE cluster-b5798d6f9-zr576 1/1 Running 0 65m openshift-gitops-application-controller-0 1/1 Running 0 53m openshift-gitops-applicationset-controller-6447b8dfdd-5ckgh 1/1 Running 0 65m openshift-gitops-dex-server-569b498bd9-vf6mr 1/1 Running 0 65m openshift-gitops-redis-74bd8d7d96-49bjf 1/1 Running 0 65m openshift-gitops-repo-server-c999f75d5-l4rsg 1/1 Running 0 65m openshift-gitops-server-5785f7668b-wj57t 1/1 Running 0 53m", "oc get pods -n openshift-gitops-operator", "NAME READY STATUS RESTARTS AGE openshift-gitops-operator-controller-manager-664966d547-vr4vb 2/2 Running 0 65m", "tar xvzf <file>", "sudo mv argocd /usr/local/bin/argocd", "sudo chmod +x /usr/local/bin/argocd", "argocd version --client", "argocd: v2.9.5+f943664 BuildDate: 2024-02-15T05:19:27Z GitCommit: f9436641a616d277ab1f98694e5ce4c986d4ea05 GitTreeState: clean GoVersion: go1.20.10 Compiler: gc Platform: linux/amd64 ExtraBuildInfo: openshift-gitops-version: 1.12.0, release: 0015022024 1", "subscription-manager register", "subscription-manager refresh", "subscription-manager list --available --matches '*gitops*'", "subscription-manager attach --pool=<pool_id>", "subscription-manager repos --enable=\"gitops-<gitops_version>-for-rhel-<rhel_version>-x86_64-rpms\"", "subscription-manager repos --enable=\"gitops-1.15-for-rhel-8-x86_64-rpms\"", "subscription-manager repos --enable=\"gitops-<gitops_version>-for-rhel-<rhel_version>-s390x-rpms\"", "subscription-manager repos --enable=\"gitops-1.15-for-rhel-8-s390x-rpms\"", "subscription-manager repos --enable=\"gitops-<gitops_version>-for-rhel-<rhel_version>-ppc64le-rpms\"", "subscription-manager repos --enable=\"gitops-1.15-for-rhel-8-ppc64le-rpms\"", "subscription-manager repos --enable=\"gitops-<gitops_version>-for-rhel-<rhel_version>-aarch64-rpms\"", "subscription-manager repos --enable=\"gitops-1.15-for-rhel-8-aarch64-rpms\"", "yum install openshift-gitops-argocd-cli", 
"argocd version --client", "argocd: v2.9.5+f943664 BuildDate: 2024-02-15T05:19:27Z GitCommit: f9436641a616d277ab1f98694e5ce4c986d4ea05 GitTreeState: clean GoVersion: go1.20.10 Compiler: gc Platform: linux/amd64 ExtraBuildInfo: openshift-gitops-version: 1.12.0, release: 0015022024 1", "C:\\> move argocd.exe <directory>", "argocd version --client", "argocd: v2.9.5+f943664 BuildDate: 2024-02-15T05:19:27Z GitCommit: f9436641a616d277ab1f98694e5ce4c986d4ea05 GitTreeState: clean GoVersion: go1.20.10 Compiler: gc Platform: linux/amd64 ExtraBuildInfo: openshift-gitops-version: 1.12.0, release: 0015022024 1", "tar xvzf <file>", "sudo mv argocd /usr/local/bin/argocd", "sudo chmod +x /usr/local/bin/argocd", "argocd version --client", "argocd: v2.9.5+f943664 BuildDate: 2024-02-15T05:19:27Z GitCommit: f9436641a616d277ab1f98694e5ce4c986d4ea05 GitTreeState: clean GoVersion: go1.20.10 Compiler: gc Platform: linux/amd64 ExtraBuildInfo: openshift-gitops-version: 1.12.0, release: 0015022024 1" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_gitops/1.15/html-single/installing_gitops/index
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/using_source-to-image_for_openshift_with_red_hat_build_of_openjdk_11/making-open-source-more-inclusive
Chapter 48. Using the standalone library perspective
Chapter 48. Using the standalone library perspective You can use the library perspective of Business Central to select a project you want to edit. You can also perform all the authoring functions on the selected project. The standalone library perspective can be used in two ways, with and without using the header=UberfireBreadcrumbsContainer parameter. The difference is that the address with the header parameter will display a breadcrumb trail on top of the library perspective. Using this link you can create additional Spaces for your projects. Procedure Log in to Business Central. In a web browser, enter the appropriate web address: For accessing the standalone library perspective without the header parameter http://localhost:8080/business-central/kie-wb.jsp?standalone=true&perspective=LibraryPerspective The standalone library perspective without the breadcrumb trail opens in the browser. For accessing the standalone library perspective with the header parameter http://localhost:8080/business-central/kie-wb.jsp?standalone=true&perspective=LibraryPerspective&header=UberfireBreadcrumbsContainer The standalone library perspective with the breadcrumb trail opens in the browser.
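If the perspective does not load, a quick reachability check against the endpoint can tell you whether Business Central is serving the page at all. This is an illustrative check only; the host and port are the defaults used in the addresses above.
# Illustrative check that the standalone library perspective URL is being served (default localhost:8080 assumed)
curl -s -o /dev/null -w "%{http_code}\n" "http://localhost:8080/business-central/kie-wb.jsp?standalone=true&perspective=LibraryPerspective"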
null
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/managing_red_hat_process_automation_manager_and_kie_server_settings/using-standalone-perspectives-library-proc
Chapter 4. Executing tasks using Red Hat Insights
Chapter 4. Executing tasks using Red Hat Insights You can execute tasks on remote systems in the Red Hat Hybrid Cloud Console directly from Red Hat Insights Tasks. Tasks you can execute are: RHEL pre-upgrade analysis utility tasks. Pre-conversion analysis utility tasks. Convert to RHEL from CentOS Linux 7. Note Prerequisites and actions required to execute specific Insights tasks will vary. Here are general instructions to execute a task. Prerequisites You are logged in to the Red Hat Hybrid Cloud Console. You are a member of a User Access group with the Tasks administrator role. You have connected systems and addressed dependencies for Remote Host Configuration (rhc), rhc-worker-playbook and ansible-core , as needed. You have addressed dependencies for Satellite 6.11+, if applicable. Services in the Insights Automation Toolkit have similar dependency requirements that must be met before Red Hat Insights users can execute playbooks for remediations and tasks. Procedure Navigate to Automation Toolkit > Tasks . Select a task to execute and click Run Task . Optional: Edit the default task name to customize it for your needs. Note After you execute the task, you will not be able to change the task name again. Make note of any task-specific prerequisites shown in the brief description of the task. Select the systems on which to execute the task. You can use filters to search and filter systems by: Name Operating System Tag Click Execute task . The task executes on the selected systems. You might see a pop-up that shows that your task is running. Click View Progress to view the task details page, which shows how the task is executing on each of your selected systems. Review Status and Message details. If shown, click the Show more icon beside the system name to find more information about messages. Click Tasks to go to the task detail view to see more information about how the task executed on the selected systems. Click the Activity tab to see the status of all the tasks you have executed. Tasks are in chronological order, by the most recent date and time. Note A Completed status indicates that the task executed, but does not indicate that the task accomplished its goal. Optional: Click the task you executed to return to the task detail view to see more information about how the task executed on the selected systems. Next steps You might need to resolve errors, such as an error that occurs because you need to install a software package on your systems before a task can successfully execute. After you resolve those errors, you can execute the task again on the same systems. Optional: To execute a task again, click Run Task again. The previously selected systems are still selected, and you can also add additional systems, if needed.
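As an illustrative pre-flight check for the connection prerequisites above, you can confirm on a target RHEL system that the remote host configuration pieces are installed and connected. The package and subcommand names below are assumptions based on a typical RHEL 8 or 9 setup; adjust them to your environment.
# Illustrative check on a target system before executing tasks
dnf install -y rhc rhc-worker-playbook ansible-core
rhc connect    # register the system with Red Hat Insights remote host configuration
rhc status     # confirm the system reports as connected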
null
https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/assessing_and_remediating_system_issues_using_red_hat_insights_tasks_with_fedramp/executing-tasks_overview-tasks
Chapter 11. Managing Ceph Object Gateway using the dashboard
Chapter 11. Managing Ceph Object Gateway using the dashboard As a storage administrator, the Ceph Object Gateway functions of the dashboard allow you to manage and monitor the Ceph Object Gateway. You can also create the Ceph Object Gateway services with Secure Sockets Layer (SSL) using the dashboard. For example, monitoring functions allow you to view details about a gateway daemon such as its zone name, or performance graphs of GET and PUT rates. Management functions allow you to view, create, and edit both users and buckets. Ceph Object Gateway functions are divided between user functions and bucket functions. 11.1. Manually adding Ceph object gateway login credentials to the dashboard The Red Hat Ceph Storage Dashboard can manage the Ceph Object Gateway, also known as the RADOS Gateway, or RGW. When Ceph Object Gateway is deployed with cephadm , the Ceph Object Gateway credentials used by the dashboard are automatically configured. You can also manually force the Ceph object gateway credentials to the Ceph dashboard using the command-line interface. Prerequisites A running Red Hat Ceph Storage cluster. Dashboard is installed. Ceph Object Gateway is installed. Procedure Log into the Cephadm shell: Example Set up the credentials manually: Example This creates a Ceph Object Gateway user with UID dashboard for each realm in the system. Optional: If you have configured a custom admin resource in your Ceph Object Gateway admin API, you have to also set the admin resource: Syntax Example Optional: If you are using HTTPS with a self-signed certificate, disable certificate verification in the dashboard to avoid refused connections. Refused connections can happen when the certificate is signed by an unknown Certificate Authority, or if the host name used does not match the host name in the certificate. Syntax Example Optional: If the Object Gateway takes too long to process requests and the dashboard runs into timeouts, you can set the timeout value: Syntax The default value is 45 seconds. Example 11.2. Creating the Ceph Object Gateway services with SSL using the dashboard After installing a Red Hat Ceph Storage cluster, you can create the Ceph Object Gateway service with SSL using two methods: Using the command-line interface. Using the dashboard. Prerequisites A running Red Hat Ceph Storage cluster. Dashboard is installed. SSL key from Certificate Authority (CA). Note Obtain the SSL certificate from a CA that matches the hostname of the gateway host. Red Hat recommends obtaining a certificate from a CA that has subject alternate name fields and a wildcard for use with S3-style subdomains. Procedure Log in to the Dashboard. From the Cluster drop-down menu, select Services . Click +Create . In the Create Service window, select rgw service. Select SSL and upload the Certificate in .pem format. Figure 11.1. Creating Ceph Object Gateway service Click Create Service . Check that the Ceph Object Gateway service is up and running. Additional Resources See the Configuring SSL for Beast section in the Red Hat Ceph Storage Object Gateway Guide . 11.3. Configuring high availability for the Ceph Object Gateway on the dashboard The ingress service provides a highly available endpoint for the Ceph Object Gateway. You can create and configure the ingress service using the Ceph Dashboard. Prerequisites A running Red Hat Ceph Storage cluster. A minimum of two Ceph Object Gateway daemons running on different hosts. Dashboard is installed. A running rgw service. Procedure Log in to the Dashboard.
From the Cluster drop-down menu, select Services . Click Create . In the Create Service window, select ingress service. Select backend service and edit the required parameters. Figure 11.2. Creating ingress service Click Create Service . You get a notification that the ingress service was created successfully. Additional Resources See High availability for the Ceph Object Gateway for more information about the ingress service. 11.4. Managing Ceph Object Gateway users on the dashboard As a storage administrator, the Red Hat Ceph Storage Dashboard allows you to view and manage Ceph Object Gateway users. Prerequisites A running Red Hat Ceph Storage cluster. Dashboard is installed. The Ceph Object Gateway is installed. Object gateway login credentials are added to the dashboard. 11.4.1. Creating Ceph object gateway users on the dashboard You can create Ceph object gateway users on the Red Hat Ceph Storage once the credentials are set-up using the CLI. Prerequisites A running Red Hat Ceph Storage cluster. Dashboard is installed. The Ceph Object Gateway is installed. Object gateway login credentials are added to the dashboard. Procedure Log in to the Dashboard. On the navigation bar, click Object Gateway . Click Users and then Click Create . In the Create User window, set the following parameters: Set the user name, full name, and edit the maximum number of buckets if required. Optional: Set an email address or suspended status. Optional: Set a custom access key and secret key by unchecking Auto-generate key . Optional: Set a user quota. Check Enabled under User quota . Uncheck Unlimited size or Unlimited objects . Enter the required values for Max. size or Max. objects . Optional: Set a bucket quota. Check Enabled under Bucket quota . Uncheck Unlimited size or Unlimited objects : Enter the required values for Max. size or Max. objects : Click Create User . Figure 11.3. Create Ceph object gateway user You get a notification that the user was created successfully. Additional Resources See the Manually adding Ceph object gateway login credentials to the dashboard section in the Red Hat Ceph Storage Dashboard guide for more information. See the Red Hat Ceph Storage Object Gateway Guide for more information. 11.4.2. Creating Ceph object gateway subusers on the dashboard A subuser is associated with a user of the S3 interface. You can create a sub user for a specific Ceph object gateway user on the Red Hat Ceph Storage dashboard. Prerequisites A running Red Hat Ceph Storage cluster. Dashboard is installed. The Ceph Object Gateway is installed. Object gateway login credentials are added to the dashboard. Object gateway user is created. Procedure Log in to the Dashboard. On the navigation bar, click Object Gateway . Click Users . Select the user by clicking its row. From Edit drop-down menu, select Edit . In the Edit User window, click +Create Subuser . In the Create Subuser dialog box, enter the user name and select the appropriate permissions. Check the Auto-generate secret box and then click Create Subuser . Figure 11.4. Create Ceph object gateway subuser Note By clicking Auto-generate-secret checkbox, the secret key for object gateway is generated automatically. In the Edit User window, click the Edit user button You get a notification that the user was updated successfully. 11.4.3. Editing Ceph object gateway users on the dashboard You can edit Ceph object gateway users on the Red Hat Ceph Storage once the credentials are set-up using the CLI. Prerequisites A running Red Hat Ceph Storage cluster. 
Dashboard is installed. The Ceph Object Gateway is installed. Object gateway login credentials are added to the dashboard. A Ceph object gateway user is created. Procedure Log in to the Dashboard. On the navigation bar, click Object Gateway . Click Users . To edit the user capabilities, click its row. From the Edit drop-down menu, select Edit . In the Edit User window, edit the required parameters. Click Edit User . Figure 11.5. Edit Ceph object gateway user You get a notification that the user was updated successfully. Additional Resources See the Manually adding Ceph object gateway login credentials to the dashboard section in the Red Hat Ceph Storage Dashboard guide for more information. See the Red Hat Ceph Storage Object Gateway Guide for more information. 11.4.4. Deleting Ceph object gateway users on the dashboard You can delete Ceph object gateway users on the Red Hat Ceph Storage once the credentials are set-up using the CLI. Prerequisites A running Red Hat Ceph Storage cluster. Dashboard is installed. The Ceph Object Gateway is installed. Object gateway login credentials are added to the dashboard. A Ceph object gateway user is created. Procedure Log in to the Dashboard. On the navigation bar, click Object Gateway . Click Users . To delete the user, click its row. From the Edit drop-down menu, select Delete . In the Edit User window, edit the required parameters. In the Delete user dialog window, Click the Yes, I am sure box and then Click Delete User to save the settings: Figure 11.6. Delete Ceph object gateway user Additional Resources See the Manually adding Ceph object gateway login credentials to the dashboard section in the Red Hat Ceph Storage Dashboard guide for more information. See the Red Hat Ceph Storage Object Gateway Guide for more information. 11.5. Managing Ceph Object Gateway buckets on the dashboard As a storage administrator, the Red Hat Ceph Storage Dashboard allows you to view and manage Ceph Object Gateway buckets. Prerequisites A running Red Hat Ceph Storage cluster. Dashboard is installed. The Ceph Object Gateway is installed. At least one Ceph Object Gateway user is created. Object gateway login credentials are added to the dashboard. 11.5.1. Creating Ceph object gateway buckets on the dashboard You can create Ceph object gateway buckets on the Red Hat Ceph Storage once the credentials are set-up using the CLI. Prerequisites A running Red Hat Ceph Storage cluster. Dashboard is installed. The Ceph Object Gateway is installed. Object gateway login credentials are added to the dashboard. Object gateway user is created and not suspended. Procedure Log in to the Dashboard. On the navigation bar, click Object Gateway . Click Buckets and then click Create . In the Create Bucket window, enter a value for Name and select a user that is not suspended. Select a placement target. Figure 11.7. Create Ceph object gateway bucket Note A bucket's placement target is selected on creation and can not be modified. Optional: Enable Locking for the objects in the bucket. Locking can only be enabled while creating a bucket. Once locking is enabled, you also have to choose the lock mode, Compliance or Governance and the lock retention period in either days or years, not both. Optional: Enable Security to encrypt the objects in the bucket. To enable encryption on a bucket, you need to set the configuration values for SSE-S3. To set the configuration values, hover the cursor over the question mark and click Click here . 
In the Update RGW Encryption Configurations window, select SSE-S3 as the Encryption Type , provide the required details, and click Submit . Figure 11.8. Encrypt objects in the bucket Note When using SSE-S3 encryption type, Ceph manages the encryption keys that are stored in the vault by the user. Click Create bucket . You get a notification that the bucket was created successfully. 11.5.2. Editing Ceph object gateway buckets on the dashboard You can edit Ceph object gateway buckets on the Red Hat Ceph Storage once the credentials are set-up using the CLI. Prerequisites A running Red Hat Ceph Storage cluster. Dashboard is installed. The Ceph Object Gateway is installed. Object gateway login credentials are added to the dashboard. Object gateway user is created and not suspended. A Ceph Object Gateway bucket created. Procedure Log in to the Dashboard. On the navigation bar, click Object Gateway . Click Buckets . To edit the bucket, click it's row. From the Edit drop-down select Edit . In the Edit bucket window, edit the Owner by selecting the user from the dropdown. Figure 11.9. Edit Ceph object gateway bucket Optional: Enable Versioning if you want to enable versioning state for all the objects in an existing bucket. To enable versioning, you must be the owner of the bucket. If Locking is enabled during bucket creation, you cannot disable the versioning. All objects added to the bucket will receive a unique version ID. If the versioning state has not been set on a bucket, then the bucket will not have a versioning state. Optional: Check Delete enabled for Multi-Factor Authentication . Multi-Factor Authentication(MFA) ensures that users need to use a one-time password(OTP) when removing objects on certain buckets. Enter a value for Token Serial Number and Token PIN . Note The buckets must be configured with versioning and MFA enabled which can be done through the S3 API. Click Edit Bucket . You get a notification that the bucket was updated successfully. 11.5.3. Deleting Ceph object gateway buckets on the dashboard You can delete Ceph object gateway buckets on the Red Hat Ceph Storage once the credentials are set-up using the CLI. Prerequisites A running Red Hat Ceph Storage cluster. Dashboard is installed. The Ceph Object Gateway is installed. Object gateway login credentials are added to the dashboard. Object gateway user is created and not suspended. A Ceph Object Gateway bucket created. Procedure Log in to the Dashboard. On the navigation bar, click Object Gateway . Click Buckets . To delete the bucket, click it's row. From the Edit drop-down select Delete . In the Delete Bucket dialog box, Click the Yes, I am sure box and then Click Delete bucket to save the settings: Figure 11.10. Delete Ceph object gateway bucket 11.6. Monitoring multi-site object gateway configuration on the Ceph dashboard The Red Hat Ceph Storage dashboard supports monitoring the users and buckets of one zone in another zone in a multi-site object gateway configuration. For example, if the users and buckets are created in a zone in the primary site, you can monitor those users and buckets in the secondary zone in the secondary site. Prerequisites At least one running Red Hat Ceph Storage cluster deployed on both the sites. Dashboard is installed. The multi-site object gateway is configured on the primary and secondary sites. Object gateway login credentials of the primary and secondary sites are added to the dashboard. Object gateway users are created on the primary site. 
Object gateway buckets are created on the primary site. Procedure On the Dashboard landing page of the secondary site, in the vertical menu bar, click Object Gateway drop-down list. Select Buckets . You can see those object gateway buckets on the secondary landing page that were created for the object gateway users on the primary site. Figure 11.11. Multisite object gateway monitoring Additional Resources For more information on configuring multi-site, see the Multi-site configuration and administration section of the Red Hat Ceph Storage Object Gateway guide. For more information on adding object gateway login credentials to the dashboard, see the Manually adding Ceph Object Gateway login credentials to the dashboard section in the Red Hat Ceph Storage Dashboard guide. For more information on creating object gateway users on the dashboard, see the Creating Ceph Object Gateway users on the dashboard section in the Red Hat Ceph Storage Dashboard guide. For more information on creating object gateway buckets on the dashboard, see the Creating Ceph Object Gateway buckets on the dashboard section in the Red Hat Ceph Storage Dashboard guide. 11.7. Managing buckets of a multi-site object configuration on the Ceph dashboard As a storage administrator, you can edit buckets of one zone in another zone on the Red Hat Ceph Storage Dashboard. However, you can delete buckets of secondary sites in the primary site. You cannot delete the buckets of master zones of primary sites in other sites. For example, If the buckets are created in a zone in the secondary site, you can edit and delete those buckets in the master zone in the primary site. Prerequisites At least one running Red Hat Ceph Storage cluster deployed on both the sites. Dashboard is installed. The multi-site object gateway is configured on the primary and secondary sites. Object gateway login credentials of the primary and secondary sites are added to the dashboard. Object gateway users are created on the primary site. Object gateway buckets are created on the primary site. At least rgw-manager level of access on the Ceph dashboard. 11.7.1. Editing buckets of a multi-site object gateway configuration on the Ceph dashboard You can edit and update the details of the buckets of one zone in another zone on the Red Hat Ceph Storage Dashboard in a multiste object gateway configuration. You can edit the owner, versioning, multi-factor authentication and locking features of the buckets with this feature of the dashboard. Prerequisites At least one running Red Hat Ceph Storage cluster deployed on both the sites. Dashboard is installed. The multi-site object gateway is configured on the primary and secondary sites. Object gateway login credentials of the primary and secondary sites are added to the dashboard. Object gateway users are created on the primary site. Object gateway buckets are created on the primary site. At least rgw-manager level of access on the Ceph dashboard. Procedure On the Dashboard landing page of the secondary site, in the vertical menu bar, click Object Gateway drop-down list. Select Buckets . You can see those object gateway buckets on the secondary landing page that were created for the object gateway users on the primary site. Figure 11.12. Monitoring object gateway monitoring Click the row of the bucket that you want to edit. From the Edit drop-down menu, select Edit . In the Edit Bucket window, edit the required parameters and click Edit Bucket . Figure 11.13. 
Edit buckets in a multi-site Verification You will get a notification that the bucket is updated successfully. Additional Resources For more information on configuring multi-site, see the Multi-site configuration and administration section of the Red Hat Ceph Storage Object Gateway guide. For more information on adding object gateway login credentials to the dashboard, see the Manually adding Ceph Object Gateway login credentials to the Ceph dashboard section in the Red Hat Ceph Storage Dashboard guide. For more information on creating object gateway users on the dashboard, see the Creating Ceph Object Gateway users on the Ceph dashboard section in the Red Hat Ceph Storage Dashboard guide. For more information on creating object gateway buckets on the dashboard, see the Creating Ceph Object Gateway buckets on the Ceph dashboard section in the Red Hat Ceph Storage Dashboard guide. For more information on system roles, see the Managing roles on the Ceph dashboard section in the Red Hat Ceph Storage Dashboard Guide . 11.7.2. Deleting buckets of a multi-site object gateway configuration on the Ceph dashboard You can delete buckets of secondary sites in primary sites on the Red Hat Ceph Storage Dashboard in a multi-site object gateway configuration. Important Red Hat does not recommend deleting buckets of the primary site from secondary sites. Prerequisites At least one running Red Hat Ceph Storage cluster deployed on both the sites. Dashboard is installed. The multi-site object gateway is configured on the primary and secondary sites. Object gateway login credentials of the primary and secondary sites are added to the dashboard. Object gateway users are created on the primary site. Object gateway buckets are created on the primary site. At least rgw-manager level of access on the Ceph dashboard. Procedure On the Dashboard landing page of the primary site, in the vertical menu bar, click the Object Gateway drop-down list. Select Buckets . You can see those object gateway buckets of the secondary site here. Click the row of the bucket that you want to delete. From the Edit drop-down menu, select Delete . In the Delete Bucket dialog box, select the Yes, I am sure checkbox, and click Delete Bucket . Verification The selected row of the bucket is deleted successfully. Additional Resources For more information on configuring multi-site, see the Multi-site configuration and administration section of the Red Hat Ceph Storage Object Gateway guide. For more information on adding object gateway login credentials to the dashboard, see the Manually adding Ceph Object Gateway login credentials to the Ceph dashboard section in the Red Hat Ceph Storage Dashboard guide. For more information on creating object gateway users on the dashboard, see the Creating Ceph Object Gateway users on the Ceph dashboard section in the Red Hat Ceph Storage Dashboard guide. For more information on creating object gateway buckets on the dashboard, see the Creating Ceph Object Gateway buckets on the Ceph dashboard section in the Red Hat Ceph Storage Dashboard guide. For more information on system roles, see the Managing roles on the Ceph dashboard section in the Red Hat Ceph Storage Dashboard Guide .
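The dashboard procedures above can be cross-checked from the command line on an Object Gateway node. The following commands are an illustrative sketch rather than part of the documented procedures; <username> and <bucket_name> are placeholders.
# Illustrative CLI verification of users, buckets, and multi-site replication health
radosgw-admin user list
radosgw-admin user info --uid=<username>           # keys, subusers, and quota for a dashboard-created user
radosgw-admin bucket stats --bucket=<bucket_name>  # owner, placement, and usage for a dashboard-created bucket
radosgw-admin sync status                          # metadata and data sync state against the other zones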
[ "cephadm shell", "ceph dashboard set-rgw-credentials", "ceph dashboard set-rgw-api-admin-resource RGW_API_ADMIN_RESOURCE", "ceph dashboard set-rgw-api-admin-resource admin Option RGW_API_ADMIN_RESOURCE updated", "ceph dashboard set-rgw-api-ssl-verify false", "ceph dashboard set-rgw-api-ssl-verify False Option RGW_API_SSL_VERIFY updated", "ceph dashboard set-rest-requests-timeout _TIME_IN_SECONDS_", "ceph dashboard set-rest-requests-timeout 240" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/6/html/dashboard_guide/management-of-ceph-object-gateway-using-the-dashboard
1.3.7. Updating a Working Copy
1.3.7. Updating a Working Copy To update a working copy and get the latest changes from a CVS repository, change to the directory with the working copy and run the following command: cvs update Example 1.23. Updating a working copy Imagine that the directory with your working copy of a CVS repository has the following contents: Also imagine that somebody recently added ChangeLog to the repository, removed the TODO file from it, and made some changes to Makefile . To update this working copy, type:
[ "project]USD ls AUTHORS CVS doc INSTALL LICENSE Makefile README src TODO", "myproject]USD cvs update cvs update: Updating . U ChangeLog U Makefile cvs update: TODO is no longer in the repository cvs update: Updating doc cvs update: Updating src" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/developer_guide/sect-revision_control_systems-cvs-update
Chapter 5. Removed functionality
Chapter 5. Removed functionality This section provides an overview of functionality that has been removed in all minor releases up to this release of Red Hat Ceph Storage. s3cmd RPM is unavailable in Ceph's Tools repository The s3cmd RPM is no longer available in Ceph's Tools repository. Users can download the unsupported community packages from PyPI or EPEL .
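For reference, a minimal, unsupported install of the community package from PyPI might look like the following; this is outside Red Hat support and shown only as a sketch:
python3 -m pip install --user s3cmd
s3cmd --version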
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/5.3_release_notes/removed-functionality
Chapter 4. Installation
Chapter 4. Installation This chapter describes the installation of the additional SAP HANA instance. 4.1. Check the 2-node Base Installation with a failover test Verify that the installation is done based on Automating SAP HANA Scale-Up System Replication using the RHEL HA Add-On . To be able to use SAP HANA Multitarget System Replication , the version of resource-agents-sap-hana must be 0.162.1 or later. This can be checked, as shown below: # rpm -q resource-agents-sap-hana You can run a failover test to ensure that the environment is working. You can move the SAPHana resource, which is also described in Failover the SAPHana Resource using Move . 4.2. Install SAP HANA on third site On the third site, you also need to install SAP HANA using the same version and parameters as for the SAP HANA instances on the two-node Pacemaker cluster as shown below: Parameter Value SID RH2 InstanceNumber 02 <sid>adm user ID rh2adm 999 sapsys group ID sapsys 999 The SAP HANA installation is done using hdblcm . For more details, see SAP HANA Installation using hdbclm . Optionally, the installation can also be done using Ansible. In the examples in this chapter, we are using: hosts:clusternode1 on site DC1, clusternode2 on site DC2 and remotehost3 on site DC3 SID RH2 adminuser rh2adm 4.3. Setup SAP HANA System Replication to the third site In the existing installation, there is already SAP HANA System Replication configured between the primary and secondary SAP HANA instance in a two-node cluster. SAP HANA System Replication is enabled on the up and running primary SAP HANA database instance. This chapter describes how to register the third SAP HANA instance as an additional secondary HANA System Replication site on node remotehost3 at site DC3. This step is similar to the registration of the original secondary HANA instance (DC2) on node clusternode2. More details are described in the following chapters. If you need further information, you can also check General Prerequisites for Configuring SAP HANA System Replication . 4.3.1. Check the primary database You must check that the other databases are running and the system replication is working properly. Please refer to: Check database Check SAP HANA System Replication status Discover primary and secondary SAP HANA database You can discover the primary HANA instance with: 4.3.2. Copy database keys Before you are able to register a new secondary HANA instance, the database keys of the primary HANA instance need to be copied to the new additional HANA replication site. In our example, the hostname of the third site is remotehost3. For example, on the primary node clusternode1 run: clusternode1:rh2adm> scp -rp /usr/sap/USD{SAPSYSTEMNAME}/SYS/global/security/rsecssfs/data/SSFS_USD{SAPSYSTEMNAME}.DAT remotehost3:/usr/sap/USD{SAPSYSTEMNAME}/SYS/global/security/rsecssfs/data/SSFS_USD{SAPSYSTEMNAME}.DAT clusternode1:rh2adm> scp -rp /usr/sap/USD{SAPSYSTEMNAME}/SYS/global/security/rsecssfs/key/SSFS_USD{SAPSYSTEMNAME}.KEY remotehost3:/usr/sap/USD{SAPSYSTEMNAME}/SYS/global/security/rsecssfs/key/SSFS_USD{SAPSYSTEMNAME}.KEY 4.3.3. Register the additional HANA instance as a secondary HANA replication site You need to know the hostname of the node that is running the primary database . To monitor the registration, you can run the following command in a separate terminal on the primary node: clusternode1:rh2adm> watch python /usr/sap/USD{SAPSYSTEMNAME}/HDBUSD{TINSTANCE}/python_support/systemReplicationStatus.py This will show you the progress and any errors if they occur. 
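Before running the registration command in the next step, it can save time to confirm that remotehost3 really was installed with the same user and group IDs as the cluster nodes (999 in the table above). This is an illustrative check only:
id rh2adm            # uid and gid should match the values used on clusternode1 and clusternode2
getent group sapsys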
To register the HANA instance on the third site (DC3) as an additional secondary SAP HANA instance, run the following command on the third site host remotehost3: remotehost3:rh2adm> hdbnsutil -sr_register --name=DC3 --remoteHost=clusternode1 --remoteInstance=USD{TINSTANCE} --replicationMode=async --operationMode=logreplay --online In this example, DC3 is the name of the third site, clusternode1 is the hostname of the primary node. If the database instance is already running, you don't have to stop it, you can use the option --online , which will register the instance while it is online. The necessary restart (stop and start) of the instance will then be initiated by hdbnsutil itself. Note The option --online works in any case, both when the HANA instance is online and offline (this option is available with SAP HANA 2.0 SPS04 and later). If the HANA instance is offline, you have to start it after the third node is registered. You can find additional information in SAP HANA System Replication . 4.3.4. Add SAP HANA Multitarget System Replication autoregister support We are using a SAP HANA System Replication option called register_secondaries_on_takeover = true . This will automatically re-register a secondary HANA instance with the new primary site in case of a failover between the primary site and the other secondary site. This option must be added to the global.ini file on all potential primary sites. All HANA instances should have this entry in their global.ini : [system_replication] register_secondaries_on_takeover = true The following two chapters describe the global.ini configuration in detail. Caution Despite the parameter, if the additional secondary HANA instance on the third node is down when the failover is initiated, this HANA instance needs to be re-registered manually. 4.3.5. Configure global.ini on the pacemaker nodes The option register_secondaries_on_takeover = true needs to be added to the global.ini of the SAP HANA instances, which are managed by the pacemaker cluster. Please edit the file global.ini always on the respective node, and do not copy the file from another node. Note The global.ini file should only be edited if the HANA instance of a site has stopped processing. Edit the global.ini as the rh2adm user: clusternode1:rh2adm> vim /usr/sap/USD{SAPSYSTEMNAME}/SYS/global/hdb/custom/config/global.ini Example: # global.ini last modified 2023-07-14 16:31:14.120444 by hdbnsutil -sr_register --remoteHost=remotehost3 --remoteInstance=02 --replicationMode=syncmem --operationMode=logreplay --name=DC2 [multidb] mode = multidb database_isolation = low singletenant = yes [ha_dr_provider_SAPHanaSR] provider = SAPHanaSR path = /hana/shared/myHooks execution_order = 1 [persistence] basepath_datavolumes = /hana/data/RH2 basepath_logvolumes = /hana/log/RH2 log_mode = normal enable_auto_log_backup = true [system_replication] register_secondaries_on_takeover = true timetravel_logreplay_mode = auto operation_mode = logreplay mode = primary actual_mode = syncmem site_id = 1 site_name = DC2 [system_replication_site_masters] 2 = clusternode1:30201 [trace] ha_dr_saphanasr = info This option is active as soon as the SAP HANA database instance is started. 4.3.6. Configure global.ini on the third site Edit the global.ini as a <sid>adm user: remotehost3:rh2adm> vim /usr/sap/USD{SAPSYSTEMNAME}/SYS/global/hdb/custom/config/global.ini On remotehost3, the ha_dr_provider_SAPHanaSR section is not used. 
Example of global.ini on remotehost3: # global.ini last modified 2023-06-22 17:22:54.154508 by hdbnameserver [multidb] mode = multidb database_isolation = low singletenant = yes [persistence] basepath_datavolumes = /hana/data/RH2 basepath_logvolumes = /hana/log/RH2 log_mode = normal enable_auto_log_backup = true [system_replication] operation_mode = logreplay register_secondaries_on_takeover = true reconnect_time_interval = 5 timetravel_logreplay_mode = auto site_id = 3 mode = syncmem actual_mode = syncmem site_name = DC3 [system_replication_site_masters] 2 = clusternode1:30201 4.3.7. Verify installation After the installation, you have to check if all HANA instances are up and running and that HANA System Replication is working between them. The easiest way is to check the systemReplicationStatus as described in more detail in Check the System Replication status . Please also refer to Check Database status , for further information. For HANA System Replication to work correctly, please ensure that the "log_mode" parameter is set to "normal". Please refer to Checking the log_mode of the SAP HANA database , for more information. To verify that the setup is working as expected, please run the test cases as described in the following chapters.
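One compact way to run the verification described above is to confirm the new global.ini option and the replication state from the primary node. The exit-code convention for systemReplicationStatus.py is an assumption based on common SAP HANA tooling behavior (15 usually means all services are ACTIVE), so treat this as a sketch:
clusternode1:rh2adm> grep register_secondaries_on_takeover /usr/sap/${SAPSYSTEMNAME}/SYS/global/hdb/custom/config/global.ini
clusternode1:rh2adm> hdbnsutil -sr_state
clusternode1:rh2adm> python /usr/sap/${SAPSYSTEMNAME}/HDB${TINSTANCE}/python_support/systemReplicationStatus.py; echo RC=$?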
[ "rpm -q resource-agents-sap-hana", "clusternode1:rh2adm> hdbnsutil -sr_state | egrep -e \"primary masters|^mode\" mode: primary", "clusternode1:rh2adm> scp -rp /usr/sap/USD{SAPSYSTEMNAME}/SYS/global/security/rsecssfs/data/SSFS_USD{SAPSYSTEMNAME}.DAT remotehost3:/usr/sap/USD{SAPSYSTEMNAME}/SYS/global/security/rsecssfs/data/SSFS_USD{SAPSYSTEMNAME}.DAT clusternode1:rh2adm> scp -rp /usr/sap/USD{SAPSYSTEMNAME}/SYS/global/security/rsecssfs/key/SSFS_USD{SAPSYSTEMNAME}.KEY remotehost3:/usr/sap/USD{SAPSYSTEMNAME}/SYS/global/security/rsecssfs/key/SSFS_USD{SAPSYSTEMNAME}.KEY", "clusternode1:rh2adm> watch python /usr/sap/USD{SAPSYSTEMNAME}/HDBUSD{TINSTANCE}/python_support/systemReplicationStatus.py", "remotehost3:rh2adm> hdbnsutil -sr_register --name=DC3 --remoteHost=clusternode1 --remoteInstance=USD{TINSTANCE} --replicationMode=async --operationMode=logreplay --online", "[system_replication] register_secondaries_on_takeover = true", "clusternode1:rh2adm> vim /usr/sap/USD{SAPSYSTEMNAME}/SYS/global/hdb/custom/config/global.ini", "global.ini last modified 2023-07-14 16:31:14.120444 by hdbnsutil -sr_register --remoteHost=remotehost3 --remoteInstance=02 --replicationMode=syncmem --operationMode=logreplay --name=DC2 [multidb] mode = multidb database_isolation = low singletenant = yes [ha_dr_provider_SAPHanaSR] provider = SAPHanaSR path = /hana/shared/myHooks execution_order = 1 [persistence] basepath_datavolumes = /hana/data/RH2 basepath_logvolumes = /hana/log/RH2 log_mode = normal enable_auto_log_backup = true [system_replication] register_secondaries_on_takeover = true timetravel_logreplay_mode = auto operation_mode = logreplay mode = primary actual_mode = syncmem site_id = 1 site_name = DC2 [system_replication_site_masters] 2 = clusternode1:30201 [trace] ha_dr_saphanasr = info", "remotehost3:rh2adm> vim /usr/sap/USD{SAPSYSTEMNAME}/SYS/global/hdb/custom/config/global.ini", "global.ini last modified 2023-06-22 17:22:54.154508 by hdbnameserver [multidb] mode = multidb database_isolation = low singletenant = yes [persistence] basepath_datavolumes = /hana/data/RH2 basepath_logvolumes = /hana/log/RH2 log_mode = normal enable_auto_log_backup = true [system_replication] operation_mode = logreplay register_secondaries_on_takeover = true reconnect_time_interval = 5 timetravel_logreplay_mode = auto site_id = 3 mode = syncmem actual_mode = syncmem site_name = DC3 [system_replication_site_masters] 2 = clusternode1:30201" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/8/html/configuring_sap_hana_scale-up_multitarget_system_replication_for_disaster_recovery/asmb_installation_v8-configuring-hana-scale-up-multitarget-system-replication-disaster-recovery
Chapter 35. identity
Chapter 35. identity This chapter describes the commands under the identity command. 35.1. identity provider create Create new identity provider Usage: Table 35.1. Positional arguments Value Summary <name> New identity provider name (must be unique) Table 35.2. Command arguments Value Summary -h, --help Show this help message and exit --remote-id <remote-id> Remote ids to associate with the identity provider (repeat option to provide multiple values) --remote-id-file <file-name> Name of a file that contains many remote ids to associate with the identity provider, one per line --description <description> New identity provider description --domain <domain> Domain to associate with the identity provider. if not specified, a domain will be created automatically. (Name or ID) --authorization-ttl <authorization-ttl> Time to keep the role assignments for users authenticating via this identity provider. When not provided, global default configured in the Identity service will be used. Available since Identity API version 3.14 (Ussuri). --enable Enable identity provider (default) --disable Disable the identity provider Table 35.3. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 35.4. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 35.5. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 35.6. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 35.2. identity provider delete Delete identity provider(s) Usage: Table 35.7. Positional arguments Value Summary <identity-provider> Identity provider(s) to delete Table 35.8. Command arguments Value Summary -h, --help Show this help message and exit 35.3. identity provider list List identity providers Usage: Table 35.9. Command arguments Value Summary -h, --help Show this help message and exit --id <id> The identity providers' id attribute --enabled The identity providers that are enabled will be returned Table 35.10. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated --sort-ascending Sort the column(s) in ascending order --sort-descending Sort the column(s) in descending order Table 35.11. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 35.12. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 35.13. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. 
implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 35.4. identity provider set Set identity provider properties Usage: Table 35.14. Positional arguments Value Summary <identity-provider> Identity provider to modify Table 35.15. Command arguments Value Summary -h, --help Show this help message and exit --description <description> Set identity provider description --remote-id <remote-id> Remote ids to associate with the identity provider (repeat option to provide multiple values) --remote-id-file <file-name> Name of a file that contains many remote ids to associate with the identity provider, one per line --authorization-ttl <authorization-ttl> Time to keep the role assignments for users authenticating via this identity provider. Available since Identity API version 3.14 (Ussuri). --enable Enable the identity provider --disable Disable the identity provider 35.5. identity provider show Display identity provider details Usage: Table 35.16. Positional arguments Value Summary <identity-provider> Identity provider to display Table 35.17. Command arguments Value Summary -h, --help Show this help message and exit Table 35.18. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated to show multiple columns Table 35.19. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 35.20. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 35.21. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show.
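Putting the options above together, a typical sequence might look like the following; the provider name, domain, and remote ID are hypothetical values used only for illustration:
openstack identity provider create --remote-id https://idp.example.com/auth --description "Example SAML IdP" --domain example_domain myidp
openstack identity provider list --enabled
openstack identity provider show myidp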
[ "openstack identity provider create [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--remote-id <remote-id> | --remote-id-file <file-name>] [--description <description>] [--domain <domain>] [--authorization-ttl <authorization-ttl>] [--enable | --disable] <name>", "openstack identity provider delete [-h] <identity-provider> [<identity-provider> ...]", "openstack identity provider list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--sort-ascending | --sort-descending] [--id <id>] [--enabled]", "openstack identity provider set [-h] [--description <description>] [--remote-id <remote-id> | --remote-id-file <file-name>] [--authorization-ttl <authorization-ttl>] [--enable | --disable] <identity-provider>", "openstack identity provider show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] <identity-provider>" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/command_line_interface_reference/identity
Chapter 1. Deploying Data Grid clusters as Helm chart releases
Chapter 1. Deploying Data Grid clusters as Helm chart releases Build, configure, and deploy Data Grid clusters with Helm. Data Grid provides a Helm chart that packages resources for running Data Grid clusters on OpenShift. Install the Data Grid chart to create a Helm release, which instantiates a Data Grid cluster in your OpenShift project. 1.1. Installing the Data Grid chart through the OpenShift console Use the OpenShift Web Console to install the Data Grid chart from the Red Hat developer catalog. Installing the chart creates a Helm release that deploys a Data Grid cluster. Prerequisites Have access to OpenShift. Procedure Log in to the OpenShift Web Console. Select the Developer perspective. Open the Add view and then select Helm Chart to browse the Red Hat developer catalog. Locate and select the Data Grid chart. Specify a name for the chart and select a version. Define values in the following sections of the Data Grid chart: Images configures the container images to use when creating pods for your Data Grid cluster. Deploy configures your Data Grid cluster. Tip To find descriptions for each value, select the YAML view option and access the schema. Edit the yaml configuration to customize your Data Grid chart. Select Install . Verification Select the Helm view in the Developer perspective. Select the Helm release you created to view details, resources, and other information. 1.2. Installing the Data Grid chart on the command line Use the command line to install the Data Grid chart on OpenShift and instantiate a Data Grid cluster. Installing the chart creates a Helm release that deploys a Data Grid cluster. Prerequisites Install the helm client. Add the OpenShift Helm Charts repository . Have access to an OpenShift cluster. Have an oc client. Procedure Create a values file that configures your Data Grid cluster. For example, the following values file creates a cluster with two nodes: Install the Data Grid chart and specify your values file. Tip Use the --set flag to override configuration values for the deployment. For example, to create a cluster with three nodes: Verification Watch the pods to ensure all nodes in the Data Grid cluster are created successfully. 1.3. Upgrading Data Grid Helm releases Modify your Data Grid cluster configuration at runtime by upgrading Helm releases. Prerequisites Deploy the Data Grid chart. Have a helm client. Have an oc client. Procedure Modify the values file for your Data Grid deployment as appropriate. Use the helm client to apply your changes, for example: Verification Watch the pods rebuild to ensure all changes are applied to your Data Grid cluster successfully. 1.4. Uninstalling Data Grid Helm releases Uninstall a release of the Data Grid chart to remove pods and other deployment artifacts. Note This procedure shows you how to uninstall a Data Grid deployment on the command line but you can use the OpenShift Web Console instead. Refer to the OpenShift documentation for specific instructions. Prerequisites Deploy the Data Grid chart. Have a helm client. Have an oc client. Procedure List the installed Data Grid Helm releases. Use the helm client to uninstall a release and remove the Data Grid cluster: USD helm uninstall <helm_release_name> Use the oc client to remove the generated secret. USD oc delete secret <helm_release_name>-generated-secret 1.5. Deployment configuration values Deployment configuration values let you customize Data Grid clusters. Tip You can also find field and value descriptions in the Data Grid chart README . 
Field Description Default value deploy.clusterDomain Specifies the internal Kubernetes cluster domain. cluster.local deploy.replicas Specifies the number of nodes in your Data Grid cluster, with a pod created for each node. 1 deploy.container.extraJvmOpts Passes JVM options to Data Grid Server. No default value. deploy.container.libraries Libraries to be downloaded before server startup. Specify multiple, space-separated artifacts represented as URLs or as Maven coordinates. Archive artifacts in .tar, .tar.gz or .zip formats will be extracted. No default value. deploy.container.storage.ephemeral Defines whether storage is ephemeral or permanent. The default value is false , which means data is permanent. Set the value to true to use ephemeral storage, which means all data is deleted when clusters shut down or restart. deploy.container.storage.size Defines how much storage is allocated to each Data Grid pod. 1Gi deploy.container.storage.storageClassName Specifies the name of a StorageClass object to use for the persistent volume claim (PVC). No default value. By default, the persistent volume claim uses the storage class that has the storageclass.kubernetes.io/is-default-class annotation set to true . If you include this field, you must specify an existing storage class as the value. deploy.container.resources.limits.cpu Defines the CPU limit, in CPU units, for each Data Grid pod. 500m deploy.container.resources.limits.memory Defines the maximum amount of memory, in bytes, for each Data Grid pod. 512Mi deploy.container.resources.requests.cpu Specifies the maximum CPU requests, in CPU units, for each Data Grid pod. 500m deploy.container.resources.requests.memory Specifies the maximum memory requests, in bytes, for each Data Grid pod. 512Mi deploy.security.secretName Specifies the name of a secret that creates credentials and configures security authorization. No default value. If you create a custom security secret then deploy.security.batch does not take effect. deploy.security.batch Provides a batch file for the Data Grid command line interface (CLI) to create credentials and configure security authorization at startup. No default value. deploy.expose.type Specifies the service that exposes Hot Rod and REST endpoints on the network and provides access to your Data Grid cluster, including the Data Grid Console. Route Valid options are: "" (empty value), Route , LoadBalancer , and NodePort . Set an empty value ( "" ) if you do not want to expose Data Grid on the network. deploy.expose.nodePort Specifies a network port for node port services within the default range of 30000 to 32767. 0 If you do not specify a port, the platform selects an available one. deploy.expose.host Optionally specifies the hostname where the Route is exposed. No default value. deploy.expose.annotations Adds annotations to the service that exposes Data Grid on the network. No default value. deploy.logging.categories Configures Data Grid cluster log categories and levels. No default value. deploy.podLabels Adds labels to each Data Grid pod that you create. No default value. deploy.svcLabels Adds labels to each service that you create. No default value. deploy.resourceLabels Adds labels to all Data Grid resources including pods and services. No default value. deploy.makeDataDirWritable Allows write access to the data directory for each Data Grid Server node. false If you set the value to true , Data Grid creates an initContainer that runs chmod -R on the /opt/infinispan/server/data directory to change permissions. 
deploy.securityContext Configures the securityContext used by the StatefulSet pods. {} You can use this to change the group of mounted file systems. Set securityContext.fsGroup to 185 if you need to explicitly match the group owner of /opt/infinispan/server/data to Data Grid's default group. deploy.monitoring.enabled Enables or disables monitoring using ServiceMonitor . false Users must have the monitoring-edit role assigned by an administrator to deploy the Helm chart with ServiceMonitor enabled. deploy.nameOverride Specifies a name for all Data Grid cluster resources. Helm Chart release name. deploy.infinispan Data Grid Server configuration. Data Grid provides default server configuration. For more information about configuring server instances, see Data Grid Server configuration values .
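The following values file is a minimal sketch that combines several of the fields described in this table; only the deploy section is shown. The nesting of the keys follows the dotted field paths above, and the replica count, storage size, JVM options, and batch credentials are illustrative assumptions rather than recommended settings, so check the Data Grid chart README for the exact schema before using it.

# illustrative values file for the Data Grid chart; adjust every value for your environment
deploy:
  # create three Data Grid pods
  replicas: 3
  clusterDomain: cluster.local
  container:
    extraJvmOpts: "-Xms512m -Xmx512m"
    storage:
      # keep data across restarts and allocate 2Gi per pod
      ephemeral: false
      size: 2Gi
  security:
    # sample CLI batch that creates a user; replace the credentials before use
    batch: "user create admin -p changeme"
  expose:
    # expose Hot Rod and REST endpoints, including the console, through a Route
    type: Route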
[ "cat > infinispan-values.yaml<<EOF #Build configuration images: server: registry.redhat.io/datagrid/datagrid-8-rhel8:latest initContainer: registry.access.redhat.com/ubi8-micro #Deployment configuration deploy: #Add a user with full security authorization. security: batch: \"user create admin -p changeme\" #Create a cluster with two pods. replicas: 2 #Specify the internal Kubernetes cluster domain. clusterDomain: cluster.local EOF", "helm install infinispan openshift-helm-charts/redhat-data-grid --values infinispan-values.yaml", "--set deploy.replicas=3", "oc get pods -w", "helm upgrade infinispan openshift-helm-charts/redhat-data-grid --values infinispan-values.yaml", "oc get pods -w", "helm list", "helm uninstall <helm_release_name>", "oc delete secret <helm_release_name>-generated-secret" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/building_and_deploying_data_grid_clusters_with_helm/install
Preface
Preface Red Hat OpenShift Data Foundation supports deployment on existing Red Hat OpenShift Service on AWS with hosted control planes. Note Only internal OpenShift Data Foundation clusters are supported on AWS. See Planning your deployment and Preparing to deploy OpenShift Data Foundation for more information about deployment requirements. To deploy OpenShift Data Foundation, start with the requirements in Preparing to deploy OpenShift Data Foundation chapter and then follow the deployment process in Deploying using dynamic storage devices .
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/deploying_openshift_data_foundation_using_red_hat_openshift_service_on_aws_with_hosted_control_planes/preface-rosahcp
Chapter 1. Preparing to deploy OpenShift Data Foundation
Chapter 1. Preparing to deploy OpenShift Data Foundation Deploying OpenShift Data Foundation on OpenShift Container Platform using dynamic storage devices provides you with the option to create internal cluster resources. This will result in the internal provisioning of the base services, which helps to make additional storage classes available to applications. Before you begin the deployment of Red Hat OpenShift Data Foundation, follow these steps: Optional: If you want to enable cluster-wide encryption using the external Key Management System (KMS) HashiCorp Vault, follow these steps: Ensure that you have a valid Red Hat OpenShift Data Foundation Advanced subscription. To know how subscriptions for OpenShift Data Foundation work, see the knowledgebase article on OpenShift Data Foundation subscriptions . When the Token authentication method is selected for encryption, refer to Enabling cluster-wide encryption with the Token authentication using KMS . When the Kubernetes authentication method is selected for encryption, refer to Enabling cluster-wide encryption with the Kubernetes authentication using KMS . Ensure that you are using signed certificates on your Vault servers. Optional: If you want to enable cluster-wide encryption using the external Key Management System (KMS) Thales CipherTrust Manager, you must first enable the Key Management Interoperability Protocol (KMIP) and use signed certificates on your server. Follow these steps: Create a KMIP client if one does not exist. From the user interface, select KMIP Client Profile Add Profile . Add the CipherTrust username to the Common Name field during profile creation. Create a token by navigating to KMIP Registration Token New Registration Token . Copy the token for use in the next step. To register the client, navigate to KMIP Registered Clients Add Client . Specify the Name . Paste the Registration Token from the previous step, then click Save . Download the Private Key and Client Certificate by clicking Save Private Key and Save Certificate respectively. To create a new KMIP interface, navigate to Admin Settings Interfaces Add Interface . Select KMIP Key Management Interoperability Protocol and click Next . Select a free Port . Select Network Interface as all . Select Interface Mode as TLS, verify client cert, user name taken from client cert, auth request is optional . (Optional) You can enable hard delete to delete both metadata and material when the key is deleted. It is disabled by default. Select the CA to be used, and click Save . To get the server CA certificate, click on the Action menu (...) on the right of the newly created interface, and click Download Certificate . Optional: If StorageClass encryption is to be enabled during deployment, create a key to act as the Key Encryption Key (KEK): Navigate to Keys Add Key . Enter Key Name . Set the Algorithm and Size to AES and 256 respectively. Enable Create a key in Pre-Active state and set the date and time for activation. Ensure that Encrypt and Decrypt are enabled under Key Usage . Copy the ID of the newly created Key to be used as the Unique Identifier during deployment. Minimum starting node requirements An OpenShift Data Foundation cluster will be deployed with minimum configuration when the standard deployment resource requirement is not met. See the Resource requirements section in the Planning guide. 
Disaster recovery requirements [Technology Preview] Disaster Recovery features supported by Red Hat OpenShift Data Foundation require all of the following prerequisites to successfully implement a disaster recovery solution: A valid Red Hat OpenShift Data Foundation Advanced subscription A valid Red Hat Advanced Cluster Management for Kubernetes subscription To know how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . For detailed requirements, see Configuring OpenShift Data Foundation Disaster Recovery for OpenShift Workloads guide, and Requirements and recommendations section of the Install guide in Red Hat Advanced Cluster Management for Kubernetes documentation.
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/deploying_and_managing_openshift_data_foundation_using_red_hat_openstack_platform/preparing_to_deploy_openshift_data_foundation
Chapter 1. AWS DynamoDB Sink
Chapter 1. AWS DynamoDB Sink Send data to AWS DynamoDB service. The sent data will insert/update/delete an item on the given AWS DynamoDB table. Access Key/Secret Key are the basic method for authenticating to the AWS DynamoDB service. These parameters are optional, because the Kamelet also provides the following option 'useDefaultCredentialsProvider'. When using a default Credentials Provider, the AWS DynamoDB client will load the credentials through this provider and won't use the static credentials. This is the reason for not having access key and secret key as mandatory parameters for this Kamelet. This Kamelet expects a JSON object as the message body. The mapping between the JSON fields and table attribute values is done by key, so if you have the input as follows: {"username":"oscerd", "city":"Rome"} The Kamelet will insert/update an item in the given AWS DynamoDB table and set the attributes 'username' and 'city' respectively. Please note that the JSON object must include the primary key values that define the item. 1.1. Configuration Options The following table summarizes the configuration options available for the aws-ddb-sink Kamelet: Property Name Description Type Default Example region * AWS Region The AWS region to connect to string "eu-west-1" table * Table Name of the DynamoDB table to look at string accessKey Access Key The access key obtained from AWS string operation Operation The operation to perform (one of PutItem, UpdateItem, DeleteItem) string "PutItem" "PutItem" overrideEndpoint Endpoint Overwrite Set the need for overriding the endpoint URI. This option needs to be used in combination with uriEndpointOverride setting. boolean false secretKey Secret Key The secret key obtained from AWS string uriEndpointOverride Overwrite Endpoint URI Set the overriding endpoint URI. This option needs to be used in combination with overrideEndpoint option. string useDefaultCredentialsProvider Default Credentials Provider Set whether the DynamoDB client should expect to load credentials through a default credentials provider or to expect static credentials to be passed in. boolean false writeCapacity Write Capacity The provisioned throughput to reserve for writing resources to your table integer 1 Note Fields marked with an asterisk (*) are mandatory. 1.2. Dependencies At runtime, the aws-ddb-sink Kamelet relies upon the presence of the following dependencies: mvn:org.apache.camel.kamelets:camel-kamelets-utils:1.8.0 camel:core camel:jackson camel:aws2-ddb camel:kamelet 1.3. Usage This section describes how you can use the aws-ddb-sink . 1.3.1. Knative Sink You can use the aws-ddb-sink Kamelet as a Knative sink by binding it to a Knative object. aws-ddb-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: aws-ddb-sink-binding spec: source: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: aws-ddb-sink properties: region: "eu-west-1" table: "The Table" 1.3.1.1. Prerequisite Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 1.3.1.2. Procedure for using the cluster CLI Save the aws-ddb-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: 1.3.1.3. Procedure for using the Kamel CLI Configure and run the sink by using the following command: This command creates the KameletBinding in the current namespace on the cluster. 1.3.2. 
Kafka Sink You can use the aws-ddb-sink Kamelet as a Kafka sink by binding it to a Kafka topic. aws-ddb-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: aws-ddb-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: aws-ddb-sink properties: region: "eu-west-1" table: "The Table" 1.3.2.1. Prerequisites Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Make also sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 1.3.2.2. Procedure for using the cluster CLI Save the aws-ddb-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: 1.3.2.3. Procedure for using the Kamel CLI Configure and run the sink by using the following command: This command creates the KameletBinding in the current namespace on the cluster. 1.4. Kamelet source file https://github.com/openshift-integration/kamelet-catalog/aws-ddb-sink.kamelet.yaml
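As a sketch only, you can pass additional Kamelet properties on the same kamel bind command line to select the operation and supply static credentials; the table name, operation, and credential placeholders below are assumptions for illustration:

# DeleteItem removes the item whose primary key matches the incoming JSON body;
# replace the placeholders with your own table name and AWS credentials
kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic aws-ddb-sink \
  -p "sink.region=eu-west-1" \
  -p "sink.table=my-table" \
  -p "sink.operation=DeleteItem" \
  -p "sink.accessKey=<aws-access-key>" \
  -p "sink.secretKey=<aws-secret-key>"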
[ "apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: aws-ddb-sink-binding spec: source: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: aws-ddb-sink properties: region: \"eu-west-1\" table: \"The Table\"", "apply -f aws-ddb-sink-binding.yaml", "kamel bind channel:mychannel aws-ddb-sink -p \"sink.region=eu-west-1\" -p \"sink.table=The Table\"", "apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: aws-ddb-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: aws-ddb-sink properties: region: \"eu-west-1\" table: \"The Table\"", "apply -f aws-ddb-sink-binding.yaml", "kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic aws-ddb-sink -p \"sink.region=eu-west-1\" -p \"sink.table=The Table\"" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.7/html/kamelets_reference/aws-ddb-sink
Troubleshooting Red Hat build of OpenJDK 8 for Windows
Troubleshooting Red Hat build of OpenJDK 8 for Windows Red Hat build of OpenJDK 8 Red Hat Developer Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/8/html/troubleshooting_red_hat_build_of_openjdk_8_for_windows/index
Chapter 71. Running the test scenarios
Chapter 71. Running the test scenarios After creating a test scenario template and defining the test scenarios, you can run the tests to validate your business rules and data. Procedure To run defined test scenarios, do any of the following tasks: To execute all the available test scenarios in your project inside multiple assets, in the upper-right corner of your project page, click Test . Figure 71.1. Run all the test scenarios from the project view To execute all available test scenarios defined in a .scesim file, at the top of the Test Scenario designer, click the Run Test icon. To run a single test scenario defined in a single .scesim file, right-click the row of the test scenario you want to run and select Run scenario . The Test Report panel displays the overview of the tests and the scenario status. After the tests execute, if the values entered in the test scenario table do not match with the expected values, then the corresponding cells are highlighted. If tests fail, you can do the following tasks to troubleshoot the failure: To review the error message in the pop-up window, hover your mouse cursor over the highlighted cell. To open the Alerts panel at the bottom of the designer or the project view for the error messages, click View Alerts . Make the necessary changes and run the test again until the scenario passes.
null
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/developing_decision_services_in_red_hat_decision_manager/test-designer-run-test-proc
19.5. Network Options
19.5. Network Options This section provides information about network options. TAP network -netdev tap[,id=<id>][,<options>...] The following options are supported (all use name=value format): ifname fd script downscript sndbuf vnet_hdr vhost vhostfd vhostforce
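As an illustrative sketch only, a qemu-kvm command line that attaches a guest NIC to an existing TAP device using the whitelisted options might look like the following; the guest name, image path, TAP device name, and MAC address are assumptions:

# tap0 must already exist on the host; script=no and downscript=no skip the ifup/ifdown scripts,
# and vhost=on enables in-kernel packet processing for the interface
/usr/libexec/qemu-kvm -name rhel6guest -m 1024 \
    -drive file=/var/lib/libvirt/images/rhel6guest.img,if=virtio \
    -netdev tap,id=hostnet0,ifname=tap0,script=no,downscript=no,vhost=on \
    -device virtio-net-pci,netdev=hostnet0,mac=52:54:00:12:34:56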
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sec-qemu_kvm_whitelist_network_options
Chapter 6. AMQ Streams Operators
Chapter 6. AMQ Streams Operators AMQ Streams supports Kafka using Operators to deploy and manage the components and dependencies of Kafka to OpenShift. Operators are a method of packaging, deploying, and managing an OpenShift application. AMQ Streams Operators extend OpenShift functionality, automating common and complex tasks related to a Kafka deployment. By implementing knowledge of Kafka operations in code, Kafka administration tasks are simplified and require less manual intervention. Operators AMQ Streams provides Operators for managing a Kafka cluster running within an OpenShift cluster. Cluster Operator Deploys and manages Apache Kafka clusters, Kafka Connect, Kafka MirrorMaker, Kafka Bridge, Kafka Exporter, Cruise Control, and the Entity Operator Entity Operator Comprises the Topic Operator and User Operator Topic Operator Manages Kafka topics User Operator Manages Kafka users The Cluster Operator can deploy the Topic Operator and User Operator as part of an Entity Operator configuration at the same time as a Kafka cluster. Operators within the AMQ Streams architecture 6.1. Cluster Operator AMQ Streams uses the Cluster Operator to deploy and manage clusters for: Kafka (including ZooKeeper, Entity Operator, Kafka Exporter, and Cruise Control) Kafka Connect Kafka MirrorMaker Kafka Bridge Custom resources are used to deploy the clusters. For example, to deploy a Kafka cluster: A Kafka resource with the cluster configuration is created within the OpenShift cluster. The Cluster Operator deploys a corresponding Kafka cluster, based on what is declared in the Kafka resource. The Cluster Operator can also deploy (through configuration of the Kafka resource): A Topic Operator to provide operator-style topic management through KafkaTopic custom resources A User Operator to provide operator-style user management through KafkaUser custom resources The Topic Operator and User Operator function within the Entity Operator on deployment. You can use the Cluster Operator with a deployment of AMQ Streams Drain Cleaner to help with pod evictions. By deploying the AMQ Streams Drain Cleaner, you can use the Cluster Operator to move Kafka pods instead of OpenShift. AMQ Streams Drain Cleaner annotates pods being evicted with a rolling update annotation. The annotation informs the Cluster Operator to perform the rolling update. Example architecture for the Cluster Operator 6.2. Topic Operator The Topic Operator provides a way of managing topics in a Kafka cluster through OpenShift resources. Example architecture for the Topic Operator The role of the Topic Operator is to keep a set of KafkaTopic OpenShift resources describing Kafka topics in-sync with corresponding Kafka topics. Specifically, if a KafkaTopic is: Created, the Topic Operator creates the topic Deleted, the Topic Operator deletes the topic Changed, the Topic Operator updates the topic Working in the other direction, if a topic is: Created within the Kafka cluster, the Operator creates a KafkaTopic Deleted from the Kafka cluster, the Operator deletes the KafkaTopic Changed in the Kafka cluster, the Operator updates the KafkaTopic This allows you to declare a KafkaTopic as part of your application's deployment and the Topic Operator will take care of creating the topic for you. Your application just needs to deal with producing or consuming from the necessary topics. 
The Topic Operator maintains information about each topic in a topic store , which is continually synchronized with updates from Kafka topics or OpenShift KafkaTopic custom resources. Updates from operations applied to a local in-memory topic store are persisted to a backup topic store on disk. If a topic is reconfigured or reassigned to other brokers, the KafkaTopic will always be up to date. 6.3. User Operator The User Operator manages Kafka users for a Kafka cluster by watching for KafkaUser resources that describe Kafka users, and ensuring that they are configured properly in the Kafka cluster. For example, if a KafkaUser is: Created, the User Operator creates the user it describes Deleted, the User Operator deletes the user it describes Changed, the User Operator updates the user it describes Unlike the Topic Operator, the User Operator does not sync any changes from the Kafka cluster with the OpenShift resources. Kafka topics can be created by applications directly in Kafka, but it is not expected that the users will be managed directly in the Kafka cluster in parallel with the User Operator. The User Operator allows you to declare a KafkaUser resource as part of your application's deployment. You can specify the authentication and authorization mechanism for the user. You can also configure user quotas that control usage of Kafka resources to ensure, for example, that a user does not monopolize access to a broker. When the user is created, the user credentials are created in a Secret . Your application needs to use the user and its credentials for authentication and to produce or consume messages. In addition to managing credentials for authentication, the User Operator also manages authorization rules by including a description of the user's access rights in the KafkaUser declaration. 6.4. Feature gates in AMQ Streams Operators You can enable and disable some features of operators using feature gates . Feature gates are set in the operator configuration and have three stages of maturity: alpha, beta, or General Availability (GA). For more information, see Feature gates .
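For illustration only, a Kafka custom resource that asks the Cluster Operator to deploy the Topic Operator and User Operator alongside the cluster might look like the following sketch; the cluster name, replica counts, listener, and storage settings are assumptions, not recommendations:

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    replicas: 3
    listeners:
      # internal plain listener for clients inside the OpenShift cluster
      - name: plain
        port: 9092
        type: internal
        tls: false
    storage:
      type: ephemeral
  zookeeper:
    replicas: 3
    storage:
      type: ephemeral
  # deploys the Topic Operator and User Operator as the Entity Operator
  entityOperator:
    topicOperator: {}
    userOperator: {}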
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.1/html/amq_streams_on_openshift_overview/overview-components_str
Chapter 2. Prerequisites
Chapter 2. Prerequisites Installer-provisioned installation of OpenShift Container Platform requires: One provisioner node with Red Hat Enterprise Linux (RHEL) 9.x installed. The provisioner can be removed after installation. Three control plane nodes Baseboard management controller (BMC) access to each node At least one network: One required routable network One optional provisioning network One optional management network Before starting an installer-provisioned installation of OpenShift Container Platform, ensure the hardware environment meets the following requirements. 2.1. Node requirements Installer-provisioned installation involves a number of hardware node requirements: CPU architecture: All nodes must use x86_64 or aarch64 CPU architecture. Similar nodes: Red Hat recommends nodes have an identical configuration per role. That is, Red Hat recommends nodes be the same brand and model with the same CPU, memory, and storage configuration. Baseboard Management Controller: The provisioner node must be able to access the baseboard management controller (BMC) of each OpenShift Container Platform cluster node. You may use IPMI, Redfish, or a proprietary protocol. Latest generation: Nodes must be of the most recent generation. Installer-provisioned installation relies on BMC protocols, which must be compatible across nodes. Additionally, RHEL 9.x ships with the most recent drivers for RAID controllers. Ensure that the nodes are recent enough to support RHEL 9.x for the provisioner node and RHCOS 9.x for the control plane and worker nodes. Registry node: (Optional) If setting up a disconnected mirrored registry, it is recommended the registry reside in its own node. Provisioner node: Installer-provisioned installation requires one provisioner node. Control plane: Installer-provisioned installation requires three control plane nodes for high availability. You can deploy an OpenShift Container Platform cluster with only three control plane nodes, making the control plane nodes schedulable as worker nodes. Smaller clusters are more resource efficient for administrators and developers during development, production, and testing. Worker nodes: While not required, a typical production cluster has two or more worker nodes. Important Do not deploy a cluster with only one worker node, because the cluster will deploy with routers and ingress traffic in a degraded state. Network interfaces: Each node must have at least one network interface for the routable baremetal network. Each node must have one network interface for a provisioning network when using the provisioning network for deployment. Using the provisioning network is the default configuration. Note Only one network card (NIC) on the same subnet can route traffic through the gateway. By default, Address Resolution Protocol (ARP) uses the lowest numbered NIC. Use a single NIC for each node in the same subnet to ensure that network load balancing works as expected. When using multiple NICs for a node in the same subnet, use a single bond or team interface. Then add the other IP addresses to that interface in the form of an alias IP address. If you require fault tolerance or load balancing at the network interface level, use an alias IP address on the bond or team interface. Alternatively, you can disable a secondary NIC on the same subnet or ensure that it has no IP address. 
Unified Extensible Firmware Interface (UEFI): Installer-provisioned installation requires UEFI boot on all OpenShift Container Platform nodes when using IPv6 addressing on the provisioning network. In addition, UEFI Device PXE Settings must be set to use the IPv6 protocol on the provisioning network NIC, but omitting the provisioning network removes this requirement. Important When starting the installation from virtual media such as an ISO image, delete all old UEFI boot table entries. If the boot table includes entries that are not generic entries provided by the firmware, the installation might fail. Secure Boot: Many production scenarios require nodes with Secure Boot enabled to verify the node only boots with trusted software, such as UEFI firmware drivers, EFI applications, and the operating system. You may deploy with Secure Boot manually or managed. Manually: To deploy an OpenShift Container Platform cluster with Secure Boot manually, you must enable UEFI boot mode and Secure Boot on each control plane node and each worker node. Red Hat supports Secure Boot with manually enabled UEFI and Secure Boot only when installer-provisioned installations use Redfish virtual media. See "Configuring nodes for Secure Boot manually" in the "Configuring nodes" section for additional details. Managed: To deploy an OpenShift Container Platform cluster with managed Secure Boot, you must set the bootMode value to UEFISecureBoot in the install-config.yaml file. Red Hat only supports installer-provisioned installation with managed Secure Boot on 10th generation HPE hardware and 13th generation Dell hardware running firmware version 2.75.75.75 or greater. Deploying with managed Secure Boot does not require Redfish virtual media. See "Configuring managed Secure Boot" in the "Setting up the environment for an OpenShift installation" section for details. Note Red Hat does not support managing self-generated keys, or other keys, for Secure Boot. 2.2. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 2.1. Minimum resource requirements Machine Operating System CPU [1] RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHEL 4 16 GB 100 GB 300 Control plane RHCOS 4 16 GB 100 GB 300 Compute RHCOS 2 8 GB 100 GB 300 One CPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = CPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. Note As of OpenShift Container Platform version 4.13, RHCOS is based on RHEL version 9.2, which updates the micro-architecture requirements. The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. 2.3. 
Planning a bare metal cluster for OpenShift Virtualization If you will use OpenShift Virtualization, it is important to be aware of several requirements before you install your bare metal cluster. If you want to use live migration features, you must have multiple worker nodes at the time of cluster installation . This is because live migration requires the cluster-level high availability (HA) flag to be set to true. The HA flag is set when a cluster is installed and cannot be changed afterwards. If there are fewer than two worker nodes defined when you install your cluster, the HA flag is set to false for the life of the cluster. Note You can install OpenShift Virtualization on a single-node cluster, but single-node OpenShift does not support high availability. Live migration requires shared storage. Storage for OpenShift Virtualization must support and use the ReadWriteMany (RWX) access mode. If you plan to use Single Root I/O Virtualization (SR-IOV), ensure that your network interface controllers (NICs) are supported by OpenShift Container Platform. Additional resources Preparing your cluster for OpenShift Virtualization About Single Root I/O Virtualization (SR-IOV) hardware networks Connecting a virtual machine to an SR-IOV network 2.4. Firmware requirements for installing with virtual media The installation program for installer-provisioned OpenShift Container Platform clusters validates the hardware and firmware compatibility with Redfish virtual media. The installation program does not begin installation on a node if the node firmware is not compatible. The following tables list the minimum firmware versions tested and verified to work for installer-provisioned OpenShift Container Platform clusters deployed by using Redfish virtual media. Note Red Hat does not test every combination of firmware, hardware, or other third-party components. For further information about third-party support, see Red Hat third-party support policy . For information about updating the firmware, see the hardware documentation for the nodes or contact the hardware vendor. Table 2.2. Firmware compatibility for HP hardware with Redfish virtual media Model Management Firmware versions 10th Generation iLO5 2.63 or later Table 2.3. Firmware compatibility for Dell hardware with Redfish virtual media Model Management Firmware versions 15th Generation iDRAC 9 v6.10.30.00 and v7.00.60.00 14th Generation iDRAC 9 v6.10.30.00 13th Generation iDRAC 8 v2.75.75.75 or later Table 2.4. Firmware compatibility for Cisco UCS hardware with Redfish virtual media Model Management Firmware versions UCS UCSX-210C-M6 CIMC 5.2(2) or later Additional resources Unable to discover new bare metal hosts using the BMC 2.5. Network requirements Installer-provisioned installation of OpenShift Container Platform involves multiple network requirements. First, installer-provisioned installation involves an optional non-routable provisioning network for provisioning the operating system on each bare-metal node. Second, installer-provisioned installation involves a routable baremetal network. 2.5.1. Ensuring required ports are open Certain ports must be open between cluster nodes for installer-provisioned installations to complete successfully. In certain situations, such as using separate subnets for far edge worker nodes, you must ensure that the nodes in these subnets can communicate with nodes in the other subnets on the following required ports. Table 2.5. 
Required ports Port Description 67 , 68 When using a provisioning network, cluster nodes access the dnsmasq DHCP server over their provisioning network interfaces using ports 67 and 68 . 69 When using a provisioning network, cluster nodes communicate with the TFTP server on port 69 using their provisioning network interfaces. The TFTP server runs on the bootstrap VM. The bootstrap VM runs on the provisioner node. 80 When not using the image caching option or when using virtual media, the provisioner node must have port 80 open on the baremetal machine network interface to stream the Red Hat Enterprise Linux CoreOS (RHCOS) image from the provisioner node to the cluster nodes. 123 The cluster nodes must access the NTP server on port 123 using the baremetal machine network. 5050 The Ironic Inspector API runs on the control plane nodes and listens on port 5050 . The Inspector API is responsible for hardware introspection, which collects information about the hardware characteristics of the bare-metal nodes. 5051 Port 5050 uses port 5051 as a proxy. 6180 When deploying with virtual media and not using TLS, the provisioner node and the control plane nodes must have port 6180 open on the baremetal machine network interface so that the baseboard management controller (BMC) of the worker nodes can access the RHCOS image. Starting with OpenShift Container Platform 4.13, the default HTTP port is 6180 . 6183 When deploying with virtual media and using TLS, the provisioner node and the control plane nodes must have port 6183 open on the baremetal machine network interface so that the BMC of the worker nodes can access the RHCOS image. 6385 The Ironic API server runs initially on the bootstrap VM and later on the control plane nodes and listens on port 6385 . The Ironic API allows clients to interact with Ironic for bare-metal node provisioning and management, including operations such as enrolling new nodes, managing their power state, deploying images, and cleaning the hardware. 6388 Port 6385 uses port 6388 as a proxy. 8080 When using image caching without TLS, port 8080 must be open on the provisioner node and accessible by the BMC interfaces of the cluster nodes. 8083 When using the image caching option with TLS, port 8083 must be open on the provisioner node and accessible by the BMC interfaces of the cluster nodes. 9999 By default, the Ironic Python Agent (IPA) listens on TCP port 9999 for API calls from the Ironic conductor service. Communication between the bare-metal node where IPA is running and the Ironic conductor service uses this port. 2.5.2. Increase the network MTU Before deploying OpenShift Container Platform, increase the network maximum transmission unit (MTU) to 1500 or more. If the MTU is lower than 1500, the Ironic image that is used to boot the node might fail to communicate with the Ironic inspector pod, and inspection will fail. If this occurs, installation stops because the nodes are not available for installation. 2.5.3. Configuring NICs OpenShift Container Platform deploys with two networks: provisioning : The provisioning network is an optional non-routable network used for provisioning the underlying operating system on each node that is a part of the OpenShift Container Platform cluster. The network interface for the provisioning network on each cluster node must have the BIOS or UEFI configured to PXE boot. 
The provisioningNetworkInterface configuration setting specifies the provisioning network NIC name on the control plane nodes, which must be identical on the control plane nodes. The bootMACAddress configuration setting provides a means to specify a particular NIC on each node for the provisioning network. The provisioning network is optional, but it is required for PXE booting. If you deploy without a provisioning network, you must use a virtual media BMC addressing option such as redfish-virtualmedia or idrac-virtualmedia . baremetal : The baremetal network is a routable network. You can use any NIC to interface with the baremetal network provided the NIC is not configured to use the provisioning network. Important When using a VLAN, each NIC must be on a separate VLAN corresponding to the appropriate network. 2.5.4. DNS requirements Clients access the OpenShift Container Platform cluster nodes over the baremetal network. A network administrator must configure a subdomain or subzone where the canonical name extension is the cluster name. <cluster_name>.<base_domain> For example: test-cluster.example.com OpenShift Container Platform includes functionality that uses cluster membership information to generate A/AAAA records. This resolves the node names to their IP addresses. After the nodes are registered with the API, the cluster can disperse node information without using CoreDNS-mDNS. This eliminates the network traffic associated with multicast DNS. CoreDNS requires both TCP and UDP connections to the upstream DNS server to function correctly. Ensure the upstream DNS server can receive both TCP and UDP connections from OpenShift Container Platform cluster nodes. In OpenShift Container Platform deployments, DNS name resolution is required for the following components: The Kubernetes API The OpenShift Container Platform application wildcard ingress API A/AAAA records are used for name resolution and PTR records are used for reverse name resolution. Red Hat Enterprise Linux CoreOS (RHCOS) uses the reverse records or DHCP to set the hostnames for all the nodes. Installer-provisioned installation includes functionality that uses cluster membership information to generate A/AAAA records. This resolves the node names to their IP addresses. In each record, <cluster_name> is the cluster name and <base_domain> is the base domain that you specify in the install-config.yaml file. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 2.6. Required DNS records Component Record Description Kubernetes API api.<cluster_name>.<base_domain>. An A/AAAA record and a PTR record identify the API load balancer. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. Routes *.apps.<cluster_name>.<base_domain>. The wildcard A/AAAA record refers to the application ingress load balancer. The application ingress load balancer targets the nodes that run the Ingress Controller pods. The Ingress Controller pods run on the worker nodes by default. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. For example, console-openshift-console.apps.<cluster_name>.<base_domain> is used as a wildcard route to the OpenShift Container Platform console. Tip You can use the dig command to verify DNS resolution. 2.5.5. 
Dynamic Host Configuration Protocol (DHCP) requirements By default, installer-provisioned installation deploys ironic-dnsmasq with DHCP enabled for the provisioning network. No other DHCP servers should be running on the provisioning network when the provisioningNetwork configuration setting is set to managed , which is the default value. If you have a DHCP server running on the provisioning network, you must set the provisioningNetwork configuration setting to unmanaged in the install-config.yaml file. Network administrators must reserve IP addresses for each node in the OpenShift Container Platform cluster for the baremetal network on an external DHCP server. 2.5.6. Reserving IP addresses for nodes with the DHCP server For the baremetal network, a network administrator must reserve several IP addresses, including: Two unique virtual IP addresses. One virtual IP address for the API endpoint. One virtual IP address for the wildcard ingress endpoint. One IP address for the provisioner node. One IP address for each control plane node. One IP address for each worker node, if applicable. Reserving IP addresses so they become static IP addresses Some administrators prefer to use static IP addresses so that each node's IP address remains constant in the absence of a DHCP server. To configure static IP addresses with NMState, see "(Optional) Configuring node network interfaces" in the "Setting up the environment for an OpenShift installation" section. Networking between external load balancers and control plane nodes External load balancing services and the control plane nodes must run on the same L2 network, and on the same VLAN when using VLANs to route traffic between the load balancing services and the control plane nodes. Important The storage interface requires a DHCP reservation or a static IP. The following table provides an exemplary embodiment of fully qualified domain names. The API and name server addresses begin with canonical name extensions. The hostnames of the control plane and worker nodes are exemplary, so you can use any host naming convention you prefer. Usage Host Name IP API api.<cluster_name>.<base_domain> <ip> Ingress LB (apps) *.apps.<cluster_name>.<base_domain> <ip> Provisioner node provisioner.<cluster_name>.<base_domain> <ip> Control-plane-0 openshift-control-plane-0.<cluster_name>.<base_domain> <ip> Control-plane-1 openshift-control-plane-1.<cluster_name>-.<base_domain> <ip> Control-plane-2 openshift-control-plane-2.<cluster_name>.<base_domain> <ip> Worker-0 openshift-worker-0.<cluster_name>.<base_domain> <ip> Worker-1 openshift-worker-1.<cluster_name>.<base_domain> <ip> Worker-n openshift-worker-n.<cluster_name>.<base_domain> <ip> Note If you do not create DHCP reservations, the installation program requires reverse DNS resolution to set the hostnames for the Kubernetes API node, the provisioner node, the control plane nodes, and the worker nodes. 2.5.7. Provisioner node requirements You must specify the MAC address for the provisioner node in your installation configuration. The bootMacAddress specification is typically associated with PXE network booting. However, the Ironic provisioning service also requires the bootMacAddress specification to identify nodes during the inspection of the cluster, or during node redeployment in the cluster. The provisioner node requires layer 2 connectivity for network booting, DHCP and DNS resolution, and local network communication. The provisioner node requires layer 3 connectivity for virtual media booting. 2.5.8. 
Network Time Protocol (NTP) Each OpenShift Container Platform node in the cluster must have access to an NTP server. OpenShift Container Platform nodes use NTP to synchronize their clocks. For example, cluster nodes use SSL/TLS certificates that require validation, which might fail if the date and time between the nodes are not in sync. Important Define a consistent clock date and time format in each cluster node's BIOS settings, or installation might fail. You can reconfigure the control plane nodes to act as NTP servers on disconnected clusters, and reconfigure worker nodes to retrieve time from the control plane nodes. 2.5.9. Port access for the out-of-band management IP address The out-of-band management IP address is on a separate network from the node. To ensure that the out-of-band management can communicate with the provisioner node during installation, the out-of-band management IP address must be granted access to port 6180 on the provisioner node and on the OpenShift Container Platform control plane nodes. TLS port 6183 is required for virtual media installation, for example, by using Redfish. Additional resources Using DNS forwarding 2.6. Configuring nodes Configuring nodes when using the provisioning network Each node in the cluster requires the following configuration for proper installation. Warning A mismatch between nodes will cause an installation failure. While the cluster nodes can contain more than two NICs, the installation process only focuses on the first two NICs. In the following table, NIC1 is a non-routable network ( provisioning ) that is only used for the installation of the OpenShift Container Platform cluster. NIC Network VLAN NIC1 provisioning <provisioning_vlan> NIC2 baremetal <baremetal_vlan> The Red Hat Enterprise Linux (RHEL) 9.x installation process on the provisioner node might vary. To install Red Hat Enterprise Linux (RHEL) 9.x using a local Satellite server or a PXE server, PXE-enable NIC2. PXE Boot order NIC1 PXE-enabled provisioning network 1 NIC2 baremetal network. PXE-enabled is optional. 2 Note Ensure PXE is disabled on all other NICs. Configure the control plane and worker nodes as follows: PXE Boot order NIC1 PXE-enabled (provisioning network) 1 Configuring nodes without the provisioning network The installation process requires one NIC: NIC Network VLAN NICx baremetal <baremetal_vlan> NICx is a routable network ( baremetal ) that is used for the installation of the OpenShift Container Platform cluster, and routable to the internet. Important The provisioning network is optional, but it is required for PXE booting. If you deploy without a provisioning network, you must use a virtual media BMC addressing option such as redfish-virtualmedia or idrac-virtualmedia . Configuring nodes for Secure Boot manually Secure Boot prevents a node from booting unless it verifies the node is using only trusted software, such as UEFI firmware drivers, EFI applications, and the operating system. Note Red Hat only supports manually configured Secure Boot when deploying with Redfish virtual media. To enable Secure Boot manually, refer to the hardware guide for the node and execute the following: Procedure Boot the node and enter the BIOS menu. Set the node's boot mode to UEFI Enabled . Enable Secure Boot. Important Red Hat does not support Secure Boot with self-generated keys. 2.7. Out-of-band management Nodes typically have an additional NIC used by the baseboard management controllers (BMCs). These BMCs must be accessible from the provisioner node. 
Each node must be accessible via out-of-band management. When using an out-of-band management network, the provisioner node requires access to the out-of-band management network for a successful OpenShift Container Platform installation. The out-of-band management setup is out of scope for this document. Using a separate management network for out-of-band management can enhance performance and improve security. However, using the provisioning network or the bare metal network are valid options. Note The bootstrap VM features a maximum of two network interfaces. If you configure a separate management network for out-of-band management, and you are using a provisioning network, the bootstrap VM requires routing access to the management network through one of the network interfaces. In this scenario, the bootstrap VM can then access three networks: the bare metal network the provisioning network the management network routed through one of the network interfaces 2.8. Required data for installation Prior to the installation of the OpenShift Container Platform cluster, gather the following information from all cluster nodes: Out-of-band management IP Examples Dell (iDRAC) IP HP (iLO) IP Fujitsu (iRMC) IP When using the provisioning network NIC ( provisioning ) MAC address NIC ( baremetal ) MAC address When omitting the provisioning network NIC ( baremetal ) MAC address 2.9. Validation checklist for nodes When using the provisioning network ❏ NIC1 VLAN is configured for the provisioning network. ❏ NIC1 for the provisioning network is PXE-enabled on the provisioner, control plane, and worker nodes. ❏ NIC2 VLAN is configured for the baremetal network. ❏ PXE has been disabled on all other NICs. ❏ DNS is configured with API and Ingress endpoints. ❏ Control plane and worker nodes are configured. ❏ All nodes accessible via out-of-band management. ❏ (Optional) A separate management network has been created. ❏ Required data for installation. When omitting the provisioning network ❏ NIC1 VLAN is configured for the baremetal network. ❏ DNS is configured with API and Ingress endpoints. ❏ Control plane and worker nodes are configured. ❏ All nodes accessible via out-of-band management. ❏ (Optional) A separate management network has been created. ❏ Required data for installation.
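The fragment below sketches how several of the settings discussed in this chapter map into install-config.yaml; the cluster name, interface name, MAC address, BMC address, and credentials are placeholders, the file is truncated, and the exact field placement should be verified against the full installation configuration reference:

apiVersion: v1
baseDomain: example.com
metadata:
  name: test-cluster
platform:
  baremetal:
    # name of the provisioning network NIC on the control plane nodes
    provisioningNetworkInterface: enp1s0
    hosts:
      - name: openshift-control-plane-0
        role: master
        # NIC used to identify the node during inspection and provisioning
        bootMACAddress: 52:54:00:aa:bb:01
        # managed Secure Boot on supported hardware
        bootMode: UEFISecureBoot
        bmc:
          address: redfish-virtualmedia://10.0.0.10/redfish/v1/Systems/1
          username: admin
          password: <password>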
[ "<cluster_name>.<base_domain>", "test-cluster.example.com" ]
https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.15/html/deploying_installer-provisioned_clusters_on_bare_metal/ipi-install-prerequisites
Chapter 5. Using the Pipelines as Code resolver
Chapter 5. Using the Pipelines as Code resolver The Pipelines as Code resolver ensures that a running pipeline run does not conflict with others. 5.1. About the Pipelines as Code resolver To split your pipeline and pipeline run, store the files in the .tekton/ directory or its subdirectories. If Pipelines as Code observes a pipeline run with a reference to a task or a pipeline in any YAML file located in the .tekton/ directory, Pipelines as Code automatically resolves the referenced task to provide a single pipeline run with an embedded spec in a PipelineRun object. If Pipelines as Code cannot resolve the referenced tasks in the Pipeline or PipelineSpec definition, the run fails before applying any changes to the cluster. You can see the issue on your Git provider platform and inside the events of the target namespace where the Repository CR is located. The resolver skips resolving if it observes the following type of tasks: A task or pipeline bundle. A custom task with an API version that does not have a tekton.dev/ prefix. The resolver uses such tasks literally, without any transformation. To test your pipeline run locally before sending it in a pull request, use the tkn pac resolve command. You can also reference remote pipelines and tasks. 5.2. Using remote task annotations with Pipelines as Code Pipelines as Code supports fetching remote tasks or pipelines by using annotations in a pipeline run. If you reference a remote task in a pipeline run, or a pipeline in a PipelineRun or a PipelineSpec object, the Pipelines as Code resolver automatically includes it. If there is any error while fetching the remote tasks or parsing them, Pipelines as Code stops processing the tasks. To include remote tasks, refer to the following examples of annotation: Reference remote tasks in Tekton Hub Reference a single remote task in Tekton Hub. ... pipelinesascode.tekton.dev/task: "git-clone" 1 ... 1 Pipelines as Code includes the latest version of the task from the Tekton Hub. Reference multiple remote tasks from Tekton Hub ... pipelinesascode.tekton.dev/task: "[git-clone, golang-test, tkn]" ... Reference multiple remote tasks from Tekton Hub using the -<NUMBER> suffix. ... pipelinesascode.tekton.dev/task: "git-clone" pipelinesascode.tekton.dev/task-1: "golang-test" pipelinesascode.tekton.dev/task-2: "tkn" 1 ... 1 By default, Pipelines as Code interprets the string as the latest task to fetch from Tekton Hub. Reference a specific version of a remote task from Tekton Hub. ... pipelinesascode.tekton.dev/task: "[git-clone:0.1]" 1 ... 1 Refers to the 0.1 version of the git-clone remote task from Tekton Hub. Remote tasks using URLs ... pipelinesascode.tekton.dev/task: "<https://remote.url/task.yaml>" 1 ... 1 The public URL to the remote task. Note If you use GitHub and the remote task URL uses the same host as the Repository custom resource definition (CRD), Pipelines as Code uses the GitHub token and fetches the URL using the GitHub API. For example, if you have a repository URL similar to https://github.com/<organization>/<repository> and the remote HTTP URL references a GitHub blob similar to https://github.com/<organization>/<repository>/blob/<mainbranch>/<path>/<file> , Pipelines as Code fetches the task definition files from that private repository with the GitHub App token. When you work on a public GitHub repository, Pipelines as Code acts similarly for a GitHub raw URL such as https://raw.githubusercontent.com/<organization>/<repository>/<mainbranch>/<path>/<file> . 
GitHub App tokens are scoped to the owner or organization where the repository is located. When you use the GitHub webhook method, you can fetch any private or public repository on any organization where the personal token is allowed. Reference a task from a YAML file inside your repository ... pipelinesascode.tekton.dev/task: "<share/tasks/git-clone.yaml>" 1 ... 1 Relative path to the local file containing the task definition. 5.3. Using remote pipeline annotations with Pipelines as Code You can share a pipeline definition across multiple repositories by using the remote pipeline annotation. ... pipelinesascode.tekton.dev/pipeline: "<https://git.provider/raw/pipeline.yaml>" 1 ... 1 URL to the remote pipeline definition. You can also provide locations for files inside the same repository. Note You can reference only one pipeline definition using the annotation. 5.3.1. Overriding a task in a remote pipeline By default, if you use a remote pipeline annotation in a pipeline run, Pipelines as Code uses all the tasks that are a part of the remote pipeline. You can override a task in a remote pipeline by adding a task annotation to the pipeline run. The added task must have the same name as a task in the remote pipeline. For example, you might use the following pipeline run definition: Example pipeline run definition referencing a remote pipeline and overriding a task apiVersion: tekton.dev/v1 kind: PipelineRun metadata: annotations: pipelinesascode.tekton.dev/pipeline: "https://git.provider/raw/pipeline.yaml" pipelinesascode.tekton.dev/task: "./my-git-clone-task.yaml" For this example, assume the remote task found at https://git.provider/raw/pipeline.yaml includes a task named git-clone and the task that the my-git-clone-task.yaml file defines is also named git-clone . In this case, the pipeline run executes the remote pipeline, but replaces the task named git-clone in the pipeline with the task you defined.
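As a sketch of testing the resolver locally before opening a pull request, you can render the fully resolved pipeline run from your checked-out repository; the file path and parameter names below are assumptions for illustration:

# print the pipeline run with all referenced tasks and pipelines embedded,
# substituting the parameters the webhook event would normally provide
tkn pac resolve -f .tekton/pull-request.yaml \
  -p revision=main \
  -p repo_url=https://github.com/<organization>/<repository>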
[ "pipelinesascode.tekton.dev/task: \"git-clone\" 1", "pipelinesascode.tekton.dev/task: \"[git-clone, golang-test, tkn]\"", "pipelinesascode.tekton.dev/task: \"git-clone\" pipelinesascode.tekton.dev/task-1: \"golang-test\" pipelinesascode.tekton.dev/task-2: \"tkn\" 1", "pipelinesascode.tekton.dev/task: \"[git-clone:0.1]\" 1", "pipelinesascode.tekton.dev/task: \"<https://remote.url/task.yaml>\" 1", "pipelinesascode.tekton.dev/task: \"<share/tasks/git-clone.yaml>\" 1", "pipelinesascode.tekton.dev/pipeline: \"<https://git.provider/raw/pipeline.yaml>\" 1", "apiVersion: tekton.dev/v1 kind: PipelineRun metadata: annotations: pipelinesascode.tekton.dev/pipeline: \"https://git.provider/raw/pipeline.yaml\" pipelinesascode.tekton.dev/task: \"./my-git-clone-task.yaml\"" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_pipelines/1.18/html/pipelines_as_code/using-pac-resolver_using-repository-crd
Preface
Preface Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/red_hat_build_of_apache_camel_for_spring_boot_reference/pr01
Chapter 1. Use Case Considerations
Chapter 1. Use Case Considerations Because Amazon Web Services is an image-only service, there are common Satellite use cases that do not work or that require extra configuration in an Amazon Web Services environment. If you plan to use Satellite on AWS, ensure that the use case scenarios that you want to use are available in an AWS environment. 1.1. Use Cases Known to Work You can perform the following Red Hat Satellite use cases on AWS: Managing Red Hat Subscriptions Importing Content Managing Errata Registering a Host Manually Red Hat Insights Realm Integration via IdM OpenSCAP Remote Execution Subscriptions Not all Red Hat subscriptions are eligible to run in public cloud environments. For more information about subscription eligibility, see the Red Hat Cloud Access Page . You can create additional organizations and then import additional manifests to the organizations. For more information, see Creating an Organization in Administering Red Hat Satellite . Multi-homed Satellite and Capsule Multi-homed Satellite is not supported. Multi-homed Capsule is supported. To implement this, you can configure Capsules with a load balancer. For more information, see Configuring Capsules with a Load Balancer . You must do this when Satellite Server or Capsule Server has different internal and external DNS host names and there is no site-to-site VPN connection between the locations where you deploy Satellite Server and Capsule Server. On demand content sources You can use the On demand download policy to reduce the storage footprint of the server that runs Satellite. When you set the download policy to On Demand , content syncs to Satellite Server or Capsule Server when a content host requests it. For more information, see Importing Content in Managing content . 1.2. Use Cases that Do Not Work In AWS, you cannot manage DHCP. Because of this, most of Satellite Server's kickstart and PXE provisioning models are unusable. This includes: PXE Provisioning Discovery and Discovery Rules ISO Provisioning methods: PXE-Less Discovery (iPXE) Per-host ISO Generic ISO Full-host ISO
null
https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/deploying_red_hat_satellite_on_amazon_web_services/use_case_considerations
RPM Packaging Guide
RPM Packaging Guide Red Hat Enterprise Linux 7 Basic and advanced software packaging scenarios using the RPM package manager Customer Content Services [email protected] Marie Dolezelova Red Hat Customer Content Services [email protected] Maxim Svistunov Red Hat Customer Content Services Adam Miller Red Hat Adam Kvitek Red Hat Customer Content Services Petr Kovar Red Hat Customer Content Services Miroslav Suchy Red Hat
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/rpm_packaging_guide/index
Chapter 7. Apps APIs
Chapter 7. Apps APIs 7.1. Apps APIs 7.1.1. ControllerRevision [apps/v1] Description ControllerRevision implements an immutable snapshot of state data. Clients are responsible for serializing and deserializing the objects that contain their internal state. Once a ControllerRevision has been successfully created, it can not be updated. The API Server will fail validation of all requests that attempt to mutate the Data field. ControllerRevisions may, however, be deleted. Note that, due to its use by both the DaemonSet and StatefulSet controllers for update and rollback, this object is beta. However, it may be subject to name and representation changes in future releases, and clients should not depend on its stability. It is primarily for internal use by controllers. Type object 7.1.2. DaemonSet [apps/v1] Description DaemonSet represents the configuration of a daemon set. Type object 7.1.3. Deployment [apps/v1] Description Deployment enables declarative updates for Pods and ReplicaSets. Type object 7.1.4. ReplicaSet [apps/v1] Description ReplicaSet ensures that a specified number of pod replicas are running at any given time. Type object 7.1.5. StatefulSet [apps/v1] Description StatefulSet represents a set of pods with consistent identities. Identities are defined as: - Network: A single stable DNS and hostname. - Storage: As many VolumeClaims as requested. The StatefulSet guarantees that a given network identity will always map to the same storage identity. Type object 7.2. ControllerRevision [apps/v1] Description ControllerRevision implements an immutable snapshot of state data. Clients are responsible for serializing and deserializing the objects that contain their internal state. Once a ControllerRevision has been successfully created, it can not be updated. The API Server will fail validation of all requests that attempt to mutate the Data field. ControllerRevisions may, however, be deleted. Note that, due to its use by both the DaemonSet and StatefulSet controllers for update and rollback, this object is beta. However, it may be subject to name and representation changes in future releases, and clients should not depend on its stability. It is primarily for internal use by controllers. Type object Required revision 7.2.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources data RawExtension Data is the serialized representation of the state. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata revision integer Revision indicates the revision of the state represented by Data. 7.2.2. API endpoints The following API endpoints are available: /apis/apps/v1/controllerrevisions GET : list or watch objects of kind ControllerRevision /apis/apps/v1/watch/controllerrevisions GET : watch individual changes to a list of ControllerRevision. deprecated: use the 'watch' parameter with a list operation instead. 
/apis/apps/v1/namespaces/{namespace}/controllerrevisions DELETE : delete collection of ControllerRevision GET : list or watch objects of kind ControllerRevision POST : create a ControllerRevision /apis/apps/v1/watch/namespaces/{namespace}/controllerrevisions GET : watch individual changes to a list of ControllerRevision. deprecated: use the 'watch' parameter with a list operation instead. /apis/apps/v1/namespaces/{namespace}/controllerrevisions/{name} DELETE : delete a ControllerRevision GET : read the specified ControllerRevision PATCH : partially update the specified ControllerRevision PUT : replace the specified ControllerRevision /apis/apps/v1/watch/namespaces/{namespace}/controllerrevisions/{name} GET : watch changes to an object of kind ControllerRevision. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. 7.2.2.1. /apis/apps/v1/controllerrevisions Table 7.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. 
If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean `sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `"k8s.io/initial-events-end": "true"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. The semantic of the watch request is as following: - `resourceVersionMatch` = NotOlderThan is interpreted as "data at least as new as the provided `resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion` at least as fresh as the one provided by the ListOptions. If `resourceVersion` is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - `resourceVersionMatch` set to any other value or unset Invalid error is returned. Defaults to true if `resourceVersion=""` or `resourceVersion="0"` (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list or watch objects of kind ControllerRevision Table 7.2. HTTP responses HTTP code Reponse body 200 - OK ControllerRevisionList schema 401 - Unauthorized Empty 7.2.2.2. /apis/apps/v1/watch/controllerrevisions Table 7.3. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". 
Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. 
It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean `sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `"k8s.io/initial-events-end": "true"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. The semantic of the watch request is as following: - `resourceVersionMatch` = NotOlderThan is interpreted as "data at least as new as the provided `resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion` at least as fresh as the one provided by the ListOptions. If `resourceVersion` is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - `resourceVersionMatch` set to any other value or unset Invalid error is returned. Defaults to true if `resourceVersion=""` or `resourceVersion="0"` (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of ControllerRevision. deprecated: use the 'watch' parameter with a list operation instead. Table 7.4. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 7.2.2.3. /apis/apps/v1/namespaces/{namespace}/controllerrevisions Table 7.5. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 7.6. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of ControllerRevision Table 7.7. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. 
Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean `sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `"k8s.io/initial-events-end": "true"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. The semantic of the watch request is as following: - `resourceVersionMatch` = NotOlderThan is interpreted as "data at least as new as the provided `resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion` at least as fresh as the one provided by the ListOptions. If `resourceVersion` is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - `resourceVersionMatch` set to any other value or unset Invalid error is returned. Defaults to true if `resourceVersion=""` or `resourceVersion="0"` (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 7.8. Body parameters Parameter Type Description body DeleteOptions schema Table 7.9. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind ControllerRevision Table 7.10. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. 
If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean `sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `"k8s.io/initial-events-end": "true"` annotation. 
Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. The semantic of the watch request is as following: - `resourceVersionMatch` = NotOlderThan is interpreted as "data at least as new as the provided `resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion` at least as fresh as the one provided by the ListOptions. If `resourceVersion` is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - `resourceVersionMatch` set to any other value or unset Invalid error is returned. Defaults to true if `resourceVersion=""` or `resourceVersion="0"` (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 7.11. HTTP responses HTTP code Reponse body 200 - OK ControllerRevisionList schema 401 - Unauthorized Empty HTTP method POST Description create a ControllerRevision Table 7.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.13. Body parameters Parameter Type Description body ControllerRevision schema Table 7.14. HTTP responses HTTP code Reponse body 200 - OK ControllerRevision schema 201 - Created ControllerRevision schema 202 - Accepted ControllerRevision schema 401 - Unauthorized Empty 7.2.2.4. /apis/apps/v1/watch/namespaces/{namespace}/controllerrevisions Table 7.15. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 7.16. 
Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. 
Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean `sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `"k8s.io/initial-events-end": "true"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. The semantic of the watch request is as following: - `resourceVersionMatch` = NotOlderThan is interpreted as "data at least as new as the provided `resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion` at least as fresh as the one provided by the ListOptions. If `resourceVersion` is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - `resourceVersionMatch` set to any other value or unset Invalid error is returned. Defaults to true if `resourceVersion=""` or `resourceVersion="0"` (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of ControllerRevision. deprecated: use the 'watch' parameter with a list operation instead. Table 7.17. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 7.2.2.5. /apis/apps/v1/namespaces/{namespace}/controllerrevisions/{name} Table 7.18. Global path parameters Parameter Type Description name string name of the ControllerRevision namespace string object name and auth scope, such as for teams and projects Table 7.19. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a ControllerRevision Table 7.20. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. 
If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 7.21. Body parameters Parameter Type Description body DeleteOptions schema Table 7.22. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ControllerRevision Table 7.23. HTTP responses HTTP code Reponse body 200 - OK ControllerRevision schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ControllerRevision Table 7.24. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 7.25. Body parameters Parameter Type Description body Patch schema Table 7.26. HTTP responses HTTP code Reponse body 200 - OK ControllerRevision schema 201 - Created ControllerRevision schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ControllerRevision Table 7.27. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.28. Body parameters Parameter Type Description body ControllerRevision schema Table 7.29. HTTP responses HTTP code Reponse body 200 - OK ControllerRevision schema 201 - Created ControllerRevision schema 401 - Unauthorized Empty 7.2.2.6. /apis/apps/v1/watch/namespaces/{namespace}/controllerrevisions/{name} Table 7.30. Global path parameters Parameter Type Description name string name of the ControllerRevision namespace string object name and auth scope, such as for teams and projects Table 7.31. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. 
Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean `sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `"k8s.io/initial-events-end": "true"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. The semantic of the watch request is as following: - `resourceVersionMatch` = NotOlderThan is interpreted as "data at least as new as the provided `resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion` at least as fresh as the one provided by the ListOptions. If `resourceVersion` is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - `resourceVersionMatch` set to any other value or unset Invalid error is returned. Defaults to true if `resourceVersion=""` or `resourceVersion="0"` (for backward compatibility reasons) and to false otherwise. 
timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind ControllerRevision. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 7.32. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 7.3. DaemonSet [apps/v1] Description DaemonSet represents the configuration of a daemon set. Type object 7.3.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object DaemonSetSpec is the specification of a daemon set. status object DaemonSetStatus represents the current status of a daemon set. 7.3.1.1. .spec Description DaemonSetSpec is the specification of a daemon set. Type object Required selector template Property Type Description minReadySeconds integer The minimum number of seconds for which a newly created DaemonSet pod should be ready without any of its container crashing, for it to be considered available. Defaults to 0 (pod will be considered available as soon as it is ready). revisionHistoryLimit integer The number of old history to retain to allow rollback. This is a pointer to distinguish between explicit zero and not specified. Defaults to 10. selector LabelSelector A label query over pods that are managed by the daemon set. Must match in order to be controlled. It must match the pod template's labels. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors template PodTemplateSpec An object that describes the pod that will be created. The DaemonSet will create exactly one copy of this pod on every node that matches the template's node selector (or on every node if no node selector is specified). The only allowed template.spec.restartPolicy value is "Always". More info: https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller#pod-template updateStrategy object DaemonSetUpdateStrategy is a struct used to control the update strategy for a DaemonSet. 7.3.1.2. .spec.updateStrategy Description DaemonSetUpdateStrategy is a struct used to control the update strategy for a DaemonSet. Type object Property Type Description rollingUpdate object Spec to control the desired behavior of daemon set rolling update. type string Type of daemon set update. Can be "RollingUpdate" or "OnDelete". Default is RollingUpdate. 
Possible enum values: - "OnDelete" Replace the old daemons only when they are killed - "RollingUpdate" Replace the old daemons by new ones using a rolling update, i.e. replace them on each node one after the other. 7.3.1.3. .spec.updateStrategy.rollingUpdate Description Spec to control the desired behavior of daemon set rolling update. Type object Property Type Description maxSurge IntOrString The maximum number of nodes with an existing available DaemonSet pod that can have an updated DaemonSet pod during an update. Value can be an absolute number (ex: 5) or a percentage of desired pods (ex: 10%). This cannot be 0 if MaxUnavailable is 0. Absolute number is calculated from percentage by rounding up to a minimum of 1. Default value is 0. Example: when this is set to 30%, at most 30% of the total number of nodes that should be running the daemon pod (i.e. status.desiredNumberScheduled) can have a new pod created before the old pod is marked as deleted. The update starts by launching new pods on 30% of nodes. Once an updated pod is available (Ready for at least minReadySeconds) the old DaemonSet pod on that node is marked deleted. If the old pod becomes unavailable for any reason (Ready transitions to false, is evicted, or is drained) an updated pod is immediately created on that node without considering surge limits. Allowing surge implies the possibility that the resources consumed by the daemonset on any given node can double if the readiness check fails, and so resource intensive daemonsets should take into account that they may cause evictions during disruption. maxUnavailable IntOrString The maximum number of DaemonSet pods that can be unavailable during the update. Value can be an absolute number (ex: 5) or a percentage of total number of DaemonSet pods at the start of the update (ex: 10%). Absolute number is calculated from percentage by rounding up. This cannot be 0 if MaxSurge is 0. Default value is 1. Example: when this is set to 30%, at most 30% of the total number of nodes that should be running the daemon pod (i.e. status.desiredNumberScheduled) can have their pods stopped for an update at any given time. The update starts by stopping at most 30% of those DaemonSet pods and then brings up new DaemonSet pods in their place. Once the new pods are available, it then proceeds onto other DaemonSet pods, thus ensuring that at least 70% of the original number of DaemonSet pods are available at all times during the update. 7.3.1.4. .status Description DaemonSetStatus represents the current status of a daemon set. Type object Required currentNumberScheduled numberMisscheduled desiredNumberScheduled numberReady Property Type Description collisionCount integer Count of hash collisions for the DaemonSet. The DaemonSet controller uses this field as a collision avoidance mechanism when it needs to create the name for the newest ControllerRevision. conditions array Represents the latest available observations of a DaemonSet's current state. conditions[] object DaemonSetCondition describes the state of a DaemonSet at a certain point. currentNumberScheduled integer The number of nodes that are running at least 1 daemon pod and are supposed to run the daemon pod. More info: https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/ desiredNumberScheduled integer The total number of nodes that should be running the daemon pod (including nodes correctly running the daemon pod).
More info: https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/ numberAvailable integer The number of nodes that should be running the daemon pod and have one or more of the daemon pod running and available (ready for at least spec.minReadySeconds) numberMisscheduled integer The number of nodes that are running the daemon pod, but are not supposed to run the daemon pod. More info: https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/ numberReady integer numberReady is the number of nodes that should be running the daemon pod and have one or more of the daemon pod running with a Ready Condition. numberUnavailable integer The number of nodes that should be running the daemon pod and have none of the daemon pod running and available (ready for at least spec.minReadySeconds) observedGeneration integer The most recent generation observed by the daemon set controller. updatedNumberScheduled integer The total number of nodes that are running updated daemon pod 7.3.1.5. .status.conditions Description Represents the latest available observations of a DaemonSet's current state. Type array 7.3.1.6. .status.conditions[] Description DaemonSetCondition describes the state of a DaemonSet at a certain point. Type object Required type status Property Type Description lastTransitionTime Time Last time the condition transitioned from one status to another. message string A human readable message indicating details about the transition. reason string The reason for the condition's last transition. status string Status of the condition, one of True, False, Unknown. type string Type of DaemonSet condition. 7.3.2. API endpoints The following API endpoints are available: /apis/apps/v1/daemonsets GET : list or watch objects of kind DaemonSet /apis/apps/v1/watch/daemonsets GET : watch individual changes to a list of DaemonSet. deprecated: use the 'watch' parameter with a list operation instead. /apis/apps/v1/namespaces/{namespace}/daemonsets DELETE : delete collection of DaemonSet GET : list or watch objects of kind DaemonSet POST : create a DaemonSet /apis/apps/v1/watch/namespaces/{namespace}/daemonsets GET : watch individual changes to a list of DaemonSet. deprecated: use the 'watch' parameter with a list operation instead. /apis/apps/v1/namespaces/{namespace}/daemonsets/{name} DELETE : delete a DaemonSet GET : read the specified DaemonSet PATCH : partially update the specified DaemonSet PUT : replace the specified DaemonSet /apis/apps/v1/watch/namespaces/{namespace}/daemonsets/{name} GET : watch changes to an object of kind DaemonSet. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. /apis/apps/v1/namespaces/{namespace}/daemonsets/{name}/status GET : read status of the specified DaemonSet PATCH : partially update status of the specified DaemonSet PUT : replace status of the specified DaemonSet 7.3.2.1. /apis/apps/v1/daemonsets Table 7.33. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. 
Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean `sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. 
Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with the `"k8s.io/initial-events-end": "true"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When the `sendInitialEvents` option is set, the `resourceVersionMatch` option must also be set. The semantics of the watch request are as follows: - `resourceVersionMatch` = NotOlderThan is interpreted as "data at least as new as the provided `resourceVersion`" and the bookmark event is sent when the state is synced to a `resourceVersion` at least as fresh as the one provided by the ListOptions. If `resourceVersion` is unset, this is interpreted as "consistent read" and the bookmark event is sent when the state is synced at least to the moment when the request started being processed. - `resourceVersionMatch` set to any other value or unset: an Invalid error is returned. Defaults to true if `resourceVersion=""` or `resourceVersion="0"` (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list or watch objects of kind DaemonSet Table 7.34. HTTP responses HTTP code Response body 200 - OK DaemonSetList schema 401 - Unauthorized Empty 7.3.2.2. /apis/apps/v1/watch/daemonsets Table 7.35. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart its list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error; the server will respond with a list starting from the next key, but from the latest snapshot, which is inconsistent with the previous list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the "next key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything.
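The list parameters above (labelSelector, limit, and continue) can be combined to page through DaemonSets cluster-wide. The following sketch uses the official Python kubernetes client; the label selector and page size are illustrative assumptions, and the continue token is read back from the returned list metadata as described above.

```python
from kubernetes import client, config

# A minimal sketch of the list operation above, assuming a local kubeconfig.
config.load_kube_config()
apps = client.AppsV1Api()

# The label selector and page size are illustrative assumptions.
selector = "app.kubernetes.io/managed-by=example-operator"
page = apps.list_daemon_set_for_all_namespaces(label_selector=selector, limit=2)
while True:
    for ds in page.items:
        print(ds.metadata.namespace, ds.metadata.name, ds.status.number_ready)
    # metadata.continue (exposed as _continue in the Python client) carries the
    # pagination token described in the continue parameter above.
    token = page.metadata._continue
    if not token:
        break
    page = apps.list_daemon_set_for_all_namespaces(
        label_selector=selector, limit=2, _continue=token
    )
```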
labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean `sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `"k8s.io/initial-events-end": "true"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. The semantic of the watch request is as following: - `resourceVersionMatch` = NotOlderThan is interpreted as "data at least as new as the provided `resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion` at least as fresh as the one provided by the ListOptions. If `resourceVersion` is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - `resourceVersionMatch` set to any other value or unset Invalid error is returned. Defaults to true if `resourceVersion=""` or `resourceVersion="0"` (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. 
This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of DaemonSet. deprecated: use the 'watch' parameter with a list operation instead. Table 7.36. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 7.3.2.3. /apis/apps/v1/namespaces/{namespace}/daemonsets Table 7.37. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 7.38. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of DaemonSet Table 7.39. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. 
If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean `sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with the `"k8s.io/initial-events-end": "true"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When the `sendInitialEvents` option is set, the `resourceVersionMatch` option must also be set. The semantics of the watch request are as follows: - `resourceVersionMatch` = NotOlderThan is interpreted as "data at least as new as the provided `resourceVersion`" and the bookmark event is sent when the state is synced to a `resourceVersion` at least as fresh as the one provided by the ListOptions. If `resourceVersion` is unset, this is interpreted as "consistent read" and the bookmark event is sent when the state is synced at least to the moment when the request started being processed. - `resourceVersionMatch` set to any other value or unset: an Invalid error is returned. Defaults to true if `resourceVersion=""` or `resourceVersion="0"` (for backward compatibility reasons) and to false otherwise.
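As a hedged illustration of the delete-collection operation and the dryRun, labelSelector, and propagationPolicy parameters listed above, the sketch below issues a namespaced collection delete with the official Python kubernetes client. The namespace and selector are placeholders for the example.

```python
from kubernetes import client, config

# A minimal sketch of the delete-collection operation above, assuming a local
# kubeconfig; the namespace and label selector are placeholders.
config.load_kube_config()
apps = client.AppsV1Api()

# dry_run="All" exercises the request without persisting the deletion; remove
# it to actually delete. Foreground propagation waits on dependents, matching
# the propagationPolicy values listed above.
result = apps.delete_collection_namespaced_daemon_set(
    namespace="monitoring",
    label_selector="app.kubernetes.io/part-of=demo",
    dry_run="All",
    propagation_policy="Foreground",
)
print(result)
```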
timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 7.40. Body parameters Parameter Type Description body DeleteOptions schema Table 7.41. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind DaemonSet Table 7.42. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. 
If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean `sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `"k8s.io/initial-events-end": "true"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. The semantic of the watch request is as following: - `resourceVersionMatch` = NotOlderThan is interpreted as "data at least as new as the provided `resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion` at least as fresh as the one provided by the ListOptions. If `resourceVersion` is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - `resourceVersionMatch` set to any other value or unset Invalid error is returned. Defaults to true if `resourceVersion=""` or `resourceVersion="0"` (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 7.43. HTTP responses HTTP code Reponse body 200 - OK DaemonSetList schema 401 - Unauthorized Empty HTTP method POST Description create a DaemonSet Table 7.44. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.45. Body parameters Parameter Type Description body DaemonSet schema Table 7.46. HTTP responses HTTP code Reponse body 200 - OK DaemonSet schema 201 - Created DaemonSet schema 202 - Accepted DaemonSet schema 401 - Unauthorized Empty 7.3.2.4. /apis/apps/v1/watch/namespaces/{namespace}/daemonsets Table 7.47. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 7.48. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. 
If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean `sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `"k8s.io/initial-events-end": "true"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. The semantic of the watch request is as following: - `resourceVersionMatch` = NotOlderThan is interpreted as "data at least as new as the provided `resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion` at least as fresh as the one provided by the ListOptions. If `resourceVersion` is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - `resourceVersionMatch` set to any other value or unset Invalid error is returned. Defaults to true if `resourceVersion=""` or `resourceVersion="0"` (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of DaemonSet. deprecated: use the 'watch' parameter with a list operation instead. Table 7.49. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 7.3.2.5. /apis/apps/v1/namespaces/{namespace}/daemonsets/{name} Table 7.50. 
Global path parameters Parameter Type Description name string name of the DaemonSet namespace string object name and auth scope, such as for teams and projects Table 7.51. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a DaemonSet Table 7.52. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 7.53. Body parameters Parameter Type Description body DeleteOptions schema Table 7.54. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified DaemonSet Table 7.55. HTTP responses HTTP code Reponse body 200 - OK DaemonSet schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified DaemonSet Table 7.56. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. 
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 7.57. Body parameters Parameter Type Description body Patch schema Table 7.58. HTTP responses HTTP code Reponse body 200 - OK DaemonSet schema 201 - Created DaemonSet schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified DaemonSet Table 7.59. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.60. Body parameters Parameter Type Description body DaemonSet schema Table 7.61. HTTP responses HTTP code Reponse body 200 - OK DaemonSet schema 201 - Created DaemonSet schema 401 - Unauthorized Empty 7.3.2.6. /apis/apps/v1/watch/namespaces/{namespace}/daemonsets/{name} Table 7.62. Global path parameters Parameter Type Description name string name of the DaemonSet namespace string object name and auth scope, such as for teams and projects Table 7.63. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. 
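The PATCH operation for a single DaemonSet described above accepts the fieldManager and fieldValidation parameters. The following sketch shows one way to send such a partial update with the official Python kubernetes client; the object names and label are placeholders, and fieldValidation support depends on the client and cluster versions in use.

```python
from kubernetes import client, config

# A minimal sketch of the PATCH operation above, assuming a local kubeconfig.
# field_validation requires a reasonably recent client and server release.
config.load_kube_config()
apps = client.AppsV1Api()

# Add a label via a partial update, identifying this caller through
# fieldManager and rejecting unknown fields with Strict validation.
# The DaemonSet name, namespace, and label key are placeholders.
ds = apps.patch_namespaced_daemon_set(
    name="node-exporter",
    namespace="monitoring",
    body={"metadata": {"labels": {"example.com/owner": "platform-team"}}},
    field_manager="docs-example",
    field_validation="Strict",
)
print(ds.metadata.labels)
```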
Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean `sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. 
Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `"k8s.io/initial-events-end": "true"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. The semantic of the watch request is as following: - `resourceVersionMatch` = NotOlderThan is interpreted as "data at least as new as the provided `resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion` at least as fresh as the one provided by the ListOptions. If `resourceVersion` is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - `resourceVersionMatch` set to any other value or unset Invalid error is returned. Defaults to true if `resourceVersion=""` or `resourceVersion="0"` (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind DaemonSet. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 7.64. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 7.3.2.7. /apis/apps/v1/namespaces/{namespace}/daemonsets/{name}/status Table 7.65. Global path parameters Parameter Type Description name string name of the DaemonSet namespace string object name and auth scope, such as for teams and projects Table 7.66. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified DaemonSet Table 7.67. HTTP responses HTTP code Reponse body 200 - OK DaemonSet schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified DaemonSet Table 7.68. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 7.69. Body parameters Parameter Type Description body Patch schema Table 7.70. HTTP responses HTTP code Reponse body 200 - OK DaemonSet schema 201 - Created DaemonSet schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified DaemonSet Table 7.71. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.72. Body parameters Parameter Type Description body DaemonSet schema Table 7.73. HTTP responses HTTP code Reponse body 200 - OK DaemonSet schema 201 - Created DaemonSet schema 401 - Unauthorized Empty 7.4. Deployment [apps/v1] Description Deployment enables declarative updates for Pods and ReplicaSets. Type object 7.4.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object DeploymentSpec is the specification of the desired behavior of the Deployment. status object DeploymentStatus is the most recently observed status of the Deployment. 7.4.1.1. .spec Description DeploymentSpec is the specification of the desired behavior of the Deployment. Type object Required selector template Property Type Description minReadySeconds integer Minimum number of seconds for which a newly created pod should be ready without any of its container crashing, for it to be considered available. Defaults to 0 (pod will be considered available as soon as it is ready) paused boolean Indicates that the deployment is paused. progressDeadlineSeconds integer The maximum time in seconds for a deployment to make progress before it is considered to be failed. The deployment controller will continue to process failed deployments and a condition with a ProgressDeadlineExceeded reason will be surfaced in the deployment status. Note that progress will not be estimated during the time a deployment is paused. Defaults to 600s. replicas integer Number of desired pods. This is a pointer to distinguish between explicit zero and not specified. Defaults to 1. revisionHistoryLimit integer The number of old ReplicaSets to retain to allow rollback. This is a pointer to distinguish between explicit zero and not specified. Defaults to 10. selector LabelSelector Label selector for pods. Existing ReplicaSets whose pods are selected by this will be the ones affected by this deployment. It must match the pod template's labels. strategy object DeploymentStrategy describes how to replace existing pods with new ones. template PodTemplateSpec Template describes the pods that will be created. The only allowed template.spec.restartPolicy value is "Always". 7.4.1.2. .spec.strategy Description DeploymentStrategy describes how to replace existing pods with new ones. Type object Property Type Description rollingUpdate object Spec to control the desired behavior of rolling update. type string Type of deployment. Can be "Recreate" or "RollingUpdate". Default is RollingUpdate. Possible enum values: - "Recreate" Kill all existing pods before creating new ones. - "RollingUpdate" Replace the old ReplicaSets by new one using rolling update i.e gradually scale down the old ReplicaSets and scale up the new one. 7.4.1.3. .spec.strategy.rollingUpdate Description Spec to control the desired behavior of rolling update. Type object Property Type Description maxSurge IntOrString The maximum number of pods that can be scheduled above the desired number of pods. Value can be an absolute number (ex: 5) or a percentage of desired pods (ex: 10%). This can not be 0 if MaxUnavailable is 0. Absolute number is calculated from percentage by rounding up. Defaults to 25%. Example: when this is set to 30%, the new ReplicaSet can be scaled up immediately when the rolling update starts, such that the total number of old and new pods do not exceed 130% of desired pods. Once old pods have been killed, new ReplicaSet can be scaled up further, ensuring that total number of pods running at any time during the update is at most 130% of desired pods. maxUnavailable IntOrString The maximum number of pods that can be unavailable during the update. 
Value can be an absolute number (ex: 5) or a percentage of desired pods (ex: 10%). Absolute number is calculated from percentage by rounding down. This can not be 0 if MaxSurge is 0. Defaults to 25%. Example: when this is set to 30%, the old ReplicaSet can be scaled down to 70% of desired pods immediately when the rolling update starts. Once new pods are ready, old ReplicaSet can be scaled down further, followed by scaling up the new ReplicaSet, ensuring that the total number of pods available at all times during the update is at least 70% of desired pods. 7.4.1.4. .status Description DeploymentStatus is the most recently observed status of the Deployment. Type object Property Type Description availableReplicas integer Total number of available pods (ready for at least minReadySeconds) targeted by this deployment. collisionCount integer Count of hash collisions for the Deployment. The Deployment controller uses this field as a collision avoidance mechanism when it needs to create the name for the newest ReplicaSet. conditions array Represents the latest available observations of a deployment's current state. conditions[] object DeploymentCondition describes the state of a deployment at a certain point. observedGeneration integer The generation observed by the deployment controller. readyReplicas integer readyReplicas is the number of pods targeted by this Deployment with a Ready Condition. replicas integer Total number of non-terminated pods targeted by this deployment (their labels match the selector). unavailableReplicas integer Total number of unavailable pods targeted by this deployment. This is the total number of pods that are still required for the deployment to have 100% available capacity. They may either be pods that are running but not yet available or pods that still have not been created. updatedReplicas integer Total number of non-terminated pods targeted by this deployment that have the desired template spec. 7.4.1.5. .status.conditions Description Represents the latest available observations of a deployment's current state. Type array 7.4.1.6. .status.conditions[] Description DeploymentCondition describes the state of a deployment at a certain point. Type object Required type status Property Type Description lastTransitionTime Time Last time the condition transitioned from one status to another. lastUpdateTime Time The last time this condition was updated. message string A human readable message indicating details about the transition. reason string The reason for the condition's last transition. status string Status of the condition, one of True, False, Unknown. type string Type of deployment condition. 7.4.2. API endpoints The following API endpoints are available: /apis/apps/v1/deployments GET : list or watch objects of kind Deployment /apis/apps/v1/watch/deployments GET : watch individual changes to a list of Deployment. deprecated: use the 'watch' parameter with a list operation instead. /apis/apps/v1/namespaces/{namespace}/deployments DELETE : delete collection of Deployment GET : list or watch objects of kind Deployment POST : create a Deployment /apis/apps/v1/watch/namespaces/{namespace}/deployments GET : watch individual changes to a list of Deployment. deprecated: use the 'watch' parameter with a list operation instead. 
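The DeploymentSpec strategy fields described in the preceding sections can be supplied when calling the create endpoint listed above ("POST : create a Deployment"). The sketch below builds a small Deployment with an explicit RollingUpdate strategy using the official Python kubernetes client; all names, labels, and the image reference are placeholders for the example.

```python
from kubernetes import client, config

# A minimal sketch of the create operation listed above, assuming a local
# kubeconfig; every name, label, and image below is a placeholder.
config.load_kube_config()
apps = client.AppsV1Api()

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="web", labels={"app": "web"}),
    spec=client.V1DeploymentSpec(
        replicas=3,
        selector=client.V1LabelSelector(match_labels={"app": "web"}),
        # RollingUpdate strategy using the maxSurge/maxUnavailable fields
        # documented in .spec.strategy.rollingUpdate above.
        strategy=client.V1DeploymentStrategy(
            type="RollingUpdate",
            rolling_update=client.V1RollingUpdateDeployment(
                max_surge="25%", max_unavailable="25%"
            ),
        ),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "web"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="web", image="registry.example.com/web:1.0"
                    )
                ]
            ),
        ),
    ),
)

created = apps.create_namespaced_deployment(namespace="demo", body=deployment)
print(created.metadata.name, created.status)
```

The same typed model can be reused with the PUT (replace) operation for this resource.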
/apis/apps/v1/namespaces/{namespace}/deployments/{name} DELETE : delete a Deployment GET : read the specified Deployment PATCH : partially update the specified Deployment PUT : replace the specified Deployment /apis/apps/v1/watch/namespaces/{namespace}/deployments/{name} GET : watch changes to an object of kind Deployment. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. /apis/apps/v1/namespaces/{namespace}/deployments/{name}/status GET : read status of the specified Deployment PATCH : partially update status of the specified Deployment PUT : replace status of the specified Deployment 7.4.2.1. /apis/apps/v1/deployments Table 7.74. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. 
The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean `sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `"k8s.io/initial-events-end": "true"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. The semantic of the watch request is as following: - `resourceVersionMatch` = NotOlderThan is interpreted as "data at least as new as the provided `resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion` at least as fresh as the one provided by the ListOptions. If `resourceVersion` is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - `resourceVersionMatch` set to any other value or unset Invalid error is returned. Defaults to true if `resourceVersion=""` or `resourceVersion="0"` (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list or watch objects of kind Deployment Table 7.75. HTTP responses HTTP code Reponse body 200 - OK DeploymentList schema 401 - Unauthorized Empty 7.4.2.2. /apis/apps/v1/watch/deployments Table 7.76. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. 
If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. 
Defaults to unset sendInitialEvents boolean `sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `"k8s.io/initial-events-end": "true"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. The semantic of the watch request is as following: - `resourceVersionMatch` = NotOlderThan is interpreted as "data at least as new as the provided `resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion` at least as fresh as the one provided by the ListOptions. If `resourceVersion` is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - `resourceVersionMatch` set to any other value or unset Invalid error is returned. Defaults to true if `resourceVersion=""` or `resourceVersion="0"` (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of Deployment. deprecated: use the 'watch' parameter with a list operation instead. Table 7.77. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 7.4.2.3. /apis/apps/v1/namespaces/{namespace}/deployments Table 7.78. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 7.79. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of Deployment Table 7.80. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. 
Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. 
It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean `sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `"k8s.io/initial-events-end": "true"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. The semantic of the watch request is as following: - `resourceVersionMatch` = NotOlderThan is interpreted as "data at least as new as the provided `resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion` at least as fresh as the one provided by the ListOptions. If `resourceVersion` is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - `resourceVersionMatch` set to any other value or unset Invalid error is returned. Defaults to true if `resourceVersion=""` or `resourceVersion="0"` (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 7.81. Body parameters Parameter Type Description body DeleteOptions schema Table 7.82. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind Deployment Table 7.83. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. 
Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean `sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `"k8s.io/initial-events-end": "true"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. The semantic of the watch request is as following: - `resourceVersionMatch` = NotOlderThan is interpreted as "data at least as new as the provided `resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion` at least as fresh as the one provided by the ListOptions. If `resourceVersion` is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - `resourceVersionMatch` set to any other value or unset Invalid error is returned. 
Defaults to true if `resourceVersion=""` or `resourceVersion="0"` (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 7.84. HTTP responses HTTP code Reponse body 200 - OK DeploymentList schema 401 - Unauthorized Empty HTTP method POST Description create a Deployment Table 7.85. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.86. Body parameters Parameter Type Description body Deployment schema Table 7.87. HTTP responses HTTP code Reponse body 200 - OK Deployment schema 201 - Created Deployment schema 202 - Accepted Deployment schema 401 - Unauthorized Empty 7.4.2.4. /apis/apps/v1/watch/namespaces/{namespace}/deployments Table 7.88. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 7.89. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. 
If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean `sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `"k8s.io/initial-events-end": "true"` annotation. 
Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. The semantic of the watch request is as following: - `resourceVersionMatch` = NotOlderThan is interpreted as "data at least as new as the provided `resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion` at least as fresh as the one provided by the ListOptions. If `resourceVersion` is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - `resourceVersionMatch` set to any other value or unset Invalid error is returned. Defaults to true if `resourceVersion=""` or `resourceVersion="0"` (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of Deployment. deprecated: use the 'watch' parameter with a list operation instead. Table 7.90. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 7.4.2.5. /apis/apps/v1/namespaces/{namespace}/deployments/{name} Table 7.91. Global path parameters Parameter Type Description name string name of the Deployment namespace string object name and auth scope, such as for teams and projects Table 7.92. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a Deployment Table 7.93. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 7.94. Body parameters Parameter Type Description body DeleteOptions schema Table 7.95. 
HTTP responses HTTP code Response body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified Deployment Table 7.96. HTTP responses HTTP code Response body 200 - OK Deployment schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified Deployment Table 7.97. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means the user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 7.98. Body parameters Parameter Type Description body Patch schema Table 7.99. HTTP responses HTTP code Response body 200 - OK Deployment schema 201 - Created Deployment schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified Deployment Table 7.100. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or equal to 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23.
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.101. Body parameters Parameter Type Description body Deployment schema Table 7.102. HTTP responses HTTP code Reponse body 200 - OK Deployment schema 201 - Created Deployment schema 401 - Unauthorized Empty 7.4.2.6. /apis/apps/v1/watch/namespaces/{namespace}/deployments/{name} Table 7.103. Global path parameters Parameter Type Description name string name of the Deployment namespace string object name and auth scope, such as for teams and projects Table 7.104. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. 
If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean `sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `"k8s.io/initial-events-end": "true"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. The semantic of the watch request is as following: - `resourceVersionMatch` = NotOlderThan is interpreted as "data at least as new as the provided `resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion` at least as fresh as the one provided by the ListOptions. If `resourceVersion` is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - `resourceVersionMatch` set to any other value or unset Invalid error is returned. Defaults to true if `resourceVersion=""` or `resourceVersion="0"` (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind Deployment. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 7.105. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 7.4.2.7. /apis/apps/v1/namespaces/{namespace}/deployments/{name}/status Table 7.106. 
Global path parameters Parameter Type Description name string name of the Deployment namespace string object name and auth scope, such as for teams and projects Table 7.107. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified Deployment Table 7.108. HTTP responses HTTP code Reponse body 200 - OK Deployment schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified Deployment Table 7.109. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 7.110. Body parameters Parameter Type Description body Patch schema Table 7.111. HTTP responses HTTP code Reponse body 200 - OK Deployment schema 201 - Created Deployment schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified Deployment Table 7.112. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.113. Body parameters Parameter Type Description body Deployment schema Table 7.114. HTTP responses HTTP code Reponse body 200 - OK Deployment schema 201 - Created Deployment schema 401 - Unauthorized Empty 7.5. ReplicaSet [apps/v1] Description ReplicaSet ensures that a specified number of pod replicas are running at any given time. Type object 7.5.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta If the Labels of a ReplicaSet are empty, they are defaulted to be the same as the Pod(s) that the ReplicaSet manages. Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object ReplicaSetSpec is the specification of a ReplicaSet. status object ReplicaSetStatus represents the current status of a ReplicaSet. 7.5.1.1. .spec Description ReplicaSetSpec is the specification of a ReplicaSet. Type object Required selector Property Type Description minReadySeconds integer Minimum number of seconds for which a newly created pod should be ready without any of its container crashing, for it to be considered available. Defaults to 0 (pod will be considered available as soon as it is ready) replicas integer Replicas is the number of desired replicas. This is a pointer to distinguish between explicit zero and unspecified. Defaults to 1. More info: https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller/#what-is-a-replicationcontroller selector LabelSelector Selector is a label query over pods that should match the replica count. Label keys and values that must match in order to be controlled by this replica set. It must match the pod template's labels. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors template PodTemplateSpec Template is the object that describes the pod that will be created if insufficient replicas are detected. More info: https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller#pod-template 7.5.1.2. .status Description ReplicaSetStatus represents the current status of a ReplicaSet. 
Type object Required replicas Property Type Description availableReplicas integer The number of available replicas (ready for at least minReadySeconds) for this replica set. conditions array Represents the latest available observations of a replica set's current state. conditions[] object ReplicaSetCondition describes the state of a replica set at a certain point. fullyLabeledReplicas integer The number of pods that have labels matching the labels of the pod template of the replicaset. observedGeneration integer ObservedGeneration reflects the generation of the most recently observed ReplicaSet. readyReplicas integer readyReplicas is the number of pods targeted by this ReplicaSet with a Ready Condition. replicas integer Replicas is the most recently observed number of replicas. More info: https://kubernetes.io/docs/concepts/workloads/controllers/replicationcontroller/#what-is-a-replicationcontroller 7.5.1.3. .status.conditions Description Represents the latest available observations of a replica set's current state. Type array 7.5.1.4. .status.conditions[] Description ReplicaSetCondition describes the state of a replica set at a certain point. Type object Required type status Property Type Description lastTransitionTime Time The last time the condition transitioned from one status to another. message string A human readable message indicating details about the transition. reason string The reason for the condition's last transition. status string Status of the condition, one of True, False, Unknown. type string Type of replica set condition. 7.5.2. API endpoints The following API endpoints are available: /apis/apps/v1/replicasets GET : list or watch objects of kind ReplicaSet /apis/apps/v1/watch/replicasets GET : watch individual changes to a list of ReplicaSet. deprecated: use the 'watch' parameter with a list operation instead. /apis/apps/v1/namespaces/{namespace}/replicasets DELETE : delete collection of ReplicaSet GET : list or watch objects of kind ReplicaSet POST : create a ReplicaSet /apis/apps/v1/watch/namespaces/{namespace}/replicasets GET : watch individual changes to a list of ReplicaSet. deprecated: use the 'watch' parameter with a list operation instead. /apis/apps/v1/namespaces/{namespace}/replicasets/{name} DELETE : delete a ReplicaSet GET : read the specified ReplicaSet PATCH : partially update the specified ReplicaSet PUT : replace the specified ReplicaSet /apis/apps/v1/watch/namespaces/{namespace}/replicasets/{name} GET : watch changes to an object of kind ReplicaSet. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. /apis/apps/v1/namespaces/{namespace}/replicasets/{name}/status GET : read status of the specified ReplicaSet PATCH : partially update status of the specified ReplicaSet PUT : replace status of the specified ReplicaSet 7.5.2.1. /apis/apps/v1/replicasets Table 7.115. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. 
Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean `sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. 
Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `"k8s.io/initial-events-end": "true"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. The semantic of the watch request is as following: - `resourceVersionMatch` = NotOlderThan is interpreted as "data at least as new as the provided `resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion` at least as fresh as the one provided by the ListOptions. If `resourceVersion` is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - `resourceVersionMatch` set to any other value or unset Invalid error is returned. Defaults to true if `resourceVersion=""` or `resourceVersion="0"` (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list or watch objects of kind ReplicaSet Table 7.116. HTTP responses HTTP code Reponse body 200 - OK ReplicaSetList schema 401 - Unauthorized Empty 7.5.2.2. /apis/apps/v1/watch/replicasets Table 7.117. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. 
labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean `sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `"k8s.io/initial-events-end": "true"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. The semantic of the watch request is as following: - `resourceVersionMatch` = NotOlderThan is interpreted as "data at least as new as the provided `resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion` at least as fresh as the one provided by the ListOptions. If `resourceVersion` is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - `resourceVersionMatch` set to any other value or unset Invalid error is returned. Defaults to true if `resourceVersion=""` or `resourceVersion="0"` (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. 
This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of ReplicaSet. deprecated: use the 'watch' parameter with a list operation instead. Table 7.118. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 7.5.2.3. /apis/apps/v1/namespaces/{namespace}/replicasets Table 7.119. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 7.120. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of ReplicaSet Table 7.121. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. 
Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean `sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `"k8s.io/initial-events-end": "true"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. The semantic of the watch request is as following: - `resourceVersionMatch` = NotOlderThan is interpreted as "data at least as new as the provided `resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion` at least as fresh as the one provided by the ListOptions. If `resourceVersion` is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - `resourceVersionMatch` set to any other value or unset Invalid error is returned. 
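As a rough, non-authoritative sketch of the delete-collection operation whose query parameters are listed in this section (the remaining sendInitialEvents, timeoutSeconds, and body parameter descriptions continue below), the following client-go call removes all ReplicaSets matching a label selector with background propagation; the kubeconfig path, namespace, grace period, and label value are illustrative assumptions.

// Sketch: delete a collection of ReplicaSets filtered by label, with an
// explicit propagation policy and grace period.
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(cfg)

	policy := metav1.DeletePropagationBackground // let the garbage collector remove dependents in the background
	grace := int64(30)

	err = clientset.AppsV1().ReplicaSets("default").DeleteCollection(
		context.Background(),
		metav1.DeleteOptions{
			GracePeriodSeconds: &grace,
			PropagationPolicy:  &policy,
		},
		metav1.ListOptions{LabelSelector: "app=demo"},
	)
	if err != nil {
		panic(err)
	}
}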
Defaults to true if `resourceVersion=""` or `resourceVersion="0"` (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 7.122. Body parameters Parameter Type Description body DeleteOptions schema Table 7.123. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind ReplicaSet Table 7.124. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. 
This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean `sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `"k8s.io/initial-events-end": "true"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. The semantic of the watch request is as following: - `resourceVersionMatch` = NotOlderThan is interpreted as "data at least as new as the provided `resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion` at least as fresh as the one provided by the ListOptions. If `resourceVersion` is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - `resourceVersionMatch` set to any other value or unset Invalid error is returned. Defaults to true if `resourceVersion=""` or `resourceVersion="0"` (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 7.125. HTTP responses HTTP code Reponse body 200 - OK ReplicaSetList schema 401 - Unauthorized Empty HTTP method POST Description create a ReplicaSet Table 7.126. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.127. Body parameters Parameter Type Description body ReplicaSet schema Table 7.128. HTTP responses HTTP code Reponse body 200 - OK ReplicaSet schema 201 - Created ReplicaSet schema 202 - Accepted ReplicaSet schema 401 - Unauthorized Empty 7.5.2.4. /apis/apps/v1/watch/namespaces/{namespace}/replicasets Table 7.129. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 7.130. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. 
Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean `sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `"k8s.io/initial-events-end": "true"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. The semantic of the watch request is as following: - `resourceVersionMatch` = NotOlderThan is interpreted as "data at least as new as the provided `resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion` at least as fresh as the one provided by the ListOptions. If `resourceVersion` is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - `resourceVersionMatch` set to any other value or unset Invalid error is returned. Defaults to true if `resourceVersion=""` or `resourceVersion="0"` (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of ReplicaSet. 
deprecated: use the 'watch' parameter with a list operation instead. Table 7.131. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 7.5.2.5. /apis/apps/v1/namespaces/{namespace}/replicasets/{name} Table 7.132. Global path parameters Parameter Type Description name string name of the ReplicaSet namespace string object name and auth scope, such as for teams and projects Table 7.133. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete a ReplicaSet Table 7.134. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 7.135. Body parameters Parameter Type Description body DeleteOptions schema Table 7.136. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ReplicaSet Table 7.137. HTTP responses HTTP code Reponse body 200 - OK ReplicaSet schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ReplicaSet Table 7.138. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. 
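A hedged client-go sketch of the GET and PATCH operations on a single ReplicaSet described in this section (the list of accepted fieldValidation values continues below); the namespace, object name, and fieldManager value are illustrative assumptions, and a strategic merge patch is used here rather than server-side apply, so the force and fieldValidation semantics of apply requests are not exercised.

// Sketch: read a ReplicaSet, then scale it with a strategic merge patch,
// performing a dry run before the real request.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()
	rsClient := clientset.AppsV1().ReplicaSets("default")

	// GET: read the current object.
	rs, err := rsClient.Get(ctx, "frontend", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("current replicas:", *rs.Spec.Replicas)

	// PATCH: scale to 5 replicas, recording the change under an assumed
	// field manager and validating it with a dry run first.
	patch := []byte(`{"spec":{"replicas":5}}`)
	if _, err := rsClient.Patch(ctx, "frontend", types.StrategicMergePatchType, patch,
		metav1.PatchOptions{FieldManager: "example-editor", DryRun: []string{metav1.DryRunAll}}); err != nil {
		panic(err)
	}
	if _, err := rsClient.Patch(ctx, "frontend", types.StrategicMergePatchType, patch,
		metav1.PatchOptions{FieldManager: "example-editor"}); err != nil {
		panic(err)
	}
}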
Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 7.139. Body parameters Parameter Type Description body Patch schema Table 7.140. HTTP responses HTTP code Reponse body 200 - OK ReplicaSet schema 201 - Created ReplicaSet schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ReplicaSet Table 7.141. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.142. Body parameters Parameter Type Description body ReplicaSet schema Table 7.143. HTTP responses HTTP code Reponse body 200 - OK ReplicaSet schema 201 - Created ReplicaSet schema 401 - Unauthorized Empty 7.5.2.6. /apis/apps/v1/watch/namespaces/{namespace}/replicasets/{name} Table 7.144. Global path parameters Parameter Type Description name string name of the ReplicaSet namespace string object name and auth scope, such as for teams and projects Table 7.145. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. 
Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. 
It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean `sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `"k8s.io/initial-events-end": "true"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. The semantic of the watch request is as following: - `resourceVersionMatch` = NotOlderThan is interpreted as "data at least as new as the provided `resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion` at least as fresh as the one provided by the ListOptions. If `resourceVersion` is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - `resourceVersionMatch` set to any other value or unset Invalid error is returned. Defaults to true if `resourceVersion=""` or `resourceVersion="0"` (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind ReplicaSet. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 7.146. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 7.5.2.7. /apis/apps/v1/namespaces/{namespace}/replicasets/{name}/status Table 7.147. Global path parameters Parameter Type Description name string name of the ReplicaSet namespace string object name and auth scope, such as for teams and projects Table 7.148. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified ReplicaSet Table 7.149. HTTP responses HTTP code Reponse body 200 - OK ReplicaSet schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified ReplicaSet Table 7.150. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . 
This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 7.151. Body parameters Parameter Type Description body Patch schema Table 7.152. HTTP responses HTTP code Reponse body 200 - OK ReplicaSet schema 201 - Created ReplicaSet schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified ReplicaSet Table 7.153. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.154. Body parameters Parameter Type Description body ReplicaSet schema Table 7.155. HTTP responses HTTP code Reponse body 200 - OK ReplicaSet schema 201 - Created ReplicaSet schema 401 - Unauthorized Empty 7.6. StatefulSet [apps/v1] Description StatefulSet represents a set of pods with consistent identities. Identities are defined as: - Network: A single stable DNS and hostname. 
- Storage: As many VolumeClaims as requested. The StatefulSet guarantees that a given network identity will always map to the same storage identity. Type object 7.6.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object A StatefulSetSpec is the specification of a StatefulSet. status object StatefulSetStatus represents the current state of a StatefulSet. 7.6.1.1. .spec Description A StatefulSetSpec is the specification of a StatefulSet. Type object Required selector template serviceName Property Type Description minReadySeconds integer Minimum number of seconds for which a newly created pod should be ready without any of its container crashing for it to be considered available. Defaults to 0 (pod will be considered available as soon as it is ready) ordinals object StatefulSetOrdinals describes the policy used for replica ordinal assignment in this StatefulSet. persistentVolumeClaimRetentionPolicy object StatefulSetPersistentVolumeClaimRetentionPolicy describes the policy used for PVCs created from the StatefulSet VolumeClaimTemplates. podManagementPolicy string podManagementPolicy controls how pods are created during initial scale up, when replacing pods on nodes, or when scaling down. The default policy is OrderedReady , where pods are created in increasing order (pod-0, then pod-1, etc) and the controller will wait until each pod is ready before continuing. When scaling down, the pods are removed in the opposite order. The alternative policy is Parallel which will create pods in parallel to match the desired scale without waiting, and on scale down will delete all pods at once. Possible enum values: - "OrderedReady" will create pods in strictly increasing order on scale up and strictly decreasing order on scale down, progressing only when the pod is ready or terminated. At most one pod will be changed at any time. - "Parallel" will create and delete pods as soon as the stateful set replica count is changed, and will not wait for pods to be ready or complete termination. replicas integer replicas is the desired number of replicas of the given Template. These are replicas in the sense that they are instantiations of the same Template, but individual replicas also have a consistent identity. If unspecified, defaults to 1. revisionHistoryLimit integer revisionHistoryLimit is the maximum number of revisions that will be maintained in the StatefulSet's revision history. The revision history consists of all revisions not represented by a currently applied StatefulSetSpec version. The default value is 10. selector LabelSelector selector is a label query over pods that should match the replica count. It must match the pod template's labels. 
More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors serviceName string serviceName is the name of the service that governs this StatefulSet. This service must exist before the StatefulSet, and is responsible for the network identity of the set. Pods get DNS/hostnames that follow the pattern: pod-specific-string.serviceName.default.svc.cluster.local where "pod-specific-string" is managed by the StatefulSet controller. template PodTemplateSpec template is the object that describes the pod that will be created if insufficient replicas are detected. Each pod stamped out by the StatefulSet will fulfill this Template, but have a unique identity from the rest of the StatefulSet. Each pod will be named with the format <statefulsetname>-<podindex>. For example, a pod in a StatefulSet named "web" with index number "3" would be named "web-3". The only allowed template.spec.restartPolicy value is "Always". updateStrategy object StatefulSetUpdateStrategy indicates the strategy that the StatefulSet controller will use to perform updates. It includes any additional parameters necessary to perform the update for the indicated strategy. volumeClaimTemplates array (PersistentVolumeClaim) volumeClaimTemplates is a list of claims that pods are allowed to reference. The StatefulSet controller is responsible for mapping network identities to claims in a way that maintains the identity of a pod. Every claim in this list must have at least one matching (by name) volumeMount in one container in the template. A claim in this list takes precedence over any volumes in the template, with the same name. 7.6.1.2. .spec.ordinals Description StatefulSetOrdinals describes the policy used for replica ordinal assignment in this StatefulSet. Type object Property Type Description start integer start is the number representing the first replica's index. It may be used to number replicas from an alternate index (eg: 1-indexed) over the default 0-indexed names, or to orchestrate progressive movement of replicas from one StatefulSet to another. If set, replica indices will be in the range: [.spec.ordinals.start, .spec.ordinals.start + .spec.replicas). If unset, defaults to 0. Replica indices will be in the range: [0, .spec.replicas). 7.6.1.3. .spec.persistentVolumeClaimRetentionPolicy Description StatefulSetPersistentVolumeClaimRetentionPolicy describes the policy used for PVCs created from the StatefulSet VolumeClaimTemplates. Type object Property Type Description whenDeleted string WhenDeleted specifies what happens to PVCs created from StatefulSet VolumeClaimTemplates when the StatefulSet is deleted. The default policy of Retain causes PVCs to not be affected by StatefulSet deletion. The Delete policy causes those PVCs to be deleted. whenScaled string WhenScaled specifies what happens to PVCs created from StatefulSet VolumeClaimTemplates when the StatefulSet is scaled down. The default policy of Retain causes PVCs to not be affected by a scaledown. The Delete policy causes the associated PVCs for any excess pods above the replica count to be deleted. 7.6.1.4. .spec.updateStrategy Description StatefulSetUpdateStrategy indicates the strategy that the StatefulSet controller will use to perform updates. It includes any additional parameters necessary to perform the update for the indicated strategy. Type object Property Type Description rollingUpdate object RollingUpdateStatefulSetStrategy is used to communicate parameter for RollingUpdateStatefulSetStrategyType. 
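The required spec fields above (selector, template, serviceName) are easiest to see in a concrete object. The following client-go sketch builds and creates a minimal StatefulSet; every name, the container image, and the kubeconfig path are illustrative assumptions rather than values this reference prescribes, and the governing headless Service is assumed to exist already. The remaining spec properties continue below.

// Sketch: create a minimal StatefulSet with the required spec fields.
package main

import (
	"context"

	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func int32Ptr(i int32) *int32 { return &i }

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(cfg)

	labels := map[string]string{"app": "web"}
	sts := &appsv1.StatefulSet{
		ObjectMeta: metav1.ObjectMeta{Name: "web", Namespace: "default"},
		Spec: appsv1.StatefulSetSpec{
			Replicas:    int32Ptr(3),
			ServiceName: "web", // governing headless Service, assumed to exist already
			Selector:    &metav1.LabelSelector{MatchLabels: labels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: labels}, // must match the selector
				Spec: corev1.PodSpec{
					Containers: []corev1.Container{{
						Name:  "web",
						Image: "registry.example.com/web:1.0",
					}},
				},
			},
		},
	}

	if _, err := clientset.AppsV1().StatefulSets("default").Create(
		context.Background(), sts, metav1.CreateOptions{FieldManager: "example-editor"}); err != nil {
		panic(err)
	}
}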
type string Type indicates the type of the StatefulSetUpdateStrategy. Default is RollingUpdate. Possible enum values: - "OnDelete" triggers the legacy behavior. Version tracking and ordered rolling restarts are disabled. Pods are recreated from the StatefulSetSpec when they are manually deleted. When a scale operation is performed with this strategy, new Pods will be created from the specification version indicated by the StatefulSet's currentRevision. - "RollingUpdate" indicates that update will be applied to all Pods in the StatefulSet with respect to the StatefulSet ordering constraints. When a scale operation is performed with this strategy, new Pods will be created from the specification version indicated by the StatefulSet's updateRevision. 7.6.1.5. .spec.updateStrategy.rollingUpdate Description RollingUpdateStatefulSetStrategy is used to communicate parameters for RollingUpdateStatefulSetStrategyType. Type object Property Type Description maxUnavailable IntOrString The maximum number of pods that can be unavailable during the update. Value can be an absolute number (ex: 5) or a percentage of desired pods (ex: 10%). Absolute number is calculated from percentage by rounding up. This cannot be 0. Defaults to 1. This field is alpha-level and is only honored by servers that enable the MaxUnavailableStatefulSet feature. The field applies to all pods in the range 0 to Replicas-1. That means if there is any unavailable pod in the range 0 to Replicas-1, it will be counted towards MaxUnavailable. partition integer Partition indicates the ordinal at which the StatefulSet should be partitioned for updates. During a rolling update, all pods from ordinal Replicas-1 to Partition are updated. All pods from ordinal Partition-1 to 0 remain untouched. This is helpful in being able to do a canary-based deployment. The default value is 0. 7.6.1.6. .status Description StatefulSetStatus represents the current state of a StatefulSet. Type object Required replicas Property Type Description availableReplicas integer Total number of available pods (ready for at least minReadySeconds) targeted by this statefulset. collisionCount integer collisionCount is the count of hash collisions for the StatefulSet. The StatefulSet controller uses this field as a collision avoidance mechanism when it needs to create the name for the newest ControllerRevision. conditions array Represents the latest available observations of a statefulset's current state. conditions[] object StatefulSetCondition describes the state of a statefulset at a certain point. currentReplicas integer currentReplicas is the number of Pods created by the StatefulSet controller from the StatefulSet version indicated by currentRevision. currentRevision string currentRevision, if not empty, indicates the version of the StatefulSet used to generate Pods in the sequence [0,currentReplicas). observedGeneration integer observedGeneration is the most recent generation observed for this StatefulSet. It corresponds to the StatefulSet's generation, which is updated on mutation by the API Server. readyReplicas integer readyReplicas is the number of pods created for this StatefulSet with a Ready Condition. replicas integer replicas is the number of Pods created by the StatefulSet controller.
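The partition field described above is commonly used for canary-style rollouts. The following client-go sketch (names, namespace, and the partition value are illustrative assumptions) raises the partition so that only the highest ordinal receives the new revision, then lowers it to finish the rollout; a JSON merge patch keeps the example short. The remaining .status properties continue below.

// Sketch: staged StatefulSet rollout via spec.updateStrategy.rollingUpdate.partition.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(cfg)
	ctx := context.Background()
	stsClient := clientset.AppsV1().StatefulSets("default")

	// Only ordinals >= partition are updated, so with 5 replicas a partition of 4
	// limits the new revision to pod web-4 (the canary).
	patch := []byte(`{"spec":{"updateStrategy":{"rollingUpdate":{"partition":4}}}}`)
	if _, err := stsClient.Patch(ctx, "web", types.MergePatchType, patch, metav1.PatchOptions{}); err != nil {
		panic(err)
	}

	// After validating the canary, lower the partition to 0 so the new revision
	// rolls out to the remaining pods, then inspect status.updatedReplicas.
	patch = []byte(`{"spec":{"updateStrategy":{"rollingUpdate":{"partition":0}}}}`)
	if _, err := stsClient.Patch(ctx, "web", types.MergePatchType, patch, metav1.PatchOptions{}); err != nil {
		panic(err)
	}

	sts, err := stsClient.Get(ctx, "web", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("updatedReplicas:", sts.Status.UpdatedReplicas)
}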
updateRevision string updateRevision, if not empty, indicates the version of the StatefulSet used to generate Pods in the sequence [replicas-updatedReplicas,replicas) updatedReplicas integer updatedReplicas is the number of Pods created by the StatefulSet controller from the StatefulSet version indicated by updateRevision. 7.6.1.7. .status.conditions Description Represents the latest available observations of a statefulset's current state. Type array 7.6.1.8. .status.conditions[] Description StatefulSetCondition describes the state of a statefulset at a certain point. Type object Required type status Property Type Description lastTransitionTime Time Last time the condition transitioned from one status to another. message string A human readable message indicating details about the transition. reason string The reason for the condition's last transition. status string Status of the condition, one of True, False, Unknown. type string Type of statefulset condition. 7.6.2. API endpoints The following API endpoints are available: /apis/apps/v1/statefulsets GET : list or watch objects of kind StatefulSet /apis/apps/v1/watch/statefulsets GET : watch individual changes to a list of StatefulSet. deprecated: use the 'watch' parameter with a list operation instead. /apis/apps/v1/namespaces/{namespace}/statefulsets DELETE : delete collection of StatefulSet GET : list or watch objects of kind StatefulSet POST : create a StatefulSet /apis/apps/v1/watch/namespaces/{namespace}/statefulsets GET : watch individual changes to a list of StatefulSet. deprecated: use the 'watch' parameter with a list operation instead. /apis/apps/v1/namespaces/{namespace}/statefulsets/{name} DELETE : delete a StatefulSet GET : read the specified StatefulSet PATCH : partially update the specified StatefulSet PUT : replace the specified StatefulSet /apis/apps/v1/watch/namespaces/{namespace}/statefulsets/{name} GET : watch changes to an object of kind StatefulSet. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. /apis/apps/v1/namespaces/{namespace}/statefulsets/{name}/status GET : read status of the specified StatefulSet PATCH : partially update status of the specified StatefulSet PUT : replace status of the specified StatefulSet 7.6.2.1. /apis/apps/v1/statefulsets Table 7.156. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. 
Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean `sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `"k8s.io/initial-events-end": "true"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. 
The semantic of the watch request is as following: - `resourceVersionMatch` = NotOlderThan is interpreted as "data at least as new as the provided `resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion` at least as fresh as the one provided by the ListOptions. If `resourceVersion` is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - `resourceVersionMatch` set to any other value or unset Invalid error is returned. Defaults to true if `resourceVersion=""` or `resourceVersion="0"` (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list or watch objects of kind StatefulSet Table 7.157. HTTP responses HTTP code Reponse body 200 - OK StatefulSetList schema 401 - Unauthorized Empty 7.6.2.2. /apis/apps/v1/watch/statefulsets Table 7.158. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. 
Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean `sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `"k8s.io/initial-events-end": "true"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. The semantic of the watch request is as following: - `resourceVersionMatch` = NotOlderThan is interpreted as "data at least as new as the provided `resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion` at least as fresh as the one provided by the ListOptions. If `resourceVersion` is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - `resourceVersionMatch` set to any other value or unset Invalid error is returned. Defaults to true if `resourceVersion=""` or `resourceVersion="0"` (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of StatefulSet. 
deprecated: use the 'watch' parameter with a list operation instead. Table 7.159. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 7.6.2.3. /apis/apps/v1/namespaces/{namespace}/statefulsets Table 7.160. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 7.161. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method DELETE Description delete collection of StatefulSet Table 7.162. Query parameters Parameter Type Description continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. 
The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean `sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `"k8s.io/initial-events-end": "true"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. The semantic of the watch request is as following: - `resourceVersionMatch` = NotOlderThan is interpreted as "data at least as new as the provided `resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion` at least as fresh as the one provided by the ListOptions. If `resourceVersion` is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - `resourceVersionMatch` set to any other value or unset Invalid error is returned. Defaults to true if `resourceVersion=""` or `resourceVersion="0"` (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. Table 7.163. 
Body parameters Parameter Type Description body DeleteOptions schema Table 7.164. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind StatefulSet Table 7.165. Query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. 
resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean `sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `"k8s.io/initial-events-end": "true"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. The semantic of the watch request is as following: - `resourceVersionMatch` = NotOlderThan is interpreted as "data at least as new as the provided `resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion` at least as fresh as the one provided by the ListOptions. If `resourceVersion` is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - `resourceVersionMatch` set to any other value or unset Invalid error is returned. Defaults to true if `resourceVersion=""` or `resourceVersion="0"` (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. Table 7.166. HTTP responses HTTP code Reponse body 200 - OK StatefulSetList schema 401 - Unauthorized Empty HTTP method POST Description create a StatefulSet Table 7.167. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. 
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.168. Body parameters Parameter Type Description body StatefulSet schema Table 7.169. HTTP responses HTTP code Reponse body 200 - OK StatefulSet schema 201 - Created StatefulSet schema 202 - Accepted StatefulSet schema 401 - Unauthorized Empty 7.6.2.4. /apis/apps/v1/watch/namespaces/{namespace}/statefulsets Table 7.170. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 7.171. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. 
The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean `sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `"k8s.io/initial-events-end": "true"` annotation. Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. The semantic of the watch request is as following: - `resourceVersionMatch` = NotOlderThan is interpreted as "data at least as new as the provided `resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion` at least as fresh as the one provided by the ListOptions. If `resourceVersion` is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - `resourceVersionMatch` set to any other value or unset Invalid error is returned. Defaults to true if `resourceVersion=""` or `resourceVersion="0"` (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch individual changes to a list of StatefulSet. deprecated: use the 'watch' parameter with a list operation instead. Table 7.172. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 7.6.2.5. /apis/apps/v1/namespaces/{namespace}/statefulsets/{name} Table 7.173. Global path parameters Parameter Type Description name string name of the StatefulSet namespace string object name and auth scope, such as for teams and projects Table 7.174. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. 
HTTP method DELETE Description delete a StatefulSet Table 7.175. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed gracePeriodSeconds integer The duration in seconds before the object should be deleted. Value must be non-negative integer. The value zero indicates delete immediately. If this value is nil, the default grace period for the specified type will be used. Defaults to a per object value if not specified. zero means delete immediately. orphanDependents boolean Deprecated: please use the PropagationPolicy, this field will be deprecated in 1.7. Should the dependent objects be orphaned. If true/false, the "orphan" finalizer will be added to/removed from the object's finalizers list. Either this field or PropagationPolicy may be set, but not both. propagationPolicy string Whether and how garbage collection will be performed. Either this field or OrphanDependents may be set, but not both. The default policy is decided by the existing finalizer set in the metadata.finalizers and the resource-specific default policy. Acceptable values are: 'Orphan' - orphan the dependents; 'Background' - allow the garbage collector to delete the dependents in the background; 'Foreground' - a cascading policy that deletes all dependents in the foreground. Table 7.176. Body parameters Parameter Type Description body DeleteOptions schema Table 7.177. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified StatefulSet Table 7.178. HTTP responses HTTP code Reponse body 200 - OK StatefulSet schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified StatefulSet Table 7.179. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. 
This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 7.180. Body parameters Parameter Type Description body Patch schema Table 7.181. HTTP responses HTTP code Reponse body 200 - OK StatefulSet schema 201 - Created StatefulSet schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified StatefulSet Table 7.182. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.183. Body parameters Parameter Type Description body StatefulSet schema Table 7.184. HTTP responses HTTP code Reponse body 200 - OK StatefulSet schema 201 - Created StatefulSet schema 401 - Unauthorized Empty 7.6.2.6. /apis/apps/v1/watch/namespaces/{namespace}/statefulsets/{name} Table 7.185. Global path parameters Parameter Type Description name string name of the StatefulSet namespace string object name and auth scope, such as for teams and projects Table 7.186. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. 
If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the `continue` field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset sendInitialEvents boolean `sendInitialEvents=true` may be set together with `watch=true`. In that case, the watch stream will begin with synthetic events to produce the current state of objects in the collection. Once all such events have been sent, a synthetic "Bookmark" event will be sent. The bookmark will report the ResourceVersion (RV) corresponding to the set of objects, and be marked with `"k8s.io/initial-events-end": "true"` annotation. 
Afterwards, the watch stream will proceed as usual, sending watch events corresponding to changes (subsequent to the RV) to objects watched. When `sendInitialEvents` option is set, we require `resourceVersionMatch` option to also be set. The semantic of the watch request is as following: - `resourceVersionMatch` = NotOlderThan is interpreted as "data at least as new as the provided `resourceVersion`" and the bookmark event is send when the state is synced to a `resourceVersion` at least as fresh as the one provided by the ListOptions. If `resourceVersion` is unset, this is interpreted as "consistent read" and the bookmark event is send when the state is synced at least to the moment when request started being processed. - `resourceVersionMatch` set to any other value or unset Invalid error is returned. Defaults to true if `resourceVersion=""` or `resourceVersion="0"` (for backward compatibility reasons) and to false otherwise. timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description watch changes to an object of kind StatefulSet. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 7.187. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 7.6.2.7. /apis/apps/v1/namespaces/{namespace}/statefulsets/{name}/status Table 7.188. Global path parameters Parameter Type Description name string name of the StatefulSet namespace string object name and auth scope, such as for teams and projects Table 7.189. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read status of the specified StatefulSet Table 7.190. HTTP responses HTTP code Reponse body 200 - OK StatefulSet schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified StatefulSet Table 7.191. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . This field is required for apply requests (application/apply-patch) but optional for non-apply patch types (JsonPatch, MergePatch, StrategicMergePatch). fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. 
The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. force boolean Force is going to "force" Apply requests. It means user will re-acquire conflicting fields owned by other people. Force flag must be unset for non-apply patch requests. Table 7.192. Body parameters Parameter Type Description body Patch schema Table 7.193. HTTP responses HTTP code Reponse body 200 - OK StatefulSet schema 201 - Created StatefulSet schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified StatefulSet Table 7.194. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 7.195. Body parameters Parameter Type Description body StatefulSet schema Table 7.196. HTTP responses HTTP code Reponse body 200 - OK StatefulSet schema 201 - Created StatefulSet schema 401 - Unauthorized Empty
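As an illustrative sketch only, the following commands show how a client might call a few of the endpoints listed above with curl. The API server address, bearer token, CA certificate path, namespace (my-namespace), and StatefulSet name (my-statefulset) are placeholder assumptions rather than values from this reference; the paths, HTTP methods, and success codes follow the tables above.
# List StatefulSets in a namespace (GET /apis/apps/v1/namespaces/{namespace}/statefulsets)
curl --cacert /path/to/ca.crt -H "Authorization: Bearer ${TOKEN}" \
  "https://api.example.com:6443/apis/apps/v1/namespaces/my-namespace/statefulsets?limit=50"
# Read a single StatefulSet (GET .../statefulsets/{name})
curl --cacert /path/to/ca.crt -H "Authorization: Bearer ${TOKEN}" \
  "https://api.example.com:6443/apis/apps/v1/namespaces/my-namespace/statefulsets/my-statefulset"
# Partially update the StatefulSet (PATCH .../statefulsets/{name}); a 200 - OK response returns the updated object
curl --cacert /path/to/ca.crt -X PATCH \
  -H "Authorization: Bearer ${TOKEN}" \
  -H "Content-Type: application/strategic-merge-patch+json" \
  -d '{"spec":{"replicas":3}}' \
  "https://api.example.com:6443/apis/apps/v1/namespaces/my-namespace/statefulsets/my-statefulset"
# Read only the status subresource (GET .../statefulsets/{name}/status)
curl --cacert /path/to/ca.crt -H "Authorization: Bearer ${TOKEN}" \
  "https://api.example.com:6443/apis/apps/v1/namespaces/my-namespace/statefulsets/my-statefulset/status"
The same operations are available through oc or kubectl (for example, oc get statefulset my-statefulset -n my-namespace -o yaml), which call these endpoints on your behalf.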
null
https://docs.redhat.com/en/documentation/red_hat_build_of_microshift/4.18/html/api_reference/apps-apis-1
Chapter 1. Installation Overview
Chapter 1. Installation Overview Installing a standalone Manager environment with remote databases involves the following steps: Install and configure the Red Hat Virtualization Manager: Install two Red Hat Enterprise Linux machines: one for the Manager, and one for the databases. The second machine will be referred to as the remote server. Register the Manager machine with the Content Delivery Network and enable the Red Hat Virtualization Manager repositories. Manually configure the Manager database on the remote server. You can also use this procedure to manually configure the Data Warehouse database if you do not want the Data Warehouse setup script to configure it automatically. Configure the Red Hat Virtualization Manager using engine-setup . Install the Data Warehouse service and database on the remote server. Connect to the Administration Portal to add hosts and storage domains. Install hosts to run virtual machines on: Use either host type, or both: Red Hat Virtualization Host Red Hat Enterprise Linux Add the hosts to the Manager. Prepare storage to use for storage domains. You can use one of the following storage types: NFS iSCSI Fibre Channel (FCP) POSIX-compliant file system Local storage Red Hat Gluster Storage Add storage domains to the Manager. Important Keep the environment up to date. See How do I update my Red Hat Virtualization system? for more information. Since bug fixes for known issues are frequently released, use scheduled tasks to update the hosts and the Manager.
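The "Manually configure the Manager database on the remote server" step above is fully described in the linked procedures; the following is only a minimal, hypothetical sketch of what that preparation typically involves, assuming PostgreSQL is already installed and running on the remote server. The role name, database name, password, and locale shown here are placeholders, not values mandated by this guide.
# On the remote database server (illustrative sketch only; adjust names, password, and locale)
su - postgres -c "psql -c \"CREATE ROLE engine WITH LOGIN ENCRYPTED PASSWORD 'changeme';\""
su - postgres -c "psql -c \"CREATE DATABASE engine OWNER engine TEMPLATE template0 ENCODING 'UTF8' LC_COLLATE 'en_US.UTF-8' LC_CTYPE 'en_US.UTF-8';\""
# Permit remote connections from the Manager machine (pg_hba.conf entry and listen_addresses), then restart the service
systemctl restart postgresql
When you later run engine-setup on the Manager machine, select the remote database option and supply this host, database, and role when prompted.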
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/installing_red_hat_virtualization_as_a_standalone_manager_with_remote_databases/install_overview_sm_remotedb_deploy
14.5. Amending an Image
14.5. Amending an Image Amend the image format-specific options for an existing image file. Optionally, specify the file's format type ( fmt ); if it is omitted, qemu-img probes the image format. Note This operation is only supported for the qcow2 file format.
[ "qemu-img amend [-p] [-f fmt ] [-t cache ] -o options filename" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-using-qemu_img-amending_an_image
Chapter 1. Upgrading the Red Hat Developer Hub Operator
Chapter 1. Upgrading the Red Hat Developer Hub Operator If you use the Operator to deploy your Red Hat Developer Hub instance, an administrator can use the OpenShift Container Platform web console to upgrade the Operator to a later version. Red Hat Developer Hub currently supports OpenShift Container Platform versions 4.14 to 4.17. See also the Red Hat Developer Hub Life Cycle. Prerequisites You are logged in as an administrator on the OpenShift Container Platform web console. You have installed the Red Hat Developer Hub Operator. You have configured the appropriate roles and permissions within your project to create or access an application. For more information, see the Red Hat OpenShift Container Platform documentation on Building applications. Procedure In the Administrator perspective of the OpenShift Container Platform web console, click Operators > Installed Operators. On the Installed Operators page, click Red Hat Developer Hub Operator. On the Red Hat Developer Hub Operator page, click the Subscription tab. From the Upgrade status field on the Subscription details page, click Upgrade available. Note If there is no upgrade available, the Upgrade status field value is Up to date. On the InstallPlan details page, click Preview InstallPlan > Approve. Verification The Upgrade status field value on the Subscription details page is Up to date. Additional resources Installing Red Hat Developer Hub on OpenShift Container Platform with the Operator. Installing from OperatorHub by using the web console
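The console steps above can also be inspected, and the install plan approved, from the command line with oc. This is a hedged sketch: the rhdh-operator namespace and the bracketed object names are assumptions; substitute the namespace and resource names used in your cluster.
# Review the Operator subscription and its upgrade state
oc get subscription -n rhdh-operator
# List install plans; an unapproved plan corresponds to the Upgrade available status
oc get installplan -n rhdh-operator
# Approve a pending install plan (equivalent to Preview InstallPlan > Approve in the console)
oc patch installplan <installplan-name> -n rhdh-operator --type merge -p '{"spec":{"approved":true}}'
# Confirm that the new ClusterServiceVersion reaches the Succeeded phase
oc get csv -n rhdh-operator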
null
https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.4/html/upgrading_red_hat_developer_hub/proc-upgrade-rhdh-operator_title-upgrade-rhdh
Chapter 57. BrokerCapacity schema reference
Chapter 57. BrokerCapacity schema reference Used in: CruiseControlSpec Property Property type Description disk string The disk property has been deprecated. The Cruise Control disk capacity setting has been deprecated, is ignored, and will be removed in the future. Broker capacity for disk in bytes. Use a number value with either standard OpenShift byte units (K, M, G, or T), their bibyte (power of two) equivalents (Ki, Mi, Gi, or Ti), or a byte value with or without E notation. For example, 100000M, 100000Mi, 104857600000, or 1e+11. cpuUtilization integer The cpuUtilization property has been deprecated. The Cruise Control CPU capacity setting has been deprecated, is ignored, and will be removed in the future. Broker capacity for CPU resource utilization as a percentage (0 - 100). cpu string Broker capacity for CPU resource in cores or millicores. For example, 1, 1.500, 1500m. For more information on valid CPU resource units, see https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/#meaning-of-cpu . inboundNetwork string Broker capacity for inbound network throughput in bytes per second. Use an integer value with standard OpenShift byte units (K, M, G) or their bibyte (power of two) equivalents (Ki, Mi, Gi) per second. For example, 10000KiB/s. outboundNetwork string Broker capacity for outbound network throughput in bytes per second. Use an integer value with standard OpenShift byte units (K, M, G) or their bibyte (power of two) equivalents (Ki, Mi, Gi) per second. For example, 10000KiB/s. overrides BrokerCapacityOverride array Overrides for individual brokers. The overrides property lets you specify a different capacity configuration for different brokers.
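To illustrate how these properties fit together, the following is a minimal, hypothetical brokerCapacity fragment as it might appear under spec.cruiseControl in a Kafka custom resource. The values are placeholders, the overrides entry assumes the brokers list field from the referenced BrokerCapacityOverride schema, and the deprecated disk and cpuUtilization properties are intentionally omitted because they are ignored.
# Write an illustrative fragment to a file for merging into your Kafka resource
cat <<'EOF' > kafka-broker-capacity-fragment.yaml
spec:
  cruiseControl:
    brokerCapacity:
      cpu: "1500m"
      inboundNetwork: 10000KiB/s
      outboundNetwork: 10000KiB/s
      overrides:
        - brokers: [0, 1]
          outboundNetwork: 20000KiB/s
EOF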
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-BrokerCapacity-reference
Configuring
Configuring Red Hat Advanced Cluster Security for Kubernetes 4.7 Configuring Red Hat Advanced Cluster Security for Kubernetes Red Hat OpenShift Documentation Team
[ "-----BEGIN CERTIFICATE----- MIICLDCCAdKgAwIBAgIBADAKBggqhkjOPQQDAjB9MQswCQYDVQQGEwJCRTEPMA0G l4wOuDwKQa+upc8GftXE2C//4mKANBC6It01gUaTIpo= -----END CERTIFICATE-----", "-n <namespace> create secret tls central-default-tls-cert --cert <tls-cert.pem> --key <tls-key.pem>", "central: # Configure a default TLS certificate (public cert + private key) for central defaultTLS: cert: | -----BEGIN CERTIFICATE----- EXAMPLE!MIIMIICLDCCAdKgAwIBAgIBADAKBggqhkjOPQQDAjB9MQswCQYDVQQGEwJCRTEPMA0G -----END CERTIFICATE----- key: | -----BEGIN EC PRIVATE KEY----- EXAMPLE!MHcl4wOuDwKQa+upc8GftXE2C//4mKANBC6It01gUaTIpo= -----END EC PRIVATE KEY-----", "helm install -n stackrox --create-namespace stackrox-central-services rhacs/central-services -f values-private.yaml", "roxctl central generate --default-tls-cert \"cert.pem\" --default-tls-key \"key.pem\"", "Enter PEM cert bundle file (optional): <cert.pem> Enter PEM private key file (optional): <key.pem> Enter administrator password (default: autogenerated): Enter orchestrator (k8s, openshift): openshift", "-n <namespace> create secret tls central-default-tls-cert --cert <tls-cert.pem> --key <tls-key.pem>", "central: # Configure a default TLS certificate (public cert + private key) for central defaultTLS: cert: | -----BEGIN CERTIFICATE----- EXAMPLE!MIIMIICLDCCAdKgAwIBAgIBADAKBggqhkjOPQQDAjB9MQswCQYDVQQGEwJCRTEPMA0G -----END CERTIFICATE----- key: | -----BEGIN EC PRIVATE KEY----- EXAMPLE!MHcl4wOuDwKQa+upc8GftXE2C//4mKANBC6It01gUaTIpo= -----END EC PRIVATE KEY-----", "helm upgrade -n stackrox --create-namespace stackrox-central-services rhacs/central-services --reuse-values \\ 1 -f values-private.yaml", "oc -n stackrox create secret tls central-default-tls-cert --cert <server_cert.pem> --key <server_key.pem> --dry-run -o yaml | oc apply -f -", "oc delete secret central-default-tls-cert", "oc -n stackrox create secret tls central-default-tls-cert --cert <server_cert.pem> --key <server_key.pem> --dry-run -o yaml | oc apply -f -", "oc -n stackrox exec deploy/central -c central -- kill 1", "oc -n stackrox delete pod -lapp=central", "unzip -d sensor sensor-<cluster_name>.zip", "./sensor/sensor.sh", "unzip -d sensor sensor-<cluster_name>.zip", "./sensor/ca-setup-sensor.sh -d sensor/additional-cas/ 1", "./sensor/ca-setup-sensor.sh -d sensor/additional-cas/ -u", "unzip -d sensor sensor-<cluster_name>.zip", "./sensor/ca-setup-sensor.sh -d sensor/additional-cas/ 1", "./sensor/ca-setup-sensor.sh -d sensor/additional-cas/ -u", "oc -n stackrox deploy/sensor -c sensor -- kill 1", "kubectl -n stackrox deploy/sensor -c sensor -- kill 1", "oc -n stackrox delete pod -lapp=sensor", "kubectl -n stackrox delete pod -lapp=sensor", "chmod +x ca-setup.sh", "./ca-setup.sh -f <certificate>", "./ca-setup.sh -d <directory_name>", "oc -n stackrox exec deploy/central -c central -- kill 1", "oc -n stackrox delete pod -lapp=central", "oc delete pod -n stackrox -l app=scanner", "kubectl delete pod -n stackrox -l app=scanner", "./ca-setup-sensor.sh -d ./additional-cas/", "oc apply -f <secret_file.yaml>", "oc -n stackrox exec deploy/central -c central -- kill 1", "oc -n stackrox delete pod -lapp=central", "oc apply -f <secret_file.yaml>", "oc delete pod -n stackrox -l app=scanner; oc -n stackrox delete pod -l app=scanner-db", "kubectl delete pod -n stackrox -l app=scanner; kubectl -n stackrox delete pod -l app=scanner-db", "oc -n <namespace> delete pods --all 1", "roxctl -e <endpoint> -p <admin_password> central init-bundles generate --output-secrets <bundle_name> init-bundle.yaml", "oc -n stackrox 
apply -f <init-bundle.yaml>", "docker tag registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:4.7.0 <your_registry>/rhacs-main-rhel8:4.7.0", "docker tag registry.redhat.io/advanced-cluster-security/rhacs-main-rhel8:4.7.0 <your_registry>/other-name:latest", "docker login registry.redhat.io", "docker pull <image>", "docker tag <image> <new_image>", "docker push <new_image>", "Enter main image to use (if unset, the default will be used): <your_registry>/rhacs-main-rhel8:4.7.0", "Enter Scanner DB image to use (if unset, the default will be used): <your_registry>/rhacs-scanner-db-rhel8:4.7.0", "Enter Scanner image to use (if unset, the default will be used): <your_registry>/rhacs-scanner-rhel8:4.7.0", "Enter whether to run StackRox in offline mode, which avoids reaching out to the internet (default: \"false\"): true", "export ROX_API_TOKEN=<api_token>", "export ROX_CENTRAL_ADDRESS=<address>:<port_number>", "roxctl scanner upload-db -e \"USDROX_CENTRAL_ADDRESS\" --scanner-db-file=<compressed_scanner_definitions.zip>", "export ROX_CENTRAL_ADDRESS=<address>:<port_number>", "roxctl scanner upload-db -p <your_administrator_password> -e \"USDROX_CENTRAL_ADDRESS\" --scanner-db-file=<compressed_scanner_definitions.zip>", "export ROX_API_TOKEN=<api_token>", "export ROX_CENTRAL_ADDRESS=<address>:<port_number>", "roxctl collector support-packages upload <package_file> -e \"USDROX_CENTRAL_ADDRESS\"", "roxctl central generate interactive --plaintext-endpoints=<endpoints_spec> 1", "CENTRAL_PLAINTEXT_PATCH=' spec: template: spec: containers: - name: central env: - name: ROX_PLAINTEXT_ENDPOINTS value: <endpoints_spec> 1 '", "oc -n stackrox patch deploy/central -p \"USDCENTRAL_PLAINTEXT_PATCH\"", "oc -n stackrox get secret proxy-config -o go-template='{{index .data \"config.yaml\" | base64decode}}{{\"\\n\"}}' > /tmp/proxy-config.yaml", "oc -n stackrox create secret generic proxy-config --from-file=config.yaml=/tmp/proxy-config.yaml -o yaml --dry-run | oc label -f - --local -o yaml app.kubernetes.io/name=stackrox | oc apply -f -", "apiVersion: v1 kind: Secret metadata: namespace: stackrox name: proxy-config type: Opaque stringData: config.yaml: |- 1 # # NOTE: Both central and scanner should be restarted if this secret is changed. # # While it is possible that some components will pick up the new proxy configuration # # without a restart, it cannot be guaranteed that this will apply to every possible # # integration etc. # url: http://proxy.name:port 2 # username: username 3 # password: password 4 # # If the following value is set to true, the proxy wil NOT be excluded for the default hosts: # # - *.stackrox, *.stackrox.svc # # - localhost, localhost.localdomain, 127.0.0.0/8, ::1 # # - *.local # omitDefaultExcludes: false # excludes: # hostnames (may include * components) for which you do not 5 # # want to use a proxy, like in-cluster repositories. # - some.domain # # The following configuration sections allow specifying a different proxy to be used for HTTP(S) connections. # # If they are omitted, the above configuration is used for HTTP(S) connections as well as TCP connections. # # If only the `http` section is given, it will be used for HTTPS connections as well. # # Note: in most cases, a single, global proxy configuration is sufficient. 
# http: # url: http://http-proxy.name:port 6 # username: username 7 # password: password 8 # https: # url: http://https-proxy.name:port 9 # username: username 10 # password: password 11", "export ROX_PASSWORD= <rox_password> && export ROX_CENTRAL_ADDRESS= <address>:<port_number> 1", "roxctl -e \"USDROX_CENTRAL_ADDRESS\" -p \"USDROX_PASSWORD\" central debug download-diagnostics", "export ROX_API_TOKEN= <api_token>", "roxctl -e \"USDROX_CENTRAL_ADDRESS\" central debug download-diagnostics", "Sample endpoints.yaml configuration for Central. # # CAREFUL: If the following line is uncommented, do not expose the default endpoint on port 8443 by default. # This will break normal operation. disableDefault: true # if true, do not serve on :8443 1 endpoints: 2 # Serve plaintext HTTP only on port 8080 - listen: \":8080\" 3 # Backend protocols, possible values are 'http' and 'grpc'. If unset or empty, assume both. protocols: 4 - http tls: 5 # Disable TLS. If this is not specified, assume TLS is enabled. disable: true 6 # Serve HTTP and gRPC for sensors only on port 8444 - listen: \":8444\" 7 tls: 8 # Which TLS certificates to serve, possible values are 'service' (For service certificates that Red&#160;Hat Advanced Cluster Security for Kubernetes generates) # and 'default' (user-configured default TLS certificate). If unset or empty, assume both. serverCerts: 9 - default - service # Client authentication settings. clientAuth: 10 # Enforce TLS client authentication. If unset, do not enforce, only request certificates # opportunistically. required: true 11 # Which TLS client CAs to serve, possible values are 'service' (CA for service # certificates that Red&#160;Hat Advanced Cluster Security for Kubernetes generates) and 'user' (CAs for PKI auth providers). If unset or empty, assume both. 
certAuthorities: 12 # if not set, assume [\"user\", \"service\"] - service", "oc -n stackrox get cm/central-endpoints -o go-template='{{index .data \"endpoints.yaml\"}}' > <directory_path>/central_endpoints.yaml", "oc -n stackrox create cm central-endpoints --from-file=endpoints.yaml=<directory-path>/central-endpoints.yaml -o yaml --dry-run | label -f - --local -o yaml app.kubernetes.io/name=stackrox | apply -f -", "oc -n stackrox exec deploy/central -c central -- kill 1", "oc -n stackrox delete pod -lapp=central", "oc -n stackrox get networkpolicy.networking.k8s.io/allow-ext-to-central -o yaml > <directory_path>/allow-ext-to-central-custom-port.yaml", "monitoring: openshift: enabled: false", "monitoring.openshift.enabled: false", "central.exposeMonitoring: true scanner.exposeMonitoring: true", "apiVersion: monitoring.coreos.com/v1 kind: ServiceMonitor metadata: name: prometheus-stackrox namespace: stackrox spec: endpoints: - interval: 30s port: monitoring scheme: http selector: matchLabels: app.kubernetes.io/name: <stackrox-service> 1", "oc apply -f servicemonitor.yaml 1", "oc get servicemonitor --namespace stackrox 1", "{ \"headers\": { \"Accept-Encoding\": [ \"gzip\" ], \"Content-Length\": [ \"586\" ], \"Content-Type\": [ \"application/json\" ], \"User-Agent\": [ \"Go-http-client/1.1\" ] }, \"data\": { \"audit\": { \"interaction\": \"CREATE\", \"method\": \"UI\", \"request\": { \"endpoint\": \"/v1/notifiers\", \"method\": \"POST\", \"source\": { \"requestAddr\": \"10.131.0.7:58276\", \"xForwardedFor\": \"8.8.8.8\", }, \"sourceIp\": \"8.8.8.8\", \"payload\": { \"@type\": \"storage.Notifier\", \"enabled\": true, \"generic\": { \"auditLoggingEnabled\": true, \"endpoint\": \"http://samplewebhookserver.com:8080\" }, \"id\": \"b53232ee-b13e-47e0-b077-1e383c84aa07\", \"name\": \"Webhook\", \"type\": \"generic\", \"uiEndpoint\": \"https://localhost:8000\" } }, \"status\": \"REQUEST_SUCCEEDED\", \"time\": \"2019-05-28T16:07:05.500171300Z\", \"user\": { \"friendlyName\": \"John Doe\", \"role\": { \"globalAccess\": \"READ_WRITE_ACCESS\", \"name\": \"Admin\" }, \"username\": \"[email protected]\" } } } }", "Warn: API Token [token name] (ID [token ID]) will expire in less than X days.", "roxctl declarative-config create permission-set --name=\"restricted\" --description=\"Restriction permission set that only allows access to Administration and Access resources\" --resource-with-access=Administration=READ_WRITE_ACCESS --resource-with-access=Access=READ_ACCESS > permission-set.yaml", "roxctl declarative-config create role --name=\"restricted\" --description=\"Restricted role that only allows access to Administration and Access\" --permission-set=\"restricted\" --access-scope=\"Unrestricted\" > role.yaml", "kubectl create configmap declarative-configurations \\ 1 --from-file permission-set.yaml --from-file role.yaml -o yaml --namespace=stackrox > declarative-configs.yaml", "kubectl apply -f declarative-configs.yaml 1", "name: A sample auth provider minimumRole: Analyst 1 uiEndpoint: central.custom-domain.com:443 2 extraUIEndpoints: 3 - central-alt.custom-domain.com:443 groups: 4 - key: email 5 value: [email protected] role: Admin 6 - key: groups value: reviewers role: Analyst requiredAttributes: 7 - key: org_id value: \"12345\" claimMappings: 8 - path: org_id value: my_org_id oidc: 9 issuer: sample.issuer.com 10 mode: auto 11 clientID: CLIENT_ID clientSecret: CLIENT_SECRET clientSecret: CLIENT_SECRET iap: 12 audience: audience saml: 13 spIssuer: sample.issuer.com metadataURL: 
sample.provider.com/metadata saml: 14 spIssuer: sample.issuer.com cert: | 15 ssoURL: saml.provider.com idpIssuer: idp.issuer.com userpki: certificateAuthorities: | 16 certificate 17 openshift: 18 enable: true", "name: A sample permission set description: A sample permission set created declaratively resources: - resource: Integration 1 access: READ_ACCESS 2 - resource: Administration access: READ_WRITE_ACCESS", "name: A sample access scope description: A sample access scope created declaratively rules: included: - cluster: secured-cluster-A 1 namespaces: - namespaceA - cluster: secured-cluster-B 2 clusterLabelSelectors: - requirements: - requirements: - key: kubernetes.io/metadata.name operator: IN 3 values: - production - staging - environment", "name: A sample role description: A sample role created declaratively permissionSet: A sample permission set 1 accessScope: Unrestricted 2", "proxy: endpoints: /acs: target: USD{ACS_API_URL} headers: authorization: Bearer USD{ACS_API_KEY} acs: acsUrl: USD{ACS_API_URL}", "- package: https://github.com/RedHatInsights/backstage-plugin-advanced-cluster-security/releases/download/v0.1.1/redhatinsights-backstage-plugin-acs-dynamic-0.1.1.tgz integrity: sha256-9JeRK2jN/Jgenf9kHwuvTvwTuVpqrRYsTGL6cpYAzn4= disabled: false pluginConfig: dynamicPlugins: frontend: redhatinsights.backstage-plugin-acs: entityTabs: - path: /acs title: RHACS mountPoint: entity.page.acs mountPoints: - mountPoint: entity.page.acs/cards importName: EntityACSContent config: layout: gridColumnEnd: lg: span 12 md: span 12 xs: span 12", "apiVersion: backstage.io/v1alpha1 kind: Component metadata: name: test-service annotations: acs/deployment-name: test-deployment-1,test-deployment-2,test-deployment-3" ]
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.7/html-single/configuring/index
17.3. Network Address Translation
17.3. Network Address Translation By default, virtual network switches operate in NAT mode. They use IP masquerading rather than Source-NAT (SNAT) or Destination-NAT (DNAT). IP masquerading enables connected guests to use the host physical machine's IP address to communicate with any external network. By default, computers placed externally to the host physical machine cannot communicate with the guests inside it while the virtual network switch is operating in NAT mode, as shown in the following diagram: Figure 17.3. Virtual network switch using NAT with two guests Warning Virtual network switches use NAT configured by iptables rules. Editing these rules while the switch is running is not recommended, because incorrect rules may leave the switch unable to communicate. If the switch is not running, you can set the public IP range for forward mode NAT to create a port masquerading range by running:
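For example, assuming the public addresses 203.0.113.100 through 203.0.113.120 are available for masquerading (an illustrative range only, not a value taken from this guide), the command takes the following form:
iptables -j SNAT --to-source 203.0.113.100-203.0.113.120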
[ "iptables -j SNAT --to-source [start]-[end]" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/sect-Virtual_Networking-Network_Address_Translation
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate and prioritize your feedback regarding our documentation. Provide as much detail as possible, so that your request can be quickly addressed. Prerequisites You are logged in to the Red Hat Customer Portal. Procedure To provide feedback, perform the following steps: Click the following link: Create Issue Describe the issue or enhancement in the Summary text box. Provide details about the issue or requested enhancement in the Description text box. Type your name in the Reporter text box. Click the Create button. This action creates a documentation ticket and routes it to the appropriate documentation team. Thank you for taking the time to provide feedback.
null
https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/release_notes/proc-providing-feedback-on-redhat-documentation
Chapter 11. Migrating
Chapter 11. Migrating Warning The Red Hat OpenShift distributed tracing platform (Jaeger) 3.5 is the last release of the Red Hat OpenShift distributed tracing platform (Jaeger) that Red Hat plans to support. In the Red Hat OpenShift distributed tracing platform 3.5, Jaeger and support for Elasticsearch remain deprecated. Support for the Red Hat OpenShift distributed tracing platform (Jaeger) ends on November 3, 2025. The Red Hat OpenShift distributed tracing platform Operator (Jaeger) will be removed from the redhat-operators catalog on November 3, 2025. For more information, see the Red Hat Knowledgebase solution Jaeger Deprecation and Removal in OpenShift . You must migrate to the Red Hat build of OpenTelemetry Operator and the Tempo Operator for distributed tracing collection and storage. For more information, see "Migrating" in the Red Hat build of OpenTelemetry documentation, "Installing" in the Red Hat build of OpenTelemetry documentation, and "Installing" in the distributed tracing platform (Tempo) documentation. If you are already using the Red Hat OpenShift distributed tracing platform (Jaeger) for your applications, you can migrate to the Red Hat build of OpenTelemetry, which is based on the OpenTelemetry open-source project. The Red Hat build of OpenTelemetry provides a set of APIs, libraries, agents, and instrumentation to facilitate observability in distributed systems. The OpenTelemetry Collector in the Red Hat build of OpenTelemetry can ingest the Jaeger protocol, so you do not need to change the SDKs in your applications. Migration from the distributed tracing platform (Jaeger) to the Red Hat build of OpenTelemetry requires configuring the OpenTelemetry Collector and your applications to report traces seamlessly. You can migrate sidecar and sidecarless deployments. 11.1. Migrating with sidecars The Red Hat build of OpenTelemetry Operator supports sidecar injection into deployment workloads, so you can migrate from a distributed tracing platform (Jaeger) sidecar to a Red Hat build of OpenTelemetry sidecar. Prerequisites The Red Hat OpenShift distributed tracing platform (Jaeger) is used on the cluster. The Red Hat build of OpenTelemetry is installed. Procedure Configure the OpenTelemetry Collector as a sidecar. apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: <otel-collector-namespace> spec: mode: sidecar config: receivers: jaeger: protocols: grpc: {} thrift_binary: {} thrift_compact: {} thrift_http: {} processors: batch: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 resourcedetection: detectors: [openshift] timeout: 2s exporters: otlp: endpoint: "tempo-<example>-gateway:8090" 1 tls: insecure: true service: pipelines: traces: receivers: [jaeger] processors: [memory_limiter, resourcedetection, batch] exporters: [otlp] 1 This endpoint points to the Gateway of a TempoStack instance deployed by using the <example> Tempo Operator. Create a service account for running your application. apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-sidecar Create a cluster role for the permissions needed by some processors. apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector-sidecar rules: 1 - apiGroups: ["config.openshift.io"] resources: ["infrastructures", "infrastructures/status"] verbs: ["get", "watch", "list"] 1 The resourcedetectionprocessor requires permissions for infrastructures and infrastructures/status. 
Create a ClusterRoleBinding to set the permissions for the service account. apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector-sidecar subjects: - kind: ServiceAccount name: otel-collector-deployment namespace: otel-collector-example roleRef: kind: ClusterRole name: otel-collector apiGroup: rbac.authorization.k8s.io Deploy the OpenTelemetry Collector as a sidecar. Remove the injected Jaeger Agent from your application by removing the "sidecar.jaegertracing.io/inject": "true" annotation from your Deployment object. Enable automatic injection of the OpenTelemetry sidecar by adding the sidecar.opentelemetry.io/inject: "true" annotation to the .spec.template.metadata.annotations field of your Deployment object. Use the created service account for the deployment of your application to allow the processors to get the correct information and add it to your traces. 11.2. Migrating without sidecars You can migrate from the distributed tracing platform (Jaeger) to the Red Hat build of OpenTelemetry without sidecar deployment. Prerequisites The Red Hat OpenShift distributed tracing platform (Jaeger) is used on the cluster. The Red Hat build of OpenTelemetry is installed. Procedure Configure OpenTelemetry Collector deployment. Create the project where the OpenTelemetry Collector will be deployed. apiVersion: project.openshift.io/v1 kind: Project metadata: name: observability Create a service account for running the OpenTelemetry Collector instance. apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-deployment namespace: observability Create a cluster role for setting the required permissions for the processors. apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector rules: 1 2 - apiGroups: ["", "config.openshift.io"] resources: ["pods", "namespaces", "infrastructures", "infrastructures/status"] verbs: ["get", "watch", "list"] 1 Permissions for the pods and namespaces resources are required for the k8sattributesprocessor . 2 Permissions for infrastructures and infrastructures/status are required for resourcedetectionprocessor . Create a ClusterRoleBinding to set the permissions for the service account. apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector subjects: - kind: ServiceAccount name: otel-collector-deployment namespace: observability roleRef: kind: ClusterRole name: otel-collector apiGroup: rbac.authorization.k8s.io Create the OpenTelemetry Collector instance. Note This collector will export traces to a TempoStack instance. You must create your TempoStack instance by using the Red Hat Tempo Operator and place here the correct endpoint. apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: observability spec: mode: deployment serviceAccount: otel-collector-deployment config: receivers: jaeger: protocols: grpc: {} thrift_binary: {} thrift_compact: {} thrift_http: {} processors: batch: {} k8sattributes: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 resourcedetection: detectors: [openshift] exporters: otlp: endpoint: "tempo-example-gateway:8090" tls: insecure: true service: pipelines: traces: receivers: [jaeger] processors: [memory_limiter, k8sattributes, resourcedetection, batch] exporters: [otlp] Point your tracing endpoint to the OpenTelemetry Operator. 
If you are exporting your traces directly from your application to Jaeger, change the API endpoint from the Jaeger endpoint to the OpenTelemetry Collector endpoint. Example of exporting traces by using the jaegerexporter with Golang exp, err := jaeger.New(jaeger.WithCollectorEndpoint(jaeger.WithEndpoint(url))) 1 1 The URL points to the OpenTelemetry Collector API endpoint.
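As an illustration only (this is not part of the original migration procedure), the annotation change described in "Migrating with sidecars" can also be applied from the command line with oc patch ; the namespace and deployment names below are placeholders:
oc -n <application_namespace> patch deployment <deployment_name> --type=json -p '[{"op": "remove", "path": "/spec/template/metadata/annotations/sidecar.jaegertracing.io~1inject"}]'
oc -n <application_namespace> patch deployment <deployment_name> --type=merge -p '{"spec":{"template":{"metadata":{"annotations":{"sidecar.opentelemetry.io/inject":"true"}}}}}'
If the Jaeger annotation is not present, the first command reports an error and makes no change; in that case, run only the second command.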
[ "apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: <otel-collector-namespace> spec: mode: sidecar config: receivers: jaeger: protocols: grpc: {} thrift_binary: {} thrift_compact: {} thrift_http: {} processors: batch: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 resourcedetection: detectors: [openshift] timeout: 2s exporters: otlp: endpoint: \"tempo-<example>-gateway:8090\" 1 tls: insecure: true service: pipelines: traces: receivers: [jaeger] processors: [memory_limiter, resourcedetection, batch] exporters: [otlp]", "apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-sidecar", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector-sidecar rules: 1 - apiGroups: [\"config.openshift.io\"] resources: [\"infrastructures\", \"infrastructures/status\"] verbs: [\"get\", \"watch\", \"list\"]", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector-sidecar subjects: - kind: ServiceAccount name: otel-collector-deployment namespace: otel-collector-example roleRef: kind: ClusterRole name: otel-collector apiGroup: rbac.authorization.k8s.io", "apiVersion: project.openshift.io/v1 kind: Project metadata: name: observability", "apiVersion: v1 kind: ServiceAccount metadata: name: otel-collector-deployment namespace: observability", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRole metadata: name: otel-collector rules: 1 2 - apiGroups: [\"\", \"config.openshift.io\"] resources: [\"pods\", \"namespaces\", \"infrastructures\", \"infrastructures/status\"] verbs: [\"get\", \"watch\", \"list\"]", "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: name: otel-collector subjects: - kind: ServiceAccount name: otel-collector-deployment namespace: observability roleRef: kind: ClusterRole name: otel-collector apiGroup: rbac.authorization.k8s.io", "apiVersion: opentelemetry.io/v1beta1 kind: OpenTelemetryCollector metadata: name: otel namespace: observability spec: mode: deployment serviceAccount: otel-collector-deployment config: receivers: jaeger: protocols: grpc: {} thrift_binary: {} thrift_compact: {} thrift_http: {} processors: batch: {} k8sattributes: {} memory_limiter: check_interval: 1s limit_percentage: 50 spike_limit_percentage: 30 resourcedetection: detectors: [openshift] exporters: otlp: endpoint: \"tempo-example-gateway:8090\" tls: insecure: true service: pipelines: traces: receivers: [jaeger] processors: [memory_limiter, k8sattributes, resourcedetection, batch] exporters: [otlp]", "exp, err := jaeger.New(jaeger.WithCollectorEndpoint(jaeger.WithEndpoint(url))) 1" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/red_hat_build_of_opentelemetry/dist-tracing-otel-migrating
Appendix E. Swift request headers
Appendix E. Swift request headers Table E.1. Request Headers Name Description Type Required X-Auth-User The Ceph Object Gateway username to authenticate as. String Yes X-Auth-Key The key associated with the Ceph Object Gateway username. String Yes
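For example, a token request that supplies both headers might look like the following; the user name, key, and gateway URL are placeholders, and the authentication path can differ depending on how the gateway is configured:
curl -i -H "X-Auth-User: testuser:swift" -H "X-Auth-Key: <secret_key>" https://rgw.example.com/auth/v1.0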
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/developer_guide/swift-request-headers_dev
Chapter 6. Configuring the Squid caching proxy server
Chapter 6. Configuring the Squid caching proxy server Squid is a proxy server that caches content to reduce bandwidth and load web pages more quickly. This chapter describes how to set up Squid as a proxy for the HTTP, HTTPS, and FTP protocol, as well as authentication and restricting access. 6.1. Setting up Squid as a caching proxy without authentication You can configure Squid as a caching proxy without authentication. The procedure limits access to the proxy based on IP ranges. Prerequisites The procedure assumes that the /etc/squid/squid.conf file is as provided by the squid package. If you edited this file before, remove the file and reinstall the package. Procedure Install the squid package: Edit the /etc/squid/squid.conf file: Adapt the localnet access control lists (ACL) to match the IP ranges that should be allowed to use the proxy: By default, the /etc/squid/squid.conf file contains the http_access allow localnet rule that allows using the proxy from all IP ranges specified in localnet ACLs. Note that you must specify all localnet ACLs before the http_access allow localnet rule. Important Remove all existing acl localnet entries that do not match your environment. The following ACL exists in the default configuration and defines 443 as a port that uses the HTTPS protocol: If users should be able to use the HTTPS protocol also on other ports, add an ACL for each of these port: Update the list of acl Safe_ports rules to configure to which ports Squid can establish a connection. For example, to configure that clients using the proxy can only access resources on port 21 (FTP), 80 (HTTP), and 443 (HTTPS), keep only the following acl Safe_ports statements in the configuration: By default, the configuration contains the http_access deny !Safe_ports rule that defines access denial to ports that are not defined in Safe_ports ACLs. Configure the cache type, the path to the cache directory, the cache size, and further cache type-specific settings in the cache_dir parameter: With these settings: Squid uses the ufs cache type. Squid stores its cache in the /var/spool/squid/ directory. The cache grows up to 10000 MB. Squid creates 16 level-1 sub-directories in the /var/spool/squid/ directory. Squid creates 256 sub-directories in each level-1 directory. If you do not set a cache_dir directive, Squid stores the cache in memory. If you set a different cache directory than /var/spool/squid/ in the cache_dir parameter: Create the cache directory: Configure the permissions for the cache directory: If you run SELinux in enforcing mode, set the squid_cache_t context for the cache directory: If the semanage utility is not available on your system, install the policycoreutils-python-utils package. Open the 3128 port in the firewall: Enable and start the squid service: Verification To verify that the proxy works correctly, download a web page using the curl utility: If curl does not display any error and the index.html file was downloaded to the current directory, the proxy works. 6.2. Setting up Squid as a caching proxy with LDAP authentication You can configure Squid as a caching proxy that uses LDAP to authenticate users. The procedure configures that only authenticated users can use the proxy. Prerequisites The procedure assumes that the /etc/squid/squid.conf file is as provided by the squid package. If you edited this file before, remove the file and reinstall the package. An service user, such as uid=proxy_user,cn=users,cn=accounts,dc=example,dc=com exists in the LDAP directory. 
Squid uses this account only to search for the authenticating user. If the authenticating user exists, Squid binds as this user to the directory to verify the authentication. Procedure Install the squid package: Edit the /etc/squid/squid.conf file: To configure the basic_ldap_auth helper utility, add the following configuration entry to the top of /etc/squid/squid.conf : The following describes the parameters passed to the basic_ldap_auth helper utility in the example above: -b base_DN sets the LDAP search base. -D proxy_service_user_DN sets the distinguished name (DN) of the account Squid uses to search for the authenticating user in the directory. -W path_to_password_file sets the path to the file that contains the password of the proxy service user. Using a password file prevents that the password is visible in the operating system's process list. -f LDAP_filter specifies the LDAP search filter. Squid replaces the %s variable with the user name provided by the authenticating user. The (&(objectClass=person)(uid=%s)) filter in the example defines that the user name must match the value set in the uid attribute and that the directory entry contains the person object class. -ZZ enforces a TLS-encrypted connection over the LDAP protocol using the STARTTLS command. Omit the -ZZ in the following situations: The LDAP server does not support encrypted connections. The port specified in the URL uses the LDAPS protocol. The -H LDAP_URL parameter specifies the protocol, the host name or IP address, and the port of the LDAP server in URL format. Add the following ACL and rule to configure that Squid allows only authenticated users to use the proxy: Important Specify these settings before the http_access deny all rule. Remove the following rule to disable bypassing the proxy authentication from IP ranges specified in localnet ACLs: The following ACL exists in the default configuration and defines 443 as a port that uses the HTTPS protocol: If users should be able to use the HTTPS protocol also on other ports, add an ACL for each of these port: Update the list of acl Safe_ports rules to configure to which ports Squid can establish a connection. For example, to configure that clients using the proxy can only access resources on port 21 (FTP), 80 (HTTP), and 443 (HTTPS), keep only the following acl Safe_ports statements in the configuration: By default, the configuration contains the http_access deny !Safe_ports rule that defines access denial to ports that are not defined in Safe_ports ACLs . Configure the cache type, the path to the cache directory, the cache size, and further cache type-specific settings in the cache_dir parameter: With these settings: Squid uses the ufs cache type. Squid stores its cache in the /var/spool/squid/ directory. The cache grows up to 10000 MB. Squid creates 16 level-1 sub-directories in the /var/spool/squid/ directory. Squid creates 256 sub-directories in each level-1 directory. If you do not set a cache_dir directive, Squid stores the cache in memory. If you set a different cache directory than /var/spool/squid/ in the cache_dir parameter: Create the cache directory: Configure the permissions for the cache directory: If you run SELinux in enforcing mode, set the squid_cache_t context for the cache directory: If the semanage utility is not available on your system, install the policycoreutils-python-utils package. 
Store the password of the LDAP service user in the /etc/squid/ldap_password file, and set appropriate permissions for the file: Open the 3128 port in the firewall: Enable and start the squid service: Verification To verify that the proxy works correctly, download a web page using the curl utility: If curl does not display any error and the index.html file was downloaded to the current directory, the proxy works. Troubleshooting steps To verify that the helper utility works correctly: Manually start the helper utility with the same settings you used in the auth_param parameter: Enter a valid user name and password, and press Enter : If the helper utility returns OK , authentication succeeded. 6.3. Setting up Squid as a caching proxy with kerberos authentication You can configure Squid as a caching proxy that authenticates users to an Active Directory (AD) using Kerberos. The procedure configures that only authenticated users can use the proxy. Prerequisites The procedure assumes that the /etc/squid/squid.conf file is as provided by the squid package. If you edited this file before, remove the file and reinstall the package. Procedure Install the following packages: Authenticate as the AD domain administrator: Create a keytab for Squid, store it in the /etc/squid/HTTP.keytab file and add the HTTP service principal to the keytab: Optional: If system is initially joined to the AD domain with realm (via adcli ), use following instructions to add HTTP principal and create a keytab file for squid: Add the HTTP service principal to the default keytab file /etc/krb5.keytab and verify: Load the /etc/krb5.keytab file, remove all service principals except HTTP , and save the remaining principals into the /etc/squid/HTTP.keytab file: In the interactive shell of ktutil , you can use the different options, until all unwanted principals are removed from keytab, for example: Warning The keys in /etc/krb5.keytab might get updated if SSSD or Samba/winbind will update the machine account password. After the update, the key in /etc/squid/HTTP.keytab will stop working, and you will need to perform the ktutil steps again to copy the new keys into the keytab. Set the owner of the keytab file to the squid user: Optional: Verify that the keytab file contains the HTTP service principal for the fully-qualified domain name (FQDN) of the proxy server: Edit the /etc/squid/squid.conf file: To configure the negotiate_kerberos_auth helper utility, add the following configuration entry to the top of /etc/squid/squid.conf : The following describes the parameters passed to the negotiate_kerberos_auth helper utility in the example above: -k file sets the path to the key tab file. Note that the squid user must have read permissions on this file. -s HTTP/ host_name @ kerberos_realm sets the Kerberos principal that Squid uses. Optionally, you can enable logging by passing one or both of the following parameters to the helper utility: -i logs informational messages, such as the authenticating user. -d enables debug logging. Squid logs the debugging information from the helper utility to the /var/log/squid/cache.log file. Add the following ACL and rule to configure that Squid allows only authenticated users to use the proxy: Important Specify these settings before the http_access deny all rule. 
Remove the following rule to disable bypassing the proxy authentication from IP ranges specified in localnet ACLs: The following ACL exists in the default configuration and defines 443 as a port that uses the HTTPS protocol: If users should be able to use the HTTPS protocol also on other ports, add an ACL for each of these port: Update the list of acl Safe_ports rules to configure to which ports Squid can establish a connection. For example, to configure that clients using the proxy can only access resources on port 21 (FTP), 80 (HTTP), and 443 (HTTPS), keep only the following acl Safe_ports statements in the configuration: By default, the configuration contains the http_access deny !Safe_ports rule that defines access denial to ports that are not defined in Safe_ports ACLs. Configure the cache type, the path to the cache directory, the cache size, and further cache type-specific settings in the cache_dir parameter: With these settings: Squid uses the ufs cache type. Squid stores its cache in the /var/spool/squid/ directory. The cache grows up to 10000 MB. Squid creates 16 level-1 sub-directories in the /var/spool/squid/ directory. Squid creates 256 sub-directories in each level-1 directory. If you do not set a cache_dir directive, Squid stores the cache in memory. If you set a different cache directory than /var/spool/squid/ in the cache_dir parameter: Create the cache directory: Configure the permissions for the cache directory: If you run SELinux in enforcing mode, set the squid_cache_t context for the cache directory: If the semanage utility is not available on your system, install the policycoreutils-python-utils package. Open the 3128 port in the firewall: Enable and start the squid service: Verification To verify that the proxy works correctly, download a web page using the curl utility: If curl does not display any error and the index.html file exists in the current directory, the proxy works. Troubleshooting steps Obtain a Kerberos ticket for the AD account: Optional: Display the ticket: Use the negotiate_kerberos_auth_test utility to test the authentication: If the helper utility returns a token, the authentication succeeded: 6.4. Configuring a domain deny list in Squid Frequently, administrators want to block access to specific domains. This section describes how to configure a domain deny list in Squid. Prerequisites Squid is configured, and users can use the proxy. Procedure Edit the /etc/squid/squid.conf file and add the following settings: Important Add these entries before the first http_access allow statement that allows access to users or clients. Create the /etc/squid/domain_deny_list.txt file and add the domains you want to block. For example, to block access to example.com including subdomains and to block example.net , add: Important If you referred to the /etc/squid/domain_deny_list.txt file in the squid configuration, this file must not be empty. If the file is empty, Squid fails to start. Restart the squid service: 6.5. Configuring the Squid service to listen on a specific port or IP address By default, the Squid proxy service listens on the 3128 port on all network interfaces. You can change the port and configuring Squid to listen on a specific IP address. Prerequisites The squid package is installed. Procedure Edit the /etc/squid/squid.conf file: To set the port on which the Squid service listens, set the port number in the http_port parameter. 
For example, to set the port to 8080 , set: To configure on which IP address the Squid service listens, set the IP address and port number in the http_port parameter. For example, to configure that Squid listens only on the 192.0.2.1 IP address on port 3128 , set: Add multiple http_port parameters to the configuration file to configure that Squid listens on multiple ports and IP addresses: If you configured that Squid uses a different port as the default ( 3128 ): Open the port in the firewall: If you run SELinux in enforcing mode, assign the port to the squid_port_t port type definition: If the semanage utility is not available on your system, install the policycoreutils-python-utils package. Restart the squid service: 6.6. Additional resources Configuration parameters usr/share/doc/squid-<version>/squid.conf.documented
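In addition to the procedures above, you can check the syntax of /etc/squid/squid.conf before restarting the service; this verification step is not part of the original procedures but uses standard squid options:
squid -k parse
If the command reports no errors, the configuration file is syntactically valid and the service can be restarted, or reloaded with squid -k reconfigure .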
[ "yum install squid", "acl localnet src 192.0.2.0/24 acl localnet 2001:db8:1::/64", "acl SSL_ports port 443", "acl SSL_ports port port_number", "acl Safe_ports port 21 acl Safe_ports port 80 acl Safe_ports port 443", "cache_dir ufs /var/spool/squid 10000 16 256", "mkdir -p path_to_cache_directory", "chown squid:squid path_to_cache_directory", "semanage fcontext -a -t squid_cache_t \" path_to_cache_directory (/.*)?\" restorecon -Rv path_to_cache_directory", "firewall-cmd --permanent --add-port=3128/tcp firewall-cmd --reload", "systemctl enable --now squid", "curl -O -L \" https://www.redhat.com/index.html \" -x \" proxy.example.com:3128 \"", "yum install squid", "auth_param basic program /usr/lib64/squid/basic_ldap_auth -b \" cn=users,cn=accounts,dc=example,dc=com \" -D \" uid=proxy_user,cn=users,cn=accounts,dc=example,dc=com \" -W /etc/squid/ldap_password -f \" (&(objectClass=person)(uid=%s)) \" -ZZ -H ldap://ldap_server.example.com:389", "acl ldap-auth proxy_auth REQUIRED http_access allow ldap-auth", "http_access allow localnet", "acl SSL_ports port 443", "acl SSL_ports port port_number", "acl Safe_ports port 21 acl Safe_ports port 80 acl Safe_ports port 443", "cache_dir ufs /var/spool/squid 10000 16 256", "mkdir -p path_to_cache_directory", "chown squid:squid path_to_cache_directory", "semanage fcontext -a -t squid_cache_t \" path_to_cache_directory (/.*)?\" restorecon -Rv path_to_cache_directory", "echo \" password \" > /etc/squid/ldap_password chown root:squid /etc/squid/ldap_password chmod 640 /etc/squid/ldap_password", "firewall-cmd --permanent --add-port=3128/tcp firewall-cmd --reload", "systemctl enable --now squid", "curl -O -L \" https://www.redhat.com/index.html \" -x \" user_name:[email protected]:3128 \"", "/usr/lib64/squid/basic_ldap_auth -b \" cn=users,cn=accounts,dc=example,dc=com \" -D \" uid=proxy_user,cn=users,cn=accounts,dc=example,dc=com \" -W /etc/squid/ldap_password -f \" (&(objectClass=person)(uid=%s)) \" -ZZ -H ldap://ldap_server.example.com:389", "user_name password", "yum install squid krb5-workstation", "kinit administrator@ AD.EXAMPLE.COM", "export KRB5_KTNAME=FILE:/etc/squid/HTTP.keytab net ads keytab CREATE -U administrator net ads keytab ADD HTTP -U administrator", "adcli update -vvv --domain=ad.example.com --computer-name=PROXY --add-service-principal=\"HTTP/proxy.ad.example.com\" -C klist -kte /etc/krb5.keytab | grep -i HTTP", "ktutil ktutil: rkt /etc/krb5.keytab ktutil: l -e slot | KVNO | Principal ----------------------------------------------------------------------------- 1 | 2 | [email protected] (aes128-cts-hmac-sha1-96) 2 | 2 | [email protected] (aes256-cts-hmac-sha1-96) 3 | 2 | host/[email protected] (aes128-cts-hmac-sha1-96) 4 | 2 | host/[email protected] (aes256-cts-hmac-sha1-96) 5 | 2 | host/[email protected] (aes128-cts-hmac-sha1-96) 6 | 2 | host/[email protected] (aes256-cts-hmac-sha1-96) 7 | 2 | HTTP/[email protected] (aes128-cts-hmac-sha1-96) 8 | 2 | HTTP/[email protected] (aes256-cts-hmac-sha1-96)", "ktutil: delent 1", "ktutil: l -e slot | KVNO | Principal ------------------------------------------------------------------------------- 1 | 2 | HTTP/[email protected] (aes128-cts-hmac-sha1-96) 2 | 2 | HTTP/[email protected] (aes256-cts-hmac-sha1-96) ktutil: wkt /etc/squid/HTTP.keytab ktutil: q", "chown squid /etc/squid/HTTP.keytab", "klist -k /etc/squid/HTTP.keytab Keytab name: FILE:/etc/squid/HTTP.keytab KVNO Principal ---- --------------------------------------------------- 2 HTTP/[email protected]", "auth_param negotiate program 
/usr/lib64/squid/negotiate_kerberos_auth -k /etc/squid/HTTP.keytab -s HTTP/ proxy.ad.example.com @ AD.EXAMPLE.COM", "acl kerb-auth proxy_auth REQUIRED http_access allow kerb-auth", "http_access allow localnet", "acl SSL_ports port 443", "acl SSL_ports port port_number", "acl Safe_ports port 21 acl Safe_ports port 80 acl Safe_ports port 443", "cache_dir ufs /var/spool/squid 10000 16 256", "mkdir -p path_to_cache_directory", "chown squid:squid path_to_cache_directory", "semanage fcontext -a -t squid_cache_t \" path_to_cache_directory (/.*)?\" restorecon -Rv path_to_cache_directory", "firewall-cmd --permanent --add-port=3128/tcp firewall-cmd --reload", "systemctl enable --now squid", "curl -O -L \" https://www.redhat.com/index.html \" --proxy-negotiate -u : -x \" proxy.ad.example.com:3128 \"", "kinit user@ AD.EXAMPLE.COM", "klist", "/usr/lib64/squid/negotiate_kerberos_auth_test proxy.ad.example.com", "Token: YIIFtAYGKwYBBQUCoIIFqDC", "acl domain_deny_list dstdomain \"/etc/squid/domain_deny_list.txt\" http_access deny all domain_deny_list", ".example.com example.net", "systemctl restart squid", "http_port 8080", "http_port 192.0.2.1:3128", "http_port 192.0.2.1:3128 http_port 192.0.2.1:8080", "firewall-cmd --permanent --add-port= port_number /tcp firewall-cmd --reload", "semanage port -a -t squid_port_t -p tcp port_number", "systemctl restart squid" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/deploying_different_types_of_servers/configuring-the-squid-caching-proxy-server_deploying-different-types-of-servers
Appendix A. Using your subscription
Appendix A. Using your subscription Streams for Apache Kafka is provided through a software subscription. To manage your subscriptions, access your account at the Red Hat Customer Portal. Accessing Your Account Go to access.redhat.com . If you do not already have an account, create one. Log in to your account. Activating a Subscription Go to access.redhat.com . Navigate to My Subscriptions . Navigate to Activate a subscription and enter your 16-digit activation number. Downloading Zip and Tar Files To access zip or tar files, use the customer portal to find the relevant files for download. If you are using RPM packages, this step is not required. Open a browser and log in to the Red Hat Customer Portal Product Downloads page at access.redhat.com/downloads . Locate the Streams for Apache Kafka entries in the INTEGRATION AND AUTOMATION category. Select the desired Streams for Apache Kafka product. The Software Downloads page opens. Click the Download link for your component. Installing packages with DNF To install a package and all the package dependencies, use: dnf install <package_name> To install a previously downloaded package from a local directory, use: dnf install <path_to_download_package> Revised on 2024-05-30 17:23:22 UTC
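If you install through RPM packages, you can also check which repositories your subscription currently enables; this is a general Red Hat Enterprise Linux command and is not specific to this product:
subscription-manager repos --list-enabled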
[ "dnf install <package_name>", "dnf install <path_to_download_package>" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/using_the_streams_for_apache_kafka_bridge/using_your_subscription
Chapter 16. Consoles and logging during installation
Chapter 16. Consoles and logging during installation The Red Hat Enterprise Linux installer uses the tmux terminal multiplexer to display and control several windows in addition to the main interface. Each of these windows serves a different purpose: they display several different logs, which can be used to troubleshoot issues during the installation process. One of the windows provides an interactive shell prompt with root privileges, unless this prompt was specifically disabled using a boot option or a Kickstart command. The terminal multiplexer is running in virtual console 1. To switch from the actual installation environment to tmux , press Ctrl + Alt + F1 . To go back to the main installation interface, which runs in virtual console 6, press Ctrl + Alt + F6 . During a text mode installation, you start in virtual console 1 ( tmux ), and switching to console 6 opens a shell prompt instead of a graphical interface. The console running tmux has five available windows; their contents are described in the following table, along with keyboard shortcuts. Note that the keyboard shortcuts are two-part: first press Ctrl + b , then release both keys, and press the number key for the window you want to use. You can also use Ctrl + b n , Ctrl + b p , and Alt + Tab to switch between tmux windows, moving to the next or previous window. Table 16.1. Available tmux windows Shortcut Contents Ctrl + b 1 Main installation program window. Contains text-based prompts (during text mode installation or if you use VNC direct mode), and also some debugging information. Ctrl + b 2 Interactive shell prompt with root privileges. Ctrl + b 3 Installation log; displays messages stored in /tmp/anaconda.log . Ctrl + b 4 Storage log; displays messages related to storage devices and configuration, stored in /tmp/storage.log . Ctrl + b 5 Program log; displays messages from utilities executed during the installation process, stored in /tmp/program.log .
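For example, from the interactive shell window ( Ctrl + b 2 ) you can follow the installation and storage logs as they are written, using the log paths listed in the table:
tail -f /tmp/anaconda.log /tmp/storage.log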
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/interactively_installing_rhel_over_the_network/consoles-logging-during-install_rhel-installer
Chapter 2. ActiveMQ
Chapter 2. ActiveMQ ActiveMQ Component Using the ActiveMQ component, you can send messages to a JMS Queue or Topic or consume messages from a JMS Queue or Topic using Apache ActiveMQ . This component is based on JMS component and uses Spring's JMS support for declarative transactions, using Spring's JmsTemplate for sending and a MessageListenerContainer for consuming. All JMS component options also apply to the ActiveMQ Component. To use this component, make sure you have the activemq.jar or activemq-core.jar on your classpath along with any Apache Camel dependencies such as camel-core.jar , camel-spring.jar and camel-jms.jar . Transacted and caching See section Transactions and Cache Levels below on JMS page if you are using transactions with JMS as it can impact performance. URI format Where destinationName is an ActiveMQ queue or topic name. By default, the destinationName is interpreted as a queue name. For example, to connect to the queue, FOO.BAR , use: You can include the optional queue: prefix, if you prefer: To connect to a topic, you must include the topic: prefix. For example, to connect to the topic, Stocks.Prices , use: Options All JMS component options also apply to the ActiveMQ Component. Camel on EAP deployment This component is supported by the Camel on EAP (Wildfly Camel) framework, which offers a simplified deployment model on the Red Hat JBoss Enterprise Application Platform (JBoss EAP) container. You can configure the ActiveMQ Camel component to work either with an embedded broker or an external broker. To embed a broker in the JBoss EAP container, configure the ActiveMQ Resource Adapter in the EAP container configuration file - for details, see ActiveMQ Resource Adapter Configuration . Configuring the Connection Factory The following test case shows how to add an ActiveMQComponent to the CamelContext using the activeMQComponent() method while specifying the brokerURL used to connect to ActiveMQ. Configuring the Connection Factory using Spring XML You can configure the ActiveMQ broker URL on the ActiveMQComponent as follows Using connection pooling When sending to an ActiveMQ broker using Camel it is best practice to use a pooled connection factory to handle efficient pooling of JMS connections, sessions, and producers. See ActiveMQ Spring Support for more information. Add the AMQ pool with Maven: Set up the activemq component as follows: Note Notice the init and destroy methods on the pooled connection factory. These methods are important to ensure the connection pool is properly started and shut down. The PooledConnectionFactory will then create a connection pool with up to 8 connections in use at the same time. Each connection can be shared by many sessions. There is an option named maxActive you can use to configure the maximum number of sessions per connection; the default value is 500 . From ActiveMQ 5.7 onwards, the option has been renamed to maxActiveSessionPerConnection to reflect its purpose better. Note that the concurrentConsumers is set to a higher value than maxConnections is. This is permitted, as each consumer is using a session, and as a session can share the same connection, this will work. In this example, we can have 8 * 500 = 4000 active sessions simultaneously. Invoking MessageListener POJOs in a route The ActiveMQ component also provides a helper Type Converter from a JMS MessageListener to a Processor . This means that the Bean component can invoke any JMS MessageListener bean directly inside any route. 
You can create a MessageListener in JMS as follows: Example Then use it in your route as follows: Example That is, you can reuse any of the Apache Camel components and easily integrate them into your JMS MessageListener POJO! Using ActiveMQ Destination Options Available as of ActiveMQ 5.6 You can configure the Destination Options in the endpoint URI, using the "destination." prefix. For example, to mark a consumer as exclusive, and set its prefetch size to 50, you can do as follows: Example Consuming Advisory Messages ActiveMQ can generate Advisory messages , which are put in topics that you can consume. Such messages can help you send alerts in case you detect slow consumers, or build statistics (number of messages produced per day, and so on). The following Spring DSL example shows you how to read messages from a topic. Example If you consume a message on a queue, you should see the following files under the data/activemq folder: and containing a string like the following: Example Getting Component JAR You need this dependency: camel-activemq ActiveMQ is an extension of the JMS component released with the ActiveMQ project .
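If you want a local broker to test the endpoint URIs in this chapter against, the ActiveMQ distribution ships a start script; the installation path below is an assumption:
./bin/activemq console
This runs a broker in the foreground on the default tcp://localhost:61616 transport; stop it with Ctrl+C.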
[ "activemq:[queue:|topic:]destinationName", "activemq:FOO.BAR", "activemq:queue:FOO.BAR", "activemq:topic:Stocks.Prices", "camelContext.addComponent(\"activemq\", activeMQComponent(\"vm://localhost?broker.persistent=false\"));", "<beans xmlns=\"http://www.springframework.org/schema/beans\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xsi:schemaLocation=\" http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans-2.0.xsd http://camel.apache.org/schema/spring http://camel.apache.org/schema/spring/camel-spring.xsd\"> <camelContext xmlns=\"http://camel.apache.org/schema/spring\"> </camelContext> <bean id=\"activemq\" class=\"org.apache.activemq.camel.component.ActiveMQComponent\"> <property name=\"brokerURL\" value=\"tcp://somehost:61616\"/> </bean> </beans>", "<dependency> <groupId>org.apache.activemq</groupId> <artifactId>activemq-pool</artifactId> <version>5.11.0.redhat-630516</version> </dependency>", "<bean id=\"jmsConnectionFactory\" class=\"org.apache.activemq.ActiveMQConnectionFactory\"> <property name=\"brokerURL\" value=\"tcp://localhost:61616\" /> </bean> <bean id=\"pooledConnectionFactory\" class=\"org.apache.activemq.pool.PooledConnectionFactory\" init-method=\"start\" destroy-method=\"stop\"> <property name=\"maxConnections\" value=\"8\" /> <property name=\"connectionFactory\" ref=\"jmsConnectionFactory\" /> </bean> <bean id=\"jmsConfig\" class=\"org.apache.camel.component.jms.JmsConfiguration\"> <property name=\"connectionFactory\" ref=\"pooledConnectionFactory\"/> <property name=\"concurrentConsumers\" value=\"10\"/> </bean> <bean id=\"activemq\" class=\"org.apache.activemq.camel.component.ActiveMQComponent\"> <property name=\"configuration\" ref=\"jmsConfig\"/> </bean>", "public class MyListener implements MessageListener { public void onMessage(Message jmsMessage) { // } }", "from(\"file://foo/bar\"). bean(MyListener.class);", "<camelContext xmlns=\"http://camel.apache.org/schema/spring\"> <route> <from uri=\"file://src/test/data?noop=true\"/> <to uri=\"activemq:queue:foo\"/> </route> <route> <!-- use consumer.exclusive ActiveMQ destination option, notice we have to prefix with destination. 
--> <from uri=\"activemq:foo?destination.consumer.exclusive=true&amp;destination.consumer.prefetchSize=50\"/> <to uri=\"mock:results\"/> </route> </camelContext>", "<route> <from uri=\"activemq:topic:ActiveMQ.Advisory.Connection?mapJmsMessage=false\" /> <convertBodyTo type=\"java.lang.String\"/> <transform> <simple>USD{in.body}&#13;</simple> </transform> <to uri=\"file://data/activemq/?fileExist=Append&ileName=advisoryConnection-USD{date:now:yyyyMMdd}.txt\" /> </route>", "advisoryConnection-20100312.txt advisoryProducer-20100312.txt", "ActiveMQMessage {commandId = 0, responseRequired = false, messageId = ID:dell-charles-3258-1268399815140 -1:0:0:0:221, originalDestination = null, originalTransactionId = null, producerId = ID:dell-charles- 3258-1268399815140-1:0:0:0, destination = topic://ActiveMQ.Advisory.Connection, transactionId = null, expiration = 0, timestamp = 0, arrival = 0, brokerInTime = 1268403383468, brokerOutTime = 1268403383468, correlationId = null, replyTo = null, persistent = false, type = Advisory, priority = 0, groupID = null, groupSequence = 0, targetConsumerId = null, compressed = false, userID = null, content = null, marshalledProperties = org.apache.activemq.util.ByteSequence@17e2705, dataStructure = ConnectionInfo {commandId = 1, responseRequired = true, connectionId = ID:dell-charles-3258-1268399815140-2:50, clientId = ID:dell-charles-3258-1268399815140-14:0, userName = , password = *****, brokerPath = null, brokerMasterConnector = false, manageable = true, clientMaster = true}, redeliveryCounter = 0, size = 0, properties = {originBrokerName=master, originBrokerId=ID:dell-charles- 3258-1268399815140-0:0, originBrokerURL=vm://master}, readOnlyProperties = true, readOnlyBody = true, droppable = false}", "<dependency> <groupId>org.fusesource</groupId> <artifactId>camel-activemq</artifactId> <version>7.11.0.fuse-sb2-7_11_0-00035-redhat-00001</version> </dependency>" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/idu-activemq
probe::socket.read_iter.return
probe::socket.read_iter.return Name probe::socket.read_iter.return - Conclusion of message received via sock_read_iter Synopsis socket.read_iter.return Values flags Socket flags value type Socket type value size Size of message received (in bytes) or error code if success = 0 family Protocol family value name Name of this probe protocol Protocol value state Socket state value success Was receive successful? (1 = yes, 0 = no) Context The message receiver. Description Fires at the conclusion of receiving a message on a socket via the sock_read_iter function
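For example, the probe can be exercised with a small command-line script that prints the receiving process and the number of bytes received whenever a receive succeeds; the output format here is arbitrary:
stap -e 'probe socket.read_iter.return { if (success) printf("%s received %d bytes\n", execname(), size) }'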
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-socket-read-iter-return
Chapter 3. Targeted Policy
Chapter 3. Targeted Policy Targeted policy is the default SELinux policy used in Red Hat Enterprise Linux. When using targeted policy, processes that are targeted run in a confined domain, and processes that are not targeted run in an unconfined domain. For example, by default, logged-in users run in the unconfined_t domain, and system processes started by init run in the unconfined_service_t domain; both of these domains are unconfined. Executable and writable memory checks may apply to both confined and unconfined domains. However, by default, subjects running in an unconfined domain can allocate writable memory and execute it. These memory checks can be enabled by setting Booleans, which allow the SELinux policy to be modified at runtime. Boolean configuration is discussed later. 3.1. Confined Processes Almost every service that listens on a network, such as sshd or httpd , is confined in Red Hat Enterprise Linux. Also, most processes that run as the root user and perform tasks for users, such as the passwd utility, are confined. When a process is confined, it runs in its own domain, such as the httpd process running in the httpd_t domain. If a confined process is compromised by an attacker, depending on SELinux policy configuration, an attacker's access to resources and the possible damage they can do is limited. Complete this procedure to ensure that SELinux is enabled and the system is prepared to perform the following example: Procedure 3.1. How to Verify SELinux Status Confirm that SELinux is enabled, is running in enforcing mode, and that targeted policy is being used. The correct output should look similar to the output below: See Section 4.4, "Permanent Changes in SELinux States and Modes" for detailed information about changing SELinux modes. As root, create a file in the /var/www/html/ directory: Enter the following command to view the SELinux context of the newly created file: By default, Linux users run unconfined in Red Hat Enterprise Linux, which is why the testfile file is labeled with the SELinux unconfined_u user. RBAC is used for processes, not files. Roles do not have a meaning for files; the object_r role is a generic role used for files (on persistent storage and network file systems). Under the /proc directory, files related to processes may use the system_r role. The httpd_sys_content_t type allows the httpd process to access this file. The following example demonstrates how SELinux prevents the Apache HTTP Server ( httpd ) from reading files that are not correctly labeled, such as files intended for use by Samba. This is an example, and should not be used in production. It assumes that the httpd and wget packages are installed, the SELinux targeted policy is used, and that SELinux is running in enforcing mode. Procedure 3.2. An Example of Confined Process As root, start the httpd daemon: Confirm that the service is running. The output should include the information below (only the time stamp will differ): Change into a directory where your Linux user has write access to, and enter the following command. Unless there are changes to the default configuration, this command succeeds: The chcon command relabels files; however, such label changes do not survive when the file system is relabeled. For permanent changes that survive a file system relabel, use the semanage utility, which is discussed later. 
As root, enter the following command to change the type to a type used by Samba: Enter the following command to view the changes: Note that the current DAC permissions allow the httpd process access to testfile . Change into a directory where your user has write access to, and enter the following command. Unless there are changes to the default configuration, this command fails: As root, remove testfile : If you do not require httpd to be running, as root, enter the following command to stop it: This example demonstrates the additional security added by SELinux. Although DAC rules allowed the httpd process access to testfile in step 2, because the file was labeled with a type that the httpd process does not have access to, SELinux denied access. If the auditd daemon is running, an error similar to the following is logged to /var/log/audit/audit.log : Also, an error similar to the following is logged to /var/log/httpd/error_log :
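As an alternative to removing testfile , you can restore its default SELinux label instead, which changes the type back to httpd_sys_content_t and makes the file accessible to httpd again; this step is not part of the original example:
restorecon -v /var/www/html/testfile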
[ "~]USD sestatus SELinux status: enabled SELinuxfs mount: /sys/fs/selinux SELinux root directory: /etc/selinux Loaded policy name: targeted Current mode: enforcing Mode from config file: enforcing Policy MLS status: enabled Policy deny_unknown status: allowed Max kernel policy version: 30", "~]# touch /var/www/html/testfile", "~]USD ls -Z /var/www/html/testfile -rw-r--r-- root root unconfined_u:object_r:httpd_sys_content_t:s0 /var/www/html/testfile", "~]# systemctl start httpd.service", "~]USD systemctl status httpd.service httpd.service - The Apache HTTP Server Loaded: loaded (/usr/lib/systemd/system/httpd.service; disabled) Active: active (running) since Mon 2013-08-05 14:00:55 CEST; 8s ago", "~]USD wget http://localhost/testfile --2009-11-06 17:43:01-- http://localhost/testfile Resolving localhost... 127.0.0.1 Connecting to localhost|127.0.0.1|:80... connected. HTTP request sent, awaiting response... 200 OK Length: 0 [text/plain] Saving to: `testfile' [ <=> ] 0 --.-K/s in 0s 2009-11-06 17:43:01 (0.00 B/s) - `testfile' saved [0/0]", "~]# chcon -t samba_share_t /var/www/html/testfile", "~]USD ls -Z /var/www/html/testfile -rw-r--r-- root root unconfined_u:object_r:samba_share_t:s0 /var/www/html/testfile", "~]USD wget http://localhost/testfile --2009-11-06 14:11:23-- http://localhost/testfile Resolving localhost... 127.0.0.1 Connecting to localhost|127.0.0.1|:80... connected. HTTP request sent, awaiting response... 403 Forbidden 2009-11-06 14:11:23 ERROR 403: Forbidden.", "~]# rm -i /var/www/html/testfile", "~]# systemctl stop httpd.service", "type=AVC msg=audit(1220706212.937:70): avc: denied { getattr } for pid=1904 comm=\"httpd\" path=\"/var/www/html/testfile\" dev=sda5 ino=247576 scontext=unconfined_u:system_r:httpd_t:s0 tcontext=unconfined_u:object_r:samba_share_t:s0 tclass=file type=SYSCALL msg=audit(1220706212.937:70): arch=40000003 syscall=196 success=no exit=-13 a0=b9e21da0 a1=bf9581dc a2=555ff4 a3=2008171 items=0 ppid=1902 pid=1904 auid=500 uid=48 gid=48 euid=48 suid=48 fsuid=48 egid=48 sgid=48 fsgid=48 tty=(none) ses=1 comm=\"httpd\" exe=\"/usr/sbin/httpd\" subj=unconfined_u:system_r:httpd_t:s0 key=(null)", "[Wed May 06 23:00:54 2009] [error] [client 127.0.0.1 ] (13)Permission denied: access to /testfile denied" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/selinux_users_and_administrators_guide/chap-security-enhanced_linux-targeted_policy
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q3/html/using_the_amq_ruby_client/making-open-source-more-inclusive
Appendix C. Revision History
Appendix C. Revision History Revision History Revision 0.3-1 Fri Apr 17 2020 Jaroslav Klech Removed the soft_watchdog kernel parameter from chapter 3. Revision 0.3-0 Thu Feb 27 2020 Jaroslav Klech Added a Known Issue note for BZ#1551632 (Kernel). Revision 0.2-9 Mon Oct 07 2019 Jiri Herrmann Clarified a Technology Preview note related to OVMF. Revision 0.2-8 Sun Apr 28 2019 Lenka Spackova Improved wording of a Technology Preview feature description (File Systems). Revision 0.2-7 Mon Feb 04 2019 Lenka Spackova Improved structure of the book. Revision 0.2-6 Tue Apr 17 2018 Lenka Spackova Updated a recommendation related to the sslwrap() deprecation. Revision 0.2-5 Tue Feb 06 2018 Lenka Spackova Added a missing Technology Preview - OVMF (Virtualization). Added information regarding deprecation of containers using the libvirt-lxc tooling. Revision 0.2-4 Mon Oct 30 2017 Lenka Spackova Added information on changes in the ld linker behavior to Deprecated Functionality. Revision 0.2-3 Wed Oct 11 2017 Lenka Spackova Fixed workaround for the megaraid_sas known issue (Kernel). Revision 0.2-2 Wed Sep 13 2017 Lenka Spackova Added information regarding limited support for visuals in the Xorg server. Revision 0.2-1 Fri Jul 14 2017 Lenka Spackova Added kexec to Technology Previews (Kernel). Revision 0.2-0 Fri Jun 23 2017 Lenka Spackova Improved an iostat bug fix description. Revision 0.1-9 Wed May 03 2017 Lenka Spackova A new Pacemaker feature added to Clustering. Revision 0.1-8 Thu Apr 27 2017 Lenka Spackova Red Hat Access Labs renamed to Red Hat Customer Portal Labs. Revision 0.1-7 Thu Mar 30 2017 Lenka Spackova Added a new feature to Storage. Revision 0.1-6 Thu Mar 23 2017 Lenka Spackova Updated the firewalld rebase description (Security). Moved a SELinux-related bug fix description to the correct chapter (Security). Revision 0.1-4 Tue Feb 14 2017 Lenka Spackova Updated the samba rebase description (Authentication and Interoperability). Revision 0.1-2 Fri Jan 20 2017 Lenka Spackova Added a known issue related to bind-dyndb-ldap (Authentication and Interoperability). Revision 0.1-1 Fri Dec 16 2016 Lenka Spackova Runtime Instrumentation for IBM z System has been moved to fully supported features (Hardware Enablement). Added information regarding the default registration URL (System and Subscription Management). Added a note on the WALinuxAgent rebase in the Extras channel (Virtualization). Added a note about a configurable SSH key file for the ABRT reporter-upload tool (Compiler and Tools). Revision 0.1-0 Fri Nov 25 2016 Lenka Spackova Added Intel DIMM management tools to Technology Previews (Hardware Enablement). Added a known issue (Kernel). Revision 0.0-9 Mon Nov 21 2016 Lenka Spackova Updated Known Issues (Authentication and Interoperability, Installation and Booting) and New Features (Compiler and Tools, Kernel, Storage). Revision 0.0-8 Thu Nov 03 2016 Lenka Spackova Release of the Red Hat Enterprise Linux 7.3 Release Notes. Revision 0.0-3 Thu Aug 25 2016 Lenka Spackova Release of the Red Hat Enterprise Linux 7.3 Beta Release Notes.
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.3_release_notes/appe-7.3_release_notes-revision_history