title | content | commands | url
---|---|---|---|
Chapter 3. Upgrading to the Red Hat JBoss Core Services Apache HTTP Server 2.4.57 | Chapter 3. Upgrading to the Red Hat JBoss Core Services Apache HTTP Server 2.4.57 The steps to upgrade to the latest Red Hat JBoss Core Services (JBCS) release differ depending on whether you previously installed JBCS from RPM packages or from an archive file. Upgrading JBCS when installed from RPM packages If you installed an earlier release of the JBCS Apache HTTP Server from RPM packages on RHEL 7 or RHEL 8 by using the yum groupinstall command, you can upgrade to the latest release. You can use the yum groupupdate command to upgrade to the 2.4.57 release on RHEL 7 or RHEL 8. Note JBCS does not provide an RPM distribution of the Apache HTTP Server on RHEL 9. Upgrading JBCS when installed from an archive file If you installed an earlier release of the JBCS Apache HTTP Server from an archive file, you must perform the following steps to upgrade to the Apache HTTP Server 2.4.57: Install the Apache HTTP Server 2.4.57. Set up the Apache HTTP Server 2.4.57. Remove the earlier version of the Apache HTTP Server. The following procedure describes the recommended steps for upgrading a JBCS Apache HTTP Server 2.4.51 release that you installed from archive files to the latest 2.4.57 release. Prerequisites If you are using Red Hat Enterprise Linux, you have root user access. If you are using Windows Server, you have administrative access. The Red Hat JBoss Core Services Apache HTTP Server 2.4.51 or earlier was previously installed on your system from an archive file. Procedure Shut down any running instances of the Red Hat JBoss Core Services Apache HTTP Server 2.4.51. Back up the Red Hat JBoss Core Services Apache HTTP Server 2.4.51 installation and configuration files. Install the Red Hat JBoss Core Services Apache HTTP Server 2.4.57 by using the .zip installation method for the current system (see Additional Resources below). Migrate your configuration from the Red Hat JBoss Core Services Apache HTTP Server version 2.4.51 to version 2.4.57. Note The Apache HTTP Server configuration files might have changed since the Apache HTTP Server 2.4.51 release. Consider updating the 2.4.57 version configuration files rather than overwriting them with the configuration files from a different version, such as the Apache HTTP Server 2.4.51. Remove the Red Hat JBoss Core Services Apache HTTP Server 2.4.51 root directory. Additional Resources Installing the JBCS Apache HTTP Server on RHEL from archive files Installing the JBCS Apache HTTP Server on RHEL from RPM packages Installing the JBCS Apache HTTP Server on Windows Server | null | https://docs.redhat.com/en/documentation/red_hat_jboss_core_services/2.4.57/html/red_hat_jboss_core_services_apache_http_server_2.4.57_release_notes/upgrading-to-the-jbcs-http-2.4.57-release-notes
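A brief, hedged illustration of the RPM upgrade path described in the chapter above. The yum groupupdate command itself comes from the source text; the group name jbcs-httpd24-httpd is an assumption and should be confirmed with yum grouplist on your system before you run the update.

```bash
# List the installed package groups and confirm the JBCS group name (assumed: jbcs-httpd24-httpd).
sudo yum grouplist | grep -i jbcs

# Upgrade the JBCS Apache HTTP Server group to the latest available release (RHEL 7 or RHEL 8 only).
sudo yum groupupdate jbcs-httpd24-httpd
```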
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code and documentation. We are beginning with these four terms: master, slave, blacklist, and whitelist. Due to the enormity of this endeavor, these changes will be gradually implemented over upcoming releases. For more details on making our language more inclusive, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_for_sap_solutions/9/html/configuring_ha_clusters_to_manage_sap_netweaver_or_sap_s4hana_application_server_instances_using_the_rhel_ha_add-on/conscious-language-message_configuring-clusters-to-manage |
Chapter 126. KafkaBridgeConsumerSpec schema reference | Chapter 126. KafkaBridgeConsumerSpec schema reference Used in: KafkaBridgeSpec Full list of KafkaBridgeConsumerSpec schema properties Configures consumer options for the Kafka Bridge as keys. The values can be one of the following JSON types: String Number Boolean Exceptions You can specify and configure the options listed in the Apache Kafka configuration documentation for consumers . However, AMQ Streams takes care of configuring and managing options related to the following, which cannot be changed: Kafka cluster bootstrap address Security (encryption, authentication, and authorization) Consumer group identifier Properties with the following prefixes cannot be set: bootstrap.servers group.id sasl. security. ssl. If the config property contains an option that cannot be changed, it is disregarded, and a warning message is logged to the Cluster Operator log file. All other supported options are forwarded to Kafka Bridge, including the following exceptions to the options configured by AMQ Streams: Any ssl configuration for supported TLS versions and cipher suites Example Kafka Bridge consumer configuration apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge metadata: name: my-bridge spec: # ... consumer: config: auto.offset.reset: earliest enable.auto.commit: true # ... Important The Cluster Operator does not validate keys or values in the config object. If an invalid configuration is provided, the Kafka Bridge deployment might not start or might become unstable. In this case, fix the configuration so that the Cluster Operator can roll out the new configuration to all Kafka Bridge nodes. 126.1. KafkaBridgeConsumerSpec schema properties Property Description config The Kafka consumer configuration used for consumer instances created by the bridge. Properties with the following prefixes cannot be set: ssl., bootstrap.servers, group.id, sasl., security. (with the exception of: ssl.endpoint.identification.algorithm, ssl.cipher.suites, ssl.protocol, ssl.enabled.protocols). map | [
"apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge metadata: name: my-bridge spec: # consumer: config: auto.offset.reset: earliest enable.auto.commit: true #"
]
| https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-kafkabridgeconsumerspec-reference |
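The row above already includes the KafkaBridge consumer YAML. As a supplementary, non-authoritative sketch, the commands below show one way to apply such a resource and then check the Cluster Operator log for the warning that is written when a restricted option is disregarded. The file name kafka-bridge.yaml, the kafka namespace, and the deployment name strimzi-cluster-operator are assumptions that vary by installation.

```bash
# Apply a KafkaBridge resource saved locally (file name is an example).
kubectl apply -f kafka-bridge.yaml -n kafka

# Look for warnings about consumer config options that the operator ignored.
# The deployment name and namespace are assumptions; adjust them to your cluster.
kubectl logs deployment/strimzi-cluster-operator -n kafka | grep -i warn
```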
8.165. pam_pkcs11 | 8.165. pam_pkcs11 8.165.1. RHBA-2014:1474 - pam_pkcs11 bug fix update Updated pam_pkcs11 packages that fix two bugs are now available for Red Hat Enterprise Linux 6. The pam_pkcs11 package allows X.509 certificate-based user authentication. It provides access to the certificate and its dedicated private key with an appropriate Public-Key Cryptography Standards #11 (PKCS#11) module. Bug Fixes BZ# 887143 The pam_pkcs11 utility generated an incorrect Lightweight Directory Access Protocol (LDAP) URL when attempting to connect to port 636. As a consequence, the connection to that port failed. This update applies a patch to address this bug, and pam_pkcs11 now generates a correct LDAP URL in the described scenario. BZ# 1012082 After adding the coolkey module manually using the full path by running the "modutil -add "CoolKey PKCS #11 Module" -dbdir /etc/pki/nssdb -libfile /usr/lib64/pkcs11/libcoolkeypk11.so" command, an attempt to log in using a smart card failed. The underlying source code has been modified to fix this bug, and the user is now able to log in using smart cards as expected. Users of pam_pkcs11 are advised to upgrade to these updated packages, which fix these bugs. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_technical_notes/pam_pkcs11
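To make the smart-card scenario from BZ# 1012082 easier to reproduce, here is a short sketch of adding and then verifying the CoolKey PKCS #11 module with modutil. The library path mirrors the command quoted in the erratum; verify the exact file name shipped by the coolkey package on your system.

```bash
# Register the CoolKey PKCS #11 module in the system NSS database.
modutil -add "CoolKey PKCS #11 Module" -dbdir /etc/pki/nssdb \
        -libfile /usr/lib64/pkcs11/libcoolkeypk11.so

# Confirm that the module is listed before attempting a smart-card login.
modutil -list -dbdir /etc/pki/nssdb
```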
Chapter 21. KafkaAuthorizationSimple schema reference | Chapter 21. KafkaAuthorizationSimple schema reference Used in: KafkaClusterSpec Full list of KafkaAuthorizationSimple schema properties Configures the Kafka custom resource to use simple authorization and define Access Control Lists (ACLs). ACLs allow you to define which users have access to which resources at a granular level. Streams for Apache Kafka uses Kafka's built-in authorization plugins as follows: StandardAuthorizer for Kafka in KRaft mode AclAuthorizer for ZooKeeper-based Kafka Set the type property in the authorization section to the value simple , and configure a list of super users. Super users are always allowed without querying ACL rules. Access rules are configured for the KafkaUser , as described in the ACLRule schema reference . Example simple authorization configuration apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster namespace: myproject spec: kafka: # ... authorization: type: simple superUsers: - CN=user-1 - user-2 - CN=user-3 # ... Note The super.user configuration option in the config property in Kafka.spec.kafka is ignored. Designate super users in the authorization property instead. 21.1. KafkaAuthorizationSimple schema properties The type property is a discriminator that distinguishes use of the KafkaAuthorizationSimple type from KafkaAuthorizationOpa , KafkaAuthorizationKeycloak , KafkaAuthorizationCustom . It must have the value simple for the type KafkaAuthorizationSimple . Property Property type Description type string Must be simple . superUsers string array List of super users. Should contain list of user principals which should get unlimited access rights. | [
"apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster namespace: myproject spec: kafka: # authorization: type: simple superUsers: - CN=user-1 - user-2 - CN=user-3 #"
]
| https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-kafkaauthorizationsimple-reference |
7.292. ypserv | 7.292. ypserv 7.292.1. RHBA-2013:0330 - ypserv bug fix update Updated ypserv packages that fix four bugs are now available for Red Hat Enterprise Linux 6. The ypserv packages provide the Network Information Service (NIS) server. NIS is a system that provides network information such as login names, passwords, home directories, and group information to all the machines on a network. Bug Fixes BZ#790812 Prior to this update, the NIS server was returning "0" (YP_FALSE) instead of "-1" (YP_NOMAP) after a request for a database not present in the server's domain. This behavior caused autofs mount attempts to fail on Solaris clients. With this update, the return value has been fixed and the autofs mounts no longer fail on Solaris clients. BZ# 816981 Previously, when the crypt() function returned NULL, the yppasswd utility did not properly recognize the return value. This bug has been fixed, and the NULL return values of crypt() are now recognized and reported correctly by yppasswd. BZ#845283 Previously, the ypserv utility allocated large amounts of virtual memory when parsing XDR requests, but failed to free that memory when a request was not parsed successfully. Consequently, memory leaks occurred. With this update, a patch has been provided to free the already allocated memory when parsing of a request fails. As a result, the memory leaks no longer occur. BZ# 863952 Previously, the yppush(8) man page did not describe how to change the settings of the yppush utility. The manual page has been amended to specify that the settings can be changed in the /var/yp/Makefile file. All users of ypserv are advised to upgrade to these updated packages, which fix these bugs. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/ypserv
Data Grid downloads | Data Grid downloads Access the Data Grid Software Downloads on the Red Hat customer portal. Note You must have a Red Hat account to access and download Data Grid software. | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/red_hat_data_grid_8.5_release_notes/rhdg-downloads_datagrid |
Chapter 2. Creating RHEL KVM or RHOSP-compatible images | Chapter 2. Creating RHEL KVM or RHOSP-compatible images To create images that you can manage in the Red Hat OpenStack Platform (RHOSP) Image service (glance), you can use Red Hat Enterprise Linux (RHEL) Kernel-based Virtual Machine (KVM) instance images, or you can manually create RHOSP-compatible images in the QCOW2 format by using RHEL ISO files or Windows ISO files. 2.1. Creating RHEL KVM images Use Red Hat Enterprise Linux (RHEL) Kernel-based Virtual Machine (KVM) instance images to create images that you can manage in the Red Hat OpenStack Platform (RHOSP) Image service (glance). 2.1.1. Using a RHEL KVM instance image with Red Hat OpenStack Platform You can use one of the following Red Hat Enterprise Linux (RHEL) Kernel-based Virtual Machine (KVM) instance images with Red Hat OpenStack Platform (RHOSP): Red Hat Enterprise Linux 9 KVM Guest Image Red Hat Enterprise Linux 8 KVM Guest Image These QCOW2 images are configured with cloud-init and must have EC2-compatible metadata services for provisioning Secure Shell (SSH) keys to function correctly. Ready Windows KVM instance images in QCOW2 format are not available. Note For KVM instance images: The root account in the image is deactivated, but sudo access is granted to a special user named cloud-user . There is no root password set for this image. The root password is locked in /etc/shadow by placing !! in the second field. For a RHOSP instance, generate an SSH keypair from the RHOSP dashboard or command line, and use that key combination to perform an SSH public authentication to the instance as root user. When you launch the instance, this public key is injected to it. You can then authenticate by using the private key that you download when you create the keypair. 2.1.2. Creating a RHEL-based root partition image for bare-metal instances To create a custom root partition image for bare-metal instances, download the base Red Hat Enterprise Linux KVM instance image, and then upload the image to the Image service (glance). Procedure Download the base Red Hat Enterprise Linux KVM instance image from the Customer Portal . Define DIB_LOCAL_IMAGE as the downloaded image: Replace <ver> with the RHEL version number of the image. Set your registration information depending on your method of registration: Red Hat Customer Portal: Red Hat Satellite: Replace values in angle brackets <> with the correct values for your Red Hat Customer Portal or Red Hat Satellite registration. Optional: If you have any offline repositories, you can define DIB_YUM_REPO_CONF as a local repository configuration: Replace <file-path> with the path to your local repository configuration file. Use the diskimage-builder tool to extract the kernel as rhel-image.vmlinuz and the initial RAM disk as rhel-image.initrd : Upload the images to the Image service: 2.1.3. Creating a RHEL-based whole-disk user image for bare-metal instances To create a whole-disk user image for bare-metal instances, download the base Red Hat Enterprise Linux KVM instance image, and then upload the image to the Image service (glance). Procedure Download the base Red Hat Enterprise Linux KVM instance image from the Customer Portal . Define DIB_LOCAL_IMAGE as the downloaded image: Replace <ver> with the RHEL version number of the image. 
Set your registration information depending on your method of registration: Red Hat Customer Portal: Red Hat Satellite: Replace values in angle brackets <> with the correct values for your Red Hat Customer Portal or Red Hat Satellite registration. Optional: If you have any offline repositories, you can define DIB_YUM_REPO_CONF as a local repository configuration: Replace <file-path> with the path to your local repository configuration file. Upload the image to the Image service: 2.2. Creating instance images with RHEL or Windows ISO files You can create custom Red Hat Enterprise Linux (RHEL) or Windows images in QCOW2 format from ISO files, and upload these images to the Red Hat OpenStack Platform (RHOSP) Image service (glance) for use when creating instances. 2.2.1. Prerequisites A Linux host machine to create an image. This can be any machine on which you can install and run the Linux packages, except for the undercloud or the overcloud. The advanced-virt repository is enabled: The virt-manager application is installed to have all packages necessary to create a guest operating system: The libguestfs-tools package is installed to have a set of tools to access and modify virtual machine images: A RHEL 9 or 8 ISO file or a Windows ISO file. For more information about RHEL ISO files, see RHEL 9.0 Binary DVD or RHEL 8.6 Binary DVD . If you do not have a Windows ISO file, see the Microsoft Evaluation Center to download an evaluation image. A text editor, if you want to change the kickstart files (RHEL only). Important If you install the libguestfs-tools package on the undercloud, deactivate iscsid.socket to avoid port conflicts with the tripleo_iscsid service on the undercloud: When you have the prerequisites in place, you can proceed to create a RHEL or Windows image: Create a Red Hat Enterprise Linux 9 image Create a Red Hat Enterprise Linux 8 image Create a Windows image 2.2.2. Creating a Red Hat Enterprise Linux 9 image You can create a Red Hat OpenStack Platform (RHOSP) image in QCOW2 format by using a Red Hat Enterprise Linux (RHEL) 9 ISO file. Procedure Log on to your host machine as the root user. Start the installation by using virt-install : Replace the values in angle brackets <> with the correct values for your RHEL 9 image. This command launches an instance and starts the installation process. Note If the instance does not launch automatically, run the virt-viewer command to view the console: Configure the instance: At the initial Installer boot menu, select Install Red Hat Enterprise Linux 9 . Choose the appropriate Language and Keyboard options. When prompted about which type of devices your installation uses, select Auto-detected installation media . When prompted about which type of installation destination, select Local Standard Disks . For other storage options, select Automatically configure partitioning . In the Which type of installation would you like? window, choose the Basic Server install, which installs an SSH server. For network and host name, select eth0 for network and choose a host name for your device. The default host name is localhost.localdomain . Enter a password in the Root Password field and enter the same password again in the Confirm field. When the on-screen message confirms that the installation is complete, reboot the instance and log in as the root user. Update the /etc/sysconfig/network-scripts/ifcfg-eth0 file so that it contains only the following values: Reboot the machine. Register the machine with the Content Delivery Network. 
Replace pool-id with a valid pool ID. You can see a list of available pool IDs by running the subscription-manager list --available command. Update the system: Install the cloud-init packages: Edit the /etc/cloud/cloud.cfg configuration file and add the following content under cloud_init_modules : The resolv-conf option automatically configures the resolv.conf file when an instance boots for the first time. This file contains information related to the instance such as nameservers , domain , and other options. Add the following line to /etc/sysconfig/network to avoid issues when accessing the EC2 metadata service: To ensure that the console messages appear in the Log tab on the dashboard and the nova console-log output, add the following boot option to the /etc/default/grub file: Run the grub2-mkconfig command: The output is as follows: Deregister the instance so that the resulting image does not contain the subscription details for this instance: Power off the instance: Reset and clean the image by using the virt-sysprep command so that it can be used to create instances without issues: Reduce the image size by converting any free space in the disk image back to free space in the host: This command creates a new <rhel9-cloud.qcow2> file in the location from where the command is run. Note You must manually resize the partitions of instances based on the image in accordance with the disk space in the flavor that is applied to the instance. The <rhel9-cloud.qcow2> image file is ready to be uploaded to the Image service. For more information about uploading this image to your RHOSP deployment, see Uploading images to the Image service . 2.2.3. Creating a Red Hat Enterprise Linux 8 image You can create a Red Hat OpenStack Platform (RHOSP) image in QCOW2 format by using a Red Hat Enterprise Linux (RHEL) 8 ISO file. Procedure Log on to your host machine as the root user. Start the installation by using virt-install : Replace the values in angle brackets <> with the correct values for your RHEL image. This command launches an instance and starts the installation process. Note If the instance does not launch automatically, run the virt-viewer command to view the console: Configure the instance: At the initial Installer boot menu, select Install Red Hat Enterprise Linux 8 . Choose the appropriate Language and Keyboard options. When prompted about which type of devices your installation uses, select Basic Storage Devices . Choose a host name for your device. The default host name is localhost.localdomain . Set the timezone and root password. In the Which type of installation would you like? window, choose the Basic Server install, which installs an SSH server. When the on-screen message confirms that the installation is complete, reboot the instance and log in as the root user. Update the /etc/sysconfig/network-scripts/ifcfg-eth0 file so that it contains only the following values: Reboot the machine. Register the machine with the Content Delivery Network: Replace pool-id with a valid pool ID. You can see a list of available pool IDs by running the subscription-manager list --available command. Update the system: Install the cloud-init packages: Edit the /etc/cloud/cloud.cfg configuration file and add the following content under cloud_init_modules . The resolv-conf option automatically configures the resolv.conf file when an instance boots for the first time. This file contains information related to the instance such as nameservers , domain , and other options. 
To prevent network issues, create /etc/udev/rules.d/75-persistent-net-generator.rules : This prevents the /etc/udev/rules.d/70-persistent-net.rules file from being created. If the /etc/udev/rules.d/70-persistent-net.rules file is created, networking might not function correctly when you boot from snapshots because the network interface is created as eth1 instead of eth0 and the IP address is not assigned. Add the following line to /etc/sysconfig/network to avoid issues when accessing the EC2 metadata service: To ensure that the console messages appear in the Log tab on the dashboard and the nova console-log output, add the following boot option to the /etc/grub.conf file: Deregister the instance so that the resulting image does not contain the same subscription details for this instance: Power off the instance: Reset and clean the image by using the virt-sysprep command so that it can be used to create instances without issues: Reduce the image size by converting any free space in the disk image back to free space in the host: This command creates a new <rhel86-cloud.qcow2> file in the location from where the command is run. Note You must manually resize the partitions of instances based on the image in accordance with the disk space in the flavor that is applied to the instance. The <rhel86-cloud.qcow2> image file is ready to be uploaded to the Image service. For more information about uploading this image to your RHOSP deployment, see Uploading images to the Image service . 2.2.4. Creating a Windows image You can create a Red Hat OpenStack Platform (RHOSP) image in QCOW2 format by using a Windows ISO file. Procedure Log on to your host machine as the root user. Start the installation by using virt-install : Replace the values in angle brackets <> withe the correct values for your Windows image. Note The --os-type=windows parameter ensures that the clock is configured correctly for the Windows instance and enables its Hyper-V enlightenment features. You must also set os_type=windows in the image metadata before uploading the image to the Image service (glance). The virt-install command saves the instance image as /var/lib/libvirt/images/<windows-image>.qcow2 by default. If you want to keep the instance image elsewhere, change the parameter of the --disk option: Replace <file-name> with the name of the file that stores the instance image, and optionally its path. For example, path=win8.qcow2,size=8 creates an 8 GB file named win8.qcow2 in the current working directory. Note If the instance does not launch automatically, run the virt-viewer command to view the console: For more information about how to install Windows, see the Microsoft documentation. To allow the newly-installed Windows system to use the virtualized hardware, you might need to install VirtIO drivers. For more information, see Installing KVM paravirtualized drivers for Windows virtual machines in Configuring and managing virtualization . To complete the configuration, download and run Cloudbase-Init on the Windows system. At the end of the installation of Cloudbase-Init, select the Run Sysprep and Shutdown checkboxes. The Sysprep tool makes the instance unique by generating an OS ID, which is used by certain Microsoft services. Important Red Hat does not provide technical support for Cloudbase-Init. If you encounter an issue, see Contact Cloudbase Solutions . When the Windows system shuts down, the <windows-image.qcow2> image file is ready to be uploaded to the Image service. 
For more information about uploading this image to your RHOSP deployment, see Uploading images to the Image service . 2.3. Creating an image for UEFI Secure Boot When the overcloud contains UEFI Secure Boot Compute nodes, you can create a Secure Boot instance image that cloud users can use to launch Secure Boot instances. Procedure Create a new image for UEFI Secure Boot: Replace <base_image_file> with an image file that supports UEFI and the GUID Partition Table (GPT) standard, and includes an EFI system partition. If the default machine type is not q35 , then set the machine type to q35 : Specify that the instance must be scheduled on a UEFI Secure Boot host: 2.4. Metadata properties for virtual hardware The Compute service (nova) has deprecated support for using libosinfo data to set default device models. Instead, use the following image metadata properties to configure the optimal virtual hardware for an instance: os_distro os_version hw_cdrom_bus hw_disk_bus hw_scsi_model hw_vif_model hw_video_model hypervisor_type For more information about these metadata properties, see Image configuration parameters . | [
"export DIB_LOCAL_IMAGE=rhel-<ver>-x86_64-kvm.qcow2",
"export REG_USER='<username>' export REG_PASSWORD='<password>' export REG_AUTO_ATTACH=true export REG_METHOD=portal export https_proxy='<IP_address:port>' (if applicable) export http_proxy='<IP_address:port>' (if applicable)",
"export REG_USER='<username>' export REG_PASSWORD='<password>' export REG_SAT_URL='<satellite-url>' export REG_ORG='<satellite-org>' export REG_ENV='<satellite-env>' export REG_METHOD=<method>",
"export DIB_YUM_REPO_CONF=<file-path>",
"export DIB_RELEASE=<ver> disk-image-create rhel baremetal -o rhel-image",
"KERNEL_ID=USD(openstack image create --file rhel-image.vmlinuz --public --container-format aki --disk-format aki -f value -c id rhel-image.vmlinuz) RAMDISK_ID=USD(openstack image create --file rhel-image.initrd --public --container-format ari --disk-format ari -f value -c id rhel-image.initrd) openstack image create --file rhel-image.qcow2 --public --container-format bare --disk-format qcow2 --property kernel_id=USDKERNEL_ID --property ramdisk_id=USDRAMDISK_ID rhel-root-partition-bare-metal-image",
"export DIB_LOCAL_IMAGE=rhel-<ver>-x86_64-kvm.qcow2",
"export REG_USER='<username>' export REG_PASSWORD='<password>' export REG_AUTO_ATTACH=true export REG_METHOD=portal export https_proxy='<IP_address:port>' (if applicable) export http_proxy='<IP_address:port>' (if applicable)",
"export REG_USER='<username>' export REG_PASSWORD='<password>' export REG_SAT_URL='<satellite-url>' export REG_ORG='<satellite-org>' export REG_ENV='<satellite-env>' export REG_METHOD=<method>",
"export DIB_YUM_REPO_CONF=<file-path>",
"openstack image create --file rhel-image.qcow2 --public --container-format bare --disk-format qcow2 rhel-whole-disk-bare-metal-image",
"sudo subscription-manager repos --enable=advanced-virt-for-rhel-<ver>-x86_64-rpms",
"sudo dnf module install -y virt",
"sudo dnf install -y libguestfs-tools-c",
"sudo systemctl disable --now iscsid.socket",
"virt-install --virt-type kvm --name <rhel9-cloud-image> --ram <2048> --cdrom </var/lib/libvirt/images/rhel-9.0-x86_64-dvd.iso> --disk <rhel9.qcow2>,format=qcow2,size=<10> --network=bridge:virbr0 --graphics vnc,listen=127.0.0.1 --noautoconsole --os-variant=<rhel9.0>",
"virt-viewer <rhel9-cloud-image>",
"TYPE=Ethernet DEVICE=eth0 ONBOOT=yes BOOTPROTO=dhcp NM_CONTROLLED=no",
"sudo subscription-manager register sudo subscription-manager attach --pool=<pool-id> sudo subscription-manager repos --enable rhel-9-for-x86_64-baseos-rpms --enable rhel-9-for-x86_64-appstream-rpms",
"dnf -y update",
"dnf install -y cloud-utils-growpart cloud-init",
"- resolv-conf",
"NOZEROCONF=yes",
"GRUB_CMDLINE_LINUX_DEFAULT=\"console=tty0 console=ttyS0,115200n8\"",
"grub2-mkconfig -o /boot/grub2/grub.cfg",
"Generating grub configuration file Found linux image: /boot/vmlinuz-3.10.0-229.9.2.el9.x86_64 Found initrd image: /boot/initramfs-3.10.0-229.9.2.el9.x86_64.img Found linux image: /boot/vmlinuz-3.10.0-121.el9.x86_64 Found initrd image: /boot/initramfs-3.10.0-121.el9.x86_64.img Found linux image: /boot/vmlinuz-0-rescue-b82a3044fb384a3f9aeacf883474428b Found initrd image: /boot/initramfs-0-rescue-b82a3044fb384a3f9aeacf883474428b.img done",
"subscription-manager repos --disable=* subscription-manager unregister dnf clean all",
"poweroff",
"virt-sysprep -d <rhel9-cloud-image>",
"virt-sparsify --compress <rhel9.qcow2> <rhel9-cloud.qcow2>",
"virt-install --virt-type kvm --name <rhel86-cloud-image> --ram <2048> --vcpus <2> --disk <rhel86.qcow2>,format=qcow2,size=<10> --location <rhel-8.6-x86_64-boot.iso> --network=bridge:virbr0 --graphics vnc,listen=127.0.0.1 --noautoconsole --os-variant <rhel8.6>",
"virt-viewer <rhel86-cloud-image>",
"TYPE=Ethernet DEVICE=eth0 ONBOOT=yes BOOTPROTO=dhcp NM_CONTROLLED=no",
"sudo subscription-manager register sudo subscription-manager attach --pool=<pool-id> sudo subscription-manager repos --enable rhel-8-for-x86_64-baseos-rpms --enable rhel-8-for-x86_64-appstream-rpms",
"dnf -y update",
"dnf install -y cloud-utils-growpart cloud-init",
"- resolv-conf",
"echo \"#\" > /etc/udev/rules.d/75-persistent-net-generator.rules",
"NOZEROCONF=yes",
"GRUB_CMDLINE_LINUX_DEFAULT=\"console=tty0 console=ttyS0,115200n8\"",
"subscription-manager repos --disable=* subscription-manager unregister dnf clean all",
"poweroff",
"virt-sysprep -d <rhel86-cloud-image>",
"virt-sparsify --compress <rhel86.qcow2> <rhel86-cloud.qcow2>",
"virt-install --name=<windows-image> --disk size=<size> --cdrom=<file-path-to-windows-iso-file> --os-type=windows --network=bridge:virbr0 --graphics spice --ram=<ram>",
"--disk path=<file-name>,size=<size>",
"virt-viewer <windows-image>",
"openstack image create --file <base_image_file> uefi_secure_boot_image",
"openstack image set --property hw_machine_type=q35 uefi_secure_boot_image",
"openstack image set --property hw_firmware_type=uefi --property os_secure_boot=required uefi_secure_boot_image"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/creating_and_managing_images/assembly_glance-creating-images_osp |
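The procedures above finish by cleaning a guest image and uploading it to the Image service. The following sketch simply recombines commands that already appear in the commands list for the RHEL 9 case (virt-sysprep, virt-sparsify, openstack image create, openstack image set); the guest name, file names, and the final image name are placeholders, and the Secure Boot properties are only needed for the scenario in section 2.3.

```bash
# Reset machine-specific state in the stopped guest, then compress the disk image.
virt-sysprep -d <rhel9-cloud-image>
virt-sparsify --compress <rhel9.qcow2> <rhel9-cloud.qcow2>

# Upload the cleaned image to the Image service (glance).
openstack image create --file <rhel9-cloud.qcow2> \
    --disk-format qcow2 --container-format bare \
    --public rhel9-cloud

# Optional: mark the image for UEFI Secure Boot hosts, as described in section 2.3.
openstack image set --property hw_firmware_type=uefi \
    --property os_secure_boot=required rhel9-cloud
```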
4.3. CONTROL/MONITORING | 4.3. CONTROL/MONITORING The CONTROL/MONITORING Panel presents a limited runtime status of LVS. It displays the status of the pulse daemon, the LVS routing table, and the LVS-spawned nanny processes. Note The fields for CURRENT LVS ROUTING TABLE and CURRENT LVS PROCESSES remain blank until you actually start LVS, as shown in Section 4.8, "Starting LVS" . Figure 4.2. The CONTROL/MONITORING Panel Auto update The status display on this page can be updated automatically at a user-configurable interval. To enable this feature, click the Auto update checkbox and set the desired update frequency in the Update frequency in seconds text box (the default value is 10 seconds). It is not recommended that you set the automatic update to an interval of less than 10 seconds. Doing so may make it difficult to reconfigure the Auto update interval because the page will update too frequently. If you encounter this issue, simply click on another panel and then back on CONTROL/MONITORING . The Auto update feature does not work with all browsers, such as Mozilla . Update information now You can update the status information manually by clicking this button. CHANGE PASSWORD Clicking this button takes you to a help screen with information on how to change the administrative password for the Piranha Configuration Tool . | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/virtual_server_administration/s1-piranha-ctrlmon-vsa
Chapter 5. ConsoleNotification [console.openshift.io/v1] | Chapter 5. ConsoleNotification [console.openshift.io/v1] Description ConsoleNotification is the extension for configuring openshift web console notifications. Compatibility level 2: Stable within a major release for a minimum of 9 months or 3 minor releases (whichever is longer). Type object Required spec 5.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object ConsoleNotificationSpec is the desired console notification configuration. 5.1.1. .spec Description ConsoleNotificationSpec is the desired console notification configuration. Type object Required text Property Type Description backgroundColor string backgroundColor is the color of the background for the notification as CSS data type color. color string color is the color of the text for the notification as CSS data type color. link object link is an object that holds notification link details. location string location is the location of the notification in the console. Valid values are: "BannerTop", "BannerBottom", "BannerTopBottom". text string text is the visible text of the notification. 5.1.2. .spec.link Description link is an object that holds notification link details. Type object Required href text Property Type Description href string href is the absolute secure URL for the link (must use https) text string text is the display text for the link 5.2. API endpoints The following API endpoints are available: /apis/console.openshift.io/v1/consolenotifications DELETE : delete collection of ConsoleNotification GET : list objects of kind ConsoleNotification POST : create a ConsoleNotification /apis/console.openshift.io/v1/consolenotifications/{name} DELETE : delete a ConsoleNotification GET : read the specified ConsoleNotification PATCH : partially update the specified ConsoleNotification PUT : replace the specified ConsoleNotification /apis/console.openshift.io/v1/consolenotifications/{name}/status GET : read status of the specified ConsoleNotification PATCH : partially update status of the specified ConsoleNotification PUT : replace status of the specified ConsoleNotification 5.2.1. /apis/console.openshift.io/v1/consolenotifications HTTP method DELETE Description delete collection of ConsoleNotification Table 5.1. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list objects of kind ConsoleNotification Table 5.2. HTTP responses HTTP code Reponse body 200 - OK ConsoleNotificationList schema 401 - Unauthorized Empty HTTP method POST Description create a ConsoleNotification Table 5.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. 
An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.4. Body parameters Parameter Type Description body ConsoleNotification schema Table 5.5. HTTP responses HTTP code Reponse body 200 - OK ConsoleNotification schema 201 - Created ConsoleNotification schema 202 - Accepted ConsoleNotification schema 401 - Unauthorized Empty 5.2.2. /apis/console.openshift.io/v1/consolenotifications/{name} Table 5.6. Global path parameters Parameter Type Description name string name of the ConsoleNotification HTTP method DELETE Description delete a ConsoleNotification Table 5.7. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 5.8. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified ConsoleNotification Table 5.9. HTTP responses HTTP code Reponse body 200 - OK ConsoleNotification schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified ConsoleNotification Table 5.10. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. 
The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.11. HTTP responses HTTP code Reponse body 200 - OK ConsoleNotification schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified ConsoleNotification Table 5.12. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.13. Body parameters Parameter Type Description body ConsoleNotification schema Table 5.14. HTTP responses HTTP code Reponse body 200 - OK ConsoleNotification schema 201 - Created ConsoleNotification schema 401 - Unauthorized Empty 5.2.3. /apis/console.openshift.io/v1/consolenotifications/{name}/status Table 5.15. Global path parameters Parameter Type Description name string name of the ConsoleNotification HTTP method GET Description read status of the specified ConsoleNotification Table 5.16. HTTP responses HTTP code Reponse body 200 - OK ConsoleNotification schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified ConsoleNotification Table 5.17. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.18. 
HTTP responses HTTP code Reponse body 200 - OK ConsoleNotification schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified ConsoleNotification Table 5.19. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 5.20. Body parameters Parameter Type Description body ConsoleNotification schema Table 5.21. HTTP responses HTTP code Reponse body 200 - OK ConsoleNotification schema 201 - Created ConsoleNotification schema 401 - Unauthorized Empty | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/console_apis/consolenotification-console-openshift-io-v1 |
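As an illustrative sketch that is not part of the API reference itself, the command below creates a minimal ConsoleNotification banner using only fields documented above; the resource name, text, colors, and link values are example placeholders.

```bash
# Create a top banner in the OpenShift web console (all concrete values are examples).
oc apply -f - <<'EOF'
apiVersion: console.openshift.io/v1
kind: ConsoleNotification
metadata:
  name: example-banner
spec:
  text: "Scheduled maintenance starts at 22:00 UTC"
  location: BannerTop
  color: "#ffffff"
  backgroundColor: "#0088ce"
  link:
    href: "https://status.example.com"
    text: "Status page"
EOF
```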
4.3.2.2. Password Aging | 4.3.2.2. Password Aging Password aging is another technique used by system administrators to defend against bad passwords within an organization. Password aging means that after a set amount of time (usually 90 days) the user is prompted to create a new password. The theory behind this is that if a user is forced to change their password periodically, a cracked password is only useful to an intruder for a limited amount of time. The downside to password aging, however, is that users are more likely to write their passwords down. There are two primary programs used to specify password aging under Red Hat Enterprise Linux: the chage command and the graphical User Manager ( system-config-users ) application. The -M option of the chage command specifies the maximum number of days the password is valid. So, for instance, to set a user's password to expire in 90 days, type the following command: In the above command, replace <username> with the name of the user. To disable password expiration, it is traditional to use a value of 99999 after the -M option (this equates to a little over 273 years). The graphical User Manager application may also be used to create password aging policies. To access this application, go to the Main Menu button (on the Panel) => System Settings => Users & Groups or type the command system-config-users at a shell prompt (for example, in an XTerm or a GNOME terminal). Click on the Users tab, select the user from the user list, and click Properties from the button menu (or choose File => Properties from the pull-down menu). Then click the Password Info tab and enter the number of days before the password expires, as shown in Figure 4.1, " Password Info Pane" . Figure 4.1. Password Info Pane For more information about user and group configuration (including instructions on forcing first-time passwords), refer to the chapter titled User and Group Configuration in the System Administrators Guide . For an overview of user and resource management, refer to the chapter titled Managing User Accounts and Resource Access in the Red Hat Enterprise Linux Introduction to System Administration . | [
"chage -M 90 <username>"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/security_guide/s3-wstation-pass-org-age |
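Building on the chage command shown above, this short sketch sets a 90-day maximum password age and then lists the resulting aging policy; the user name jsmith and the 7-day warning period are example values, not part of the original text.

```bash
# Expire the password of the example user "jsmith" after 90 days, warning 7 days in advance.
chage -M 90 -W 7 jsmith

# Display the aging policy that is now in effect for the account.
chage -l jsmith
```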
Chapter 9. PackageManifest [packages.operators.coreos.com/v1] | Chapter 9. PackageManifest [packages.operators.coreos.com/v1] Description PackageManifest holds information about a package, which is a reference to one (or more) channels under a single package. Type object 9.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta spec object PackageManifestSpec defines the desired state of PackageManifest status object PackageManifestStatus represents the current status of the PackageManifest 9.1.1. .spec Description PackageManifestSpec defines the desired state of PackageManifest Type object 9.1.2. .status Description PackageManifestStatus represents the current status of the PackageManifest Type object Required catalogSource catalogSourceDisplayName catalogSourcePublisher catalogSourceNamespace packageName channels defaultChannel Property Type Description catalogSource string CatalogSource is the name of the CatalogSource this package belongs to catalogSourceDisplayName string catalogSourceNamespace string CatalogSourceNamespace is the namespace of the owning CatalogSource catalogSourcePublisher string channels array Channels are the declared channels for the package, ala stable or alpha . channels[] object PackageChannel defines a single channel under a package, pointing to a version of that package. defaultChannel string DefaultChannel is, if specified, the name of the default channel for the package. The default channel will be installed if no other channel is explicitly given. If the package has a single channel, then that channel is implicitly the default. packageName string PackageName is the name of the overall package, ala etcd . provider object AppLink defines a link to an application 9.1.3. .status.channels Description Channels are the declared channels for the package, ala stable or alpha . Type array 9.1.4. .status.channels[] Description PackageChannel defines a single channel under a package, pointing to a version of that package. Type object Required name currentCSV entries Property Type Description currentCSV string CurrentCSV defines a reference to the CSV holding the version of this package currently for the channel. currentCSVDesc object CSVDescription defines a description of a CSV entries array Entries lists all CSVs in the channel, with their upgrade edges. entries[] object ChannelEntry defines a member of a package channel. name string Name is the name of the channel, e.g. alpha or stable 9.1.5. 
.status.channels[].currentCSVDesc Description CSVDescription defines a description of a CSV Type object Property Type Description annotations object (string) apiservicedefinitions APIServiceDefinitions customresourcedefinitions CustomResourceDefinitions description string LongDescription is the CSV's description displayName string DisplayName is the CSV's display name icon array Icon is the CSV's base64 encoded icon icon[] object Icon defines a base64 encoded icon and media type installModes array (InstallMode) InstallModes specify supported installation types keywords array (string) links array links[] object AppLink defines a link to an application maintainers array maintainers[] object Maintainer defines a project maintainer maturity string minKubeVersion string Minimum Kubernetes version for operator installation nativeApis array (GroupVersionKind) provider object AppLink defines a link to an application relatedImages array (string) List of related images version OperatorVersion Version is the CSV's semantic version 9.1.6. .status.channels[].currentCSVDesc.icon Description Icon is the CSV's base64 encoded icon Type array 9.1.7. .status.channels[].currentCSVDesc.icon[] Description Icon defines a base64 encoded icon and media type Type object Property Type Description base64data string mediatype string 9.1.8. .status.channels[].currentCSVDesc.links Description Type array 9.1.9. .status.channels[].currentCSVDesc.links[] Description AppLink defines a link to an application Type object Property Type Description name string url string 9.1.10. .status.channels[].currentCSVDesc.maintainers Description Type array 9.1.11. .status.channels[].currentCSVDesc.maintainers[] Description Maintainer defines a project maintainer Type object Property Type Description email string name string 9.1.12. .status.channels[].currentCSVDesc.provider Description AppLink defines a link to an application Type object Property Type Description name string url string 9.1.13. .status.channels[].entries Description Entries lists all CSVs in the channel, with their upgrade edges. Type array 9.1.14. .status.channels[].entries[] Description ChannelEntry defines a member of a package channel. Type object Required name Property Type Description name string Name is the name of the bundle for this entry. version string Version is the version of the bundle for this entry. 9.1.15. .status.provider Description AppLink defines a link to an application Type object Property Type Description name string url string 9.2. API endpoints The following API endpoints are available: /apis/packages.operators.coreos.com/v1/packagemanifests GET : list objects of kind PackageManifest /apis/packages.operators.coreos.com/v1/namespaces/{namespace}/packagemanifests GET : list objects of kind PackageManifest /apis/packages.operators.coreos.com/v1/namespaces/{namespace}/packagemanifests/{name} GET : read the specified PackageManifest /apis/packages.operators.coreos.com/v1/namespaces/{namespace}/packagemanifests/{name}/icon GET : connect GET requests to icon of PackageManifest 9.2.1. /apis/packages.operators.coreos.com/v1/packagemanifests Table 9.1. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. 
If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. 
This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list objects of kind PackageManifest Table 9.2. HTTP responses HTTP code Reponse body 200 - OK PackageManifestList schema 9.2.2. /apis/packages.operators.coreos.com/v1/namespaces/{namespace}/packagemanifests Table 9.3. Global path parameters Parameter Type Description namespace string object name and auth scope, such as for teams and projects Table 9.4. Global query parameters Parameter Type Description allowWatchBookmarks boolean allowWatchBookmarks requests watch events with type "BOOKMARK". Servers that do not implement bookmarks may ignore this flag and bookmarks are sent at the server's discretion. Clients should not assume bookmarks are returned at any specific interval, nor may they assume the server will send any BOOKMARK event during a session. If this is not a watch, this field is ignored. continue string The continue option should be set when retrieving more results from the server. Since this value is server defined, clients may only use the continue value from a query result with identical query parameters (except for the value of continue) and the server may reject a continue value it does not recognize. If the specified continue value is no longer valid whether due to expiration (generally five to fifteen minutes) or a configuration change on the server, the server will respond with a 410 ResourceExpired error together with a continue token. If the client needs a consistent list, it must restart their list without the continue field. Otherwise, the client may send another list request with the token received with the 410 error, the server will respond with a list starting from the key, but from the latest snapshot, which is inconsistent from the list results - objects that are created, modified, or deleted after the first list request will be included in the response, as long as their keys are after the " key". This field is not supported when watch is true. Clients may start a watch from the last resourceVersion value returned by the server and not miss any modifications. fieldSelector string A selector to restrict the list of returned objects by their fields. Defaults to everything. labelSelector string A selector to restrict the list of returned objects by their labels. Defaults to everything. limit integer limit is a maximum number of responses to return for a list call. If more items exist, the server will set the continue field on the list metadata to a value that can be used with the same initial query to retrieve the set of results. Setting a limit may return fewer than the requested amount of items (up to zero items) in the event all requested objects are filtered out and clients should only use the presence of the continue field to determine whether more results are available. Servers may choose not to support the limit argument and will return all of the available results. If limit is specified and the continue field is empty, clients may assume that no more results are available. This field is not supported if watch is true. The server guarantees that the objects returned when using continue will be identical to issuing a single list call without a limit - that is, no objects created, modified, or deleted after the first request is issued will be included in any subsequent continued requests. 
This is sometimes referred to as a consistent snapshot, and ensures that a client that is using limit to receive smaller chunks of a very large result can ensure they see all possible objects. If objects are updated during a chunked list the version of the object that was present at the time the first list result was calculated is returned. pretty string If 'true', then the output is pretty printed. resourceVersion string resourceVersion sets a constraint on what resource versions a request may be served from. See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset resourceVersionMatch string resourceVersionMatch determines how resourceVersion is applied to list calls. It is highly recommended that resourceVersionMatch be set for list calls where resourceVersion is set See https://kubernetes.io/docs/reference/using-api/api-concepts/#resource-versions for details. Defaults to unset timeoutSeconds integer Timeout for the list/watch call. This limits the duration of the call, regardless of any activity or inactivity. watch boolean Watch for changes to the described resources and return them as a stream of add, update, and remove notifications. Specify resourceVersion. HTTP method GET Description list objects of kind PackageManifest Table 9.5. HTTP responses HTTP code Reponse body 200 - OK PackageManifestList schema 9.2.3. /apis/packages.operators.coreos.com/v1/namespaces/{namespace}/packagemanifests/{name} Table 9.6. Global path parameters Parameter Type Description name string name of the PackageManifest namespace string object name and auth scope, such as for teams and projects Table 9.7. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read the specified PackageManifest Table 9.8. HTTP responses HTTP code Reponse body 200 - OK PackageManifest schema 9.2.4. /apis/packages.operators.coreos.com/v1/namespaces/{namespace}/packagemanifests/{name}/icon Table 9.9. Global path parameters Parameter Type Description name string name of the PackageManifest namespace string object name and auth scope, such as for teams and projects HTTP method GET Description connect GET requests to icon of PackageManifest Table 9.10. HTTP responses HTTP code Reponse body 200 - OK string | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/operatorhub_apis/packagemanifest-packages-operators-coreos-com-v1 |
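The PackageManifest endpoints listed above are usually exercised through the oc client rather than raw HTTP. The following sketch is illustrative only: the openshift-marketplace namespace and the example-operator package name are assumptions, not values taken from this reference, and the raw request simply exercises the list endpoint documented in section 9.2.2 with a limit query parameter.

# List PackageManifests visible in a catalog namespace (assumed: openshift-marketplace)
oc get packagemanifests -n openshift-marketplace

# Read one PackageManifest and print the display name of each channel's current CSV
oc get packagemanifest example-operator -n openshift-marketplace -o jsonpath='{.status.channels[*].currentCSVDesc.displayName}'

# Equivalent raw GET against the list endpoint, limited to five results
TOKEN=$(oc whoami -t)
API=$(oc whoami --show-server)
curl -sk -H "Authorization: Bearer ${TOKEN}" "${API}/apis/packages.operators.coreos.com/v1/namespaces/openshift-marketplace/packagemanifests?limit=5"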
Chapter 1. Configuring identity stores | Chapter 1. Configuring identity stores 1.1. Creating a filesystem-realm 1.1.1. Filesystem realm in Elytron With a filesystem security realm, filesystem-realm , you can use a filesystem-based identity store in Elytron to store user credentials and attributes. Elytron stores each identity along with the associated credentials and attributes in an XML file in the filesystem. The name of the XML file is the name of the identity. You can associate multiple credentials and attributes with each identity. By default, identities are stored in the filesystem as follows: Elytron applies two levels of directory hashing to the directory structure where an identity is stored. For example, an identity named "user1" is stored at the location u/s/user1.xml . This is done to overcome the limit set by some filesystems on the number of files you can store in a single directory and for performance reasons. Use the levels attribute to configure the number of levels of directory hashing to apply. The identity names are Base32 encoded before they are used as filenames. This is done because some filesystems are case-insensitive or might restrict the set of characters allowed in a filename. You can turn off the encoding by setting the attribute encoded to false . For information about other attributes and their default values, see filesystem-realm attributes . Encryption The filesystem-realm uses Base64 encoding for clear passwords, hashed passwords, and attributes when storing an identity in an identity file. For added security, you can encrypt the clear passwords, hashed passwords, and attributes using a secret key stored in a credential store. The secret key is used both for encrypting and decrypting the passwords and attributes. Integrity check To ensure that the identities created with a filesystem-realm are not tampered with, you can enable integrity checking on the filesystem-realm by referencing a key pair in the filesystem-realm during creation. Integrity checking works in filesystem-realm as follows: When you create an identity in the filesystem-realm with integrity checking enabled, Elytron creates the identity file and generates a signature for it. Whenever the identity file is read, for example when updating the identity or loading the identity for authentication, Elytron verifies the identity file contents against the signature to ensure the file has not been tampered with since the last authorized write. When you update an existing identity that has an associated signature, Elytron updates the content and generates a new signature after the original content passes verification. If the verification fails, you get the following failure message: Additional resources filesystem-realm attributes Creating a filesystem-realm in Elytron Creating an encrypted filesystem-realm in Elytron Creating a filesystem-realm with integrity in Elytron 1.1.2. Creating a filesystem-realm in Elytron Create a filesystem-realm and a security domain that references the realm to secure the JBoss EAP server interfaces or the applications deployed on the server. Prerequisites JBoss EAP is running. Procedure Create a filesystem-realm in Elytron. Syntax Example Add a user to the realm and configure the user's role. Add a user. Syntax Example Set a password for the user. Syntax Example Set roles for the user. Syntax Example Create a security domain that references the filesystem-realm . 
Syntax Example Verification To verify that Elytron can load an identity from the filesystem-realm , use the following command: Syntax Example You can now use the created security domain to add authentication and authorization to management interfaces and applications. For more information, see Securing management interfaces and applications . Additional resources filesystem-realm attributes security-domain attributes simple-role-decoder attributes 1.1.3. Creating an encrypted filesystem-realm in Elytron Create an encrypted filesystem-realm to secure JBoss EAP applications or server interfaces and ensure that the user credentials are encrypted and therefore secure. 1.1.3.1. Creating a secret-key-credential-store for a standalone server Create a secret-key-credential-store using the management CLI. When you create a secret-key-credential-store , JBoss EAP generates a secret key by default. The name of the generated key is key and its size is 256-bit. Prerequisites JBoss EAP is running. You have provided at least read/write access to the directory containing the secret-key-credential-store for the user account under which JBoss EAP is running. Procedure Use the following command to create a secret-key-credential-store using the management CLI: Syntax Example 1.1.3.2. Creating an encrypted filesystem-realm Create an encrypted filesystem-realm and a security domain that references the realm to secure the JBoss EAP server interfaces or the applications deployed on the server. Prerequisites JBoss EAP is running. You have created a secret-key-credential-store . For more information, see Creating a secret-key-credential-store for a standalone server . Procedure Create an encrypted filesystem-realm in Elytron. Syntax Example Add a user to the realm and configure the user's role. Add a user. Syntax Example Set a password for the user. Syntax Example Set roles for the user. Syntax Example Create a security domain that references the filesystem-realm . Syntax Example Verification To verify that Elytron can load an identity from the encrypted filesystem-realm , use the following command: Syntax Example You can now use the created security domain to add authentication and authorization to management interfaces and applications. Additional resources filesystem-realm attributes security-domain attributes simple-role-decoder attributes 1.1.4. Creating a filesystem-realm with integrity support in Elytron Create a filesystem-realm with integrity support to secure JBoss EAP applications or server interfaces and ensure that the user credentials are not tampered with. 1.1.4.1. Creating a key pair by using the management CLI Create a key store with a key pair in Elytron. Prerequisites JBoss EAP is running. Procedure Create a key store. Syntax Example Create a key pair in the key store. Syntax Example Persist the key pair to the key store file. Syntax Example Additional resources key-store attributes 1.1.4.2. Creating a filesystem-realm with integrity support Create a filesystem-realm with integrity support and a security domain that references the realm to secure the JBoss EAP server interfaces or the applications deployed on the server. Prerequisites JBoss EAP is running. You have generated a key pair. For more information, see Creating a key pair by using the management CLI . Procedure Create a filesystem-realm in Elytron. Syntax Example Add a user to the realm and configure the user's role. Add a user. Syntax Example Set a password for the user. Syntax Example Set roles for the user.
Syntax Example Create a security domain that references the filesystem-realm . Syntax Example Verification To verify that Elytron can load an identity from the filesystem-realm , use the following command: Syntax Example You now can use the created security domain to add authentication and authorization to management interfaces and applications. For more information, see Securing management interfaces and applications . Additional resources filesystem-realm attributes security-domain attributes simple-role-decoder attributes 1.1.4.3. Updating the key pair in an existing filesystem-realm with integrity support enabled You can update the key pair referenced in a filesystem-realm with integrity support enabled in the case that the existing key was compromised. Also, it is a good practice to rotate keys. Prerequisites You have generated a key pair. You have created a filesystem-realm with integrity checking enabled. For more information, see Creating a filesystem-realm with integrity support . Procedure Create a key pair in the existing key store. Syntax Example Persist the key pair to the key store file. Syntax Example Update the key store alias to reference a new key pair. Syntax Example Reload the server. Use the new key pair to update the files in filesystem-realm with new signatures. Syntax Example Verification Verify that the key pair referenced in the filesystem-realm has been updated using the following management CLI command: Syntax Example The key pair referenced in the filesystem-realm has been updated. Additional resources filesystem-realm attributes 1.1.5. Encrypting an unencrypted filesystem-realm If you have a filesystem-realm configured in Elytron, you can add encryption to it using the WildFly Elytron Tool. 1.1.5.1. Creating a secret-key-credential-store for a standalone server Create a secret-key-credential-store using the management CLI. When you create a secret-key-credential-store , JBoss EAP generates a secret key by default. The name of the generated key is key and its size is 256-bit. Prerequisites JBoss EAP is running. You have provided at least read/write access to the directory containing the secret-key-credential-store for the user account under which JBoss EAP is running. Procedure Use the following command to create a secret-key-credential-store using the management CLI: Syntax Example 1.1.5.2. Converting an unencrypted filesystem-realm to an encrypted filesystem-realm You can convert an unencrypted filesystem-realm into an encrypted one by using the WildFly Elytron tool filesystem-realm-encrypt command. Prerequisites You have an existing filesystem-realm . For more information, see Creating a filesystem-realm in Elytron . You have created a secret-key-credential-store . For more information, see Creating a secret-key-credential-store for a standalone server . JBoss EAP is running. Procedure Convert an unencrypted filesystem-realm into an encrypted one. Syntax Example The WildFly Elytron command filesystem-realm-encrypt creates a filesystem-realm specified with the --output-location argument. It also creates a CLI script at the root of the filesystem-realm that you can use to add the filesystem-realm resource in the elytron subsystem. Tip Use the --summary option to see a summary of the command execution. Use the generated CLI script to add the filesystem-realm resource in the elytron subsystem. 
Syntax Example You can use the encrypted filesystem-realm to create a security domain that references the realm to secure the JBoss EAP server interfaces or the applications deployed on the server. Additional resources filesystem-realm attributes secret-key-credential-store attributes Creating an encrypted filesystem-realm in Elytron For more information about the WildFly Elytron tool filesystem-realm-encrypt command, run the filesystem-realm-encrypt --help command: 1.1.6. Adding integrity support to an existing filesystem-realm If you have a filesystem-realm configured in Elytron, you can sign it with a key pair by using the WildFly Elytron Tool to enable integrity checks. 1.1.6.1. Creating a key pair by using the management CLI Create a key store with a key pair in Elytron. Prerequisites JBoss EAP is running. Procedure Create a key store. Syntax Example Create a key pair in the key store. Syntax Example Persist the key pair to the key store file. Syntax Example Additional resources key-store attributes 1.1.6.2. Enabling integrity checks for a filesystem-realm You can create a filesystem-realm with integrity checks from an existing non-empty filesystem-realm by using the WildFly Elytron tool filesystem-realm-integrity command. You can use the filesystem-realm-integrity command for the following use cases: Creating a new filesystem-realm with integrity checks from an existing filesystem-realm . Adding integrity checks to an existing filesystem-realm . Prerequisites You have an existing filesystem-realm . For more information, see Creating a filesystem-realm in Elytron . You have generated a key pair. For more information, see Creating a key pair by using the management CLI . JBoss EAP is running. Procedure Create a filesystem-realm with integrity support by using an existing filesystem-realm and signing it with a key pair. To add integrity support to the existing filesystem-realm , omit the --output-location and --realm-name options in the following command. If you specify the --output-location and --realm-name options, the command creates a new filesystem-realm with integrity checks without updating the existing one. Syntax Example Example output The WildFly Elytron command filesystem-realm-integrity creates a filesystem-realm specified with the --output-location argument. It also creates a CLI script at the root of the filesystem-realm that you can use to add the filesystem-realm resource in the elytron subsystem. Tip Use the --summary option to see a summary of the command execution. Use the generated CLI script to add the filesystem-realm resource in the elytron subsystem. Syntax Example You can use the filesystem-realm to create a security domain that references the realm to secure the JBoss EAP server interfaces or the applications deployed on the server. Additional resources filesystem-realm attributes Creating a filesystem-realm with integrity support in Elytron For more information about the WildFly Elytron tool filesystem-realm-integrity command, run the filesystem-realm-integrity --help command: 1.2. Creating a JDBC realm 1.2.1. Creating a jdbc-realm in Elytron Create a jdbc-realm and a security domain that references the realm to secure the JBoss EAP server interfaces or the applications deployed on the server.
The examples in the procedure use a PostgreSQL database which is configured as follows: Database name: postgresdb Database login credentials: username: postgres password: postgres Table name: example_jboss_eap_users example_jboss_eap_users contents: username password roles user1 passwordUser1 Admin user2 passwordUser2 Guest Prerequisites You have configured the database containing the users. JBoss EAP is running. You have downloaded the appropriate JDBC driver. Procedure Deploy the database driver for the database using the management CLI. Syntax Example Configure the database as the data source. Syntax Example Create a jdbc-realm in Elytron. Syntax Example Note The example shows how to obtain passwords and roles from a single principal-query . You can also create additional principal-query with attribute-mapping attributes if you require multiple queries to obtain roles or additional authentication or authorization information. For a list of supported password mappers, see Password Mappers . Create a security domain that references the jdbc-realm . Syntax Example Verification To verify that Elytron can load data from the database, use the following command: Syntax Example The output confirms that Elytron can load data from the database. You now can use the created security domain to add authentication and authorization to management interfaces and applications. For more information, see Securing management interfaces and applications . Additional resources jdbc-realm attributes Password Mappers security-domain attributes 1.3. Creating an LDAP realm 1.3.1. LDAP realm in Elytron The Lightweight Directory Access Protocol (LDAP) realm, ldap-realm , in Elytron is a security realm that you can use to load identities from an LDAP identity store. The following example illustrates how an identity in LDAP is mapped with an Elytron identity in JBoss EAP. Example LDAP Data Interchange Format (LDIF) file dn: ou=Users,dc=wildfly,dc=org objectClass: organizationalUnit objectClass: top ou: Users dn: uid=user1,ou=Users,dc=wildfly,dc=org objectClass: top objectClass: person objectClass: inetOrgPerson cn: user1 sn: user1 uid: user1 userPassword: userPassword1 dn: ou=Roles,dc=wildfly,dc=org objectclass: top objectclass: organizationalUnit ou: Roles dn: cn=Admin,ou=Roles,dc=wildfly,dc=org objectClass: top objectClass: groupOfNames cn: Admin member: uid=user1,ou=Users,dc=wildfly,dc=org Example commands to create an LDAP realm The commands result in the following configuration: <ldap-realm name="exampleLDAPRealm" dir-context="exampleDirContext"> 1 <identity-mapping rdn-identifier="uid" search-base-dn="ou=Users,dc=wildfly,dc=org"> 2 <attribute-mapping> 3 <attribute from="cn" to="Roles" filter="(&(objectClass=groupOfNames)(member={1}))" filter-base-dn="ou=Roles,dc=wildfly,dc=org"/> 4 </attribute-mapping> <user-password-mapper from="userPassword"/> 5 </identity-mapping> </ldap-realm> 1 The realm definition. name is the ldap-realm realm name. dir-context is the configuration to connect to an LDAP server. 2 Define how identity is mapped. rdn-identifier is relative distinguished name (RDN) of the principal's distinguished name (DN) to use to obtain the principal's name from an LDAP entry. In the example LDIF, uid is configured to represent the principal's name from the base DN=ou=Users,dc=wildfly,dc=org . search-base-dn is the base DN to search for identities. In the example LDIF, it is defined as dn: ou=Users,dc=wildfly,dc=org . 3 Define the LDAP attributes to the identity's attributes mappings. 
4 Configure how to map a specific LDAP attribute as an Elytron identity attribute. from is the LDAP attribute to map. If it is not defined, the DN of the entry is used. to is the name of the identity's attribute mapped from the LDAP attribute. If not provided, the name of the attribute is the same as the one defined in from . If from is also not defined, the DN of the entry is used. filter is a filter to use to obtain the values for a specific attribute. String '{0}' is replaced by the username, '{1}' by user identity DN. objectClass is the LDAP object class to use. In the example LDIF, the object class to use is defined as groupOfNames . member is the member to map. {0} is replaced by user name, and {1} by user identity DN. In this example, {1} is used to map member to user1 . filter-base-dn is the name of the context where the filter should be applied. The result of the example filter is that the user user1 is mapped with the Admin role. 5 user-password-mapper defines the LDAP attribute from which an identity's password is obtained. In the example it is configured as userPassword , which is defined in the LDIF as userPassword1 . Additional resources Creating an ldap-realm in Elytron ldap-realm attributes 1.3.2. Creating an ldap-realm in Elytron Create an Elytron security realm backed by a Lightweight Directory Access Protocol (LDAP) identity store. Use the security realm to create a security domain to add authentication and authorization to management interfaces or the applications deployed on the server. Note ldap-realm configured as a caching realm does not support Active Directory. For more information, see Changing LDAP/AD User Password via JBossEAP CLI for Elytron . Important In cases where the elytron subsystem uses an LDAP server to perform authentication, JBoss EAP will return a 500 error code, or internal server error, if that LDAP server is unreachable. To ensure that the management interfaces and applications secured using an LDAP realm can be accessed even if the LDAP server becomes unavailable, use a failover realm. For more information, see Creating a failover realm . For the examples in this procedure, the following LDAP Data Interchange Format (LDIF) is used: The LDAP connection parameters used for the example are as follows: LDAP URL: ldap://10.88.0.2 LDAP admin password: secret You need this for Elytron to connect with the LDAP server. LDAP admin Distinguished Name (DN): (cn=admin,dc=wildfly,dc=org) LDAP organization: wildfly If no organization name is specified, it defaults to Example Inc . LDAP domain: wildfly.org This is the name that is matched when the platform receives an LDAP search reference. Prerequisites You have configured an LDAP identity store. JBoss EAP is running. Procedure Configure a directory context that provides the URL and the principal used to connect to the LDAP server. Syntax Example Create an LDAP realm that references the directory context. Specify the Search Base DN and how users are mapped. Syntax Example If you store hashed passwords in the LDIF file, you can specify the following attributes: hash-encoding : This attribute specifies the string format for the password if it is not stored in plain text. It is set to base64 encoding by default, but hex is also supported. hash-charset : This attribute specifies the character set to use when converting the password string to a byte array. It is set to UTF-8 by default. Warning If any referenced LDAP servers contain a loop in referrals, it can result in a java.lang.OutOfMemoryError error in JBoss EAP.
Create a role decoder to map attributes to roles. Syntax Example Create a security domain that references the LDAP realm and the role decoder. Syntax Example You now can use the created security domain to add authentication and authorization to management interfaces and applications. For more information, see Securing management interfaces and applications . Additional resources ldap-realm attributes security-domain attributes 1.4. Creating a properties realm 1.4.1. Create a security domain referencing a properties-realm in Elytron Create a properties-realm and a security domain that references the realm to secure your JBoss EAP management interfaces or the applications that you deployed on the server. Prerequisites JBoss EAP is running. You have an authorized user and an existing legacy properties file with the correct realm written in the commented out line in the users.properties file: Example USDEAP_HOME/standalone/configuration/my-example-users.properties The password for user1 is userPassword1 . The password is hashed to the file as HEX( MD5( user1:exampleSecurityRealm:userPassword1 )) . The authorized user listed in your users.properties file has a role in the groups.properties file: Example USDEAP_HOME/standalone/configuration/my-example-groups.properties Procedure Create a properties-realm in Elytron. Syntax Example Create a security domain that references the properties-realm . Syntax Example Verification To verify that Elytron can load data from the properties file, use the following command: Syntax Example The output confirms that Elytron can load data from the properties file. You now can use the created security domain to add authentication and authorization to management interfaces and applications. For more information, see Securing management interfaces and applications . Additional resources properties-realm attributes security-domain attributes simple-role-decoder attributes 1.5. Creating a custom realm 1.5.1. Adding a custom-realm security realm in Elytron You can use a custom-realm to create an Elytron security realm that is tailored to your use case. You can add a custom-realm when existing Elytron security realms do not suit your use case. Prerequisites JBoss EAP is installed and running. Maven is installed. You have an implemented custom realm java class. Procedure Implement a custom realm java class and package it as a JAR file. Add a module containing your custom realm implementation. Syntax Example Create your custom-realm . Syntax Example Note This example expects that the implemented custom realm has the class name com.example.customrealm.ExampleRealm . Note You can use the configuration attribute to pass key/value configuration to the custom-realm . The configuration attribute is optional. Define a security domain based on the realm that you created. Syntax Example You now can use the created security domain to add authentication and authorization to management interfaces and applications. For more information, see Securing management interfaces and applications . Additional resources custom-realm attributes security-domain attributes To learn more about the module add command, you can run the --help command in the JBoss EAP management CLI: | [
"{ \"outcome\" => \"failed\", \"failure-description\" => \"WFLYCTL0158: Operation handler failed:java.lang.RuntimeException: WFLYELY01008: Failed to obtain the authorization identity.\", \"rolled-back\" => true }",
"/subsystem=elytron/filesystem-realm= <filesystem_realm_name> :add(path= <file_path> )",
"/subsystem=elytron/filesystem-realm=exampleSecurityRealm:add(path=fs-realm-users,relative-to=jboss.server.config.dir) {\"outcome\" => \"success\"}",
"/subsystem=elytron/filesystem-realm= <filesystem_realm_name> :add-identity(identity= <user_name> )",
"/subsystem=elytron/filesystem-realm=exampleSecurityRealm:add-identity(identity=user1) {\"outcome\" => \"success\"}",
"/subsystem=elytron/filesystem-realm= <filesystem_realm_name> :set-password(identity= <user_name> , clear={password= <password> })",
"/subsystem=elytron/filesystem-realm=exampleSecurityRealm:set-password(identity=user1, clear={password=\"passwordUser1\"}) {\"outcome\" => \"success\"}",
"/subsystem=elytron/filesystem-realm= <filesystem_realm_name> :add-identity-attribute(identity= <user_name> , name= <roles_attribute_name> , value=[ <role_1> , <role_N> ])",
"/subsystem=elytron/filesystem-realm=exampleSecurityRealm:add-identity-attribute(identity=user1, name=Roles, value=[\"Admin\",\"Guest\"]) {\"outcome\" => \"success\"}",
"/subsystem=elytron/security-domain= <security_domain_name> :add(default-realm= <filesystem_realm_name> ,permission-mapper=default-permission-mapper,realms=[{realm= <filesystem_realm_name> ,role-decoder=\" <role_decoder_name> \"}])",
"/subsystem=elytron/security-domain=exampleSecurityDomain:add(default-realm=exampleSecurityRealm,permission-mapper=default-permission-mapper,realms=[{realm=exampleSecurityRealm}]) {\"outcome\" => \"success\"}",
"/subsystem=elytron/security-domain= <security_domain_name> :read-identity(name= <username> )",
"/subsystem=elytron/security-domain=exampleSecurityDomain:read-identity(name=user1) { \"outcome\" => \"success\", \"result\" => { \"name\" => \"user1\", \"attributes\" => {\"Roles\" => [ \"Admin\", \"Guest\" ]}, \"roles\" => [ \"Guest\", \"Admin\" ] } }",
"/subsystem=elytron/secret-key-credential-store= <name_of_credential_store> :add(path=\" <path_to_the_credential_store> \", relative-to= <path_to_store_file> )",
"/subsystem=elytron/secret-key-credential-store=examplePropertiesCredentialStore:add(path=examplePropertiesCredentialStore.cs, relative-to=jboss.server.config.dir) {\"outcome\" => \"success\"}",
"/subsystem=elytron/filesystem-realm= <filesystem_realm_name> :add(path= <file_path> ,credential-store= <name_of_credential_store> ,secret-key= <key> )",
"/subsystem=elytron/filesystem-realm=exampleSecurityRealm:add(path=fs-realm-users,relative-to=jboss.server.config.dir, credential-store=examplePropertiesCredentialStore, secret-key=key) {\"outcome\" => \"success\"}",
"/subsystem=elytron/filesystem-realm= <filesystem_realm_name> :add-identity(identity= <user_name> )",
"/subsystem=elytron/filesystem-realm=exampleSecurityRealm:add-identity(identity=user1) {\"outcome\" => \"success\"}",
"/subsystem=elytron/filesystem-realm= <filesystem_realm_name> :set-password(identity= <user_name> , clear={password= <password> })",
"/subsystem=elytron/filesystem-realm=exampleSecurityRealm:set-password(identity=user1, clear={password=\"passwordUser1\"}) {\"outcome\" => \"success\"}",
"/subsystem=elytron/filesystem-realm= <filesystem_realm_name> :add-identity-attribute(identity= <user_name> , name= <roles_attribute_name> , value=[ <role_1> , <role_N> ])",
"/subsystem=elytron/filesystem-realm=exampleSecurityRealm:add-identity-attribute(identity=user1, name=Roles, value=[\"Admin\",\"Guest\"]) {\"outcome\" => \"success\"}",
"/subsystem=elytron/security-domain= <security_domain_name> :add(default-realm= <filesystem_realm_name> ,permission-mapper=default-permission-mapper,realms=[{realm= <filesystem_realm_name> ,role-decoder=\" <role_decoder_name> \"}])",
"/subsystem=elytron/security-domain=exampleSecurityDomain:add(default-realm=exampleSecurityRealm,permission-mapper=default-permission-mapper,realms=[{realm=exampleSecurityRealm}]) {\"outcome\" => \"success\"}",
"/subsystem=elytron/security-domain= <security_domain_name> :read-identity(name= <username> )",
"/subsystem=elytron/security-domain=exampleSecurityDomain:read-identity(name=user1) { \"outcome\" => \"success\", \"result\" => { \"name\" => \"user1\", \"attributes\" => {\"Roles\" => [ \"Admin\", \"Guest\" ]}, \"roles\" => [ \"Guest\", \"Admin\" ] } }",
"/subsystem=elytron/key-store= <key_store_name> :add(path= <path_to_key_store_file> ,credential-reference={ <password> })",
"/subsystem=elytron/key-store=exampleKeystore:add(path=keystore, relative-to=jboss.server.config.dir, type=JKS, credential-reference={clear-text=secret}) {\"outcome\" => \"success\"}",
"/subsystem=elytron/key-store= <key_store_name> :generate-key-pair(alias= <alias> ,algorithm= <key_algorithm> ,key-size= <size_of_key> ,validity= <validity_in_days> ,distinguished-name=\" <distinguished_name> \")",
"/subsystem=elytron/key-store=exampleKeystore:generate-key-pair(alias=localhost,algorithm=RSA,key-size=1024,validity=365,distinguished-name=\"CN=localhost\") {\"outcome\" => \"success\"}",
"/subsystem=elytron/key-store= <key_store_name> :store()",
"/subsystem=elytron/key-store=exampleKeystore:store() { \"outcome\" => \"success\", \"result\" => undefined }",
"/subsystem=elytron/filesystem-realm= <filesystem_realm_name> :add(path= <file_path> ,key-store= <key_store_name> ,key-store-alias= <key_store_alias> )",
"/subsystem=elytron/filesystem-realm=exampleSecurityRealm:add(path=fs-realm-users,relative-to=jboss.server.config.dir, key-store=exampleKeystore, key-store-alias=localhost) {\"outcome\" => \"success\"}",
"/subsystem=elytron/filesystem-realm= <filesystem_realm_name> :add-identity(identity= <user_name> )",
"/subsystem=elytron/filesystem-realm=exampleSecurityRealm:add-identity(identity=user1) {\"outcome\" => \"success\"}",
"/subsystem=elytron/filesystem-realm= <filesystem_realm_name> :set-password(identity= <user_name> , clear={password= <password> })",
"/subsystem=elytron/filesystem-realm=exampleSecurityRealm:set-password(identity=user1, clear={password=\"passwordUser1\"}) {\"outcome\" => \"success\"}",
"/subsystem=elytron/filesystem-realm= <filesystem_realm_name> :add-identity-attribute(identity= <user_name> , name= <roles_attribute_name> , value=[ <role_1> , <role_N> ])",
"/subsystem=elytron/filesystem-realm=exampleSecurityRealm:add-identity-attribute(identity=user1, name=Roles, value=[\"Admin\",\"Guest\"]) {\"outcome\" => \"success\"}",
"/subsystem=elytron/security-domain= <security_domain_name> :add(default-realm= <filesystem_realm_name> ,permission-mapper=default-permission-mapper,realms=[{realm= <filesystem_realm_name> ,role-decoder=\" <role_decoder_name> \"}])",
"/subsystem=elytron/security-domain=exampleSecurityDomain:add(default-realm=exampleSecurityRealm,permission-mapper=default-permission-mapper,realms=[{realm=exampleSecurityRealm}]) {\"outcome\" => \"success\"}",
"/subsystem=elytron/security-domain= <security_domain_name> :read-identity(name= <username> )",
"/subsystem=elytron/security-domain=exampleSecurityDomain:read-identity(name=user1) { \"outcome\" => \"success\", \"result\" => { \"name\" => \"user1\", \"attributes\" => {\"Roles\" => [ \"Admin\", \"Guest\" ]}, \"roles\" => [ \"Guest\", \"Admin\" ] } }",
"/subsystem=elytron/key-store= <key_store_name> :generate-key-pair(alias= <alias> ,algorithm= <key_algorithm> ,key-size= <size_of_key> ,validity= <validity_in_days> ,distinguished-name=\" <distinguished_name> \")",
"/subsystem=elytron/key-store=exampleKeystore:generate-key-pair(alias=localhost2,algorithm=RSA,key-size=1024,validity=365,distinguished-name=\"CN=localhost\") {\"outcome\" => \"success\"}",
"/subsystem=elytron/key-store= <key_store_name> :store()",
"/subsystem=elytron/key-store=exampleKeystore:store() { \"outcome\" => \"success\", \"result\" => undefined }",
"/subsystem=elytron/filesystem-realm= <filesystem_realm_name> :write-attribute(name=key-store-alias, value= <key_store_alias> )",
"/subsystem=elytron/filesystem-realm=exampleSecurityRealm:write-attribute(name=key-store-alias, value=localhost2) { \"outcome\" => \"success\", \"response-headers\" => { \"operation-requires-reload\" => true, \"process-state\" => \"reload-required\" } }",
"reload",
"/subsystem=elytron/filesystem-realm= <filesystem_realm_name> :update-key-pair()",
"/subsystem=elytron/filesystem-realm=exampleSecurityRealm:update-key-pair() {\"outcome\" => \"success\"}",
"/subsystem=elytron/filesystem-realm= <filesystem_realm_name> :read-resource()",
"/subsystem=elytron/filesystem-realm=exampleSecurityRealm:read-resource() { \"outcome\" => \"success\", \"result\" => { \"credential-store\" => undefined, \"encoded\" => true, \"hash-charset\" => \"UTF-8\", \"hash-encoding\" => \"base64\", \"key-store\" => \"exampleKeystoreFSRealm\", \"key-store-alias\" => \"localhost2\", \"levels\" => 2, \"secret-key\" => undefined, \"path\" => \"fs-realm-users\", \"relative-to\" => \"jboss.server.config.dir\" } }",
"/subsystem=elytron/secret-key-credential-store= <name_of_credential_store> :add(path=\" <path_to_the_credential_store> \", relative-to= <path_to_store_file> )",
"/subsystem=elytron/secret-key-credential-store=examplePropertiesCredentialStore:add(path=examplePropertiesCredentialStore.cs, relative-to=jboss.server.config.dir) {\"outcome\" => \"success\"}",
"JBOSS_HOME /bin/elytron-tool.sh filesystem-realm-encrypt --input-location <existing_filesystem_realm_name> --output-location JBOSS_HOME /standalone/configuration/ <target_filesystem_realm_name> --credential-store <path_to_credential_store> / <credential_store>",
"JBOSS_HOME /bin/elytron-tool.sh filesystem-realm-encrypt --input-location JBOSS_HOME /standalone/configuration/fs-realm-users --output-location JBOSS_HOME /standalone/configuration/fs-realm-users-enc --credential-store JBOSS_HOME /standalone/configuration/examplePropertiesCredentialStore.cs Creating encrypted realm for: JBOSS_HOME /standalone/configuration/fs-realm-users Found credential store and alias, using pre-existing key",
"JBOSS_HOME /bin/jboss-cli.sh --connect --file= <target_filesystem_realm_directory> / <target_filesystem_realm_name> .cli",
"JBOSS_HOME /bin/jboss-cli.sh --connect --file= JBOSS_HOME /standalone/configuration/fs-realm-users-enc/encrypted-filesystem-realm.cli",
"JBOSS_HOME /bin/elytron-tool.sh filesystem-realm-encrypt --help",
"/subsystem=elytron/key-store= <key_store_name> :add(path= <path_to_key_store_file> ,credential-reference={ <password> })",
"/subsystem=elytron/key-store=exampleKeystore:add(path=keystore, relative-to=jboss.server.config.dir, type=JKS, credential-reference={clear-text=secret}) {\"outcome\" => \"success\"}",
"/subsystem=elytron/key-store= <key_store_name> :generate-key-pair(alias= <alias> ,algorithm= <key_algorithm> ,key-size= <size_of_key> ,validity= <validity_in_days> ,distinguished-name=\" <distinguished_name> \")",
"/subsystem=elytron/key-store=exampleKeystore:generate-key-pair(alias=localhost,algorithm=RSA,key-size=1024,validity=365,distinguished-name=\"CN=localhost\") {\"outcome\" => \"success\"}",
"/subsystem=elytron/key-store= <key_store_name> :store()",
"/subsystem=elytron/key-store=exampleKeystore:store() { \"outcome\" => \"success\", \"result\" => undefined }",
"JBOSS_HOME /bin/elytron-tool.sh filesystem-realm-integrity --input-location <path_to_existing_filesystem_realm> --keystore <path_to_key_store_file> --password <keystore_password> --key-pair <key_pair_alias> --output-location <path_for_new_filesystem_realm> --realm-name <name_of_new_filesystem_realm>",
"JBOSS_HOME /bin/elytron-tool.sh filesystem-realm-integrity --input-location JBOSS_HOME /standalone/configuration/fs-realm-users/ --keystore JBOSS_HOME /standalone/configuration/keystore --password secret --key-pair localhost --output-location JBOSS_HOME /standalone/configuration/fs-realm-users --realm-name exampleRealmWithIntegrity",
"Creating filesystem realm with integrity verification for: JBOSS_HOME /standalone/configuration/fs-realm-users",
"JBOSS_HOME /bin/jboss-cli.sh --connect --file= <target_filesystem_realm_directory> / <target_filesystem_realm_name> .cli",
"JBOSS_HOME /bin/jboss-cli.sh --connect --file= JBOSS_HOME /standalone/configuration/fs-realm-users/exampleRealmWithIntegrity.cli",
"JBOSS_HOME /bin/elytron-tool.sh filesystem-realm-integrity --help",
"deploy <path_to_jdbc_driver> / <jdbc-driver>",
"deploy PATH_TO_JDBC_DRIVER /postgresql-42.2.9.jar",
"data-source add --name= <data_source_name> --jndi-name= <jndi_name> --driver-name= <jdbc-driver> --connection-url= <database_URL> --user-name= <database_username> --password= <database_username>",
"data-source add --name=examplePostgresDS --jndi-name=java:jboss/examplePostgresDS --driver-name=postgresql-42.2.9.jar --connection-url=jdbc:postgresql://localhost:5432/postgresdb --user-name=postgres --password=postgres",
"/subsystem=elytron/jdbc-realm= <jdbc_realm_name> :add(principal-query=[ <sql_query_to_load_users> ])",
"/subsystem=elytron/jdbc-realm=exampleSecurityRealm:add(principal-query=[{sql=\"SELECT password,roles FROM example_jboss_eap_users WHERE username=?\",data-source=examplePostgresDS,clear-password-mapper={password-index=1},attribute-mapping=[{index=2,to=Roles}]}]) {\"outcome\" => \"success\"}",
"/subsystem=elytron/security-domain= <security_domain_name> :add(default-realm= <jdbc_realm_name> ,permission-mapper=default-permission-mapper,realms=[{realm= < jdbc_realm_name> ,role-decoder=\" <role_decoder_name> \"}])",
"/subsystem=elytron/security-domain=exampleSecurityDomain:add(default-realm=exampleSecurityRealm,permission-mapper=default-permission-mapper,realms=[{realm=exampleSecurityRealm}]) {\"outcome\" => \"success\"}",
"/subsystem=elytron/security-domain= <security_domain_name> :read-identity(name= <username> )",
"/subsystem=elytron/security-domain=exampleSecurityDomain:read-identity(name=user1) { \"outcome\" => \"success\", \"result\" => { \"name\" => \"user1\", \"attributes\" => {\"Roles\" => [\"Admin\"]}, \"roles\" => [\"Admin\"] } }",
"dn: ou=Users,dc=wildfly,dc=org objectClass: organizationalUnit objectClass: top ou: Users dn: uid=user1,ou=Users,dc=wildfly,dc=org objectClass: top objectClass: person objectClass: inetOrgPerson cn: user1 sn: user1 uid: user1 userPassword: userPassword1 dn: ou=Roles,dc=wildfly,dc=org objectclass: top objectclass: organizationalUnit ou: Roles dn: cn=Admin,ou=Roles,dc=wildfly,dc=org objectClass: top objectClass: groupOfNames cn: Admin member: uid=user1,ou=Users,dc=wildfly,dc=org",
"/subsystem=elytron/dir-context=exampleDirContext:add(url=\"ldap://10.88.0.2\",principal=\"cn=admin,dc=wildfly,dc=org\",credential-reference={clear-text=\"secret\"}) /subsystem=elytron/ldap-realm=exampleSecurityRealm:add(dir-context=exampleDirContext,identity-mapping={search-base-dn=\"ou=Users,dc=wildfly,dc=org\",rdn-identifier=\"uid\",user-password-mapper={from=\"userPassword\"},attribute-mapping=[{filter-base-dn=\"ou=Roles,dc=wildfly,dc=org\",filter=\"(&(objectClass=groupOfNames)(member={1}))\",from=\"cn\",to=\"Roles\"}]})",
"<ldap-realm name=\"exampleLDAPRealm\" dir-context=\"exampleDirContext\"> 1 <identity-mapping rdn-identifier=\"uid\" search-base-dn=\"ou=Users,dc=wildfly,dc=org\"> 2 <attribute-mapping> 3 <attribute from=\"cn\" to=\"Roles\" filter=\"(&(objectClass=groupOfNames)(member={1}))\" filter-base-dn=\"ou=Roles,dc=wildfly,dc=org\"/> 4 </attribute-mapping> <user-password-mapper from=\"userPassword\"/> 5 </identity-mapping> </ldap-realm>",
"dn: ou=Users,dc=wildfly,dc=org objectClass: organizationalUnit objectClass: top ou: Users dn: uid=user1,ou=Users,dc=wildfly,dc=org objectClass: top objectClass: person objectClass: inetOrgPerson cn: user1 sn: user1 uid: user1 userPassword: userPassword1 dn: ou=Roles,dc=wildfly,dc=org objectclass: top objectclass: organizationalUnit ou: Roles dn: cn=Admin,ou=Roles,dc=wildfly,dc=org objectClass: top objectClass: groupOfNames cn: Admin member: uid=user1,ou=Users,dc=wildfly,dc=org",
"/subsystem=elytron/dir-context= <dir_context_name> :add(url=\" <LDAP_URL> \",principal=\" <principal_distinguished_name> \",credential-reference= <credential_reference> )",
"/subsystem=elytron/dir-context=exampleDirContext:add(url=\"ldap://10.88.0.2\",principal=\"cn=admin,dc=wildfly,dc=org\",credential-reference={clear-text=\"secret\"})",
"/subsystem=elytron/ldap-realm= <ldap_realm_name> add:(dir-context= <dir_context_name> ,identity-mapping=search-base-dn=\"ou= <organization_unit> ,dc= <domain_component> \",rdn-identifier=\" <relative_distinguished_name_identifier> \",user-password-mapper={from= <password_attribute_name> },attribute-mapping=[{filter-base-dn=\"ou= <organization_unit> ,dc= <domain_component> \",filter=\" <ldap_filter> \",from=\" <ldap_attribute_name> \",to=\" <identity_attribute_name> \"}]})",
"/subsystem=elytron/ldap-realm=exampleSecurityRealm:add(dir-context=exampleDirContext,identity-mapping={search-base-dn=\"ou=Users,dc=wildfly,dc=org\",rdn-identifier=\"uid\",user-password-mapper={from=\"userPassword\"},attribute-mapping=[{filter-base-dn=\"ou=Roles,dc=wildfly,dc=org\",filter=\"(&(objectClass=groupOfNames)(member={1}))\",from=\"cn\",to=\"Roles\"}]})",
"/subsystem=elytron/simple-role-decoder= <role_decoder_name> :add(attribute= <attribute> )",
"/subsystem=elytron/simple-role-decoder=from-roles-attribute:add(attribute=Roles)",
"/subsystem=elytron/security-domain= <security_domain_name> :add(realms=[{realm= <ldap_realm_name> ,role-decoder= <role_decoder_name> }],default-realm= <ldap_realm_name> ,permission-mapper= <permission_mapper> )",
"/subsystem=elytron/security-domain=exampleSecurityDomain:add(realms=[{realm=exampleSecurityRealm,role-decoder=from-roles-attribute}],default-realm=exampleSecurityRealm,permission-mapper=default-permission-mapper)",
"#USDREALM_NAME=exampleSecurityRealmUSD user1=078ed9776d4b8e63b6e51135ec45cc75",
"user1=Admin",
"/subsystem=elytron/properties-realm= <properties_realm_name> :add(users-properties={path= <file_path> },groups-properties={path= <file_path> })",
"/subsystem=elytron/properties-realm=exampleSecurityRealm:add(users-properties={path=my-example-users.properties,relative-to=jboss.server.config.dir,plain-text=true},groups-properties={path=my-example-groups.properties,relative-to=jboss.server.config.dir})",
"/subsystem=elytron/security-domain= <security_domain_name> :add(default-realm= <properties_realm_name> ,permission-mapper=default-permission-mapper,realms=[{realm= <properties_realm_name> ,role-decoder=\" <role_decoder_name> \"}])",
"/subsystem=elytron/security-domain=exampleSecurityDomain:add(default-realm=exampleSecurityRealm,permission-mapper=default-permission-mapper,realms=[{realm=exampleSecurityRealm,role-decoder=groups-to-roles}])",
"/subsystem=elytron/security-domain= <security_domain_name> :read-identity(name= <username> )",
"/subsystem=elytron/security-domain=exampleSecurityDomain:read-identity(name=user1) { \"outcome\" => \"success\", \"result\" => { \"name\" => \"user1\", \"attributes\" => {\"Roles\" => [\"Admin\"]}, \"roles\" => [\"Admin\"] } }",
"mvn package",
"module add --name= <name_of_your_wildfly_module> --resources= <path_to_custom_realm_jar> --dependencies=org.wildfly.security.elytron",
"module add --name=com.example.customrealm --resources=EAP_HOME/custom-realm.jar --dependencies=org.wildfly.security.elytron",
"/subsystem=elytron/custom-realm= <name_of_your_custom_realm> :add(module= <name_of_your_wildfly_module> ,class-name= <class_name_of_custom_realm_> ,configuration={ <configuration_option_1> = <configuration_value_1> , <configuration_option_2> = <configuration_value_2> })",
"/subsystem=elytron/custom-realm=example-realm:add(module=com.example.customrealm,class-name=com.example.customrealm.ExampleRealm,configuration={exampleConfigOption1=exampleConfigValue1,exampleConfigOption2=exampleConfigValue2})",
"/subsystem=elytron/security-domain= <your_security_domain_name> :add(realms=[{realm= <your_realm_name> }],default-realm= <your_realm_name> ,permission-mapper= <your_permission_mapper_name> )",
"/subsystem=elytron/security-domain=exampleSecurityDomain:add(realms=[{realm=example-realm}],default-realm=example-realm,permission-mapper=default-permission-mapper)",
"module add --help"
]
| https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/securing_applications_and_management_interfaces_using_an_identity_store/configuring_identity_stores |
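As a complement to the chapter above: section 1.4.1 states that each entry in my-example-users.properties is stored as HEX( MD5( username:realm_name:password ) ). The one-liner below is an illustrative way to generate or check such an entry from a Linux shell; it is not part of the product tooling (JBoss EAP's add-user.sh script produces files in this format), and it assumes the realm name written in the #$REALM_NAME comment is the same one used in the digest.

# Digest for user1 in realm exampleSecurityRealm with password userPassword1
printf '%s' 'user1:exampleSecurityRealm:userPassword1' | md5sum | cut -d' ' -f1
# Expected to match the stored value: user1=078ed9776d4b8e63b6e51135ec45cc75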
Chapter 3. LVM Components | Chapter 3. LVM Components This chapter describes the components of an LVM Logical volume. 3.1. Physical Volumes The underlying physical storage unit of an LVM logical volume is a block device such as a partition or whole disk. To use the device for an LVM logical volume the device must be initialized as a physical volume (PV). Initializing a block device as a physical volume places a label near the start of the device. By default, the LVM label is placed in the second 512-byte sector. You can overwrite this default by placing the label on any of the first 4 sectors. This allows LVM volumes to co-exist with other users of these sectors, if necessary. An LVM label provides correct identification and device ordering for a physical device, since devices can come up in any order when the system is booted. An LVM label remains persistent across reboots and throughout a cluster. The LVM label identifies the device as an LVM physical volume. It contains a random unique identifier (the UUID) for the physical volume. It also stores the size of the block device in bytes, and it records where the LVM metadata will be stored on the device. The LVM metadata contains the configuration details of the LVM volume groups on your system. By default, an identical copy of the metadata is maintained in every metadata area in every physical volume within the volume group. LVM metadata is small and stored as ASCII. Currently LVM allows you to store 0, 1 or 2 identical copies of its metadata on each physical volume. The default is 1 copy. Once you configure the number of metadata copies on the physical volume, you cannot change that number at a later time. The first copy is stored at the start of the device, shortly after the label. If there is a second copy, it is placed at the end of the device. If you accidentally overwrite the area at the beginning of your disk by writing to a different disk than you intend, a second copy of the metadata at the end of the device will allow you to recover the metadata. For detailed information about the LVM metadata and changing the metadata parameters, see Appendix E, LVM Volume Group Metadata . 3.1.1. LVM Physical Volume Layout Figure 3.1, "Physical Volume layout" shows the layout of an LVM physical volume. The LVM label is on the second sector, followed by the metadata area, followed by the usable space on the device. Note In the Linux kernel (and throughout this document), sectors are considered to be 512 bytes in size. Figure 3.1. Physical Volume layout 3.1.2. Multiple Partitions on a Disk LVM allows you to create physical volumes out of disk partitions. It is generally recommended that you create a single partition that covers the whole disk to label as an LVM physical volume for the following reasons: Administrative convenience It is easier to keep track of the hardware in a system if each real disk only appears once. This becomes particularly true if a disk fails. In addition, multiple physical volumes on a single disk may cause a kernel warning about unknown partition types at boot-up. Striping performance LVM cannot tell that two physical volumes are on the same physical disk. If you create a striped logical volume when two physical volumes are on the same physical disk, the stripes could be on different partitions on the same disk. This would result in a decrease in performance rather than an increase. Although it is not recommended, there may be specific circumstances when you will need to divide a disk into separate LVM physical volumes. 
For example, on a system with few disks it may be necessary to move data around partitions when you are migrating an existing system to LVM volumes. Additionally, if you have a very large disk and want to have more than one volume group for administrative purposes then it is necessary to partition the disk. If you do have a disk with more than one partition and both of those partitions are in the same volume group, take care to specify which partitions are to be included in a logical volume when creating striped volumes. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/lvm_components |
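The label placement and metadata-copy behaviour described above correspond directly to pvcreate options. The following sketch is illustrative; the device names are placeholders for unused partitions, and the commands destroy any existing data on them.

# Initialize a partition as a physical volume with the defaults:
# label in the second sector (sector 1) and one copy of the metadata
pvcreate /dev/sdb1

# Keep two metadata copies (start and end of the device) and place the
# label in the fourth sector (sector 3) instead of the default
pvcreate --metadatacopies 2 --labelsector 3 /dev/sdc1

# Inspect the resulting physical volumes, their UUIDs, and metadata area counts
pvs -o pv_name,pv_uuid,pv_mda_count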
Chapter 3. Deploy standalone Multicloud Object Gateway | Chapter 3. Deploy standalone Multicloud Object Gateway Deploying only the Multicloud Object Gateway component with the OpenShift Data Foundation provides the flexibility in deployment and helps to reduce the resource consumption. Use this section to deploy only the standalone Multicloud Object Gateway component, which involves the following steps: Installing Red Hat OpenShift Data Foundation Operator Creating standalone Multicloud Object Gateway 3.1. Installing Red Hat OpenShift Data Foundation Operator You can install Red Hat OpenShift Data Foundation Operator using the Red Hat OpenShift Container Platform Operator Hub. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin and operator installation permissions. You must have at least three worker or infrastructure nodes in the Red Hat OpenShift Container Platform cluster. Each node should include one disk and requires 3 disks (PVs). However, one PV remains eventually unused by default. This is an expected behavior. For additional resource requirements, see the Planning your deployment guide. Important When you need to override the cluster-wide default node selector for OpenShift Data Foundation, you can use the following command to specify a blank node selector for the openshift-storage namespace (create openshift-storage namespace in this case): Taint a node as infra to ensure only Red Hat OpenShift Data Foundation resources are scheduled on that node. This helps you save on subscription costs. For more information, see the How to use dedicated worker nodes for Red Hat OpenShift Data Foundation section in the Managing and Allocating Storage Resources guide. Procedure Log in to the OpenShift Web Console. Click Operators OperatorHub . Scroll or type OpenShift Data Foundation into the Filter by keyword box to find the OpenShift Data Foundation Operator. Click Install . Set the following options on the Install Operator page: Update Channel as stable-4.13 . Installation Mode as A specific namespace on the cluster . Installed Namespace as Operator recommended namespace openshift-storage . If Namespace openshift-storage does not exist, it is created during the operator installation. Select Approval Strategy as Automatic or Manual . If you select Automatic updates, then the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of your Operator without any intervention. If you select Manual updates, then the OLM creates an update request. As a cluster administrator, you must then manually approve that update request to update the Operator to a newer version. Ensure that the Enable option is selected for the Console plugin . Click Install . Verification steps After the operator is successfully installed, a pop-up with a message, Web console update is available appears on the user interface. Click Refresh web console from this pop-up for the console changes to reflect. In the Web Console: Navigate to Installed Operators and verify that the OpenShift Data Foundation Operator shows a green tick indicating successful installation. Navigate to Storage and verify if the Data Foundation dashboard is available. 3.2. Creating a standalone Multicloud Object Gateway You can create only the standalone Multicloud Object Gateway component while deploying OpenShift Data Foundation. Prerequisites Ensure that the OpenShift Data Foundation Operator is installed. 
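Before starting the console procedure below, the operator installation from section 3.1 can be spot-checked from the CLI. This is an optional, illustrative check: the exact CSV names vary by release, and the annotate command is an assumed form of the blank node selector step mentioned in the section 3.1 prerequisites.

# The OpenShift Data Foundation CSV should report the Succeeded phase
oc get csv -n openshift-storage

# Operator pods should be Running before you create the StorageSystem
oc get pods -n openshift-storage

# If your cluster sets a default node selector, clear it for openshift-storage
# (assumed form of the command referred to in the section 3.1 prerequisites)
oc annotate namespace openshift-storage openshift.io/node-selector=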
Procedure In the OpenShift Web Console, click Operators Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click OpenShift Data Foundation operator and then click Create StorageSystem . In the Backing storage page, select the following: Select Multicloud Object Gateway for Deployment type . Select the Use an existing StorageClass option. Click . Optional: Select the Connect to an external key management service checkbox. This is optional for cluster-wide encryption. From the Key Management Service Provider drop-down list, either select Vault or Thales CipherTrust Manager (using KMIP) . If you selected Vault , go to the step. If you selected Thales CipherTrust Manager (using KMIP) , go to step iii. Select an Authentication Method . Using Token authentication method Enter a unique Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Token . Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Vault Enterprise Namespace . Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. Using Kubernetes authentication method Enter a unique Vault Connection Name , host Address of the Vault server ('https://<hostname or ip>'), Port number and Role name. Expand Advanced Settings to enter additional settings and certificate details based on your Vault configuration: Enter the Key Value secret path in Backend Path that is dedicated and unique to OpenShift Data Foundation. Optional: Enter TLS Server Name and Authentication Path if applicable. Upload the respective PEM encoded certificate file to provide the CA Certificate , Client Certificate and Client Private Key . Click Save and skip to step iv. To use Thales CipherTrust Manager (using KMIP) as the KMS provider, follow the steps below: Enter a unique Connection Name for the Key Management service within the project. In the Address and Port sections, enter the IP of Thales CipherTrust Manager and the port where the KMIP interface is enabled. For example: Address : 123.34.3.2 Port : 5696 Upload the Client Certificate , CA certificate , and Client Private Key . If StorageClass encryption is enabled, enter the Unique Identifier to be used for encryption and decryption generated above. The TLS Server field is optional and used when there is no DNS entry for the KMIP endpoint. For example, kmip_all_<port>.ciphertrustmanager.local . Select a Network . Click . In the Review and create page, review the configuration details: To modify any configuration settings, click Back . Click Create StorageSystem . Verification steps Verifying that the OpenShift Data Foundation cluster is healthy In the OpenShift Web Console, click Storage Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Object tab, verify that both Object Service and Data Resiliency have a green tick. In the Details card, verify that the MCG information is displayed. Verifying the state of the pods Click Workloads Pods from the OpenShift Web Console. Select openshift-storage from the Project drop-down list and verify that the following pods are in Running state. 
Note If the Show default projects option is disabled, use the toggle button to list all the default projects. Component Corresponding pods OpenShift Data Foundation Operator ocs-operator-* (1 pod on any storage node) ocs-metrics-exporter-* (1 pod on any storage node) odf-operator-controller-manager-* (1 pod on any storage node) odf-console-* (1 pod on any storage node) csi-addons-controller-manager-* (1 pod on any storage node) Rook-ceph Operator rook-ceph-operator-* (1 pod on any storage node) Multicloud Object Gateway noobaa-operator-* (1 pod on any storage node) noobaa-core-* (1 pod on any storage node) noobaa-db-pg-* (1 pod on any storage node) noobaa-endpoint-* (1 pod on any storage node) | [
"oc annotate namespace openshift-storage openshift.io/node-selector="
]
| https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/deploying_and_managing_openshift_data_foundation_using_google_cloud/deploy-standalone-multicloud-object-gateway |
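As a command-line complement to the web-console verification steps above, the following sketch checks the standalone Multicloud Object Gateway components; the resource names assume the default deployment:

# All Multicloud Object Gateway pods should be Running
oc get pods -n openshift-storage | grep noobaa
# The NooBaa system resource summarizes overall health in its status
oc get noobaa -n openshift-storage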
Making open source more inclusive | Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . | null | https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/creating_and_managing_instances/making-open-source-more-inclusive |
Data Grid documentation | Data Grid documentation Documentation for Data Grid is available on the Red Hat customer portal: Data Grid 8.4 Documentation, Data Grid 8.4 Component Details, Supported Configurations for Data Grid 8.4, Data Grid 8 Feature Support, and Data Grid Deprecated Features and Functionality. | null | https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/hot_rod_.net_client_guide/rhdg-docs_datagrid
Part VI. Monitoring and Automation | Part VI. Monitoring and Automation This part describes various tools that allow system administrators to monitor system performance, automate system tasks, and report bugs. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/system_administrators_guide/part-monitoring_and_automation |
Chapter 1. Obtaining Red Hat Enterprise Linux | Chapter 1. Obtaining Red Hat Enterprise Linux If you have a Red Hat subscription, you can download ISO image files of the Red Hat Enterprise Linux 6.9 installation DVD from the Software & Download Center that is part of the Red Hat Customer Portal. If you do not already have a subscription, either purchase one or obtain a free evaluation subscription from the Software & Download Center at https://access.redhat.com/downloads . The following table indicates the types of boot and installation media available for different architectures and notes the image file that you need to produce the media. Table 1.1. Boot and installation media Architecture Installation DVD Boot CD or boot DVD Boot USB flash drive Where variant is the variant of Red Hat Enterprise Linux (for example, server or workstation ) and version is the latest version number (for example, 6.5). BIOS-based 32-bit x86 x86 DVD ISO image file rhel- variant - version -i386-boot.iso rhel- variant - version -i386-boot.iso UEFI-based 32-bit x86 Not available BIOS-based AMD64 and Intel 64 x86_64 DVD ISO image file (to install 64-bit operating system) or x86 DVD ISO image file (to install 32-bit operating system) rhel- variant - version -x86_64boot.iso or rhel- variant - version -i386-boot.iso rhel- variant - version -x86_64boot.iso or rhel- variant - version -i386-boot.iso UEFI-based AMD64 and Intel 64 x86_64 DVD ISO image file rhel- variant - version -x86_64-boot.iso efidisk.img (from x86_64 DVD ISO image file) POWER (64-bit only) ppc DVD ISO image file rhel-server- version -ppc64-boot.iso Not available System z s390 DVD ISO image file Not available Not available If you have a subscription or evaluation subscription, follow these steps to obtain the Red Hat Enterprise Linux 6.9 ISO image files: Procedure 1.1. Downloading Red Hat Enterprise Linux ISO Images Visit the Customer Portal at https://access.redhat.com/home . If you are not logged in, click LOG IN on the right side of the page. Enter your account credentials when prompted. Click DOWNLOADS at the top of the page. Click Red Hat Enterprise Linux . Ensure that you select the appropriate Product Variant , Version and Architecture for your installation target. By default, Red Hat Enterprise Linux Server and x86_64 are selected. If you are not sure which variant best suits your needs, see http://www.redhat.com/en/technologies/linux-platforms/enterprise-linux . A list of available downloads is displayed; most notably, a minimal Boot ISO image and a full installation Binary DVD ISO image. The Boot ISO is a minimal boot image which only contains the installer and requires a source to install packages from (such as an HTTP or FTP server). The Binary DVD download contains both the installer and necessary packages, and therefore requires less setup. Additional images may be available, such as preconfigured virtual machine images, which are beyond the scope of this document. Choose the image file that you want to use. There are several ways to download an ISO image from Red Hat Customer Portal: Click its name to begin downloading it to your computer using your web browser. Right-click the name and then click Copy Link Location or a similar menu item, the exact wording of which depends on the browser that you are using. This action copies the URL of the file to your clipboard, which allows you to use an alternative application to download the file to your computer. 
This approach is especially useful if your Internet connection is unstable: in that case, you browser may fail to download the whole file, and an attempt to resume the interrupted download process fails because the download link contains an authentication key which is only valid for a short time. Specialized applications such as curl can, however, be used to resume interrupted download attempts from the Customer Portal, which means that you need not download the whole file again and thus you save your time and bandwidth consumption. Procedure 1.2. Using curl to Download Installation Media Make sure the curl package is installed by running the following command as root: If your Linux distribution does not use yum , or if you do not use Linux at all, download the most appropriate software package from the curl website . Open a terminal window, enter a suitable directory, and type the following command: Replace filename.iso with the ISO image name as displayed in the Customer Portal, such as rhel-server-6.9-x86_64-dvd.iso . This is important because the download link in the Customer Portal contains extra characters which curl would otherwise use in the downloaded file name, too. Then, keep the single quotation mark in front of the parameter, and replace copied_link_location with the link that you have copied from the Customer Portal. Note that in Linux, you can paste the content of the clipboard into the terminal window by middle-clicking anywhere in the window, or by pressing Shift + Insert . Finally, use another single quotation mark after the last parameter, and press Enter to run the command and start transferring the ISO image. The single quotation marks prevent the command line interpreter from misinterpreting any special characters that might be included in the download link. Example 1.1. Downloading an ISO image with curl The following is an example of a curl command line: Note that the actual download link is much longer because it contains complicated identifiers. If your Internet connection does drop before the transfer is complete, refresh the download page in the Customer Portal; log in again if necessary. Copy the new download link, use the same basic curl command line parameters as earlier but be sure to use the new download link, and add -C - to instruct curl to automatically determine where it should continue based on the size of the already downloaded file. Example 1.2. Resuming an interrupted download attempt The following is an example of a curl command line that you use if you have only partially downloaded the ISO image of your choice: Optionally, you can use a checksum utility such as sha256sum to verify the integrity of the image file after the download finishes. All downloads on the Download Red Hat Enterprise Linux page are provided with their checksums for reference: Similar tools are available for Microsoft Windows and Mac OS X . You can also use the installation program to verify the media when starting the installation; see Section 28.6.1, "Verifying Boot Media" for details. After you download an ISO image file of the installation DVD from the Red Hat Customer Portal, you can: burn it to a physical DVD (refer to Section 2.1, "Making an Installation DVD" ). use it to prepare minimal boot media (refer to Section 2.2, "Making Minimal Boot Media" ). 
place it on a server to prepare for installations over a network (refer to Section 4.1, "Preparing for a Network Installation" for x86 architectures, Section 12.1, "Preparing for a Network Installation" for Power Systems servers or Section 19.1, "Preparing for a Network Installation" for IBM System z). place it on a hard drive to prepare for installation to use the hard drive as an installation source (refer to Section 4.2, "Preparing for a Hard Drive Installation" for x86 architectures, Section 12.2, "Preparing for a Hard Drive Installation" for Power Systems servers or Section 19.2, "Preparing for a Hard Drive Installation" for IBM System z). place it on a pre-boot execution environment (PXE) server to prepare for installations using PXE boot (refer to Chapter 30, Setting Up an Installation Server ). | [
"yum install curl",
"curl -o filename.iso ' copied_link_location '",
"curl -o rhel-server-6.9-x86_64-dvd.iso 'https://access.cdn.redhat.com//content/origin/files/sha256/85/85a...46c/rhel-server-6.9-x86_64-dvd.iso?_auth_=141...7bf'",
"curl -o rhel-server-6.9-x86_64-dvd.iso 'https://access.cdn.redhat.com//content/origin/files/sha256/85/85a...46c/rhel-server-6.9-x86_64-dvd.iso?_auth_=141...963' -C -",
"sha256sum rhel-server-6.9-x86_64-dvd.iso 85a...46c rhel-server-6.9-x86_64-dvd.iso"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/ch-Obtaining_Red_Hat_Enterprise_Linux |
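If you intend to prepare minimal boot media from a downloaded boot image, the following is a hedged sketch of writing it to a USB flash drive; the device name /dev/sdb and the image file name are assumptions, so confirm the device with lsblk first, because dd overwrites the target:

# Identify the USB device; dd destroys all existing data on the target
lsblk
# Write the boot image to the USB device (run as root; adjust the file and device names)
dd if=rhel-server-6.9-x86_64-boot.iso of=/dev/sdb bs=4M
# Flush write buffers before removing the drive
sync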
Chapter 17. Monitoring resources | Chapter 17. Monitoring resources The following chapter details how to configure monitoring and reporting for managed systems. This includes host configuration, content views, compliance, registered hosts, promotions, and synchronization. 17.1. Using the Red Hat Satellite content dashboard The Red Hat Satellite content dashboard contains various widgets which provide an overview of the host configuration, content views, compliance reports, and hosts currently registered, promotions and synchronization, and a list of the latest notifications. In the Satellite web UI, navigate to Monitor > Dashboard to access the content dashboard. The dashboard can be rearranged by clicking on a widget and dragging it to a different position. The following widgets are available: Host Configuration Status An overview of the configuration states and the number of hosts associated with it during the last reporting interval. The following table shows the descriptions of the possible configuration states. Table 17.1. Host configuration states Icon State Description Hosts that had performed modifications without error Host that successfully performed modifications during the last reporting interval. Hosts in error state Hosts on which an error was detected during the last reporting interval. Good host reports in the last 35 minutes Hosts without error that did not perform any modifications in the last 35 minutes. Hosts that had pending changes Hosts on which some resources would be applied but Puppet was configured to run in the noop mode. Out of sync hosts Hosts that were not synchronized and the report was not received during the last reporting interval. Hosts with no reports Hosts for which no reports were collected during the last reporting interval. Hosts with alerts disabled Hosts which are not being monitored. Click the particular configuration status to view hosts associated with it. Host Configuration Chart A pie chart shows the proportion of the configuration status and the percentage of all hosts associated with it. Latest Events A list of messages produced by hosts including administration information, product changes, and any errors. Monitor this section for global notifications sent to all users and to detect any unusual activity or errors. Run Distribution (last 30 minutes) A graph shows the distribution of the running Puppet agents during the last puppet interval which is 30 minutes by default. In this case, each column represents a number of reports received from clients during 3 minutes. New Hosts A list of the recently created hosts. Click the host for more details. Task Status A summary of all current tasks, grouped by their state and result. Click the number to see the list of corresponding tasks. Latest Warning/Error Tasks A list of the latest tasks that have been stopped due to a warning or error. Click a task to see more details. Discovered Hosts A list of all bare-metal hosts detected on the provisioning network by the Discovery plugin. Latest Errata A list of all errata available for hosts registered to Satellite. Content Views A list of all content views in Satellite and their publish status. Sync Overview An overview of all products or repositories enabled in Satellite and their synchronization status. All products that are in the queue for synchronization, are unsynchronized or have been previously synchronized are listed in this section. 
Host Collections A list of all host collections in Satellite and their status, including the number of content hosts in each host collection. Virt-who Configuration Status An overview of the status of reports received from the virt-who daemon running on hosts in the environment. The following table shows the possible states. Table 17.2. virt-who configuration states State Description No Reports No report has been received because either an error occurred during the virt-who configuration deployment, or the configuration has not been deployed yet, or virt-who cannot connect to Satellite during the scheduled interval. No Change No report has been received because hypervisor did not detect any changes on the virtual machines, or virt-who failed to upload the reports during the scheduled interval. If you added a virtual machine but the configuration is in the No Change state, check that virt-who is running. OK The report has been received without any errors during the scheduled interval. Total Configurations A total number of virt-who configurations. Click the configuration status to see all configurations in this state. The widget also lists the three latest configurations in the No Change state under Latest Configurations Without Change . Latest Compliance Reports A list of the latest compliance reports. Each compliance report shows a number of rules passed (P), failed (F), or othered (O). Click the host for the detailed compliance report. Click the policy for more details on that policy. Compliance Reports Breakdown A pie chart shows the distribution of compliance reports according to their status. Red Hat Insights Actions Red Hat Insights is a tool embedded in Satellite that checks the environment and suggests actions you can take. The actions are divided into 4 categories: Availability, Stability, Performance, and Security. Red Hat Insights Risk Summary A table shows the distribution of the actions according to the risk levels. Risk level represents how critical the action is and how likely it is to cause an actual issue. The possible risk levels are: Low, Medium, High, and Critical. Note It is not possible to change the date format displayed in the Satellite web UI. 17.1.1. Managing tasks Red Hat Satellite keeps a complete log of all planned or performed tasks, such as repositories synchronised, errata applied, and content views published. To review the log, navigate to Monitor > Satellite Tasks > Tasks . In the Task window, you can search for specific tasks, view their status, details, and elapsed time since they started. You can also cancel and resume one or more tasks. The tasks are managed using the Dynflow engine. Remote tasks have a timeout which can be adjusted as needed. To adjust timeout settings In the Satellite web UI, navigate to Administer > Settings . Enter %_timeout in the search box and click Search . The search should return four settings, including a description. In the Value column, click the icon to a number to edit it. Enter the desired value in seconds, and click Save . Note Adjusting the %_finish_timeout values might help in case of low bandwidth. Adjusting the %_accept_timeout values might help in case of high latency. When a task is initialized, any back-end service that will be used in the task, such as Candlepin or Pulp, will be checked for correct functioning. If the check fails, you will receive an error similar to the following one: There was an issue with the backend service candlepin: Connection refused - connect(2). 
If the back-end service checking feature turns out to be causing any trouble, it can be disabled as follows. To disable checking for services In the Satellite web UI, navigate to Administer > Settings . Enter check_services_before_actions in the search box and click Search . In the Value column, click the icon to edit the value. From the drop-down menu, select false . Click Save . 17.2. Configuring RSS notifications To view Satellite event notification alerts, click the Notifications icon in the upper right of the screen. By default, the Notifications area displays RSS feed events published in the Red Hat Satellite Blog . The feed is refreshed every 12 hours and the Notifications area is updated whenever new events become available. You can configure the RSS feed notifications by changing the URL feed. The supported feed format is RSS 2.0 and Atom. For an example of the RSS 2.0 feed structure, see the Red Hat Satellite Blog feed . For an example of the Atom feed structure, see the Foreman blog feed . To configure RSS feed notifications In the Satellite web UI, navigate to Administer > Settings and select the Notifications tab. In the RSS URL row, click the edit icon in the Value column and type the required URL. In the RSS enable row, click the edit icon in the Value column to enable or disable this feature. 17.3. Monitoring Satellite Server Audit records list the changes made by all users on Satellite. This information can be used for maintenance and troubleshooting. Procedure In the Satellite web UI, navigate to Monitor > Audits to view the audit records. To obtain a list of all the audit attributes, use the following command: 17.4. Monitoring Capsule Server The following section shows how to use the Satellite web UI to find Capsule information valuable for maintenance and troubleshooting. 17.4.1. Viewing general Capsule information In the Satellite web UI, navigate to Infrastructure > Capsules to view a table of Capsule Servers registered to Satellite Server. The information contained in the table answers the following questions: Is Capsule Server running? This is indicated by a green icon in the Status column. A red icon indicates an inactive Capsule, use the service foreman-proxy restart command on Capsule Server to activate it. What services are enabled on Capsule Server? In the Features column, you can verify if, for example, your Capsule provides a DHCP service or acts as a Pulp mirror. Capsule features can be enabled during installation or configured in addition. For more information, see Installing Capsule Server . What organizations and locations is Capsule Server assigned to? A Capsule Server can be assigned to multiple organizations and locations, but only Capsules belonging to the currently selected organization are displayed. To list all Capsules, select Any Organization from the context menu in the top left corner. After changing the Capsule configuration, select Refresh from the drop-down menu in the Actions column to ensure the Capsule table is up to date. Click the Capsule name to view further details. At the Overview tab, you can find the same information as in the Capsule table. In addition, you can answer to the following questions: Which hosts are managed by Capsule Server? The number of associated hosts is displayed to the Hosts managed label. Click the number to view the details of associated hosts. How much storage space is available on Capsule Server? The amount of storage space occupied by the Pulp content in /var/lib/pulp is displayed. 
Also the remaining storage space available on the Capsule can be ascertained. 17.4.2. Monitoring services In the Satellite web UI, navigate to Infrastructure > Capsules and click the name of the selected Capsule. At the Services tab, you can find basic information on Capsule services, such as the list of DNS domains, or the number of Pulp workers. The appearance of the page depends on what services are enabled on Capsule Server. Services providing more detailed status information can have dedicated tabs at the Capsule page. For more information, see Section 17.4.3, "Monitoring Puppet" . 17.4.3. Monitoring Puppet In the Satellite web UI, navigate to Infrastructure > Capsules and click the name of the selected Capsule. At the Puppet tab you can find the following: A summary of Puppet events, an overview of latest Puppet runs, and the synchronization status of associated hosts at the General sub-tab. A list of Puppet environments at the Environments sub-tab. At the Puppet CA tab you can find the following: A certificate status overview and the number of autosign entries at the General sub-tab. A table of CA certificates associated with the Capsule at the Certificates sub-tab. Here you can inspect the certificate expiry data, or cancel the certificate by clicking Revoke . A list of autosign entries at the Autosign entries sub-tab. Here you can create an entry by clicking New or delete one by clicking Delete . Note The Puppet and Puppet CA tabs are available only if you have Puppet enabled in your Satellite. Additional resources For more information, see Enabling Puppet Integration with Satellite in Managing configurations by using Puppet integration . | [
"There was an issue with the backend service candlepin: Connection refused - connect(2).",
"foreman-rake audits:list_attributes"
]
| https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/administering_red_hat_satellite/monitoring_resources_admin |
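In addition to the web UI views described above, a few command-line checks are useful when a Capsule looks unhealthy; this is a sketch, run on the Capsule or on Satellite Server as noted, and it assumes hammer is already configured with valid credentials:

# On the Capsule: confirm the foreman-proxy service is active
systemctl status foreman-proxy
# On Satellite Server: list registered Capsules and the features they provide
hammer capsule list
# Refresh the feature list after changing a Capsule's configuration (the ID is an example)
hammer capsule refresh-features --id 1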
Networking | Networking OpenShift Dedicated 4 Configuring OpenShift Dedicated networking Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_dedicated/4/html/networking/index |
Appendix B. Using Red Hat Maven repositories | Appendix B. Using Red Hat Maven repositories This section describes how to use Red Hat-provided Maven repositories in your software. B.1. Using the online repository Red Hat maintains a central Maven repository for use with your Maven-based projects. For more information, see the repository welcome page . There are two ways to configure Maven to use the Red Hat repository: Add the repository to your Maven settings Add the repository to your POM file Adding the repository to your Maven settings This method of configuration applies to all Maven projects owned by your user, as long as your POM file does not override the repository configuration and the included profile is enabled. Procedure Locate the Maven settings.xml file. It is usually inside the .m2 directory in the user home directory. If the file does not exist, use a text editor to create it. On Linux or UNIX: /home/ <username> /.m2/settings.xml On Windows: C:\Users\<username>\.m2\settings.xml Add a new profile containing the Red Hat repository to the profiles element of the settings.xml file, as in the following example: Example: A Maven settings.xml file containing the Red Hat repository <settings> <profiles> <profile> <id>red-hat</id> <repositories> <repository> <id>red-hat-ga</id> <url>https://maven.repository.redhat.com/ga</url> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>red-hat-ga</id> <url>https://maven.repository.redhat.com/ga</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> </pluginRepositories> </profile> </profiles> <activeProfiles> <activeProfile>red-hat</activeProfile> </activeProfiles> </settings> For more information about Maven configuration, see the Maven settings reference . Adding the repository to your POM file To configure a repository directly in your project, add a new entry to the repositories element of your POM file, as in the following example: Example: A Maven pom.xml file containing the Red Hat repository <project> <modelVersion>4.0.0</modelVersion> <groupId>com.example</groupId> <artifactId>example-app</artifactId> <version>1.0.0</version> <repositories> <repository> <id>red-hat-ga</id> <url>https://maven.repository.redhat.com/ga</url> </repository> </repositories> </project> For more information about POM file configuration, see the Maven POM reference . B.2. Using a local repository Red Hat provides file-based Maven repositories for some of its components. These are delivered as downloadable archives that you can extract to your local filesystem. To configure Maven to use a locally extracted repository, apply the following XML in your Maven settings or POM file: <repository> <id>red-hat-local</id> <url> USD{repository-url} </url> </repository> USD{repository-url} must be a file URL containing the local filesystem path of the extracted repository. Table B.1. Example URLs for local Maven repositories Operating system Filesystem path URL Linux or UNIX /home/alice/maven-repository file:/home/alice/maven-repository Windows C:\repos\red-hat file:C:\repos\red-hat | [
"/home/ <username> /.m2/settings.xml",
"C:\\Users\\<username>\\.m2\\settings.xml",
"<settings> <profiles> <profile> <id>red-hat</id> <repositories> <repository> <id>red-hat-ga</id> <url>https://maven.repository.redhat.com/ga</url> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>red-hat-ga</id> <url>https://maven.repository.redhat.com/ga</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> </pluginRepositories> </profile> </profiles> <activeProfiles> <activeProfile>red-hat</activeProfile> </activeProfiles> </settings>",
"<project> <modelVersion>4.0.0</modelVersion> <groupId>com.example</groupId> <artifactId>example-app</artifactId> <version>1.0.0</version> <repositories> <repository> <id>red-hat-ga</id> <url>https://maven.repository.redhat.com/ga</url> </repository> </repositories> </project>",
"<repository> <id>red-hat-local</id> <url> USD{repository-url} </url> </repository>"
]
| https://docs.redhat.com/en/documentation/red_hat_amq/2021.q1/html/using_the_amq_jms_client/using_red_hat_maven_repositories |
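To confirm that Maven actually picks up the Red Hat repository after either configuration method, a quick sketch follows; it relies on the standard maven-help-plugin and maven-dependency-plugin, which Maven resolves automatically:

# Print the effective settings and check that the Red Hat repository URL appears
mvn help:effective-settings | grep maven.repository.redhat.com
# Resolve a project's dependencies and watch for downloads from the repository
mvn dependency:resolve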
5.11. Multi-Level Security (MLS) | 5.11. Multi-Level Security (MLS) The Multi-Level Security technology refers to a security scheme that enforces the Bell-La Padula Mandatory Access Model. Under MLS, users and processes are called subjects , and files, devices, and other passive components of the system are called objects . Both subjects and objects are labeled with a security level, which entails a subject's clearance or an object's classification. Each security level is composed of a sensitivity and a category , for example, an internal release schedule is filed under the internal documents category with a confidential sensitivity. Figure 5.1, "Levels of clearance" shows levels of clearance as originally designed by the US defense community. Relating to our internal schedule example above, only users that have gained the confidential clearance are allowed to view documents in the confidential category. However, users who only have the confidential clearance are not allowed to view documents that require higher levels or clearance; they are allowed read access only to documents with lower levels of clearance, and write access to documents with higher levels of clearance. Figure 5.1. Levels of clearance Figure 5.2, "Allowed data flows using MLS" shows all allowed data flows between a subject running under the "Secret" security level and various objects with different security levels. In simple terms, the Bell-LaPadula model enforces two properties: no read up and no write down . Figure 5.2. Allowed data flows using MLS 5.11.1. MLS and System Privileges MLS access rules are always combined with conventional access permissions (file permissions). For example, if a user with a security level of "Secret" uses Discretionary Access Control (DAC) to block access to a file by other users, this also blocks access by users with a security level of "Top Secret". It is important to remember that SELinux MLS policy rules are checked after DAC rules. A higher security clearance does not automatically give permission to arbitrarily browse a file system. Users with top-level clearances do not automatically acquire administrative rights on multi-level systems. While they may have access to all information on the computer, this is different from having administrative rights. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security-enhanced_linux/mls |
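To see how sensitivity and category labels look in practice, here is a sketch that assumes a system running the MLS policy; the file name and the c2 category are hypothetical and must exist in your configuration:

# The fourth field of the security context carries the MLS level, for example s1:c2
ls -Z schedule.txt
# Illustrative output: user_u:object_r:user_home_t:s1:c2 schedule.txt
# Add or remove a category on a file
chcat -- +c2 schedule.txt
chcat -- -c2 schedule.txt
# Show the current user's own clearance range
id -Z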
Chapter 5. Installing a cluster on RHV in a restricted network | Chapter 5. Installing a cluster on RHV in a restricted network In OpenShift Container Platform version 4.13, you can install a customized OpenShift Container Platform cluster on Red Hat Virtualization (RHV) in a restricted network by creating an internal mirror of the installation release content. 5.1. Prerequisites The following items are required to install an OpenShift Container Platform cluster on a RHV environment. You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You have a supported combination of versions in the Support Matrix for OpenShift Container Platform on RHV . You created a registry on your mirror host and obtained the imageContentSources data for your version of OpenShift Container Platform. Important Because the installation media is on the mirror host, you can use that computer to complete all installation steps. You provisioned persistent storage for your cluster. To deploy a private image registry, your storage must provide ReadWriteMany access modes. If you use a firewall and plan to use the Telemetry service, you configured the firewall to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. 5.2. About installations in restricted networks In OpenShift Container Platform 4.13, you can perform an installation that does not require an active connection to the internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster. If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Service's Route 53 DNS and IAM services, require internet access. Depending on your network, you might require less internet access for an installation on bare metal hardware, Nutanix, or on VMware vSphere. To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift image registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your closed network, or by using other methods that meet your restrictions. 5.2.1. Additional limits Clusters in restricted networks have the following additional limitations and restrictions: The ClusterVersion status includes an Unable to retrieve available updates error. By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags. 5.3. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.13, you require access to the internet to obtain the images that are necessary to install your cluster. You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. 5.4. 
Requirements for the RHV environment To install and run an OpenShift Container Platform version 4.13 cluster, the RHV environment must meet the following requirements. Not meeting these requirements can cause the installation or process to fail. Additionally, not meeting these requirements can cause the OpenShift Container Platform cluster to fail days or weeks after installation. The following requirements for CPU, memory, and storage resources are based on default values multiplied by the default number of virtual machines the installation program creates. These resources must be available in addition to what the RHV environment uses for non-OpenShift Container Platform operations. By default, the installation program creates seven virtual machines during the installation process. First, it creates a bootstrap virtual machine to provide temporary services and a control plane while it creates the rest of the OpenShift Container Platform cluster. When the installation program finishes creating the cluster, deleting the bootstrap machine frees up its resources. If you increase the number of virtual machines in the RHV environment, you must increase the resources accordingly. Requirements The RHV version is 4.4. The RHV environment has one data center whose state is Up . The RHV data center contains an RHV cluster. The RHV cluster has the following resources exclusively for the OpenShift Container Platform cluster: Minimum 28 vCPUs: four for each of the seven virtual machines created during installation. 112 GiB RAM or more, including: 16 GiB or more for the bootstrap machine, which provides the temporary control plane. 16 GiB or more for each of the three control plane machines which provide the control plane. 16 GiB or more for each of the three compute machines, which run the application workloads. The RHV storage domain must meet these etcd backend performance requirements . In production environments, each virtual machine must have 120 GiB or more. Therefore, the storage domain must provide 840 GiB or more for the default OpenShift Container Platform cluster. In resource-constrained or non-production environments, each virtual machine must have 32 GiB or more, so the storage domain must have 230 GiB or more for the default OpenShift Container Platform cluster. To download images from the Red Hat Ecosystem Catalog during installation and update procedures, the RHV cluster must have access to an internet connection. The Telemetry service also needs an internet connection to simplify the subscription and entitlement process. The RHV cluster must have a virtual network with access to the REST API on the RHV Manager. Ensure that DHCP is enabled on this network, because the VMs that the installer creates obtain their IP address by using DHCP. A user account and group with the following least privileges for installing and managing an OpenShift Container Platform cluster on the target RHV cluster: DiskOperator DiskCreator UserTemplateBasedVm TemplateOwner TemplateCreator ClusterAdmin on the target cluster Warning Apply the principle of least privilege: Avoid using an administrator account with SuperUser privileges on RHV during the installation process. The installation program saves the credentials you provide to a temporary ovirt-config.yaml file that might be compromised. 5.5. Verifying the requirements for the RHV environment Verify that the RHV environment meets the requirements to install and run an OpenShift Container Platform cluster. Not meeting these requirements can cause failures. 
Important These requirements are based on the default resources the installation program uses to create control plane and compute machines. These resources include vCPUs, memory, and storage. If you change these resources or increase the number of OpenShift Container Platform machines, adjust these requirements accordingly. Procedure Check that the RHV version supports installation of OpenShift Container Platform version 4.13. In the RHV Administration Portal, click the ? help icon in the upper-right corner and select About . In the window that opens, make a note of the RHV Software Version . Confirm that the RHV version is 4.4. For more information about supported version combinations, see Support Matrix for OpenShift Container Platform on RHV . Inspect the data center, cluster, and storage. In the RHV Administration Portal, click Compute Data Centers . Confirm that the data center where you plan to install OpenShift Container Platform is accessible. Click the name of that data center. In the data center details, on the Storage tab, confirm the storage domain where you plan to install OpenShift Container Platform is Active . Record the Domain Name for use later on. Confirm Free Space has at least 230 GiB. Confirm that the storage domain meets these etcd backend performance requirements , which you can measure by using the fio performance benchmarking tool . In the data center details, click the Clusters tab. Find the RHV cluster where you plan to install OpenShift Container Platform. Record the cluster name for use later on. Inspect the RHV host resources. In the RHV Administration Portal, click Compute > Clusters . Click the cluster where you plan to install OpenShift Container Platform. In the cluster details, click the Hosts tab. Inspect the hosts and confirm they have a combined total of at least 28 Logical CPU Cores available exclusively for the OpenShift Container Platform cluster. Record the number of available Logical CPU Cores for use later on. Confirm that these CPU cores are distributed so that each of the seven virtual machines created during installation can have four cores. Confirm that, all together, the hosts have 112 GiB of Max free Memory for scheduling new virtual machines distributed to meet the requirements for each of the following OpenShift Container Platform machines: 16 GiB required for the bootstrap machine 16 GiB required for each of the three control plane machines 16 GiB for each of the three compute machines Record the amount of Max free Memory for scheduling new virtual machines for use later on. Verify that the virtual network for installing OpenShift Container Platform has access to the RHV Manager's REST API. From a virtual machine on this network, use curl to reach the RHV Manager's REST API: USD curl -k -u <username>@<profile>:<password> \ 1 https://<engine-fqdn>/ovirt-engine/api 2 1 For <username> , specify the user name of an RHV account with privileges to create and manage an OpenShift Container Platform cluster on RHV. For <profile> , specify the login profile, which you can get by going to the RHV Administration Portal login page and reviewing the Profile dropdown list. For <password> , specify the password for that user name. 2 For <engine-fqdn> , specify the fully qualified domain name of the RHV environment. For example: USD curl -k -u ocpadmin@internal:pw123 \ https://rhv-env.virtlab.example.com/ovirt-engine/api 5.6. 
Networking requirements for user-provisioned infrastructure All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files. During the initial boot, the machines require an IP address configuration that is set either through a DHCP server or statically by providing the required boot options. After a network connection is established, the machines download their Ignition config files from an HTTP or HTTPS server. The Ignition config files are then used to set the exact state of each machine. The Machine Config Operator completes more changes to the machines, such as the application of new certificates or keys, after installation. It is recommended to use a DHCP server for long-term management of the cluster machines. Ensure that the DHCP server is configured to provide persistent IP addresses, DNS server information, and hostnames to the cluster machines. Note If a DHCP service is not available for your user-provisioned infrastructure, you can instead provide the IP networking configuration and the address of the DNS server to the nodes at RHCOS install time. These can be passed as boot arguments if you are installing from an ISO image. See the Installing RHCOS and starting the OpenShift Container Platform bootstrap process section for more information about static IP provisioning and advanced networking options. The Kubernetes API server must be able to resolve the node names of the cluster machines. If the API servers and worker nodes are in different zones, you can configure a default DNS search zone to allow the API server to resolve the node names. Another supported approach is to always refer to hosts by their fully-qualified domain names in both the node objects and all DNS requests. Firewall Configure your firewall so your cluster has access to required sites. See also: Red Hat Virtualization Manager firewall requirements Host firewall requirements DNS Configure infrastructure-provided DNS to allow the correct resolution of the main components and services. If you use only one load balancer, these DNS records can point to the same IP address. Create DNS records for api.<cluster_name>.<base_domain> (internal and external resolution) and api-int.<cluster_name>.<base_domain> (internal resolution) that point to the load balancer for the control plane machines. Create a DNS record for *.apps.<cluster_name>.<base_domain> that points to the load balancer for the Ingress router. For example, ports 443 and 80 of the compute machines. 5.6.1. Setting the cluster node hostnames through DHCP On Red Hat Enterprise Linux CoreOS (RHCOS) machines, the hostname is set through NetworkManager. By default, the machines obtain their hostname through DHCP. If the hostname is not provided by DHCP, set statically through kernel arguments, or another method, it is obtained through a reverse DNS lookup. Reverse DNS lookup occurs after the network has been initialized on a node and can take time to resolve. Other system services can start prior to this and detect the hostname as localhost or similar. You can avoid this by using DHCP to provide the hostname for each cluster node. Additionally, setting the hostnames through DHCP can bypass any manual DNS record name configuration errors in environments that have a DNS split-horizon implementation. 5.6.2. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. 
Each machine must be able to resolve the hostnames of all other machines in the cluster. This section provides details about the ports that are required. Important In connected OpenShift Container Platform environments, all nodes are required to have internet access to pull images for platform containers and provide telemetry data to Red Hat. Table 5.1. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves 10256 openshift-sdn UDP 4789 VXLAN 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets 123 Network Time Protocol (NTP) on UDP port 123 If an external NTP time server is configured, you must open UDP port 123 . TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 5.2. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 5.3. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports NTP configuration for user-provisioned infrastructure OpenShift Container Platform clusters are configured to use a public Network Time Protocol (NTP) server by default. If you want to use a local enterprise NTP server, or if your cluster is being deployed in a disconnected network, you can configure the cluster to use a specific time server. For more information, see the documentation for Configuring chrony time service . If a DHCP server provides NTP server information, the chrony time service on the Red Hat Enterprise Linux CoreOS (RHCOS) machines read the information and can sync the clock with the NTP servers. 5.7. User-provisioned DNS requirements In OpenShift Container Platform deployments, DNS name resolution is required for the following components: The Kubernetes API The OpenShift Container Platform application wildcard The bootstrap, control plane, and compute machines Reverse DNS resolution is also required for the Kubernetes API, the bootstrap machine, the control plane machines, and the compute machines. DNS A/AAAA or CNAME records are used for name resolution and PTR records are used for reverse name resolution. The reverse records are important because Red Hat Enterprise Linux CoreOS (RHCOS) uses the reverse records to set the hostnames for all the nodes, unless the hostnames are provided by DHCP. Additionally, the reverse records are used to generate the certificate signing requests (CSR) that OpenShift Container Platform needs to operate. Note It is recommended to use a DHCP server to provide the hostnames to each cluster node. See the DHCP recommendations for user-provisioned infrastructure section for more information. The following DNS records are required for a user-provisioned OpenShift Container Platform cluster and they must be in place before installation. In each record, <cluster_name> is the cluster name and <base_domain> is the base domain that you specify in the install-config.yaml file. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. . Table 5.4. Required DNS records Component Record Description Kubernetes API api.<cluster_name>.<base_domain>. 
A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the API load balancer. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. api-int.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to internally identify the API load balancer. These records must be resolvable from all the nodes within the cluster. Important The API server must be able to resolve the worker nodes by the hostnames that are recorded in Kubernetes. If the API server cannot resolve the node names, then proxied API calls can fail, and you cannot retrieve logs from pods. Routes *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that refers to the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. These records must be resolvable by both clients external to the cluster and from all the nodes within the cluster. For example, console-openshift-console.apps.<cluster_name>.<base_domain> is used as a wildcard route to the OpenShift Container Platform console. Bootstrap machine bootstrap.<cluster_name>.<base_domain>. A DNS A/AAAA or CNAME record, and a DNS PTR record, to identify the bootstrap machine. These records must be resolvable by the nodes within the cluster. Control plane machines <control_plane><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the control plane nodes. These records must be resolvable by the nodes within the cluster. Compute machines <compute><n>.<cluster_name>.<base_domain>. DNS A/AAAA or CNAME records and DNS PTR records to identify each machine for the worker nodes. These records must be resolvable by the nodes within the cluster. Note In OpenShift Container Platform 4.4 and later, you do not need to specify etcd host and SRV records in your DNS configuration. Tip You can use the dig command to verify name and reverse name resolution. See the section on Validating DNS resolution for user-provisioned infrastructure for detailed validation steps. 5.7.1. Example DNS configuration for user-provisioned clusters This section provides A and PTR record configuration samples that meet the DNS requirements for deploying OpenShift Container Platform on user-provisioned infrastructure. The samples are not meant to provide advice for choosing one DNS solution over another. In the examples, the cluster name is ocp4 and the base domain is example.com . Example DNS A record configuration for a user-provisioned cluster The following example is a BIND zone file that shows sample A records for name resolution in a user-provisioned cluster. Example 5.1. Sample DNS zone database USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. 
IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF 1 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer. 2 Provides name resolution for the Kubernetes API. The record refers to the IP address of the API load balancer and is used for internal cluster communications. 3 Provides name resolution for the wildcard routes. The record refers to the IP address of the application ingress load balancer. The application ingress load balancer targets the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. 4 Provides name resolution for the bootstrap machine. 5 6 7 Provides name resolution for the control plane machines. 8 9 Provides name resolution for the compute machines. Example DNS PTR record configuration for a user-provisioned cluster The following example BIND zone file shows sample PTR records for reverse name resolution in a user-provisioned cluster. Example 5.2. Sample DNS zone database for reverse records USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF 1 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer. 2 Provides reverse DNS resolution for the Kubernetes API. The PTR record refers to the record name of the API load balancer and is used for internal cluster communications. 3 Provides reverse DNS resolution for the bootstrap machine. 4 5 6 Provides reverse DNS resolution for the control plane machines. 7 8 Provides reverse DNS resolution for the compute machines. Note A PTR record is not required for the OpenShift Container Platform application wildcard. 5.7.2. Load balancing requirements for user-provisioned infrastructure Before you install OpenShift Container Platform, you must provision the API and application Ingress load balancing infrastructure. In production scenarios, you can deploy the API and application Ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you want to deploy the API and application Ingress load balancers with a Red Hat Enterprise Linux (RHEL) instance, you must purchase the RHEL subscription separately. The load balancing infrastructure must meet the following requirements: API load balancer : Provides a common endpoint for users, both human and machine, to interact with and configure the platform. Configure the following conditions: Layer 4 load balancing only. 
This can be referred to as Raw TCP or SSL Passthrough mode. A stateless load balancing algorithm. The options vary based on the load balancer implementation. Important Do not configure session persistence for an API load balancer. Configuring session persistence for a Kubernetes API server might cause performance issues from excess application traffic for your OpenShift Container Platform cluster and the Kubernetes API that runs inside the cluster. Configure the following ports on both the front and back of the load balancers: Table 5.5. API load balancer Port Back-end machines (pool members) Internal External Description 6443 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. You must configure the /readyz endpoint for the API server health check probe. X X Kubernetes API server 22623 Bootstrap and control plane. You remove the bootstrap machine from the load balancer after the bootstrap machine initializes the cluster control plane. X Machine config server Note The load balancer must be configured to take a maximum of 30 seconds from the time the API server turns off the /readyz endpoint to the removal of the API server instance from the pool. Within the time frame after /readyz returns an error or becomes healthy, the endpoint must have been removed or added. Probing every 5 or 10 seconds, with two successful requests to become healthy and three to become unhealthy, are well-tested values. Application Ingress load balancer : Provides an ingress point for application traffic flowing in from outside the cluster. A working configuration for the Ingress router is required for an OpenShift Container Platform cluster. Configure the following conditions: Layer 4 load balancing only. This can be referred to as Raw TCP or SSL Passthrough mode. A connection-based or session-based persistence is recommended, based on the options available and types of applications that will be hosted on the platform. Tip If the true IP address of the client can be seen by the application Ingress load balancer, enabling source IP-based session persistence can improve performance for applications that use end-to-end TLS encryption. Configure the following ports on both the front and back of the load balancers: Table 5.6. Application Ingress load balancer Port Back-end machines (pool members) Internal External Description 443 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTPS traffic 80 The machines that run the Ingress Controller pods, compute, or worker, by default. X X HTTP traffic Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. 5.7.2.1. Example load balancer configuration for user-provisioned clusters This section provides an example API and application Ingress load balancer configuration that meets the load balancing requirements for user-provisioned clusters. The sample is an /etc/haproxy/haproxy.cfg configuration for an HAProxy load balancer. The example is not meant to provide advice for choosing one load balancing solution over another. In the example, the same load balancer is used for the Kubernetes API and application ingress traffic. 
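Before you load a configuration like the sample that follows onto an HAProxy instance, it can help to validate the file syntax and then reload the service. This is a hedged illustration rather than part of the official procedure; it assumes HAProxy is installed in its default location and is managed by systemd:
haproxy -c -f /etc/haproxy/haproxy.cfg    # parse the configuration and report any errors without serving traffic
systemctl reload haproxy                  # apply the configuration once the syntax check passes
If the reload fails with a bind error while SELinux is enforcing, see the setsebool note before the sample configuration.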
In production scenarios, you can deploy the API and application ingress load balancers separately so that you can scale the load balancer infrastructure for each in isolation. Note If you are using HAProxy as a load balancer and SELinux is set to enforcing , you must ensure that the HAProxy service can bind to the configured TCP port by running setsebool -P haproxy_connect_any=1 . Example 5.3. Sample API and application Ingress load balancer configuration global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s 1 Port 6443 handles the Kubernetes API traffic and points to the control plane machines. 2 4 The bootstrap entries must be in place before the OpenShift Container Platform cluster installation and they must be removed after the bootstrap process is complete. 3 Port 22623 handles the machine config server traffic and points to the control plane machines. 5 Port 443 handles the HTTPS traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. 6 Port 80 handles the HTTP traffic and points to the machines that run the Ingress Controller pods. The Ingress Controller pods run on the compute machines by default. Note If you are deploying a three-node cluster with zero compute nodes, the Ingress Controller pods run on the control plane nodes. In three-node cluster deployments, you must configure your application Ingress load balancer to route HTTP and HTTPS traffic to the control plane nodes. Tip If you are using HAProxy as a load balancer, you can check that the haproxy process is listening on ports 6443 , 22623 , 443 , and 80 by running netstat -nltupe on the HAProxy node. 5.8. Setting up the installation machine To run the binary openshift-install installation program and Ansible scripts, set up the RHV Manager or an Red Hat Enterprise Linux (RHEL) computer with network access to the RHV environment and the REST API on the Manager. Procedure Update or install Python3 and Ansible. 
For example: # dnf update python3 ansible Install the python3-ovirt-engine-sdk4 package to get the Python Software Development Kit. Install the ovirt.image-template Ansible role. On the RHV Manager and other Red Hat Enterprise Linux (RHEL) machines, this role is distributed as the ovirt-ansible-image-template package. For example, enter: # dnf install ovirt-ansible-image-template Install the ovirt.vm-infra Ansible role. On the RHV Manager and other RHEL machines, this role is distributed as the ovirt-ansible-vm-infra package. # dnf install ovirt-ansible-vm-infra Create an environment variable and assign an absolute or relative path to it. For example, enter: USD export ASSETS_DIR=./wrk Note The installation program uses this variable to create a directory where it saves important installation-related files. Later, the installation process reuses this variable to locate those asset files. Avoid deleting this assets directory; it is required for uninstalling the cluster. 5.9. Setting up the CA certificate for RHV Download the CA certificate from the Red Hat Virtualization (RHV) Manager and set it up on the installation machine. You can download the certificate from a webpage on the RHV Manager or by using a curl command. Later, you provide the certificate to the installation program. Procedure Use either of these two methods to download the CA certificate: Go to the Manager's webpage, https://<engine-fqdn>/ovirt-engine/ . Then, under Downloads , click the CA Certificate link. Run the following command: USD curl -k 'https://<engine-fqdn>/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA' -o /tmp/ca.pem 1 1 For <engine-fqdn> , specify the fully qualified domain name of the RHV Manager, such as rhv-env.virtlab.example.com . Configure the CA file to grant rootless user access to the Manager. Set the CA file permissions to have an octal value of 0644 (symbolic value: -rw-r- r-- ): USD sudo chmod 0644 /tmp/ca.pem For Linux, copy the CA certificate to the directory for server certificates. Use -p to preserve the permissions: USD sudo cp -p /tmp/ca.pem /etc/pki/ca-trust/source/anchors/ca.pem Add the certificate to the certificate manager for your operating system: For macOS, double-click the certificate file and use the Keychain Access utility to add the file to the System keychain. For Linux, update the CA trust: USD sudo update-ca-trust Note If you use your own certificate authority, make sure the system trusts it. Additional resources To learn more, see Authentication and Security in the RHV documentation. 5.10. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. 
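For reference, if the bootstrap process later fails, the gather command is typically run from the installation machine against the bootstrap and control plane addresses. The following is a hedged sketch; the IP addresses are placeholders, and you should confirm the exact flags with ./openshift-install gather bootstrap --help for your release:
./openshift-install gather bootstrap --dir "$ASSETS_DIR" \
  --bootstrap <bootstrap_ip> \
  --master <control_plane_ip>    # collects logs from the nodes over SSH, which is why the key pair must be in place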
Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 5.11. Downloading the Ansible playbooks Download the Ansible playbooks for installing OpenShift Container Platform version 4.13 on RHV. Procedure On your installation machine, run the following commands: USD mkdir playbooks USD cd playbooks USD xargs -n 1 curl -O <<< ' https://raw.githubusercontent.com/openshift/installer/release-4.13/upi/ovirt/bootstrap.yml https://raw.githubusercontent.com/openshift/installer/release-4.13/upi/ovirt/common-auth.yml https://raw.githubusercontent.com/openshift/installer/release-4.13/upi/ovirt/create-templates-and-vms.yml https://raw.githubusercontent.com/openshift/installer/release-4.13/upi/ovirt/inventory.yml https://raw.githubusercontent.com/openshift/installer/release-4.13/upi/ovirt/masters.yml https://raw.githubusercontent.com/openshift/installer/release-4.13/upi/ovirt/retire-bootstrap.yml https://raw.githubusercontent.com/openshift/installer/release-4.13/upi/ovirt/retire-masters.yml https://raw.githubusercontent.com/openshift/installer/release-4.13/upi/ovirt/retire-workers.yml https://raw.githubusercontent.com/openshift/installer/release-4.13/upi/ovirt/workers.yml' steps After you download these Ansible playbooks, you must also create the environment variable for the assets directory and customize the inventory.yml file before you create an installation configuration file by running the installation program. 5.12. The inventory.yml file You use the inventory.yml file to define and create elements of the OpenShift Container Platform cluster you are installing. This includes elements such as the Red Hat Enterprise Linux CoreOS (RHCOS) image, virtual machine templates, bootstrap machine, control plane nodes, and worker nodes. You also use inventory.yml to destroy the cluster. 
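For example, when you later need to remove the cluster, the retire playbooks that you downloaded in the previous section are run against this same inventory. This is a hedged sketch of a typical teardown order, not the authoritative uninstall procedure; verify the sequence against the official removal steps before relying on it:
ansible-playbook -i inventory.yml retire-bootstrap.yml   # remove the bootstrap virtual machine if it still exists
ansible-playbook -i inventory.yml retire-workers.yml     # remove the worker virtual machines
ansible-playbook -i inventory.yml retire-masters.yml     # remove the control plane virtual machines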
The following inventory.yml example shows you the parameters and their default values. The quantities and numbers in these default values meet the requirements for running a production OpenShift Container Platform cluster in a RHV environment. Example inventory.yml file --- all: vars: ovirt_cluster: "Default" ocp: assets_dir: "{{ lookup('env', 'ASSETS_DIR') }}" ovirt_config_path: "{{ lookup('env', 'HOME') }}/.ovirt/ovirt-config.yaml" # --- # {op-system} section # --- rhcos: image_url: "https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.13/latest/rhcos-openstack.x86_64.qcow2.gz" local_cmp_image_path: "/tmp/rhcos.qcow2.gz" local_image_path: "/tmp/rhcos.qcow2" # --- # Profiles section # --- control_plane: cluster: "{{ ovirt_cluster }}" memory: 16GiB sockets: 4 cores: 1 template: rhcos_tpl operating_system: "rhcos_x64" type: high_performance graphical_console: headless_mode: false protocol: - spice - vnc disks: - size: 120GiB name: os interface: virtio_scsi storage_domain: depot_nvme nics: - name: nic1 network: lab profile: lab compute: cluster: "{{ ovirt_cluster }}" memory: 16GiB sockets: 4 cores: 1 template: worker_rhcos_tpl operating_system: "rhcos_x64" type: high_performance graphical_console: headless_mode: false protocol: - spice - vnc disks: - size: 120GiB name: os interface: virtio_scsi storage_domain: depot_nvme nics: - name: nic1 network: lab profile: lab # --- # Virtual machines section # --- vms: - name: "{{ metadata.infraID }}-bootstrap" ocp_type: bootstrap profile: "{{ control_plane }}" type: server - name: "{{ metadata.infraID }}-master0" ocp_type: master profile: "{{ control_plane }}" - name: "{{ metadata.infraID }}-master1" ocp_type: master profile: "{{ control_plane }}" - name: "{{ metadata.infraID }}-master2" ocp_type: master profile: "{{ control_plane }}" - name: "{{ metadata.infraID }}-worker0" ocp_type: worker profile: "{{ compute }}" - name: "{{ metadata.infraID }}-worker1" ocp_type: worker profile: "{{ compute }}" - name: "{{ metadata.infraID }}-worker2" ocp_type: worker profile: "{{ compute }}" Important Enter values for parameters whose descriptions begin with "Enter." Otherwise, you can use the default value or replace it with a new value. General section ovirt_cluster : Enter the name of an existing RHV cluster in which to install the OpenShift Container Platform cluster. ocp.assets_dir : The path of a directory the openshift-install installation program creates to store the files that it generates. ocp.ovirt_config_path : The path of the ovirt-config.yaml file the installation program generates, for example, ./wrk/install-config.yaml . This file contains the credentials required to interact with the REST API of the Manager. Red Hat Enterprise Linux CoreOS (RHCOS) section image_url : Enter the URL of the RHCOS image you specified for download. local_cmp_image_path : The path of a local download directory for the compressed RHCOS image. local_image_path : The path of a local directory for the extracted RHCOS image. Profiles section This section consists of two profiles: control_plane : The profile of the bootstrap and control plane nodes. compute : The profile of workers nodes in the compute plane. These profiles have the following parameters. The default values of the parameters meet the minimum requirements for running a production cluster. You can increase or customize these values to meet your workload requirements. cluster : The value gets the cluster name from ovirt_cluster in the General Section. 
memory : The amount of memory, in GB, for the virtual machine. sockets : The number of sockets for the virtual machine. cores : The number of cores for the virtual machine. template : The name of the virtual machine template. If plan to install multiple clusters, and these clusters use templates that contain different specifications, prepend the template name with the ID of the cluster. operating_system : The type of guest operating system in the virtual machine. With oVirt/RHV version 4.4, this value must be rhcos_x64 so the value of Ignition script can be passed to the VM. type : Enter server as the type of the virtual machine. Important You must change the value of the type parameter from high_performance to server . disks : The disk specifications. The control_plane and compute nodes can have different storage domains. size : The minimum disk size. name : Enter the name of a disk connected to the target cluster in RHV. interface : Enter the interface type of the disk you specified. storage_domain : Enter the storage domain of the disk you specified. nics : Enter the name and network the virtual machines use. You can also specify the virtual network interface profile. By default, NICs obtain their MAC addresses from the oVirt/RHV MAC pool. Virtual machines section This final section, vms , defines the virtual machines you plan to create and deploy in the cluster. By default, it provides the minimum number of control plane and worker nodes for a production environment. vms contains three required elements: name : The name of the virtual machine. In this case, metadata.infraID prepends the virtual machine name with the infrastructure ID from the metadata.yml file. ocp_type : The role of the virtual machine in the OpenShift Container Platform cluster. Possible values are bootstrap , master , worker . profile : The name of the profile from which each virtual machine inherits specifications. Possible values in this example are control_plane or compute . You can override the value a virtual machine inherits from its profile. To do this, you add the name of the profile attribute to the virtual machine in inventory.yml and assign it an overriding value. To see an example of this, examine the name: "{{ metadata.infraID }}-bootstrap" virtual machine in the preceding inventory.yml example: It has a type attribute whose value, server , overrides the value of the type attribute this virtual machine would otherwise inherit from the control_plane profile. Metadata variables For virtual machines, metadata.infraID prepends the name of the virtual machine with the infrastructure ID from the metadata.json file you create when you build the Ignition files. The playbooks use the following code to read infraID from the specific file located in the ocp.assets_dir . --- - name: include metadata.json vars include_vars: file: "{{ ocp.assets_dir }}/metadata.json" name: metadata ... 5.13. Specifying the RHCOS image settings Update the Red Hat Enterprise Linux CoreOS (RHCOS) image settings of the inventory.yml file. Later, when you run this file one of the playbooks, it downloads a compressed Red Hat Enterprise Linux CoreOS (RHCOS) image from the image_url URL to the local_cmp_image_path directory. The playbook then uncompresses the image to the local_image_path directory and uses it to create oVirt/RHV templates. Procedure Locate the RHCOS image download page for the version of OpenShift Container Platform you are installing, such as Index of /pub/openshift-v4/dependencies/rhcos/latest/latest . 
From that download page, copy the URL of an OpenStack qcow2 image, such as https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.13/latest/rhcos-openstack.x86_64.qcow2.gz . Edit the inventory.yml playbook you downloaded earlier. In it, paste the URL as the value for image_url . For example: rhcos: "https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.13/latest/rhcos-openstack.x86_64.qcow2.gz" 5.14. Creating the install config file You create an installation configuration file by running the installation program, openshift-install , and responding to its prompts with information you specified or gathered earlier. When you finish responding to the prompts, the installation program creates an initial version of the install-config.yaml file in the assets directory you specified earlier, for example, ./wrk/install-config.yaml The installation program also creates a file, USDHOME/.ovirt/ovirt-config.yaml , that contains all the connection parameters that are required to reach the Manager and use its REST API. NOTE: The installation process does not use values you supply for some parameters, such as Internal API virtual IP and Ingress virtual IP , because you have already configured them in your infrastructure DNS. It also uses the values you supply for parameters in inventory.yml , like the ones for oVirt cluster , oVirt storage , and oVirt network . And uses a script to remove or replace these same values from install-config.yaml with the previously mentioned virtual IPs . Procedure Run the installation program: USD openshift-install create install-config --dir USDASSETS_DIR Respond to the installation program's prompts with information about your system. Example output ? SSH Public Key /home/user/.ssh/id_dsa.pub ? Platform <ovirt> ? Engine FQDN[:PORT] [? for help] <engine.fqdn> ? Enter ovirt-engine username <ocpadmin@internal> ? Enter password <******> ? oVirt cluster <cluster> ? oVirt storage <storage> ? oVirt network <net> ? Internal API virtual IP <172.16.0.252> ? Ingress virtual IP <172.16.0.251> ? Base Domain <example.org> ? Cluster Name <ocp4> ? Pull Secret [? for help] <********> For Internal API virtual IP and Ingress virtual IP , supply the IP addresses you specified when you configured the DNS service. Together, the values you enter for the oVirt cluster and Base Domain prompts form the FQDN portion of URLs for the REST API and any applications you create, such as https://api.ocp4.example.org:6443/ and https://console-openshift-console.apps.ocp4.example.org . You can get the pull secret from the Red Hat OpenShift Cluster Manager . 5.15. Sample install-config.yaml file for RHV You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{"auths": ...}' 15 sshKey: 'ssh-ed25519 AAAA...' 16 1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 2 5 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. 
To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 3 6 Specifies whether to enable or disable simultaneous multithreading (SMT), or hyperthreading. By default, SMT is enabled to increase the performance of the cores in your machines. You can disable it by setting the parameter value to Disabled . If you disable SMT, you must disable it in all cluster machines; this includes both control plane and compute machines. Note Simultaneous multithreading (SMT) is enabled by default. If SMT is not enabled in your BIOS settings, the hyperthreading parameter has no effect. Important If you disable hyperthreading , whether in the BIOS or in the install-config.yaml file, ensure that your capacity planning accounts for the dramatically decreased machine performance. 4 You must set this value to 0 when you install OpenShift Container Platform on user-provisioned infrastructure. In installer-provisioned installations, the parameter controls the number of compute machines that the cluster creates and manages for you. In user-provisioned installations, you must manually deploy the compute machines before you finish installing the cluster. Note If you are installing a three-node cluster, do not deploy any compute machines when you install the Red Hat Enterprise Linux CoreOS (RHCOS) machines. 7 The number of control plane machines that you add to the cluster. Because the cluster uses these values as the number of etcd endpoints in the cluster, the value must match the number of control plane machines that you deploy. 8 The cluster name that you specified in your DNS records. 9 A block of IP addresses from which pod IP addresses are allocated. This block must not overlap with existing physical networks. These IP addresses are used for the pod network. If you need to access the pods from an external network, you must configure load balancers and routers to manage the traffic. Note Class E CIDR range is reserved for a future use. To use the Class E CIDR range, you must ensure your networking environment accepts the IP addresses within the Class E CIDR range. 10 The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 , then each node is assigned a /23 subnet out of the given cidr , which allows for 510 (2^(32 - 23) - 2) pod IP addresses. If you are required to provide access to nodes from an external network, configure load balancers and routers to manage the traffic. 11 The cluster network plugin to install. The supported values are OVNKubernetes and OpenShiftSDN . The default value is OVNKubernetes . 12 The IP address pool to use for service IP addresses. You can enter only one IP address pool. This block must not overlap with existing physical networks. If you need to access the services from an external network, configure load balancers and routers to manage the traffic. 13 You must set the platform to none . You cannot provide additional platform configuration variables for RHV infrastructure. Important Clusters that are installed with the platform type none are unable to use some features, such as managing compute machines with the Machine API. This limitation applies even if the compute machines that are attached to the cluster are installed on a platform that would normally support the feature. This parameter cannot be changed after installation. 14 Whether to enable or disable FIPS mode. 
By default, FIPS mode is not enabled. Important OpenShift Container Platform 4.13 is based on Red Hat Enterprise Linux (RHEL) 9.2. RHEL 9.2 cryptographic modules have not yet been submitted for FIPS validation. For more information, see "About this release" in the 4.13 OpenShift Container Platform Release Notes . 15 The pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 16 The SSH public key for the core user in Red Hat Enterprise Linux CoreOS (RHCOS). Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 5.15.1. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 
5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 5.16. Customizing install-config.yaml Here, you use three Python scripts to override some of the installation program's default behaviors: By default, the installation program uses the machine API to create nodes. To override this default behavior, you set the number of compute nodes to zero replicas. Later, you use Ansible playbooks to create the compute nodes. By default, the installation program sets the IP range of the machine network for nodes. To override this default behavior, you set the IP range to match your infrastructure. By default, the installation program sets the platform to ovirt . However, installing a cluster on user-provisioned infrastructure is more similar to installing a cluster on bare metal. Therefore, you delete the ovirt platform section from install-config.yaml and change the platform to none . Instead, you use inventory.yml to specify all of the required settings. Note These snippets work with Python 3 and Python 2. Procedure Set the number of compute nodes to zero replicas: USD python3 -c 'import os, yaml path = "%s/install-config.yaml" % os.environ["ASSETS_DIR"] conf = yaml.safe_load(open(path)) conf["compute"][0]["replicas"] = 0 open(path, "w").write(yaml.dump(conf, default_flow_style=False))' Set the IP range of the machine network. For example, to set the range to 172.16.0.0/16 , enter: USD python3 -c 'import os, yaml path = "%s/install-config.yaml" % os.environ["ASSETS_DIR"] conf = yaml.safe_load(open(path)) conf["networking"]["machineNetwork"][0]["cidr"] = "172.16.0.0/16" open(path, "w").write(yaml.dump(conf, default_flow_style=False))' Remove the ovirt section and change the platform to none : USD python3 -c 'import os, yaml path = "%s/install-config.yaml" % os.environ["ASSETS_DIR"] conf = yaml.safe_load(open(path)) platform = conf["platform"] del platform["ovirt"] platform["none"] = {} open(path, "w").write(yaml.dump(conf, default_flow_style=False))' Warning Red Hat Virtualization does not currently support installation with user-provisioned infrastructure on the oVirt platform. Therefore, you must set the platform to none , allowing OpenShift Container Platform to identify each node as a bare-metal node and the cluster as a bare-metal cluster. This is the same as installing a cluster on any platform , and has the following limitations: There will be no cluster provider so you must manually add each machine and there will be no node scaling capabilities. 
The oVirt CSI driver will not be installed and there will be no CSI capabilities. 5.17. Generate manifest files Use the installation program to generate a set of manifest files in the assets directory. The command to generate the manifest files displays a warning message before it consumes the install-config.yaml file. If you plan to reuse the install-config.yaml file, create a backup copy of it before you generate the manifest files. Procedure Optional: Create a backup copy of the install-config.yaml file: USD cp install-config.yaml install-config.yaml.backup Generate a set of manifests in your assets directory: USD openshift-install create manifests --dir USDASSETS_DIR This command displays the following messages. Example output INFO Consuming Install Config from target directory WARNING Making control-plane schedulable by setting MastersSchedulable to true for Scheduler cluster settings The command generates the following manifest files: Example output USD tree . └── wrk ├── manifests │ ├── 04-openshift-machine-config-operator.yaml │ ├── cluster-config.yaml │ ├── cluster-dns-02-config.yml │ ├── cluster-infrastructure-02-config.yml │ ├── cluster-ingress-02-config.yml │ ├── cluster-network-01-crd.yml │ ├── cluster-network-02-config.yml │ ├── cluster-proxy-01-config.yaml │ ├── cluster-scheduler-02-config.yml │ ├── cvo-overrides.yaml │ ├── etcd-ca-bundle-configmap.yaml │ ├── etcd-client-secret.yaml │ ├── etcd-host-service-endpoints.yaml │ ├── etcd-host-service.yaml │ ├── etcd-metric-client-secret.yaml │ ├── etcd-metric-serving-ca-configmap.yaml │ ├── etcd-metric-signer-secret.yaml │ ├── etcd-namespace.yaml │ ├── etcd-service.yaml │ ├── etcd-serving-ca-configmap.yaml │ ├── etcd-signer-secret.yaml │ ├── kube-cloud-config.yaml │ ├── kube-system-configmap-root-ca.yaml │ ├── machine-config-server-tls-secret.yaml │ └── openshift-config-secret-pull-secret.yaml └── openshift ├── 99_kubeadmin-password-secret.yaml ├── 99_openshift-cluster-api_master-user-data-secret.yaml ├── 99_openshift-cluster-api_worker-user-data-secret.yaml ├── 99_openshift-machineconfig_99-master-ssh.yaml ├── 99_openshift-machineconfig_99-worker-ssh.yaml └── openshift-install-manifests.yaml steps Make control plane nodes non-schedulable. 5.18. Making control-plane nodes non-schedulable Because you are manually creating and deploying the control plane machines, you must configure a manifest file to make the control plane nodes non-schedulable. Procedure To make the control plane nodes non-schedulable, enter: USD python3 -c 'import os, yaml path = "%s/manifests/cluster-scheduler-02-config.yml" % os.environ["ASSETS_DIR"] data = yaml.safe_load(open(path)) data["spec"]["mastersSchedulable"] = False open(path, "w").write(yaml.dump(data, default_flow_style=False))' 5.19. Building the Ignition files To build the Ignition files from the manifest files you just generated and modified, you run the installation program. This action creates a Red Hat Enterprise Linux CoreOS (RHCOS) machine, initramfs , which fetches the Ignition files and performs the configurations needed to create a node. In addition to the Ignition files, the installation program generates the following: An auth directory that contains the admin credentials for connecting to the cluster with the oc and kubectl utilities. A metadata.json file that contains information such as the OpenShift Container Platform cluster name, cluster ID, and infrastructure ID for the current installation.
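For example, after metadata.json has been generated you can print the infrastructure ID that the playbooks read. This is a hedged sketch written in the same style as the other snippets in this document, and it assumes the ASSETS_DIR environment variable is still exported:
python3 -c 'import json, os
# read the infrastructure ID from the generated metadata.json
path = "%s/metadata.json" % os.environ["ASSETS_DIR"]
print(json.load(open(path))["infraID"])'
The value that this prints is the prefix you will see on the names of the virtual machines that the playbooks create.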
The Ansible playbooks for this installation process use the value of infraID as a prefix for the virtual machines they create. This prevents naming conflicts when there are multiple installations in the same oVirt/RHV cluster. Note Certificates in Ignition configuration files expire after 24 hours. Complete the cluster installation and keep the cluster running in a non-degraded state for 24 hours so that the first certificate rotation can finish. Procedure To build the Ignition files, enter: USD openshift-install create ignition-configs --dir USDASSETS_DIR Example output USD tree . └── wrk ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign 5.20. Creating templates and virtual machines After confirming the variables in the inventory.yml , you run the first Ansible provisioning playbook, create-templates-and-vms.yml . This playbook uses the connection parameters for the RHV Manager from USDHOME/.ovirt/ovirt-config.yaml and reads metadata.json in the assets directory. If a local Red Hat Enterprise Linux CoreOS (RHCOS) image is not already present, the playbook downloads one from the URL you specified for image_url in inventory.yml . It extracts the image and uploads it to RHV to create templates. The playbook creates a template based on the control_plane and compute profiles in the inventory.yml file. If these profiles have different names, it creates two templates. When the playbook finishes, the virtual machines it creates are stopped. You can get information from them to help configure other infrastructure elements. For example, you can get the virtual machines' MAC addresses to configure DHCP to assign permanent IP addresses to the virtual machines. Procedure In inventory.yml , under the control_plane and compute variables, change both instances of type: high_performance to type: server . Optional: If you plan to perform multiple installations to the same cluster, create different templates for each OpenShift Container Platform installation. In the inventory.yml file, prepend the value of template with infraID . For example: control_plane: cluster: "{{ ovirt_cluster }}" memory: 16GiB sockets: 4 cores: 1 template: "{{ metadata.infraID }}-rhcos_tpl" operating_system: "rhcos_x64" ... Create the templates and virtual machines: USD ansible-playbook -i inventory.yml create-templates-and-vms.yml 5.21. Creating the bootstrap machine You create a bootstrap machine by running the bootstrap.yml playbook. This playbook starts the bootstrap virtual machine, and passes it the bootstrap.ign Ignition file from the assets directory. The bootstrap node configures itself so it can serve Ignition files to the control plane nodes. To monitor the bootstrap process, you use the console in the RHV Administration Portal or connect to the virtual machine by using SSH. Procedure Create the bootstrap machine: USD ansible-playbook -i inventory.yml bootstrap.yml Connect to the bootstrap machine using a console in the Administration Portal or SSH. Replace <bootstrap_ip> with the bootstrap node IP address. To use SSH, enter: USD ssh core@<boostrap.ip> Collect bootkube.service journald unit logs for the release image service from the bootstrap node: [core@ocp4-lk6b4-bootstrap ~]USD journalctl -b -f -u release-image.service -u bootkube.service Note The bootkube.service log on the bootstrap node outputs etcd connection refused errors, indicating that the bootstrap server is unable to connect to etcd on control plane nodes. 
After etcd has started on each control plane node and the nodes have joined the cluster, the errors should stop. 5.22. Creating the control plane nodes You create the control plane nodes by running the masters.yml playbook. This playbook passes the master.ign Ignition file to each of the virtual machines. The Ignition file contains a directive for the control plane node to get the Ignition from a URL such as https://api-int.ocp4.example.org:22623/config/master . The port number in this URL is managed by the load balancer, and is accessible only inside the cluster. Procedure Create the control plane nodes: USD ansible-playbook -i inventory.yml masters.yml While the playbook creates your control plane, monitor the bootstrapping process: USD openshift-install wait-for bootstrap-complete --dir USDASSETS_DIR Example output INFO API v1.26.0 up INFO Waiting up to 40m0s for bootstrapping to complete... When all the pods on the control plane nodes and etcd are up and running, the installation program displays the following output. Example output INFO It is now safe to remove the bootstrap resources 5.23. Verifying cluster status You can verify your OpenShift Container Platform cluster's status during or after installation. Procedure In the cluster environment, export the administrator's kubeconfig file: USD export KUBECONFIG=USDASSETS_DIR/auth/kubeconfig The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. View the control plane and compute machines created after a deployment: USD oc get nodes View your cluster's version: USD oc get clusterversion View your Operators' status: USD oc get clusteroperator View all running pods in the cluster: USD oc get pods -A 5.24. Removing the bootstrap machine After the wait-for command shows that the bootstrap process is complete, you must remove the bootstrap virtual machine to free up compute, memory, and storage resources. Also, remove settings for the bootstrap machine from the load balancer directives. Procedure To remove the bootstrap machine from the cluster, enter: USD ansible-playbook -i inventory.yml retire-bootstrap.yml Remove settings for the bootstrap machine from the load balancer directives. 5.25. Creating the worker nodes and completing the installation Creating worker nodes is similar to creating control plane nodes. However, worker nodes do not automatically join the cluster. To add them to the cluster, you review and approve the workers' pending CSRs (Certificate Signing Requests). After approving the first requests, you continue approving CSRs until all of the worker nodes are approved. When you complete this process, the worker nodes become Ready and can have pods scheduled to run on them. Finally, monitor the command line to see when the installation process completes. Procedure Create the worker nodes: USD ansible-playbook -i inventory.yml workers.yml To list all of the CSRs, enter: USD oc get csr -A Eventually, this command displays one CSR per node.
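If many worker CSRs are pending at once, you can approve them in a single pass instead of one at a time, but only after you have confirmed that every pending request comes from a node you created. This is a hedged convenience, not part of the official procedure, and it assumes the standard oc, grep, awk, and xargs tools are available:
oc get csr | grep -w Pending | awk '{print $1}' | xargs oc adm certificate approve   # approve all currently pending CSRs
Before you use it, review the pending list that oc get csr -A prints.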
For example: Example output NAME AGE SIGNERNAME REQUESTOR CONDITION csr-2lnxd 63m kubernetes.io/kubelet-serving system:node:ocp4-lk6b4-master0.ocp4.example.org Approved,Issued csr-hff4q 64m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Approved,Issued csr-hsn96 60m kubernetes.io/kubelet-serving system:node:ocp4-lk6b4-master2.ocp4.example.org Approved,Issued csr-m724n 6m2s kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-p4dz2 60m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Approved,Issued csr-t9vfj 60m kubernetes.io/kubelet-serving system:node:ocp4-lk6b4-master1.ocp4.example.org Approved,Issued csr-tggtr 61m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Approved,Issued csr-wcbrf 7m6s kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending To filter the list and see only pending CSRs, enter: USD watch "oc get csr -A | grep pending -i" This command refreshes the output every two seconds and displays only pending CSRs. For example: Example output Every 2.0s: oc get csr -A | grep pending -i csr-m724n 10m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-wcbrf 11m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending Inspect each pending request. For example: Example output USD oc describe csr csr-m724n Example output Name: csr-m724n Labels: <none> Annotations: <none> CreationTimestamp: Sun, 19 Jul 2020 15:59:37 +0200 Requesting User: system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Signer: kubernetes.io/kube-apiserver-client-kubelet Status: Pending Subject: Common Name: system:node:ocp4-lk6b4-worker1.ocp4.example.org Serial Number: Organization: system:nodes Events: <none> If the CSR information is correct, approve the request: USD oc adm certificate approve csr-m724n Wait for the installation process to finish: USD openshift-install wait-for install-complete --dir USDASSETS_DIR --log-level debug When the installation completes, the command line displays the URL of the OpenShift Container Platform web console and the administrator user name and password. 5.26. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.13, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console . After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 5.27. 
Disabling the default OperatorHub catalog sources Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator. Procedure Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object: USD oc patch OperatorHub cluster --type json \ -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]' Tip Alternatively, you can use the web console to manage catalog sources. From the Administration → Cluster Settings → Configuration → OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources. | [
"curl -k -u <username>@<profile>:<password> \\ 1 https://<engine-fqdn>/ovirt-engine/api 2",
"curl -k -u ocpadmin@internal:pw123 https://rhv-env.virtlab.example.com/ovirt-engine/api",
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. IN MX 10 smtp.example.com. ; ; ns1.example.com. IN A 192.168.1.5 smtp.example.com. IN A 192.168.1.5 ; helper.example.com. IN A 192.168.1.5 helper.ocp4.example.com. IN A 192.168.1.5 ; api.ocp4.example.com. IN A 192.168.1.5 1 api-int.ocp4.example.com. IN A 192.168.1.5 2 ; *.apps.ocp4.example.com. IN A 192.168.1.5 3 ; bootstrap.ocp4.example.com. IN A 192.168.1.96 4 ; control-plane0.ocp4.example.com. IN A 192.168.1.97 5 control-plane1.ocp4.example.com. IN A 192.168.1.98 6 control-plane2.ocp4.example.com. IN A 192.168.1.99 7 ; compute0.ocp4.example.com. IN A 192.168.1.11 8 compute1.ocp4.example.com. IN A 192.168.1.7 9 ; ;EOF",
"USDTTL 1W @ IN SOA ns1.example.com. root ( 2019070700 ; serial 3H ; refresh (3 hours) 30M ; retry (30 minutes) 2W ; expiry (2 weeks) 1W ) ; minimum (1 week) IN NS ns1.example.com. ; 5.1.168.192.in-addr.arpa. IN PTR api.ocp4.example.com. 1 5.1.168.192.in-addr.arpa. IN PTR api-int.ocp4.example.com. 2 ; 96.1.168.192.in-addr.arpa. IN PTR bootstrap.ocp4.example.com. 3 ; 97.1.168.192.in-addr.arpa. IN PTR control-plane0.ocp4.example.com. 4 98.1.168.192.in-addr.arpa. IN PTR control-plane1.ocp4.example.com. 5 99.1.168.192.in-addr.arpa. IN PTR control-plane2.ocp4.example.com. 6 ; 11.1.168.192.in-addr.arpa. IN PTR compute0.ocp4.example.com. 7 7.1.168.192.in-addr.arpa. IN PTR compute1.ocp4.example.com. 8 ; ;EOF",
"global log 127.0.0.1 local2 pidfile /var/run/haproxy.pid maxconn 4000 daemon defaults mode http log global option dontlognull option http-server-close option redispatch retries 3 timeout http-request 10s timeout queue 1m timeout connect 10s timeout client 1m timeout server 1m timeout http-keep-alive 10s timeout check 10s maxconn 3000 listen api-server-6443 1 bind *:6443 mode tcp option httpchk GET /readyz HTTP/1.0 option log-health-checks balance roundrobin server bootstrap bootstrap.ocp4.example.com:6443 verify none check check-ssl inter 10s fall 2 rise 3 backup 2 server master0 master0.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master1 master1.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 server master2 master2.ocp4.example.com:6443 weight 1 verify none check check-ssl inter 10s fall 2 rise 3 listen machine-config-server-22623 3 bind *:22623 mode tcp server bootstrap bootstrap.ocp4.example.com:22623 check inter 1s backup 4 server master0 master0.ocp4.example.com:22623 check inter 1s server master1 master1.ocp4.example.com:22623 check inter 1s server master2 master2.ocp4.example.com:22623 check inter 1s listen ingress-router-443 5 bind *:443 mode tcp balance source server worker0 worker0.ocp4.example.com:443 check inter 1s server worker1 worker1.ocp4.example.com:443 check inter 1s listen ingress-router-80 6 bind *:80 mode tcp balance source server worker0 worker0.ocp4.example.com:80 check inter 1s server worker1 worker1.ocp4.example.com:80 check inter 1s",
"dnf update python3 ansible",
"dnf install ovirt-ansible-image-template",
"dnf install ovirt-ansible-vm-infra",
"export ASSETS_DIR=./wrk",
"curl -k 'https://<engine-fqdn>/ovirt-engine/services/pki-resource?resource=ca-certificate&format=X509-PEM-CA' -o /tmp/ca.pem 1",
"sudo chmod 0644 /tmp/ca.pem",
"sudo cp -p /tmp/ca.pem /etc/pki/ca-trust/source/anchors/ca.pem",
"sudo update-ca-trust",
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"mkdir playbooks",
"cd playbooks",
"xargs -n 1 curl -O <<< ' https://raw.githubusercontent.com/openshift/installer/release-4.13/upi/ovirt/bootstrap.yml https://raw.githubusercontent.com/openshift/installer/release-4.13/upi/ovirt/common-auth.yml https://raw.githubusercontent.com/openshift/installer/release-4.13/upi/ovirt/create-templates-and-vms.yml https://raw.githubusercontent.com/openshift/installer/release-4.13/upi/ovirt/inventory.yml https://raw.githubusercontent.com/openshift/installer/release-4.13/upi/ovirt/masters.yml https://raw.githubusercontent.com/openshift/installer/release-4.13/upi/ovirt/retire-bootstrap.yml https://raw.githubusercontent.com/openshift/installer/release-4.13/upi/ovirt/retire-masters.yml https://raw.githubusercontent.com/openshift/installer/release-4.13/upi/ovirt/retire-workers.yml https://raw.githubusercontent.com/openshift/installer/release-4.13/upi/ovirt/workers.yml'",
"--- all: vars: ovirt_cluster: \"Default\" ocp: assets_dir: \"{{ lookup('env', 'ASSETS_DIR') }}\" ovirt_config_path: \"{{ lookup('env', 'HOME') }}/.ovirt/ovirt-config.yaml\" # --- # {op-system} section # --- rhcos: image_url: \"https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.13/latest/rhcos-openstack.x86_64.qcow2.gz\" local_cmp_image_path: \"/tmp/rhcos.qcow2.gz\" local_image_path: \"/tmp/rhcos.qcow2\" # --- # Profiles section # --- control_plane: cluster: \"{{ ovirt_cluster }}\" memory: 16GiB sockets: 4 cores: 1 template: rhcos_tpl operating_system: \"rhcos_x64\" type: high_performance graphical_console: headless_mode: false protocol: - spice - vnc disks: - size: 120GiB name: os interface: virtio_scsi storage_domain: depot_nvme nics: - name: nic1 network: lab profile: lab compute: cluster: \"{{ ovirt_cluster }}\" memory: 16GiB sockets: 4 cores: 1 template: worker_rhcos_tpl operating_system: \"rhcos_x64\" type: high_performance graphical_console: headless_mode: false protocol: - spice - vnc disks: - size: 120GiB name: os interface: virtio_scsi storage_domain: depot_nvme nics: - name: nic1 network: lab profile: lab # --- # Virtual machines section # --- vms: - name: \"{{ metadata.infraID }}-bootstrap\" ocp_type: bootstrap profile: \"{{ control_plane }}\" type: server - name: \"{{ metadata.infraID }}-master0\" ocp_type: master profile: \"{{ control_plane }}\" - name: \"{{ metadata.infraID }}-master1\" ocp_type: master profile: \"{{ control_plane }}\" - name: \"{{ metadata.infraID }}-master2\" ocp_type: master profile: \"{{ control_plane }}\" - name: \"{{ metadata.infraID }}-worker0\" ocp_type: worker profile: \"{{ compute }}\" - name: \"{{ metadata.infraID }}-worker1\" ocp_type: worker profile: \"{{ compute }}\" - name: \"{{ metadata.infraID }}-worker2\" ocp_type: worker profile: \"{{ compute }}\"",
"--- - name: include metadata.json vars include_vars: file: \"{{ ocp.assets_dir }}/metadata.json\" name: metadata",
"rhcos: \"https://mirror.openshift.com/pub/openshift-v4/dependencies/rhcos/4.13/latest/rhcos-openstack.x86_64.qcow2.gz\"",
"openshift-install create install-config --dir USDASSETS_DIR",
"? SSH Public Key /home/user/.ssh/id_dsa.pub ? Platform <ovirt> ? Engine FQDN[:PORT] [? for help] <engine.fqdn> ? Enter ovirt-engine username <ocpadmin@internal> ? Enter password <******> ? oVirt cluster <cluster> ? oVirt storage <storage> ? oVirt network <net> ? Internal API virtual IP <172.16.0.252> ? Ingress virtual IP <172.16.0.251> ? Base Domain <example.org> ? Cluster Name <ocp4> ? Pull Secret [? for help] <********>",
"? SSH Public Key /home/user/.ssh/id_dsa.pub ? Platform <ovirt> ? Engine FQDN[:PORT] [? for help] <engine.fqdn> ? Enter ovirt-engine username <ocpadmin@internal> ? Enter password <******> ? oVirt cluster <cluster> ? oVirt storage <storage> ? oVirt network <net> ? Internal API virtual IP <172.16.0.252> ? Ingress virtual IP <172.16.0.251> ? Base Domain <example.org> ? Cluster Name <ocp4> ? Pull Secret [? for help] <********>",
"apiVersion: v1 baseDomain: example.com 1 compute: 2 - hyperthreading: Enabled 3 name: worker replicas: 0 4 controlPlane: 5 hyperthreading: Enabled 6 name: master replicas: 3 7 metadata: name: test 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 10 networkType: OVNKubernetes 11 serviceNetwork: 12 - 172.30.0.0/16 platform: none: {} 13 fips: false 14 pullSecret: '{\"auths\": ...}' 15 sshKey: 'ssh-ed25519 AAAA...' 16",
"apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5",
"./openshift-install wait-for install-complete --log-level debug",
"python3 -c 'import os, yaml path = \"%s/install-config.yaml\" % os.environ[\"ASSETS_DIR\"] conf = yaml.safe_load(open(path)) conf[\"compute\"][0][\"replicas\"] = 0 open(path, \"w\").write(yaml.dump(conf, default_flow_style=False))'",
"python3 -c 'import os, yaml path = \"%s/install-config.yaml\" % os.environ[\"ASSETS_DIR\"] conf = yaml.safe_load(open(path)) conf[\"networking\"][\"machineNetwork\"][0][\"cidr\"] = \"172.16.0.0/16\" open(path, \"w\").write(yaml.dump(conf, default_flow_style=False))'",
"python3 -c 'import os, yaml path = \"%s/install-config.yaml\" % os.environ[\"ASSETS_DIR\"] conf = yaml.safe_load(open(path)) platform = conf[\"platform\"] del platform[\"ovirt\"] platform[\"none\"] = {} open(path, \"w\").write(yaml.dump(conf, default_flow_style=False))'",
"cp install-config.yaml install-config.yaml.backup",
"openshift-install create manifests --dir USDASSETS_DIR",
"INFO Consuming Install Config from target directory WARNING Making control-plane schedulable by setting MastersSchedulable to true for Scheduler cluster settings",
"tree . └── wrk ├── manifests │ ├── 04-openshift-machine-config-operator.yaml │ ├── cluster-config.yaml │ ├── cluster-dns-02-config.yml │ ├── cluster-infrastructure-02-config.yml │ ├── cluster-ingress-02-config.yml │ ├── cluster-network-01-crd.yml │ ├── cluster-network-02-config.yml │ ├── cluster-proxy-01-config.yaml │ ├── cluster-scheduler-02-config.yml │ ├── cvo-overrides.yaml │ ├── etcd-ca-bundle-configmap.yaml │ ├── etcd-client-secret.yaml │ ├── etcd-host-service-endpoints.yaml │ ├── etcd-host-service.yaml │ ├── etcd-metric-client-secret.yaml │ ├── etcd-metric-serving-ca-configmap.yaml │ ├── etcd-metric-signer-secret.yaml │ ├── etcd-namespace.yaml │ ├── etcd-service.yaml │ ├── etcd-serving-ca-configmap.yaml │ ├── etcd-signer-secret.yaml │ ├── kube-cloud-config.yaml │ ├── kube-system-configmap-root-ca.yaml │ ├── machine-config-server-tls-secret.yaml │ └── openshift-config-secret-pull-secret.yaml └── openshift ├── 99_kubeadmin-password-secret.yaml ├── 99_openshift-cluster-api_master-user-data-secret.yaml ├── 99_openshift-cluster-api_worker-user-data-secret.yaml ├── 99_openshift-machineconfig_99-master-ssh.yaml ├── 99_openshift-machineconfig_99-worker-ssh.yaml └── openshift-install-manifests.yaml",
"python3 -c 'import os, yaml path = \"%s/manifests/cluster-scheduler-02-config.yml\" % os.environ[\"ASSETS_DIR\"] data = yaml.safe_load(open(path)) data[\"spec\"][\"mastersSchedulable\"] = False open(path, \"w\").write(yaml.dump(data, default_flow_style=False))'",
"openshift-install create ignition-configs --dir USDASSETS_DIR",
"tree . └── wrk ├── auth │ ├── kubeadmin-password │ └── kubeconfig ├── bootstrap.ign ├── master.ign ├── metadata.json └── worker.ign",
"control_plane: cluster: \"{{ ovirt_cluster }}\" memory: 16GiB sockets: 4 cores: 1 template: \"{{ metadata.infraID }}-rhcos_tpl\" operating_system: \"rhcos_x64\"",
"ansible-playbook -i inventory.yml create-templates-and-vms.yml",
"ansible-playbook -i inventory.yml bootstrap.yml",
"ssh core@<boostrap.ip>",
"[core@ocp4-lk6b4-bootstrap ~]USD journalctl -b -f -u release-image.service -u bootkube.service",
"ansible-playbook -i inventory.yml masters.yml",
"openshift-install wait-for bootstrap-complete --dir USDASSETS_DIR",
"INFO API v1.26.0 up INFO Waiting up to 40m0s for bootstrapping to complete",
"INFO It is now safe to remove the bootstrap resources",
"export KUBECONFIG=USDASSETS_DIR/auth/kubeconfig",
"oc get nodes",
"oc get clusterversion",
"oc get clusteroperator",
"oc get pods -A",
"ansible-playbook -i inventory.yml retire-bootstrap.yml",
"ansible-playbook -i inventory.yml workers.yml",
"oc get csr -A",
"NAME AGE SIGNERNAME REQUESTOR CONDITION csr-2lnxd 63m kubernetes.io/kubelet-serving system:node:ocp4-lk6b4-master0.ocp4.example.org Approved,Issued csr-hff4q 64m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Approved,Issued csr-hsn96 60m kubernetes.io/kubelet-serving system:node:ocp4-lk6b4-master2.ocp4.example.org Approved,Issued csr-m724n 6m2s kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-p4dz2 60m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Approved,Issued csr-t9vfj 60m kubernetes.io/kubelet-serving system:node:ocp4-lk6b4-master1.ocp4.example.org Approved,Issued csr-tggtr 61m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Approved,Issued csr-wcbrf 7m6s kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"watch \"oc get csr -A | grep pending -i\"",
"Every 2.0s: oc get csr -A | grep pending -i csr-m724n 10m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-wcbrf 11m kubernetes.io/kube-apiserver-client-kubelet system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending",
"oc describe csr csr-m724n",
"Name: csr-m724n Labels: <none> Annotations: <none> CreationTimestamp: Sun, 19 Jul 2020 15:59:37 +0200 Requesting User: system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Signer: kubernetes.io/kube-apiserver-client-kubelet Status: Pending Subject: Common Name: system:node:ocp4-lk6b4-worker1.ocp4.example.org Serial Number: Organization: system:nodes Events: <none>",
"oc adm certificate approve csr-m724n",
"openshift-install wait-for install-complete --dir USDASSETS_DIR --log-level debug",
"oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/installing_on_rhv/installing-rhv-restricted-network |
Chapter 48. Returning Information to the Consumer | Chapter 48. Returning Information to the Consumer Abstract RESTful requests require that at least an HTTP response code be returned to the consumer. In many cases, a request can be satisfied by returning a plain JAXB object or a GenericEntity object. When the resource method needs to return additional metadata along with the response entity, JAX-RS resource methods can return a Response object containing any needed HTTP headers or other metadata. 48.1. Return Types The information returned to the consumer determines the exact type of object a resource method returns. This may seem obvious, but the mapping between Java return objects and what is returned to a RESTful consumer is not one-to-one. At a minimum, RESTful consumers need to be returned a valid HTTP return code in addition to any response entity body. The mapping of the data contained within a Java object to a response entity is affected by the MIME types a consumer is willing to accept. To address the issues involved in mapping Java objects to RESTful response messages, resource methods are allowed to return four types of Java constructs: Section 48.2, "Returning plain Java constructs" return basic information with HTTP return codes determined by the JAX-RS runtime. Section 48.2, "Returning plain Java constructs" return complex information, such as JAXB objects, with HTTP return codes determined by the JAX-RS runtime. Section 48.3, "Fine tuning an application's responses" return complex information with a programmatically determined HTTP return status. The Response object also allows HTTP headers to be specified. Section 48.4, "Returning entities with generic type information" return complex information with HTTP return codes determined by the JAX-RS runtime. The GenericEntity object provides more information to the runtime components serializing the data. 48.2. Returning plain Java constructs Overview In many cases a resource class can return a standard Java type, a JAXB object, or any object for which the application has an entity provider. In these cases the runtime determines the MIME type information using the Java class of the object being returned. The runtime also determines the appropriate HTTP return code to send to the consumer. Returnable types Resource methods can return void or any Java type for which an entity writer is provided. By default, the runtime has providers for the following: the Java primitives the Number representations of the Java primitives JAXB objects the section called "Natively supported types" lists all of the return types supported by default. the section called "Custom writers" describes how to implement a custom entity writer. MIME types The runtime determines the MIME type of the returned entity by first checking the resource method and resource class for a @Produces annotation. If it finds one, it uses the MIME type specified in the annotation. If it does not find one specified by the resource implementation, it relies on the entity providers to determine the proper MIME type. By default, the runtime assigns MIME types as follows: Java primitives and their Number representations are assigned a MIME type of application/octet-stream . JAXB objects are assigned a MIME type of application/xml . Applications can use other mappings by implementing custom entity providers as described in the section called "Custom writers" .
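For example, a resource method that returns a JAXB object directly might look like the following minimal sketch. The Customer type is the sample class used by the code examples in this chapter; the resource path and sample data are illustrative assumptions, not part of the API:

import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import demo.jaxrs.server.Customer;

@Path("/customers")
public class CustomerResource {
    @GET
    @Path("{id}")
    @Produces("application/xml")
    public Customer getCustomer(@PathParam("id") String id) {
        // The runtime serializes the returned JAXB object using the MIME
        // type from the @Produces annotation (application/xml here).
        return new Customer("Jane", 12); // sample data only
    }
}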
Response codes When resource methods return plain Java constructs, the runtime automatically sets the response's status code if the resource method completes without throwing an exception. The status code is set as follows: 204 (No Content)-the resource method's return type is void 204 (No Content)-the value of the returned entity is null 200 (OK)-the value of the returned entity is not null If an exception is thrown before the resource method completes, the return status code is set as described in Chapter 50, Handling Exceptions . 48.3. Fine tuning an application's responses 48.3.1. Basics of building responses Overview RESTful services often need more precise control over the response returned to a consumer than is allowed when a resource method returns a plain Java construct. The JAX-RS Response class allows a resource method to have some control over the return status sent to the consumer and to specify HTTP message headers and cookies in the response. Response objects wrap the object representing the entity that is returned to the consumer. Response objects are instantiated using the ResponseBuilder class as a factory. The ResponseBuilder class also has many of the methods used to manipulate the response's metadata. For instance, the ResponseBuilder class contains the methods for setting HTTP headers and cache control directives. Relationship between a response and a response builder The Response class has a protected constructor, so Response objects cannot be instantiated directly. They are created using the ResponseBuilder class enclosed by the Response class. The ResponseBuilder class is a holder for all of the information that will be encapsulated in the response created from it. The ResponseBuilder class also has all of the methods responsible for setting HTTP header properties on the message. The Response class does provide some methods that ease setting the proper response code and wrapping the entity. There are methods for each of the common response status codes. The methods corresponding to statuses that include an entity body, or required metadata, include versions that allow for directly setting the information into the associated response builder. The ResponseBuilder class' build() method returns a response object containing the information stored in the response builder at the time the method is invoked. After the response object is returned, the response builder is returned to a clean state. Getting a response builder There are two ways to get a response builder: Using the static methods of the Response class as shown in Getting a response builder using the Response class . Getting a response builder using the Response class When getting a response builder this way you do not get access to an instance you can manipulate in multiple steps. You must string all of the actions into a single method call. Using the Apache CXF specific ResponseBuilderImpl class. This class allows you to work directly with a response builder. However, it requires that you set all of the response builder's information manually. Example 48.1, "Getting a response builder using the ResponseBuilderImpl class" shows how Getting a response builder using the Response class could be rewritten using the ResponseBuilderImpl class. Example 48.1. Getting a response builder using the ResponseBuilderImpl class Note You could also simply assign the ResponseBuilder returned from a Response class' method to a ResponseBuilderImpl object.
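For example, when you get the response builder from one of the Response class' static methods, you can chain the builder calls into a single statement. The following sketch is illustrative only; the Customer type is the sample class used in this chapter, and the Location header value is sample data:

import javax.ws.rs.core.Response;
import demo.jaxrs.server.Customer;

Customer customer = new Customer("Jane", 12);
Response r = Response.status(Response.Status.CREATED)
                     .entity(customer)
                     .header("Location", "/customers/12")
                     .build();

Each builder method returns the builder itself, so the calls can be chained in any order before build() is invoked.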
More information For more information about the Response class see the Response class' Javadoc . For more information about the ResponseBuilder class see the ResponseBuilder class' Javadoc . For more information on the Apache CXF ResponseBuilderImpl class see the ResponseBuilderImpl Javadoc . 48.3.2. Creating responses for common use cases Overview The Response class provides shortcut methods for handling the more common responses that a RESTful service will need. These methods handle setting the proper headers using either provided values or default values. They also handle populating the entity body when appropriate. Creating responses for successful requests When a request is successfully processed, the application needs to send a response to acknowledge that the request has been fulfilled. That response may contain an entity. The most common response when successfully completing a request is OK . An OK response typically contains an entity that corresponds to the request. The Response class has an overloaded ok() method that sets the response status to 200 and adds a supplied entity to the enclosed response builder. There are five versions of the ok() method. The most commonly used variants are: Response.ok() -creates a response with a status of 200 and an empty entity body. Response.ok(java.lang.Object entity) -creates a response with a status of 200 , stores the supplied object in the response's entity body, and determines the entity's media type by introspecting the object. Creating a response with a 200 response shows an example of creating a response with an OK status. Creating a response with a 200 response For cases where the requester is not expecting an entity body, it may be more appropriate to send a 204 No Content status instead of a 200 OK status. The Response.noContent() method will create an appropriate response object. Creating a response with a 204 status shows an example of creating a response with a 204 status. Creating a response with a 204 status Creating responses for redirection The Response class provides methods for handling three of the redirection response statuses. 303 See Other The 303 See Other status is useful when the requested resource needs to redirect the consumer to a new resource to process the request. The Response class' seeOther() method creates a response with a 303 status and places the new resource URI in the message's Location field. The seeOther() method takes a single parameter that specifies the new URI as a java.net.URI object. 304 Not Modified The 304 Not Modified status can be used for different things depending on the nature of the request. It can be used to signify that the requested resource has not changed since a previous GET request. It can also be used to signify that a request to modify the resource did not result in the resource being changed. The Response class' notModified() methods create a response with a 304 status and set the modified date property on the HTTP message. There are three versions of the notModified() method: notModified() notModified(javax.ws.rs.core.EntityTag tag) notModified(java.lang.String tag) 307 Temporary Redirect The 307 Temporary Redirect status is useful when the requested resource needs to direct the consumer to a new resource, but wants the consumer to continue using this resource to handle future requests. The Response class' temporaryRedirect() method creates a response with a 307 status and places the new resource URI in the message's Location field.
The temporaryRedirect() method takes a single parameter that specifies the new URI as a java.net.URI object. Creating a response with a 304 status shows an example of creating a response with a 304 status. Creating a response with a 304 status Creating responses to signal errors The Response class provides methods to create responses for two basic processing errors: serverError -creates a response with a status of 500 Internal Server Error . notAcceptable(java.util.List<javax.ws.rs.core.Variant> variants) -creates a response with a 406 Not Acceptable status and an entity body containing a list of acceptable resource types. Creating a response with a 500 status shows an example of creating a response with a 500 status. Creating a response with a 500 status 48.3.3. Handling more advanced responses Overview The Response class methods provide shortcuts for creating responses for common cases. When you need to address more complicated cases such as specifying cache control directives, adding custom HTTP headers, or sending a status not handled by the Response class, you need to use the ResponseBuilder class' methods to populate the response before using the build() method to generate the response object. As discussed in the section called "Getting a response builder" , you can use the Apache CXF ResponseBuilderImpl class to create a response builder instance that can be manipulated directly. Adding custom headers Custom headers are added to a response using the ResponseBuilder class' header() method. The header() method takes two parameters: name -a string specifying the name of the header value -a Java object containing the data stored in the header You can set multiple headers on the message by calling the header() method repeatedly. Adding a header to a response shows code for adding a header to a response. Adding a header to a response Adding a cookie Cookies are added to a response using the ResponseBuilder class' cookie() method. The cookie() method takes one or more cookies. Each cookie is stored in a javax.ws.rs.core.NewCookie object. The easiest of the NewCookie class' constructors to use takes two parameters: name -a string specifying the name of the cookie value -a string specifying the value of the cookie You can set multiple cookies by calling the cookie() method repeatedly. Adding a cookie to a response shows code for adding a cookie to a response. Adding a cookie to a response Warning Calling the cookie() method with a null parameter list erases any cookies already associated with the response. Setting the response status When you want to return a status other than one of the statuses supported by the Response class' helper methods, you can use the ResponseBuilder class' status() method to set the response's status code. The status() method has two variants. One takes an int that specifies the response code. The other takes a Response.Status object to specify the response code. The Response.Status class is an enumeration enclosed in the Response class. It has entries for most of the defined HTTP response codes. The following example shows code for setting the response status to 404 Not Found. Setting cache control directives The ResponseBuilder class' cacheControl() method allows you to set the cache control headers on the response. The cacheControl() method takes a javax.ws.rs.CacheControl object that specifies the cache control directives for the response.
The CacheControl class has methods that correspond to all of the cache control directives supported by the HTTP specification. Where the directive is a simple on or off value, the setter method takes a boolean value. Where the directive requires a numeric value, such as the max-age directive, the setter takes an int value. The following example shows code for setting the no-cache cache control directive. 48.4. Returning entities with generic type information Overview There are occasions where the application needs more control over the MIME type of the returned object or the entity provider used to serialize the response. The JAX-RS javax.ws.rs.core.GenericEntity<T> class provides finer control over the serializing of entities by providing a mechanism for specifying the generic type of the object representing the entity. Using a GenericEntity<T> object One of the criteria used for selecting the entity provider that serializes a response is the generic type of the object. The generic type of an object represents the Java type of the object. When a common Java type or a JAXB object is returned, the runtime can use Java reflection to determine the generic type. However, when a JAX-RS Response object is returned, the runtime cannot determine the generic type of the wrapped entity and the actual Java class of the object is used as the Java type. To ensure that the entity provider is provided with correct generic type information, the entity can be wrapped in a GenericEntity<T> object before being added to the Response object being returned. Resource methods can also directly return a GenericEntity<T> object. In practice, this approach is rarely used. The generic type information determined by reflection of an unwrapped entity and the generic type information stored for an entity wrapped in a GenericEntity<T> object are typically the same. Creating a GenericEntity<T> object There are two ways to create a GenericEntity<T> object: Create a subclass of the GenericEntity<T> class using the entity being wrapped. Creating a GenericEntity<T> object using a subclass shows how to create a GenericEntity<T> object containing an entity of type List<String> whose generic type will be available at runtime. Creating a GenericEntity<T> object using a subclass The subclass used to create a GenericEntity<T> object is typically anonymous. Create an instance directly by supplying the generic type information with the entity. Example 48.2, "Directly instantiating a GenericEntity<T> object" shows how to create a response containing an entity of type AtomicInteger . Example 48.2. Directly instantiating a GenericEntity<T> object 48.5. Asynchronous Response 48.5.1. Asynchronous Processing on the Server Overview The purpose of asynchronous processing of invocations on the server side is to enable more efficient use of threads and, ultimately, to avoid the scenario where client connection attempts are refused because all of the server's request threads are blocked. When an invocation is processed asynchronously, the request thread is freed up almost immediately. Note Note that even when asynchronous processing is enabled on the server side, a client will still remain blocked until it receives a response from the server. If you want to see asynchronous behaviour on the client side, you must implement client-side asynchronous processing. See Section 49.6, "Asynchronous Processing on the Client" .
Basic model for asynchronous processing Figure 48.1, "Threading Model for Asynchronous Processing" shows an overview of the basic model for asynchronous processing on the server side. Figure 48.1. Threading Model for Asynchronous Processing In outline, a request is processed as follows in the asynchronous model: An asynchronous resource method is invoked within a request thread (and receives a reference to an AsyncResponse object, which will be needed later to send back the response). The resource method encapsulates the suspended request in a Runnable object, which contains all of the information and processing logic required to process the request. The resource method pushes the Runnable object onto the blocking queue of the executor thread pool. The resource method can now return, thus freeing up the request thread. When the Runnable object gets to the top of the queue, it is processed by one of the threads in the executor thread pool. The encapsulated AsyncResponse object is then used to send the response back to the client. Thread pool implementation with Java executor The java.util.concurrent API is a powerful API that enables you to create a complete thread pool implementation very easily. In the terminology of the Java concurrency API, a thread pool is called an executor . It requires only a single line of code to create a complete working thread pool, including the working threads and the blocking queue that feeds them. For example, to create a complete working thread pool like the Executor Thread Pool shown in Figure 48.1, "Threading Model for Asynchronous Processing" , create a java.util.concurrent.Executor instance, as follows: This constructor creates a new thread pool with five threads, fed by a single blocking queue that can hold up to 10 Runnable objects. To submit a task to the thread pool, call the executor.execute method, passing in a reference to a Runnable object (which encapsulates the asynchronous task). Defining an asynchronous resource method To define a resource method that is asynchronous, inject an argument of type javax.ws.rs.container.AsyncResponse using the @Suspended annotation and make sure that the method returns void . For example: Note that the resource method must return void , because the injected AsyncResponse object will be used to return the response at a later time. AsyncResponse class The javax.ws.rs.container.AsyncResponse class provides an abstract handle on an incoming client connection. When an AsyncResponse object is injected into a resource method, the underlying TCP client connection is initially in a suspended state. At a later time, when you are ready to return the response, you can re-activate the underlying TCP client connection and pass back the response, by calling resume on the AsyncResponse instance. Alternatively, if you need to abort the invocation, you could call cancel on the AsyncResponse instance. Encapsulating a suspended request as a Runnable In the asynchronous processing scenario shown in Figure 48.1, "Threading Model for Asynchronous Processing" , you push the suspended request onto a queue, from where it can be processed at a later time by a dedicated thread pool. In order for this approach to work, however, you need to have some way of encapsulating the suspended request in an object. The suspended request object needs to encapsulate the following things: Parameters from the incoming request (if any).
The AsyncResponse object, which provides a handle on the incoming client connection and a way of sending back the response. The logic of the invocation. A convenient way to encapsulate these things is to define a Runnable class to represent the suspended request, where the Runnable.run() method encapsulates the logic of the invocation. The most elegant way to do this is to implement the Runnable as a local class, as shown in the following example. Example of asynchronous processing To implement the asynchronous processing scenario, the implementation of the resource method must pass a Runnable object (representing the suspended request) to the executor thread pool. In Java 7 and 8, you can exploit some novel syntax to define the Runnable class as a local class, as shown in the following example: Note how the resource method arguments, id and response , are passed straight into the definition of the Runnable local class. This special syntax enables you to use the resource method arguments directly in the Runnable.run() method, without having to define corresponding fields in the local class. Important In order for this special syntax to work, the resource method parameters must be declared as final (which implies that they must not be changed in the method implementation). 48.5.2. Timeouts and Timeout Handlers Overview The asynchronous processing model also provides support for imposing timeouts on REST invocations. By default, a timeout results in an HTTP error response being sent back to the client. But you also have the option of registering a timeout handler callback, which enables you to customize the response to a timeout event. Example of setting a timeout without a handler To define a simple invocation timeout, without specifying a timeout handler, call the setTimeout method on the AsyncResponse object, as shown in the following example: Note that you can specify the timeout value using any time unit from the java.util.concurrent.TimeUnit class. The preceding example does not show the code for sending the request to the executor thread pool. If you just wanted to test the timeout behaviour, you could include just the call to async.setTimeout in the resource method body, and the timeout would be triggered on every invocation. The AsyncResponse.NO_TIMEOUT value represents an infinite timeout. Default timeout behaviour By default, if the invocation timeout is triggered, the JAX-RS runtime raises a ServiceUnavailableException exception and sends back an HTTP error response with the status 503 . TimeoutHandler interface If you want to customize the timeout behaviour, you must define a timeout handler, by implementing the TimeoutHandler interface: When you override the handleTimeout method in your implementation class, you can choose between the following approaches to dealing with the timeout: Cancel the response, by calling the asyncResponse.cancel method. Send a response, by calling the asyncResponse.resume method with the response value. Extend the waiting period, by calling the asyncResponse.setTimeout method. (For example, to wait for a further 10 seconds, you could call asyncResponse.setTimeout(10, TimeUnit.SECONDS) ). Example of setting a timeout with a handler To define an invocation timeout with a timeout handler, call both the setTimeout method and the setTimeoutHandler method on the AsyncResponse object, as shown in the following example: This example registers an instance of the CancelTimeoutHandlerImpl timeout handler to handle the invocation timeout.
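A timeout handler does not have to cancel the request. The following sketch shows a handler that resumes the response with a fallback entity instead; the class name and the fallback message are illustrative assumptions:

import javax.ws.rs.container.AsyncResponse;
import javax.ws.rs.container.TimeoutHandler;

public class ResumeTimeoutHandlerImpl implements TimeoutHandler {
    @Override
    public void handleTimeout(AsyncResponse asyncResponse) {
        // Send a fallback entity instead of the default 503 error response.
        asyncResponse.resume("The request timed out, please try again later");
    }
}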
Using a timeout handler to cancel the response The CancelTimeoutHandlerImpl timeout handler is defined as follows: The effect of calling cancel on the AsyncResponse object is to send an HTTP 503 ( Service unavailable ) error response to the client. You can optionally specify an argument to the cancel method (either an int or a java.util.Date value), which would be used to set a Retry-After: HTTP header in the response message. Clients often ignore the Retry-After: header, however. Dealing with a cancelled response in the Runnable instance If you have encapsulated a suspended request as a Runnable instance, which is queued for processing in an executor thread pool, you might find that the AsyncResponse has been cancelled by the time the thread pool gets around to processing the request. For this reason, you ought to add some code to your Runnable instance, which enables it to cope with a cancelled AsyncResponse object. For example: 48.5.3. Handling Dropped Connections Overview It is possible to add a callback to deal with the case where the client connection is lost. ConnectionCallback interface To add a callback for dropped connections, you must implement the javax.ws.rs.container.ConnectionCallback interface, which is defined as follows: Registering a connection callback After implementing a connection callback, you must register it with the current AsyncResponse object, by calling one of the register methods. For example, to register a connection callback of type MyConnectionCallback : Typical scenario for connection callback Typically, the main reason for implementing a connection callback would be to free up resources associated with the dropped client connection (where you could use the AsyncResponse instance as the key to identify the resources that need to be freed). 48.5.4. Registering Callbacks Overview You can optionally add a callback to an AsyncResponse instance, in order to be notified when the invocation has completed. There are two alternative points in the processing when this callback can be invoked, either: After the request processing is finished and the response has already been sent back to the client, or After the request processing is finished and an unmapped Throwable has been propagated to the hosting I/O container. CompletionCallback interface To add a completion callback, you must implement the javax.ws.rs.container.CompletionCallback interface, which is defined as follows: Usually, the throwable argument is null . However, if the request processing resulted in an unmapped exception, throwable contains the unmapped exception instance. Registering a completion callback After implementing a completion callback, you must register it with the current AsyncResponse object, by calling one of the register methods. For example, to register a completion callback of type MyCompletionCallback : | [
"import javax.ws.rs.core.Response; Response r = Response.ok().build();",
"import javax.ws.rs.core.Response; import org.apache.cxf.jaxrs.impl.ResponseBuilderImpl; ResponseBuilderImpl builder = new ResponseBuilderImpl(); builder.status(200); Response r = builder.build();",
"import javax.ws.rs.core.Response; import demo.jaxrs.server.Customer; Customer customer = new Customer(\"Jane\", 12); return Response.ok(customer).build();",
"import javax.ws.rs.core.Response; return Response.noContent().build();",
"import javax.ws.rs.core.Response; return Response.notModified().build();",
"import javax.ws.rs.core.Response; return Response.serverError().build();",
"import javax.ws.rs.core.Response; import org.apache.cxf.jaxrs.impl.ResponseBuilderImpl; ResponseBuilderImpl builder = new ResponseBuilderImpl(); builder.header(\"username\", \"joe\"); Response r = builder.build();",
"import javax.ws.rs.core.Response; import javax.ws.rs.core.NewCookie; NewCookie cookie = new NewCookie(\"username\", \"joe\"); Response r = Response.ok().cookie(cookie).build();",
"import javax.ws.rs.core.Response; import org.apache.cxf.jaxrs.impl.ResponseBuilderImpl; ResponseBuilderImpl builder = new ResponseBuilderImpl(); builder.status(404); Response r = builder.build();",
"import javax.ws.rs.core.Response; import javax.ws.rs.core.CacheControl; import org.apache.cxf.jaxrs.impl.ResponseBuilderImpl; CacheControl cache = new CacheControl(); cache.setNoCache(true); ResponseBuilderImpl builder = new ResponseBuilderImpl(); builder.cacheControl(cache); Response r = builder.build();",
"import javax.ws.rs.core.GenericEntity; List<String> list = new ArrayList<String>(); GenericEntity<List<String>> entity = new GenericEntity<List<String>>(list) {}; Response response = Response.ok(entity).build();",
"import javax.ws.rs.core.GenericEntity; AtomicInteger result = new AtomicInteger(12); GenericEntity<AtomicInteger> entity = new GenericEntity<AtomicInteger>(result, result.getClass().getGenericSuperclass()); Response response = Response.ok(entity).build();",
"Executor executor = new ThreadPoolExecutor( 5, // Core pool size 5, // Maximum pool size 0, // Keep-alive time TimeUnit.SECONDS, // Time unit new ArrayBlockingQueue<Runnable>(10) // Blocking queue );",
"// Java import javax.ws.rs.GET; import javax.ws.rs.Path; import javax.ws.rs.PathParam; import javax.ws.rs.container.AsyncResponse; import javax.ws.rs.container.Suspended; @Path(\"/bookstore\") public class BookContinuationStore { @GET @Path(\"{id}\") public void handleRequestInPool(@PathParam(\"id\") String id, @Suspended AsyncResponse response) { } }",
"// Java package org.apache.cxf.systest.jaxrs; import java.util.HashMap; import java.util.Map; import java.util.concurrent.ArrayBlockingQueue; import java.util.concurrent.Executor; import java.util.concurrent.ThreadPoolExecutor; import java.util.concurrent.TimeUnit; import java.util.concurrent.atomic.AtomicInteger; import javax.ws.rs.GET; import javax.ws.rs.NotFoundException; import javax.ws.rs.Path; import javax.ws.rs.PathParam; import javax.ws.rs.Produces; import javax.ws.rs.container.AsyncResponse; import javax.ws.rs.container.CompletionCallback; import javax.ws.rs.container.ConnectionCallback; import javax.ws.rs.container.Suspended; import javax.ws.rs.container.TimeoutHandler; import org.apache.cxf.phase.PhaseInterceptorChain; @Path(\"/bookstore\") public class BookContinuationStore { private Map<String, String> books = new HashMap<String, String>(); private Executor executor = new ThreadPoolExecutor(5, 5, 0, TimeUnit.SECONDS, new ArrayBlockingQueue<Runnable>(10)); public BookContinuationStore() { init(); } @GET @Path(\"{id}\") public void handleRequestInPool(final @PathParam(\"id\") String id, final @Suspended AsyncResponse response) { executor.execute(new Runnable() { public void run() { // Retrieve the book data for 'id' // which is presumed to be a very slow, blocking operation // bookdata = // Re-activate the client connection with 'resume' // and send the 'bookdata' object as the response response.resume(bookdata); } }); } }",
"// Java // Java import java.util.concurrent.TimeUnit; import javax.ws.rs.GET; import javax.ws.rs.NotFoundException; import javax.ws.rs.Path; import javax.ws.rs.PathParam; import javax.ws.rs.Produces; import javax.ws.rs.container.AsyncResponse; import javax.ws.rs.container.Suspended; import javax.ws.rs.container.TimeoutHandler; @Path(\"/bookstore\") public class BookContinuationStore { @GET @Path(\"/books/defaulttimeout\") public void getBookDescriptionWithTimeout(@Suspended AsyncResponse async) { async.setTimeout(2000, TimeUnit.MILLISECONDS); // Optionally, send request to executor queue for processing // } }",
"// Java package javax.ws.rs.container; public interface TimeoutHandler { public void handleTimeout(AsyncResponse asyncResponse); }",
"// Java import javax.ws.rs.GET; import javax.ws.rs.NotFoundException; import javax.ws.rs.Path; import javax.ws.rs.PathParam; import javax.ws.rs.Produces; import javax.ws.rs.container.AsyncResponse; import javax.ws.rs.container.Suspended; import javax.ws.rs.container.TimeoutHandler; @Path(\"/bookstore\") public class BookContinuationStore { @GET @Path(\"/books/cancel\") public void getBookDescriptionWithCancel(@PathParam(\"id\") String id, @Suspended AsyncResponse async) { async.setTimeout(2000, TimeUnit.MILLISECONDS); async.setTimeoutHandler(new CancelTimeoutHandlerImpl()); // Optionally, send request to executor queue for processing // } }",
"// Java import javax.ws.rs.container.AsyncResponse; import javax.ws.rs.container.TimeoutHandler; @Path(\"/bookstore\") public class BookContinuationStore { private class CancelTimeoutHandlerImpl implements TimeoutHandler { @Override public void handleTimeout(AsyncResponse asyncResponse) { asyncResponse.cancel(); } } }",
"// Java @Path(\"/bookstore\") public class BookContinuationStore { private void sendRequestToThreadPool(final String id, final AsyncResponse response) { executor.execute(new Runnable() { public void run() { if ( !response.isCancelled() ) { // Process the suspended request // } } }); } }",
"// Java package javax.ws.rs.container; public interface ConnectionCallback { public void onDisconnect(AsyncResponse disconnected); }",
"asyncResponse.register(new MyConnectionCallback());",
"// Java package javax.ws.rs.container; public interface CompletionCallback { public void onComplete(Throwable throwable); }",
"asyncResponse.register(new MyCompletionCallback());"
]
| https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_cxf_development_guide/RESTResponses |
Chapter 3. Configuring external alertmanager instances | Chapter 3. Configuring external alertmanager instances The OpenShift Container Platform monitoring stack includes a local Alertmanager instance that routes alerts from Prometheus. You can add external Alertmanager instances by configuring the cluster-monitoring-config config map in the openshift-monitoring project or the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project. If you add the same external Alertmanager configuration for multiple clusters and disable the local instance for each cluster, you can then manage alert routing for multiple clusters by using a single external Alertmanager instance. Prerequisites You have installed the OpenShift CLI ( oc ). If you are configuring core OpenShift Container Platform monitoring components in the openshift-monitoring project : You have access to the cluster as a user with the cluster-admin cluster role. You have created the cluster-monitoring-config config map. If you are configuring components that monitor user-defined projects : You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. A cluster administrator has enabled monitoring for user-defined projects. Procedure Edit the ConfigMap object. To configure additional Alertmanagers for routing alerts from core OpenShift Container Platform projects : Edit the cluster-monitoring-config config map in the openshift-monitoring project: USD oc -n openshift-monitoring edit configmap cluster-monitoring-config Add an additionalAlertmanagerConfigs: section under data/config.yaml/prometheusK8s . Add the configuration details for additional Alertmanagers in this section: apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: additionalAlertmanagerConfigs: - <alertmanager_specification> For <alertmanager_specification> , substitute authentication and other configuration details for additional Alertmanager instances. Currently supported authentication methods are bearer token ( bearerToken ) and client TLS ( tlsConfig ). The following sample config map configures an additional Alertmanager using a bearer token with client TLS authentication: apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: additionalAlertmanagerConfigs: - scheme: https pathPrefix: / timeout: "30s" apiVersion: v1 bearerToken: name: alertmanager-bearer-token key: token tlsConfig: key: name: alertmanager-tls key: tls.key cert: name: alertmanager-tls key: tls.crt ca: name: alertmanager-tls key: tls.ca staticConfigs: - external-alertmanager1-remote.com - external-alertmanager1-remote2.com To configure additional Alertmanager instances for routing alerts from user-defined projects : Edit the user-workload-monitoring-config config map in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Add a <component>/additionalAlertmanagerConfigs: section under data/config.yaml/ .
Add the configuration details for additional Alertmanagers in this section: apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: additionalAlertmanagerConfigs: - <alertmanager_specification> For <component> , substitute one of two supported external Alertmanager components: prometheus or thanosRuler . For <alertmanager_specification> , substitute authentication and other configuration details for additional Alertmanager instances. Currently supported authentication methods are bearer token ( bearerToken ) and client TLS ( tlsConfig ). The following sample config map configures an additional Alertmanager using Thanos Ruler with a bearer token and client TLS authentication: apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: additionalAlertmanagerConfigs: - scheme: https pathPrefix: / timeout: "30s" apiVersion: v1 bearerToken: name: alertmanager-bearer-token key: token tlsConfig: key: name: alertmanager-tls key: tls.key cert: name: alertmanager-tls key: tls.crt ca: name: alertmanager-tls key: tls.ca staticConfigs: - external-alertmanager1-remote.com - external-alertmanager1-remote2.com Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed. 3.1. Attaching additional labels to your time series and alerts You can attach custom labels to all time series and alerts leaving Prometheus by using the external labels feature of Prometheus. Prerequisites If you are configuring core OpenShift Container Platform monitoring components : You have access to the cluster as a user with the cluster-admin cluster role. You have created the cluster-monitoring-config ConfigMap object. If you are configuring components that monitor user-defined projects : You have access to the cluster as a user with the cluster-admin cluster role, or as a user with the user-workload-monitoring-config-edit role in the openshift-user-workload-monitoring project. A cluster administrator has enabled monitoring for user-defined projects. You have installed the OpenShift CLI ( oc ). Procedure Edit the ConfigMap object: To attach custom labels to all time series and alerts leaving the Prometheus instance that monitors core OpenShift Container Platform projects : Edit the cluster-monitoring-config ConfigMap object in the openshift-monitoring project: USD oc -n openshift-monitoring edit configmap cluster-monitoring-config Define a map of labels you want to add for every metric under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: externalLabels: <key>: <value> 1 1 Substitute <key>: <value> with a map of key-value pairs where <key> is a unique name for the new label and <value> is its value. Warning Do not use prometheus or prometheus_replica as key names, because they are reserved and will be overwritten. Do not use cluster or managed_cluster as key names. Using them can cause issues where you are unable to see data in the developer dashboards. 
For example, to add metadata about the region and environment to all time series and alerts, use the following example: apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: externalLabels: region: eu environment: prod Save the file to apply the changes. The new configuration is applied automatically. To attach custom labels to all time series and alerts leaving the Prometheus instance that monitors user-defined projects : Edit the user-workload-monitoring-config ConfigMap object in the openshift-user-workload-monitoring project: USD oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config Define a map of labels you want to add for every metric under data/config.yaml : apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: externalLabels: <key>: <value> 1 1 Substitute <key>: <value> with a map of key-value pairs where <key> is a unique name for the new label and <value> is its value. Warning Do not use prometheus or prometheus_replica as key names, because they are reserved and will be overwritten. Do not use cluster or managed_cluster as key names. Using them can cause issues where you are unable to see data in the developer dashboards. Note In the openshift-user-workload-monitoring project, Prometheus handles metrics and Thanos Ruler handles alerting and recording rules. Setting externalLabels for prometheus in the user-workload-monitoring-config ConfigMap object will only configure external labels for metrics and not for any rules. For example, to add metadata about the region and environment to all time series and alerts related to user-defined projects, use the following example: apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: externalLabels: region: eu environment: prod Save the file to apply the changes. The pods affected by the new configuration are automatically redeployed. Additional resources See Preparing to configure the monitoring stack for steps to create monitoring config maps. Enabling monitoring for user-defined projects | [
"oc -n openshift-monitoring edit configmap cluster-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: additionalAlertmanagerConfigs: - <alertmanager_specification>",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: additionalAlertmanagerConfigs: - scheme: https pathPrefix: / timeout: \"30s\" apiVersion: v1 bearerToken: name: alertmanager-bearer-token key: token tlsConfig: key: name: alertmanager-tls key: tls.key cert: name: alertmanager-tls key: tls.crt ca: name: alertmanager-tls key: tls.ca staticConfigs: - external-alertmanager1-remote.com - external-alertmanager1-remote2.com",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | <component>: additionalAlertmanagerConfigs: - <alertmanager_specification>",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | thanosRuler: additionalAlertmanagerConfigs: - scheme: https pathPrefix: / timeout: \"30s\" apiVersion: v1 bearerToken: name: alertmanager-bearer-token key: token tlsConfig: key: name: alertmanager-tls key: tls.key cert: name: alertmanager-tls key: tls.crt ca: name: alertmanager-tls key: tls.ca staticConfigs: - external-alertmanager1-remote.com - external-alertmanager1-remote2.com",
"oc -n openshift-monitoring edit configmap cluster-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: externalLabels: <key>: <value> 1",
"apiVersion: v1 kind: ConfigMap metadata: name: cluster-monitoring-config namespace: openshift-monitoring data: config.yaml: | prometheusK8s: externalLabels: region: eu environment: prod",
"oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: externalLabels: <key>: <value> 1",
"apiVersion: v1 kind: ConfigMap metadata: name: user-workload-monitoring-config namespace: openshift-user-workload-monitoring data: config.yaml: | prometheus: externalLabels: region: eu environment: prod"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/monitoring/monitoring-configuring-external-alertmanagers_configuring-the-monitoring-stack |
Chapter 4. Configuring the all-in-one Red Hat OpenStack Platform environment | Chapter 4. Configuring the all-in-one Red Hat OpenStack Platform environment To create an all-in-one Red Hat OpenStack Platform environment, include four environment files with the openstack tripleo deploy command. You must create two of the configuration files, shown below: USDHOME/containers-prepare-parameters.yaml USDHOME/standalone_parameters.yaml For more information see Section 4.1, "Generating YAML files for the all-in-one Red Hat OpenStack Platform (RHOSP) environment" . Two environment files are provided for you in the /usr/share/openstack-tripleo-heat-templates/ directory: /usr/share/openstack-tripleo-heat-templates/environments/standalone/standalone-tripleo.yaml /usr/share/openstack-tripleo-heat-templates/roles/Standalone.yaml You can customize the all-in-one environment for development or testing. Include modified values for the parameters in either the standalone-tripleo.yaml or Standalone.yaml configuration files in a newly created yaml file in your home directory. Include this file in the openstack tripleo deploy command. 4.1. Generating YAML files for the all-in-one Red Hat OpenStack Platform (RHOSP) environment To generate the containers-prepare-parameters.yaml and standalone_parameters.yaml files, complete the following steps: Generate the containers-prepare-parameters.yaml file that contains the default ContainerImagePrepare parameters: Edit the containers-prepare-parameters.yaml file and include your Red Hat credentials in the ContainerImageRegistryCredentials parameter so that the deployment process can authenticate with registry.redhat.io and pull container images successfully: Note To avoid entering your password in plain text, create a Red Hat Service Account. For more information, see Red Hat Container Registry Authentication : Set the ContainerImageRegistryLogin parameter to true in the containers-prepare-parameters.yaml : If you want to use the all-in-one host as the container registry, omit this parameter and include --local-push-destination in the openstack tripleo container image prepare command. For more information, see Preparing container images . Create the USDHOME/standalone_parameters.yaml file and configure basic parameters for your all-in-one RHOSP environment, including network configuration and some deployment options. In this example, network interface eth1 is the interface on the management network that you use to deploy RHOSP. eth1 has the IP address 192.168.25.2: If you use only a single network interface, you must define the default route: If you have an internal time source, or if your environment blocks access to external time sources, use the NtpServer parameter to define the time source that you want to use: If you want to use the all-in-one RHOSP installation in a virtual environment, you must define the virtualization type with the NovaComputeLibvirtType parameter: The Load-balancing service (octavia) does not require that you configure SSH. However, if you want SSH access to the load-balancing instances (amphorae), add the OctaviaAmphoraSshKeyFile parameter with a value of the absolute path to your public key file for the stack user: OctaviaAmphoraSshKeyFile: "/home/stack/.ssh/id_rsa.pub" | [
"[stack@all-in-one]USD openstack tripleo container image prepare default --output-env-file USDHOME/containers-prepare-parameters.yaml",
"parameter_defaults: ContainerImagePrepare: ContainerImageRegistryCredentials: registry.redhat.io: <USERNAME>: \"<PASSWORD>\"",
"parameter_defaults: ContainerImagePrepare: ContainerImageRegistryCredentials: registry.redhat.io: <USERNAME>: \"<PASSWORD>\" ContainerImageRegistryLogin: true",
"[stack@all-in-one]USD export IP=192.168.25.2 [stack@all-in-one]USD export VIP=192.168.25.3 [stack@all-in-one]USD export NETMASK=24 [stack@all-in-one]USD export INTERFACE=eth1 [stack@all-in-one]USD export DNS1=1.1.1.1 [stack@all-in-one]USD export DNS2=8.8.8.8 [stack@all-in-one]USD cat <<EOF > USDHOME/standalone_parameters.yaml parameter_defaults: CloudName: USDIP CloudDomain: localdomain ControlPlaneStaticRoutes: [] Debug: true DeploymentUser: USDUSER KernelIpNonLocalBind: 1 DockerInsecureRegistryAddress: - USDIP:8787 NeutronPublicInterface: USDINTERFACE NeutronDnsDomain: localdomain NeutronBridgeMappings: datacentre:br-ctlplane NeutronPhysicalBridge: br-ctlplane StandaloneEnableRoutedNetworks: false StandaloneHomeDir: USDHOME StandaloneLocalMtu: 1500 EOF",
"ControlPlaneStaticRoutes: - ip_netmask: 0.0.0.0/0 next_hop: USDGATEWAY default: true",
"parameter_defaults: NtpServer: - clock.example.com",
"parameter_defaults: NovaComputeLibvirtType: qemu"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/standalone_deployment_guide/configuring-the-all-in-one-openstack-installation |
Developing Applications with Red Hat build of Apache Camel for Quarkus | Developing Applications with Red Hat build of Apache Camel for Quarkus Red Hat build of Apache Camel 4.4 Developing Applications with Red Hat build of Apache Camel for Quarkus Red Hat build of Apache Camel Documentation Team [email protected] Red Hat build of Apache Camel Support Team http://access.redhat.com/support | null | https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/developing_applications_with_red_hat_build_of_apache_camel_for_quarkus/index |
Jenkins | Jenkins OpenShift Container Platform 4.17 Jenkins Red Hat OpenShift Documentation Team | [
"podman pull registry.redhat.io/ocp-tools-4/jenkins-rhel8:<image_tag>",
"oc new-app -e JENKINS_PASSWORD=<password> ocp-tools-4/jenkins-rhel8",
"oc describe serviceaccount jenkins",
"Name: default Labels: <none> Secrets: { jenkins-token-uyswp } { jenkins-dockercfg-xcr3d } Tokens: jenkins-token-izv1u jenkins-token-uyswp",
"oc describe secret <secret name from above>",
"Name: jenkins-token-uyswp Labels: <none> Annotations: kubernetes.io/service-account.name=jenkins,kubernetes.io/service-account.uid=32f5b661-2a8f-11e5-9528-3c970e3bf0b7 Type: kubernetes.io/service-account-token Data ==== ca.crt: 1066 bytes token: eyJhbGc..<content cut>....wRA",
"pluginId:pluginVersion",
"apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: custom-jenkins-build spec: source: 1 git: uri: https://github.com/custom/repository type: Git strategy: 2 sourceStrategy: from: kind: ImageStreamTag name: jenkins:2 namespace: openshift type: Source output: 3 to: kind: ImageStreamTag name: custom-jenkins:latest",
"kind: ConfigMap apiVersion: v1 metadata: name: jenkins-agent labels: role: jenkins-agent data: template1: |- <org.csanchez.jenkins.plugins.kubernetes.PodTemplate> <inheritFrom></inheritFrom> <name>template1</name> <instanceCap>2147483647</instanceCap> <idleMinutes>0</idleMinutes> <label>template1</label> <serviceAccount>jenkins</serviceAccount> <nodeSelector></nodeSelector> <volumes/> <containers> <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> <name>jnlp</name> <image>openshift/jenkins-agent-maven-35-centos7:v3.10</image> <privileged>false</privileged> <alwaysPullImage>true</alwaysPullImage> <workingDir>/tmp</workingDir> <command></command> <args>USD{computer.jnlpmac} USD{computer.name}</args> <ttyEnabled>false</ttyEnabled> <resourceRequestCpu></resourceRequestCpu> <resourceRequestMemory></resourceRequestMemory> <resourceLimitCpu></resourceLimitCpu> <resourceLimitMemory></resourceLimitMemory> <envVars/> </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> </containers> <envVars/> <annotations/> <imagePullSecrets/> <nodeProperties/> </org.csanchez.jenkins.plugins.kubernetes.PodTemplate>",
"kind: ConfigMap apiVersion: v1 metadata: name: jenkins-agent labels: role: jenkins-agent data: template2: |- <org.csanchez.jenkins.plugins.kubernetes.PodTemplate> <inheritFrom></inheritFrom> <name>template2</name> <instanceCap>2147483647</instanceCap> <idleMinutes>0</idleMinutes> <label>template2</label> <serviceAccount>jenkins</serviceAccount> <nodeSelector></nodeSelector> <volumes/> <containers> <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> <name>jnlp</name> <image>image-registry.openshift-image-registry.svc:5000/openshift/jenkins-agent-base-rhel8:latest</image> <privileged>false</privileged> <alwaysPullImage>true</alwaysPullImage> <workingDir>/home/jenkins/agent</workingDir> <command></command> <args>\\USD(JENKINS_SECRET) \\USD(JENKINS_NAME)</args> <ttyEnabled>false</ttyEnabled> <resourceRequestCpu></resourceRequestCpu> <resourceRequestMemory></resourceRequestMemory> <resourceLimitCpu></resourceLimitCpu> <resourceLimitMemory></resourceLimitMemory> <envVars/> </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> <org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> <name>java</name> <image>image-registry.openshift-image-registry.svc:5000/openshift/java:latest</image> <privileged>false</privileged> <alwaysPullImage>true</alwaysPullImage> <workingDir>/home/jenkins/agent</workingDir> <command>cat</command> <args></args> <ttyEnabled>true</ttyEnabled> <resourceRequestCpu></resourceRequestCpu> <resourceRequestMemory></resourceRequestMemory> <resourceLimitCpu></resourceLimitCpu> <resourceLimitMemory></resourceLimitMemory> <envVars/> </org.csanchez.jenkins.plugins.kubernetes.ContainerTemplate> </containers> <envVars/> <annotations/> <imagePullSecrets/> <nodeProperties/> </org.csanchez.jenkins.plugins.kubernetes.PodTemplate>",
"oc new-app jenkins-persistent",
"oc new-app jenkins-ephemeral",
"oc describe jenkins-ephemeral",
"kind: List apiVersion: v1 items: - kind: ImageStream apiVersion: image.openshift.io/v1 metadata: name: openshift-jee-sample - kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: openshift-jee-sample-docker spec: strategy: type: Docker source: type: Docker dockerfile: |- FROM openshift/wildfly-101-centos7:latest COPY ROOT.war /wildfly/standalone/deployments/ROOT.war CMD USDSTI_SCRIPTS_PATH/run binary: asFile: ROOT.war output: to: kind: ImageStreamTag name: openshift-jee-sample:latest - kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: openshift-jee-sample spec: strategy: type: JenkinsPipeline jenkinsPipelineStrategy: jenkinsfile: |- node(\"maven\") { sh \"git clone https://github.com/openshift/openshift-jee-sample.git .\" sh \"mvn -B -Popenshift package\" sh \"oc start-build -F openshift-jee-sample-docker --from-file=target/ROOT.war\" } triggers: - type: ConfigChange",
"kind: BuildConfig apiVersion: build.openshift.io/v1 metadata: name: openshift-jee-sample spec: strategy: type: JenkinsPipeline jenkinsPipelineStrategy: jenkinsfile: |- podTemplate(label: \"mypod\", 1 cloud: \"openshift\", 2 inheritFrom: \"maven\", 3 containers: [ containerTemplate(name: \"jnlp\", 4 image: \"openshift/jenkins-agent-maven-35-centos7:v3.10\", 5 resourceRequestMemory: \"512Mi\", 6 resourceLimitMemory: \"512Mi\", 7 envVars: [ envVar(key: \"CONTAINER_HEAP_PERCENT\", value: \"0.25\") 8 ]) ]) { node(\"mypod\") { 9 sh \"git clone https://github.com/openshift/openshift-jee-sample.git .\" sh \"mvn -B -Popenshift package\" sh \"oc start-build -F openshift-jee-sample-docker --from-file=target/ROOT.war\" } } triggers: - type: ConfigChange",
"def nodeLabel = 'java-buidler' pipeline { agent { kubernetes { cloud 'openshift' label nodeLabel yaml \"\"\" apiVersion: v1 kind: Pod metadata: labels: worker: USD{nodeLabel} spec: containers: - name: jnlp image: image-registry.openshift-image-registry.svc:5000/openshift/jenkins-agent-base-rhel8:latest args: ['\\USD(JENKINS_SECRET)', '\\USD(JENKINS_NAME)'] - name: java image: image-registry.openshift-image-registry.svc:5000/openshift/java:latest command: - cat tty: true \"\"\" } } options { timeout(time: 20, unit: 'MINUTES') } stages { stage('Build App') { steps { container(\"java\") { sh \"mvn --version\" } } } } }",
"docker pull registry.redhat.io/ocp-tools-4/jenkins-rhel8:<image_tag>",
"docker pull registry.redhat.io/ocp-tools-4/jenkins-agent-base-rhel8:<image_tag>",
"podTemplate(label: \"mypod\", cloud: \"openshift\", inheritFrom: \"maven\", podRetention: onFailure(), 1 containers: [ ]) { node(\"mypod\") { } }",
"pipeline { agent any stages { stage('Build') { steps { sh 'make' } } stage('Test'){ steps { sh 'make check' junit 'reports/**/*.xml' } } stage('Deploy') { steps { sh 'make publish' } } } }",
"apiVersion: tekton.dev/v1beta1 kind: Task metadata: name: myproject-build spec: workspaces: - name: source steps: - image: my-ci-image command: [\"make\"] workingDir: USD(workspaces.source.path)",
"apiVersion: tekton.dev/v1beta1 kind: Task metadata: name: myproject-test spec: workspaces: - name: source steps: - image: my-ci-image command: [\"make check\"] workingDir: USD(workspaces.source.path) - image: junit-report-image script: | #!/usr/bin/env bash junit-report reports/**/*.xml workingDir: USD(workspaces.source.path)",
"apiVersion: tekton.dev/v1beta1 kind: Task metadata: name: myprojectd-deploy spec: workspaces: - name: source steps: - image: my-deploy-image command: [\"make deploy\"] workingDir: USD(workspaces.source.path)",
"apiVersion: tekton.dev/v1beta1 kind: Pipeline metadata: name: myproject-pipeline spec: workspaces: - name: shared-dir tasks: - name: build taskRef: name: myproject-build workspaces: - name: source workspace: shared-dir - name: test taskRef: name: myproject-test workspaces: - name: source workspace: shared-dir - name: deploy taskRef: name: myproject-deploy workspaces: - name: source workspace: shared-dir",
"apiVersion: tekton.dev/v1beta1 kind: Pipeline metadata: name: demo-pipeline spec: params: - name: repo_url - name: revision workspaces: - name: source tasks: - name: fetch-from-git taskRef: name: git-clone params: - name: url value: USD(params.repo_url) - name: revision value: USD(params.revision) workspaces: - name: output workspace: source",
"apiVersion: tekton.dev/v1beta1 kind: Task metadata: name: maven-test spec: workspaces: - name: source steps: - image: my-maven-image command: [\"mvn test\"] workingDir: USD(workspaces.source.path)",
"steps: image: ubuntu script: | #!/usr/bin/env bash /workspace/my-script.sh",
"steps: image: python script: | #!/usr/bin/env python3 print(\"hello from python!\")",
"#!/usr/bin/groovy node('maven') { stage 'Checkout' checkout scm stage 'Build' sh 'cd helloworld && mvn clean' sh 'cd helloworld && mvn compile' stage 'Run Unit Tests' sh 'cd helloworld && mvn test' stage 'Package' sh 'cd helloworld && mvn package' stage 'Archive artifact' sh 'mkdir -p artifacts/deployments && cp helloworld/target/*.war artifacts/deployments' archive 'helloworld/target/*.war' stage 'Create Image' sh 'oc login https://kubernetes.default -u admin -p admin --insecure-skip-tls-verify=true' sh 'oc new-project helloworldproject' sh 'oc project helloworldproject' sh 'oc process -f helloworld/jboss-eap70-binary-build.json | oc create -f -' sh 'oc start-build eap-helloworld-app --from-dir=artifacts/' stage 'Deploy' sh 'oc new-app helloworld/jboss-eap70-deploy.json' }",
"apiVersion: tekton.dev/v1beta1 kind: Pipeline metadata: name: maven-pipeline spec: workspaces: - name: shared-workspace - name: maven-settings - name: kubeconfig-dir optional: true params: - name: repo-url - name: revision - name: context-path tasks: - name: fetch-repo taskRef: name: git-clone workspaces: - name: output workspace: shared-workspace params: - name: url value: \"USD(params.repo-url)\" - name: subdirectory value: \"\" - name: deleteExisting value: \"true\" - name: revision value: USD(params.revision) - name: mvn-build taskRef: name: maven runAfter: - fetch-repo workspaces: - name: source workspace: shared-workspace - name: maven-settings workspace: maven-settings params: - name: CONTEXT_DIR value: \"USD(params.context-path)\" - name: GOALS value: [\"-DskipTests\", \"clean\", \"compile\"] - name: mvn-tests taskRef: name: maven runAfter: - mvn-build workspaces: - name: source workspace: shared-workspace - name: maven-settings workspace: maven-settings params: - name: CONTEXT_DIR value: \"USD(params.context-path)\" - name: GOALS value: [\"test\"] - name: mvn-package taskRef: name: maven runAfter: - mvn-tests workspaces: - name: source workspace: shared-workspace - name: maven-settings workspace: maven-settings params: - name: CONTEXT_DIR value: \"USD(params.context-path)\" - name: GOALS value: [\"package\"] - name: create-image-and-deploy taskRef: name: openshift-client runAfter: - mvn-package workspaces: - name: manifest-dir workspace: shared-workspace - name: kubeconfig-dir workspace: kubeconfig-dir params: - name: SCRIPT value: | cd \"USD(params.context-path)\" mkdir -p ./artifacts/deployments && cp ./target/*.war ./artifacts/deployments oc new-project helloworldproject oc project helloworldproject oc process -f jboss-eap70-binary-build.json | oc create -f - oc start-build eap-helloworld-app --from-dir=artifacts/ oc new-app jboss-eap70-deploy.json",
"oc import-image jenkins-agent-nodejs -n openshift",
"oc import-image jenkins-agent-maven -n openshift",
"oc patch dc jenkins -p '{\"spec\":{\"triggers\":[{\"type\":\"ImageChange\",\"imageChangeParams\":{\"automatic\":true,\"containerNames\":[\"jenkins\"],\"from\":{\"kind\":\"ImageStreamTag\",\"namespace\":\"<namespace>\",\"name\":\"jenkins:<image_stream_tag>\"}}}]}}'"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html-single/jenkins/index |
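The Jenkins-to-Tekton migration examples above define Tasks and Pipelines but stop short of running them. The following sketch applies the maven-pipeline example and starts it with the tkn client; the manifest file name, PersistentVolumeClaim name, and parameter values are illustrative assumptions rather than part of the original example, and the OpenShift Pipelines Operator must already be installed to provide the tekton.dev APIs:

$ oc apply -f maven-pipeline.yaml
$ tkn pipeline start maven-pipeline \
    --workspace name=shared-workspace,claimName=maven-source-pvc \
    --workspace name=maven-settings,emptyDir="" \
    --param repo-url=https://github.com/openshift/openshift-jee-sample.git \
    --param revision=main \
    --param context-path=. \
    --showlog

The --showlog flag streams the task logs, which is roughly equivalent to following a Jenkins pipeline build console.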
Chapter 2. System requirements | Chapter 2. System requirements Use this information when planning your Red Hat Ansible Automation Platform installations and designing automation mesh topologies that fit your use case. Prerequisites You can obtain root access either through the sudo command, or through privilege escalation. For more on privilege escalation, see Understanding privilege escalation . You can de-escalate privileges from root to users such as: AWX, PostgreSQL, Event-Driven Ansible, or Pulp. You have configured an NTP client on all nodes. 2.1. Red Hat Ansible Automation Platform system requirements Your system must meet the following minimum system requirements to install and run Red Hat Ansible Automation Platform. A resilient deployment requires 10 virtual machines with a minimum of 16 gigabytes (GB) of RAM and 4 virtual CPUs (vCPU). See Tested deployment models for more information on topology options. Table 2.1. Base system Requirement Required Notes Subscription Valid Red Hat Ansible Automation Platform OS Red Hat Enterprise Linux 8.8 or later minor versions of Red Hat Enterprise Linux 8 (x86_64, aarch64). Red Hat Enterprise Linux 9.2 or later minor versions of Red Hat Enterprise Linux 9 (x86_64, aarch64). Red Hat Ansible Automation Platform are also supported on OpenShift, see Installing on OpenShift Container Platform for more information. Ansible-core Ansible-core version 2.16 or later Ansible Automation Platform uses the system-wide ansible-core package to install the platform, but uses ansible-core 2.16 for both its control plane and built-in execution environments. Database PostgreSQL 15 Red Hat Ansible Automation Platform 2.5 requires the external (customer supported) databases to have ICU support. Table 2.2. Virtual machine requirements Component RAM VCPU Storage Platform gateway 16GB 4 20GB minimum Control nodes 16GB 4 80GB minimum with at least 20GB available under /var/lib/awx Execution nodes 16GB 4 40GB minimum Hop nodes 16GB 4 40GB minimum Automation hub 16GB 4 40GB minimum allocated to /var/lib/pulp Database 16GB 4 100GB minimum allocated to /var/lib/pgsql Event-Driven Ansible controller 16GB 4 40GB minimum Note These are minimum requirements and can be increased for larger workloads in increments of 2x (for example 16GB becomes 32GB and 4 vCPU becomes 8vCPU). See the horizontal scaling guide for more information. Repository requirements Enable the following repositories only when installing Red Hat Ansible Automation Platform: RHEL BaseOS RHEL AppStream Note If you enable repositories besides those mentioned above, the Red Hat Ansible Automation Platform installation could fail unexpectedly. The following are necessary for you to work with project updates and collections: Ensure that the Network ports and protocols listed in Table 6.3. Automation Hub are available for successful connection and download of collections from automation hub or Ansible Galaxy server. Additional notes for Red Hat Ansible Automation Platform requirements If performing a bundled Ansible Automation Platform installation, the installation setup.sh script attempts to install ansible-core (and its dependencies) from the bundle for you. If you have installed Ansible-core manually, the Ansible Automation Platform installation setup.sh script detects that Ansible has been installed and does not attempt to reinstall it. Note You must use Ansible-core, which is installed via dnf. Ansible-core version 2.16 is required for versions 2.5 and later. 2.2. 
Platform gateway system requirements The platform gateway is the service that handles authentication and authorization for Ansible Automation Platform. It provides a single entry into the platform and serves the platform's user interface. You are required to set umask=0022 . 2.3. Automation controller system requirements Automation controller is a distributed system, where different software components can be co-located or deployed across multiple compute nodes. In the installer, four node types are provided as abstractions to help you design the topology appropriate for your use case: control, hybrid, execution, and hop nodes. Use the following recommendations for node sizing: Execution nodes Execution nodes run automation. Increase memory and CPU to increase capacity for running more forks. Note The RAM and CPU resources stated are minimum recommendations to handle the job load for a node to run an average number of jobs simultaneously. Recommended RAM and CPU node sizes are not supplied. The required RAM or CPU depends directly on the number of jobs you are running in that environment. For capacity based on forks in your configuration, see Automation controller capacity determination and job impact . For further information about required RAM and CPU levels, see Performance tuning for automation controller . Control nodes Control nodes process events and run cluster jobs including project updates and cleanup jobs. Increasing CPU and memory can help with job event processing. 40GB minimum with at least 20GB available under /var/lib/awx Storage volume must be rated for a minimum baseline of 1500 IOPS Projects are stored on control and hybrid nodes, and for the duration of jobs, are also stored on execution nodes. If the cluster has many large projects, consider doubling the GB in /var/lib/awx/projects, to avoid disk space errors. Hop nodes Hop nodes serve to route traffic from one part of the automation mesh to another (for example, a hop node could be a bastion host into another network). RAM can affect throughput, CPU activity is low. Network bandwidth and latency are generally a more important factor than either RAM or CPU. Actual RAM requirements vary based on how many hosts automation controller manages simultaneously (which is controlled by the forks parameter in the job template or the system ansible.cfg file). To avoid possible resource conflicts, Ansible recommends 1 GB of memory per 10 forks and 2 GB reservation for automation controller. See Automation controller capacity determination and job impact . If forks is set to 400, 42 GB of memory is recommended. Automation controller hosts check if umask is set to 0022. If not, the setup fails. Set umask=0022 to avoid this error. A larger number of hosts can be addressed, but if the fork number is less than the total host count, more passes across the hosts are required. You can avoid these RAM limitations by using any of the following approaches: Use rolling updates. Use the provisioning callback system built into automation controller, where each system requesting configuration enters a queue and is processed as quickly as possible. In cases where automation controller is producing or deploying images such as AMIs. Additional resources For more information about obtaining an automation controller subscription, see Attaching your Red Hat Ansible Automation Platform subscription . For questions, contact Ansible support through the Red Hat Customer Portal . 2.4. 
Automation hub system requirements Automation hub allows you to discover and use new certified automation content from Red Hat Ansible and Certified Partners. On Ansible automation hub, you can discover and manage Ansible Collections, which are supported automation content developed by Red Hat and its partners for use cases such as cloud automation, network automation, and security automation. Note Private automation hub If you install private automation hub from an internal address, and have a certificate which only encompasses the external address, this can result in an installation which cannot be used as container registry without certificate issues. To avoid this, use the automationhub_main_url inventory variable with a value such as https://pah.example.com linking to the private automation hub node in the installation inventory file. This adds the external address to /etc/pulp/settings.py . This implies that you only want to use the external address. For information about inventory file variables, see Inventory file variables . 2.4.1. High availability automation hub requirements Before deploying a high availability (HA) automation hub, ensure that you have a shared storage file system installed in your environment and that you have configured your network storage system, if applicable. 2.4.1.1. Required shared storage Shared storage is required when installing more than one Automation hub with a file storage backend. The supported shared storage type for RPM-based installations is Network File System (NFS). Before you run the Red Hat Ansible Automation Platform installer, verify that you installed the /var/lib/pulp directory across your cluster as part of the shared storage file system installation. The Red Hat Ansible Automation Platform installer returns an error if /var/lib/pulp is not detected in one of your nodes, causing your high availability automation hub setup to fail. If you receive an error stating /var/lib/pulp is not detected in one of your nodes, ensure /var/lib/pulp is properly mounted in all servers and re-run the installer. 2.4.1.2. Installing firewalld for HA hub deployment If you intend to install a HA automation hub using a network storage on the automation hub nodes itself, you must first install and use firewalld to open the necessary ports as required by your shared storage system before running the Ansible Automation Platform installer. Install and configure firewalld by executing the following commands: Install the firewalld daemon: USD dnf install firewalld Add your network storage under <service> using the following command: USD firewall-cmd --permanent --add-service=<service> Note For a list of supported services, use the USD firewall-cmd --get-services command Reload to apply the configuration: USD firewall-cmd --reload 2.5. Event-Driven Ansible controller system requirements The Event-Driven Ansible controller is a single-node system capable of handling a variable number of long-running processes (such as rulebook activations) on-demand, depending on the number of CPU cores. Note If you want to use Event-Driven Ansible 2.5 with a 2.4 automation controller version, see Using Event-Driven Ansible 2.5 with Ansible Automation Platform 2.4 . Use the following minimum requirements to run, by default, a maximum of 12 simultaneous activations: Requirement Required RAM 16 GB CPUs 4 Local disk Hard drive must be 40 GB minimum with at least 20 GB available under /var. Storage volume must be rated for a minimum baseline of 1500 IOPS. 
If the cluster has many large projects or decision environment images, consider doubling the GB in /var to avoid disk space errors. Important If you are running Red Hat Enterprise Linux 8 and want to set your memory limits, you must have cgroup v2 enabled before you install Event-Driven Ansible. For specific instructions, see the Knowledge-Centered Support (KCS) article, Ansible Automation Platform Event-Driven Ansible controller for Red Hat Enterprise Linux 8 requires cgroupv2 . When you activate an Event-Driven Ansible rulebook under standard conditions, it uses about 250 MB of memory. However, the actual memory consumption can vary significantly based on the complexity of your rules and the volume and size of the events processed. In scenarios where a large number of events are anticipated or the rulebook complexity is high, conduct a preliminary assessment of resource usage in a staging environment. This ensures that your maximum number of activations is based on the capacity of your resources. For an example of setting the Event-Driven Ansible controller maximum running activations, see Single automation controller, single automation hub, and single Event-Driven Ansible controller node with external (installer managed) database . 2.6. PostgreSQL requirements Red Hat Ansible Automation Platform 2.5 uses PostgreSQL 15 and requires the external (customer supported) databases to have ICU support. PostgreSQL user passwords are hashed with the SCRAM-SHA-256 secure hashing algorithm before they are stored in the database. To determine whether your automation controller instance has access to the database, run the awx-manage check_db command. Note Automation controller data is stored in the database. Database storage increases with the number of hosts managed, number of jobs run, number of facts stored in the fact cache, and number of tasks in any individual job. For example, a playbook that runs every hour (24 times a day) across 250 hosts, with 20 tasks, stores over 800,000 events in the database every week. If not enough space is reserved in the database, old job runs and facts must be cleaned up on a regular basis. For more information, see Management Jobs in the Configuring automation execution guide. PostgreSQL Configurations Optionally, you can configure the PostgreSQL database as separate nodes that are not managed by the Red Hat Ansible Automation Platform installer. When the Ansible Automation Platform installer manages the database server, it configures the server with defaults that are generally recommended for most workloads. For more information about the settings you can use to improve database performance, see PostgreSQL database configuration and maintenance for automation controller in the Configuring automation execution guide. Additional resources For more information about tuning your PostgreSQL server, see the PostgreSQL documentation . 2.6.1. Setting up an external (customer supported) database Important When using an external database with Ansible Automation Platform, you must create and maintain that database. Ensure that you clear your external database when uninstalling Ansible Automation Platform. Red Hat Ansible Automation Platform 2.5 uses PostgreSQL 15 and requires the external (customer supported) databases to have ICU support.
Use the following procedure to configure an external PostgreSQL compliant database for use with an Ansible Automation Platform component, for example automation controller, Event-Driven Ansible, automation hub, and platform gateway. Procedure Connect to a PostgreSQL compliant database server with superuser privileges. # psql -h <db.example.com> -U superuser -p 5432 -d postgres <Password for user superuser>: Where the default value for <hostname> is hostname : -h hostname --host=hostname Specify the hostname of the machine on which the server is running. If the value begins with a slash, it is used as the directory for the UNIX-domain socket. -d dbname --dbname=dbname Specify the name of the database to connect to. This is equal to specifying dbname as the first non-option argument on the command line. The dbname can be a connection string. If so, connection string parameters override any conflicting command line options. -U username --username=username Connect to the database as the user username instead of the default (you must have permission to do so). Create the user, database, and password with the createDB or administrator role assigned to the user. For further information, see Database Roles . Add the database credentials and host details to the installation program's inventory file under the [all:vars] group. Without mutual TLS (mTLS) authentication to the database Use the following inventory file snippet to configure each component's database without mTLS authentication. Uncomment the configuration you need. [all:vars] # Automation controller database variables # awx_install_pg_host=data.example.com # awx_install_pg_port=<port_number> # awx_install_pg_database=<database_name> # awx_install_pg_username=<username> # awx_install_pg_password=<password> # This is not required if you enable mTLS authentication to the database # pg_sslmode=prefer # Set to verify-ca or verify-full to enable mTLS authentication to the database # Event-Driven Ansible database variables # automationedacontroller_install_pg_host=data.example.com # automationedacontroller_install_pg_port=<port_number> # automationedacontroller_install_pg_database=<database_name> # automationedacontroller_install_pg_username=<username> # automationedacontroller_install_pg_password=<password> # This is not required if you enable mTLS authentication to the database # automationedacontroller_pg_sslmode=prefer # Set to verify-full to enable mTLS authentication to the database # Automation hub database variables # automationhub_pg_host=data.example.com # automationhub_pg_port=<port_number> # automationhub_pg_database=<database_name> # automationhub_pg_username=<username> # automationhub_pg_password=<password> # This is not required if you enable mTLS authentication to the database # automationhub_pg_sslmode=prefer # Set to verify-ca or verify-full to enable mTLS authentication to the database # Platform gateway database variables # automationgateway_install_pg_host=data.example.com # automationgateway_install_pg_port=<port_number> # automationgateway_install_pg_database=<database_name> # automationgateway_install_pg_username=<username> # automationgateway_install_pg_password=<password> # This is not required if you enable mTLS authentication to the database # automationgateway_pg_sslmode=prefer # Set to verify-ca or verify-full to enable mTLS authentication to the database With mTLS authentication to the database Use the following inventory file snippet to configure each component's database with mTLS authentication. 
Uncomment the configuration you need. [all:vars] # Automation controller database variables # awx_install_pg_host=data.example.com # awx_install_pg_port=<port_number> # awx_install_pg_database=<database_name> # awx_install_pg_username=<username> # pg_sslmode=verify-full # This can be either verify-ca or verify-full # pgclient_sslcert=/path/to/cert # Path to the certificate file # pgclient_sslkey=/path/to/key # Path to the key file # Event-Driven Ansible database variables # automationedacontroller_install_pg_host=data.example.com # automationedacontroller_install_pg_port=<port_number> # automationedacontroller_install_pg_database=<database_name> # automationedacontroller_install_pg_username=<username> # automationedacontroller_pg_sslmode=verify-full # EDA does not support verify-ca # automationedacontroller_pgclient_sslcert=/path/to/cert # Path to the certificate file # automationedacontroller_pgclient_sslkey=/path/to/key # Path to the key file # Automation hub database variables # automationhub_pg_host=data.example.com # automationhub_pg_port=<port_number> # automationhub_pg_database=<database_name> # automationhub_pg_username=<username> # automationhub_pg_sslmode=verify-full # This can be either verify-ca or verify-full # automationhub_pgclient_sslcert=/path/to/cert # Path to the certificate file # automationhub_pgclient_sslkey=/path/to/key # Path to the key file # Platform gateway database variables # automationgateway_install_pg_host=data.example.com # automationgateway_install_pg_port=<port_number> # automationgateway_install_pg_database=<database_name> # automationgateway_install_pg_username=<username> # automationgateway_pg_sslmode=verify-full # This can be either verify-ca or verify-full # automationgateway_pgclient_sslcert=/path/to/cert # Path to the certificate file # automationgateway_pgclient_sslkey=/path/to/key # Path to the key file Run the installation program. If you are using a PostgreSQL database, the database is owned by the connecting user and must have a createDB or administrator role assigned to it. Check that you can connect to the created database with the credentials provided in the inventory file. Check the permission of the user. The user should have the createDB or administrator role. Note During this procedure, you must check the External Database coverage. For further information, see https://access.redhat.com/articles/4010491 2.6.2. Enabling the hstore extension for the automation hub PostgreSQL database Added in Ansible Automation Platform 2.5, the database migration script uses hstore fields to store information, therefore the hstore extension must be enabled in the automation hub PostgreSQL database. This process is automatic when using the Ansible Automation Platform installer and a managed PostgreSQL server. If the PostgreSQL database is external, you must enable the hstore extension in the automation hub PostgreSQL database manually before installation. If the hstore extension is not enabled before installation, a failure raises during database migration. Procedure Check if the extension is available on the PostgreSQL server (automation hub database). USD psql -d <automation hub database> -c "SELECT * FROM pg_available_extensions WHERE name='hstore'" Where the default value for <automation hub database> is automationhub . 
Example output with hstore available : name | default_version | installed_version |comment ------+-----------------+-------------------+--------------------------------------------------- hstore | 1.7 | | data type for storing sets of (key, value) pairs (1 row) Example output with hstore not available : name | default_version | installed_version | comment ------+-----------------+-------------------+--------- (0 rows) On a RHEL based server, the hstore extension is included in the postgresql-contrib RPM package, which is not installed automatically when installing the PostgreSQL server RPM package. To install the RPM package, use the following command: dnf install postgresql-contrib Load the hstore PostgreSQL extension into the automation hub database with the following command: USD psql -d <automation hub database> -c "CREATE EXTENSION hstore;" In the following output, the installed_version field lists the hstore extension used, indicating that hstore is enabled. name | default_version | installed_version | comment -----+-----------------+-------------------+------------------------------------------------------ hstore | 1.7 | 1.7 | data type for storing sets of (key, value) pairs (1 row) 2.6.3. Benchmarking storage performance for the Ansible Automation Platform PostgreSQL database Check whether the minimum Ansible Automation Platform PostgreSQL database requirements are met by using the Flexible I/O Tester (FIO) tool. FIO is a tool used to benchmark read and write IOPS performance of the storage system. Prerequisites You have installed the Flexible I/O Tester ( fio ) storage performance benchmarking tool. To install fio , run the following command as the root user: # yum -y install fio You have adequate disk space to store the fio test data log files. The examples shown in the procedure require at least 60GB disk space in the /tmp directory: numjobs sets the number of jobs run by the command. size=10G sets the file size generated by each job. You have adjusted the value of the size parameter. Adjusting this value reduces the amount of test data. Procedure Run a random write test: USD fio --name=write_iops --directory=/tmp --numjobs=3 --size=10G \ --time_based --runtime=60s --ramp_time=2s --ioengine=libaio --direct=1 \ --verify=0 --bs=4K --iodepth=64 --rw=randwrite \ --group_reporting=1 > /tmp/fio_benchmark_write_iops.log \ 2>> /tmp/fio_write_iops_error.log Run a random read test: USD fio --name=read_iops --directory=/tmp \ --numjobs=3 --size=10G --time_based --runtime=60s --ramp_time=2s \ --ioengine=libaio --direct=1 --verify=0 --bs=4K --iodepth=64 --rw=randread \ --group_reporting=1 > /tmp/fio_benchmark_read_iops.log \ 2>> /tmp/fio_read_iops_error.log Review the results: In the log files written by the benchmark commands, search for the line beginning with iops . This line shows the minimum, maximum, and average values for the test. The following example shows the line in the log file for the random read test: USD cat /tmp/fio_benchmark_read_iops.log read_iops: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64 [...] iops : min=50879, max=61603, avg=56221.33, stdev=679.97, samples=360 [...] Note The above is a baseline to help evaluate the best case performance on your systems. Systems can and will change and performance may vary depending on what else is happening on your systems, storage or network at the time of testing. 
You must review, monitor, and revisit the log files according to your own business requirements, application workloads, and new demands. | [
"dnf install firewalld",
"firewall-cmd --permanent --add-service=<service>",
"firewall-cmd --reload",
"psql -h <db.example.com> -U superuser -p 5432 -d postgres <Password for user superuser>:",
"-h hostname --host=hostname",
"-d dbname --dbname=dbname",
"-U username --username=username",
"Automation controller database variables awx_install_pg_host=data.example.com awx_install_pg_port=<port_number> awx_install_pg_database=<database_name> awx_install_pg_username=<username> awx_install_pg_password=<password> # This is not required if you enable mTLS authentication to the database pg_sslmode=prefer # Set to verify-ca or verify-full to enable mTLS authentication to the database Event-Driven Ansible database variables automationedacontroller_install_pg_host=data.example.com automationedacontroller_install_pg_port=<port_number> automationedacontroller_install_pg_database=<database_name> automationedacontroller_install_pg_username=<username> automationedacontroller_install_pg_password=<password> # This is not required if you enable mTLS authentication to the database automationedacontroller_pg_sslmode=prefer # Set to verify-full to enable mTLS authentication to the database Automation hub database variables automationhub_pg_host=data.example.com automationhub_pg_port=<port_number> automationhub_pg_database=<database_name> automationhub_pg_username=<username> automationhub_pg_password=<password> # This is not required if you enable mTLS authentication to the database automationhub_pg_sslmode=prefer # Set to verify-ca or verify-full to enable mTLS authentication to the database Platform gateway database variables automationgateway_install_pg_host=data.example.com automationgateway_install_pg_port=<port_number> automationgateway_install_pg_database=<database_name> automationgateway_install_pg_username=<username> automationgateway_install_pg_password=<password> # This is not required if you enable mTLS authentication to the database automationgateway_pg_sslmode=prefer # Set to verify-ca or verify-full to enable mTLS authentication to the database",
"Automation controller database variables awx_install_pg_host=data.example.com awx_install_pg_port=<port_number> awx_install_pg_database=<database_name> awx_install_pg_username=<username> pg_sslmode=verify-full # This can be either verify-ca or verify-full pgclient_sslcert=/path/to/cert # Path to the certificate file pgclient_sslkey=/path/to/key # Path to the key file Event-Driven Ansible database variables automationedacontroller_install_pg_host=data.example.com automationedacontroller_install_pg_port=<port_number> automationedacontroller_install_pg_database=<database_name> automationedacontroller_install_pg_username=<username> automationedacontroller_pg_sslmode=verify-full # EDA does not support verify-ca automationedacontroller_pgclient_sslcert=/path/to/cert # Path to the certificate file automationedacontroller_pgclient_sslkey=/path/to/key # Path to the key file Automation hub database variables automationhub_pg_host=data.example.com automationhub_pg_port=<port_number> automationhub_pg_database=<database_name> automationhub_pg_username=<username> automationhub_pg_sslmode=verify-full # This can be either verify-ca or verify-full automationhub_pgclient_sslcert=/path/to/cert # Path to the certificate file automationhub_pgclient_sslkey=/path/to/key # Path to the key file Platform gateway database variables automationgateway_install_pg_host=data.example.com automationgateway_install_pg_port=<port_number> automationgateway_install_pg_database=<database_name> automationgateway_install_pg_username=<username> automationgateway_pg_sslmode=verify-full # This can be either verify-ca or verify-full automationgateway_pgclient_sslcert=/path/to/cert # Path to the certificate file automationgateway_pgclient_sslkey=/path/to/key # Path to the key file",
"psql -d <automation hub database> -c \"SELECT * FROM pg_available_extensions WHERE name='hstore'\"",
"name | default_version | installed_version |comment ------+-----------------+-------------------+--------------------------------------------------- hstore | 1.7 | | data type for storing sets of (key, value) pairs (1 row)",
"name | default_version | installed_version | comment ------+-----------------+-------------------+--------- (0 rows)",
"dnf install postgresql-contrib",
"psql -d <automation hub database> -c \"CREATE EXTENSION hstore;\"",
"name | default_version | installed_version | comment -----+-----------------+-------------------+------------------------------------------------------ hstore | 1.7 | 1.7 | data type for storing sets of (key, value) pairs (1 row)",
"yum -y install fio",
"fio --name=write_iops --directory=/tmp --numjobs=3 --size=10G --time_based --runtime=60s --ramp_time=2s --ioengine=libaio --direct=1 --verify=0 --bs=4K --iodepth=64 --rw=randwrite --group_reporting=1 > /tmp/fio_benchmark_write_iops.log 2>> /tmp/fio_write_iops_error.log",
"fio --name=read_iops --directory=/tmp --numjobs=3 --size=10G --time_based --runtime=60s --ramp_time=2s --ioengine=libaio --direct=1 --verify=0 --bs=4K --iodepth=64 --rw=randread --group_reporting=1 > /tmp/fio_benchmark_read_iops.log 2>> /tmp/fio_read_iops_error.log",
"cat /tmp/fio_benchmark_read_iops.log read_iops: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=64 [...] iops : min=50879, max=61603, avg=56221.33, stdev=679.97, samples=360 [...]"
]
| https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/rpm_installation/platform-system-requirements |
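The external database procedure above instructs you to create the user, database, and password with the createDB role but does not show the SQL itself. The following is a minimal sketch, assuming an illustrative user and database both named awx; substitute your own names and a strong password, and repeat the CREATE statements for each component (automation controller, automation hub, Event-Driven Ansible, platform gateway) that uses this server:

$ psql -h db.example.com -p 5432 -U postgres
postgres=# CREATE USER awx WITH PASSWORD '<password>' CREATEDB;
postgres=# CREATE DATABASE awx OWNER awx;
postgres=# SELECT count(*) FROM pg_collation WHERE collprovider = 'i';

The last query is one quick way to check for ICU support: on a PostgreSQL 15 server built with ICU it returns a non-zero count, which satisfies the ICU requirement stated in the PostgreSQL requirements section.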
function::return_str | function::return_str Name function::return_str - Formats the return value as a string Synopsis Arguments format Variable to determine return type base value ret Return value (typically $return ) Description This function is used by the syscall tapset, and returns a string. Set format equal to 1 for a decimal, 2 for hex, 3 for octal. Note that this function is preferred over returnstr . | [
"return_str:string(format:long,ret:long)"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-return-str |
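A brief usage sketch for the function documented above; the probe point is an illustrative choice, and any return probe where $return is available works the same way:

# stap -e 'probe kernel.function("vfs_read").return { printf("vfs_read returned %s\n", return_str(2, $return)); exit() }'

Here the format argument 2 requests hexadecimal output; use 1 for decimal or 3 for octal, as described in the Arguments section.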
28.4. Configuring ABRT | 28.4. Configuring ABRT A problem life cycle is driven by events in ABRT . For example: Event 1 - a problem data directory is created. Event 2 - problem data is analyzed. Event 3 - a problem is reported to Bugzilla. When a problem is detected and its defining data is stored, the problem is processed by running events on the problem's data directory. For more information on events and how to define one, see Section 28.4.1, "ABRT Events" . Standard ABRT installation currently supports several default events that can be selected and used during the problem reporting process. See Section 28.4.2, "Standard ABRT Installation Supported Events" to see the list of these events. Upon installation, ABRT and libreport place their respective configuration files into several directories on a system: /etc/libreport/ - contains the report_event.conf main configuration file. More information about this configuration file can be found in Section 28.4.1, "ABRT Events" . /etc/libreport/events/ - holds files specifying the default setting of predefined events. /etc/libreport/events.d/ - keeps configuration files defining events. /etc/libreport/plugins/ - contains configuration files of programs that take part in events. /etc/abrt/ - holds ABRT specific configuration files used to modify the behavior of ABRT 's services and programs. More information about certain specific configuration files can be found in Section 28.4.4, "ABRT Specific Configuration" . /etc/abrt/plugins/ - keeps configuration files used to override the default setting of ABRT 's services and programs. For more information on some specific configuration files see Section 28.4.4, "ABRT Specific Configuration" . 28.4.1. ABRT Events Each event is defined by one rule structure in a respective configuration file. The configuration files are typically stored in the /etc/libreport/events.d/ directory. These configuration files are used by the main configuration file, /etc/libreport/report_event.conf . The /etc/libreport/report_event.conf file consists of include directives and rules . Rules are typically stored in other configuration files in the /etc/libreport/events.d/ directory. In the standard installation, the /etc/libreport/report_event.conf file contains only one include directive: If you would like to modify this file, note that it respects shell metacharacters (*, $, ?, etc.) and interprets relative paths relative to its own location. Each rule starts with a line with a non-space leading character; all subsequent lines starting with the space character or the tab character are considered a part of this rule. Each rule consists of two parts, a condition part and a program part. The condition part contains conditions in one of the following forms: VAR = VAL , VAR != VAL , or VAR ~= REGEX ...where: VAR is either the EVENT key word or a name of a problem data directory element (such as executable , package , hostname , etc.), VAL is either a name of an event or a problem data element, and REGEX is a regular expression. The program part consists of program names and shell interpretable code. If all conditions in the condition part are valid, the program part is run in the shell. The following is an event example: EVENT=post-create date > /tmp/dt echo $HOSTNAME `uname -r` This event would overwrite the contents of the /tmp/dt file with the current date and time, and print the host name of the machine and its kernel version on the standard output.
Here is an example of a yet more complex event which is actually one of the predefined events. It saves relevant lines from the ~/.xsession-errors file to the problem report for any problem for which the abrt-ccpp services has been used to process that problem, and the crashed application has loaded any X11 libraries at the time of crash: EVENT=analyze_xsession_errors analyzer=CCpp dso_list~=.*/libX11.* test -f ~/.xsession-errors || { echo "No ~/.xsession-errors"; exit 1; } test -r ~/.xsession-errors || { echo "Can't read ~/.xsession-errors"; exit 1; } executable=`cat executable` && base_executable=USD{executable##*/} && grep -F -e "USDbase_executable" ~/.xsession-errors | tail -999 >xsession_errors && echo "Element 'xsession_errors' saved" The set of possible events is not hard-set. System administrators can add events according to their need. Currently, the following event names are provided with standard ABRT and libreport installation: post-create This event is run by abrtd on newly created problem data directories. When the post-create event is run, abrtd checks whether the UUID identifier of the new problem data matches the UUID of any already existing problem directories. If such a problem directory exists, the new problem data is deleted. analyze_ name_suffix ...where name_suffix is the adjustable part of the event name. This event is used to process collected data. For example, the analyze_LocalGDB runs the GNU Debugger ( GDB ) utility on a core dump of an application and produces a backtrace of a program. You can view the list of analyze events and choose from it using abrt-gui . collect_ name_suffix ...where name_suffix is the adjustable part of the event name. This event is used to collect additional information on a problem. You can view the list of collect events and choose from it using abrt-gui . report_ name_suffix ...where name_suffix is the adjustable part of the event name. This event is used to report a problem. You can view the list of report events and choose from it using abrt-gui . Additional information about events (such as their description, names and types of parameters which can be passed to them as environment variables, and other properties) is stored in the /etc/libreport/events/ event_name .xml files. These files are used by abrt-gui and abrt-cli to make the user interface more friendly. Do not edit these files unless you want to modify the standard installation. | [
"include events.d/*.conf",
"EVENT=post-create date > /tmp/dt echo USDHOSTNAME `uname -r`",
"EVENT=analyze_xsession_errors analyzer=CCpp dso_list~=.*/libX11.* test -f ~/.xsession-errors || { echo \"No ~/.xsession-errors\"; exit 1; } test -r ~/.xsession-errors || { echo \"Can't read ~/.xsession-errors\"; exit 1; } executable=`cat executable` && base_executable=USD{executable##*/} && grep -F -e \"USDbase_executable\" ~/.xsession-errors | tail -999 >xsession_errors && echo \"Element 'xsession_errors' saved\""
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sect-abrt-configuration |
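As a hedged illustration of the rule format described in this section, the following custom event could be placed in a new file such as /etc/libreport/events.d/uptime_event.conf ; the file name, event name, and element name are arbitrary choices for this sketch. It saves the system uptime into the problem data whenever the crashing package name matches kernel:

EVENT=collect_uptime package~=kernel.*
        uptime > uptime_info &&
        echo "Element 'uptime_info' saved"

The program part runs with the problem's data directory as its working directory, which is how the predefined example above can read the executable element and save xsession_errors; the uptime_info file therefore becomes a new element of that problem.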
Chapter 2. Working with sysctl and kernel tunables | Chapter 2. Working with sysctl and kernel tunables 2.1. What is a kernel tunable? Kernel tunables are used to customize the behavior of Red Hat Enterprise Linux at boot, or on demand while the system is running. Some hardware parameters can be specified at boot time only and cannot be altered once the system is running; most, however, can be altered as required on a running system and made persistent for subsequent boots. 2.2. How to work with kernel tunables There are three ways to modify kernel tunables. Using the sysctl command By manually modifying configuration files in the /etc/sysctl.d/ directory Through a shell, interacting with the virtual file system mounted at /proc/sys Note Not all boot time parameters are under control of the sysfs subsystem; some hardware-specific options must be set on the kernel command line. The Kernel Parameters section of this guide addresses those options. 2.2.1. Using the sysctl command The sysctl command is used to list, read, and set kernel tunables. It can filter tunables when listing or reading and set tunables temporarily or permanently. Listing variables Reading variables Writing variables temporarily Writing variables permanently 2.2.2. Modifying files in /etc/sysctl.d To override a default at boot, you can also manually populate files in /etc/sysctl.d . Create a new file in /etc/sysctl.d Include the variables you wish to set, one per line, in the following form Save the file Either reboot the machine to make the changes take effect or Execute sysctl -p /etc/sysctl.d/99-custom.conf to apply the changes without rebooting 2.3. What tunables can be controlled? Tunables are divided into groups by kernel subsystem. A Red Hat Enterprise Linux system has the following classes of tunables: Table 2.1. Table of sysctl interfaces Class Subsystem abi Execution domains and personalities crypto Cryptographic interfaces debug Kernel debugging interfaces dev Device specific information fs Global and specific filesystem tunables kernel Global kernel tunables net Network tunables sunrpc Sun Remote Procedure Call (NFS) user User Namespace limits vm Tuning and management of memory, buffer, and cache 2.3.1. Network interface tunables System administrators are able to adjust the network configuration on a running system through the networking tunables. Networking tunables are included in the /proc/sys/net directory, which contains multiple subdirectories for various networking topics. To adjust the network configuration, system administrators need to modify the files within such subdirectories. The most frequently used directories are: /proc/sys/net/core/ /proc/sys/net/ipv4/ The /proc/sys/net/core/ directory contains a variety of settings that control the interaction between the kernel and networking layers. By adjusting some of those tunables, you can improve the performance of a system, for example by increasing the size of a receive queue, increasing the maximum number of connections, or increasing the memory dedicated to network interfaces. Note that the benefit of any particular setting depends on your workload and the specific issue you are addressing. The /proc/sys/net/ipv4/ directory contains additional networking settings, which are useful when preventing attacks on the system or when using the system to act as a router. The directory contains both IP and TCP variables. For a detailed explanation of those variables, see /usr/share/doc/kernel-doc-<version>/Documentation/networking/ip-sysctl.txt .
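As a quick illustration (the tunable names below are common examples, not a recommended configuration), you can browse these settings either through the /proc interface or with the equivalent dotted sysctl names:

# ls /proc/sys/net/core /proc/sys/net/ipv4
# cat /proc/sys/net/ipv4/conf/all/rp_filter
# sysctl net.ipv4.conf.all.rp_filter net.core.somaxconn

Both forms address the same kernel values; a write to either takes effect immediately but does not persist across a reboot unless it is also recorded in a file under /etc/sysctl.d/ as described in Section 2.2.2.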
Other directories within the /proc/sys/net/ipv4/ directory cover different aspects of the network stack: /proc/sys/net/ipv4/conf/ - allows you to configure each system interface in different ways, including the use of default settings for unconfigured devices and settings that override all special configurations /proc/sys/net/ipv4/neigh/ - contains settings for communicating with a host directly connected to the system and also contains different settings for systems more than one step away /proc/sys/net/ipv4/route/ - contains specifications that apply to routing with any interfaces on the system This list of network tunables is relevant to IPv4 interfaces and are accessible from the /proc/sys/net/ipv4/{all,<interface_name>}/ directory. Description of the following parameters have been adopted from the kernel documentation sites. [1] log_martians Log packets with impossible addresses to kernel log. Type Default Boolean 0 Enabled if one or more of conf/{all,interface}/log_martians is set to TRUE Further Resources What is the kernel parameter net.ipv4.conf.all.log_martians for? Why do I see "martian source" logs in the messages file ? accept_redirects Accept ICMP redirect messages. Type Default Boolean 1 accept_redirects for the interface is enabled under the following conditions: Both conf/{all,interface}/accept_redirects are TRUE (when forwarding for the interface is enabled) At least one of conf/{all,interface}/accept_redirects is TRUE (forwarding for the interface is disabled) For more information refer to How to enable or disable ICMP redirects forwarding Enable IP forwarding on an interface. Type Default Boolean 0 Further Resources Turning on Packet Forwarding and Nonlocal Binding mc_forwarding Do multicast routing. Type Default Boolean 0 Read only value A multicast routing daemon is required. conf/all/mc_forwarding must also be set to TRUE to enable multicast routing for the interface Further Resources For an explanation of the read only behavior, see Why system reports "permission denied on key" while setting the kernel parameter "net.ipv4.conf.all.mc_forwarding"? medium_id Arbitrary value used to differentiate the devices by the medium they are attached to. Type Default Integer 0 Notes Two devices on the same medium can have different id values when the broadcast packets are received only on one of them. The default value 0 means that the device is the only interface to its medium value of -1 means that medium is not known. Currently, it is used to change the proxy_arp behavior: the proxy_arp feature is enabled for packets forwarded between two devices attached to different media. Further Resources - For examples, see Using the "medium_id" feature in Linux 2.2 and 2.4 proxy_arp Do proxy arp. Type Default Boolean 0 proxy_arp for the interface is enabled if at least one of conf/{all,interface}/proxy_arp is set to TRUE, otherwise it is disabled proxy_arp_pvlan Private VLAN proxy arp. Type Default Boolean 0 Allow proxy arp replies back to the same interface, to support features like RFC 3069 shared_media Send(router) or accept(host) RFC1620 shared media redirects. Type Default Boolean 1 Notes Overrides secure_redirects. shared_media for the interface is enabled if at least one of conf/{all,interface}/shared_media is set to TRUE secure_redirects Accept ICMP redirect messages only to gateways listed in the interface's current gateway list. Type Default Boolean 1 Notes Even if disabled, RFC1122 redirect rules still apply. Overridden by shared_media. 
secure_redirects for the interface is enabled if at least one of conf/{all,interface}/secure_redirects is set to TRUE send_redirects Send redirects, if router. Type Default Boolean 1 Notes send_redirects for the interface is enabled if at least one of conf/{all,interface}/send_redirects is set to TRUE bootp_relay Accept packets with source address 0.b.c.d destined not to this host as local ones. Type Default Boolean 0 Notes A BOOTP daemon must be enabled to manage these packets conf/all/bootp_relay must also be set to TRUE to enable BOOTP relay for the interface Not implemented, see DHCP Relay Agent in the Red Hat Enterprise Linux Networking Guide accept_source_route Accept packets with SRR option. Type Default Boolean 1 Notes conf/all/accept_source_route must also be set to TRUE to accept packets with SRR option on the interface accept_local Accept packets with local source addresses. Type Default Boolean 0 Notes In combination with suitable routing, this can be used to direct packets between two local interfaces over the wire and have them accepted properly. rp_filter must be set to a non-zero value in order for accept_local to have an effect. route_localnet Do not consider loopback addresses as martian source or destination while routing. Type Default Boolean 0 Notes This enables the use of 127/8 for local routing purposes. rp_filter Enable source Validation Type Default Integer 0 Value Effect 0 No source validation. 1 Strict mode as defined in RFC3704 Strict Reverse Path 2 Loose mode as defined in RFC3704 Loose Reverse Path Notes Current recommended practice in RFC3704 is to enable strict mode to prevent IP spoofing from DDos attacks. If using asymmetric routing or other complicated routing, then loose mode is recommended. The highest value from conf/{all,interface}/rp_filter is used when doing source validation on the {interface} arp_filter Type Default Boolean 0 Value Effect 0 (default) The kernel can respond to arp requests with addresses from other interfaces. It usually makes sense, because it increases the chance of successful communication. 1 Allows you to have multiple network interfaces on the samesubnet, and have the ARPs for each interface be answered based on whether or not the kernel would route a packet from the ARP'd IP out that interface (therefore you must use source based routing for this to work). In other words it allows control of cards (usually 1) that respond to an arp request. Note IP addresses are owned by the complete host on Linux, not by particular interfaces. Only for more complex setups like load-balancing, does this behavior cause problems. arp_filter for the interface is enabled if at least one of conf/{all,interface}/arp_filter is set to TRUE arp_announce Define different restriction levels for announcing the local source IP address from IP packets in ARP requests sent on interface Type Default Integer 0 Value Effect 0 (default) Use any local address, configured on any interface 1 Try to avoid local addresses that are not in the target's subnet for this interface. This mode is useful when target hosts reachable via this interface require the source IP address in ARP requests to be part of their logical network configured on the receiving interface. When we generate the request we check all our subnets that include the target IP and preserve the source address if it is from such subnet. If there is no such subnet we select source address according to the rules for level 2. 2 Always use the best local address for this target. 
In this mode we ignore the source address in the IP packet and try to select local address that we prefer for talks with the target host. Such local address is selected by looking for primary IP addresses on all our subnets on the outgoing interface that include the target IP address. If no suitable local address is found we select the first local address we have on the outgoing interface or on all other interfaces, with the hope we receive reply for our request and even sometimes no matter the source IP address we announce. Notes The highest value from conf/{all,interface}/arp_announce is used. Increasing the restriction level gives more chance for receiving answer from the resolved target while decreasing the level announces more valid sender's information. arp_ignore Define different modes for sending replies in response to received ARP requests that resolve local target IP addresses Type Default Integer 0 Value Effect 0 (default): reply for any local target IP address, configured on any interface 1 reply only if the target IP address is local address configured on the incoming interface 2 reply only if the target IP address is local address configured on the incoming interface and both with the sender's IP address are part from same subnet on this interface 3 do not reply for local addresses configured with scope host, only resolutions for global and link addresses are replied 4-7 reserved 8 do not reply for all local addresses The max value from conf/{all,interface}/arp_ignore is used when ARP request is received on the {interface} Notes arp_notify Define mode for notification of address and device changes. Type Default Boolean 0 Value Effect 0 do nothing 1 Generate gratuitous arp requests when device is brought up or hardware address changes. Notes arp_accept Define behavior for gratuitous ARP frames who's IP is not already present in the ARP table Type Default Boolean 0 Value Effect 0 do not create new entries in the ARP table 1 create new entries in the ARP table. Notes Both replies and requests type gratuitous arp trigger the ARP table to be updated, if this setting is on. If the ARP table already contains the IP address of the gratuitous arp frame, the arp table is updated regardless if this setting is on or off. app_solicit The maximum number of probes to send to the user space ARP daemon via netlink before dropping back to multicast probes (see mcast_solicit). Type Default Integer 0 Notes See mcast_solicit disable_policy Disable IPSEC policy (SPD) for this interface Type Default Boolean 0 needinfo disable_xfrm Disable IPSEC encryption on this interface, whatever the policy Type Default Boolean 0 needinfo igmpv2_unsolicited_report_interval The interval in milliseconds in which the unsolicited IGMPv1 or IGMPv2 report retransmit takes place. Type Default Integer 10000 Notes Milliseconds igmpv3_unsolicited_report_interval The interval in milliseconds in which the unsolicited IGMPv3 report retransmit takes place. Type Default Integer 1000 Notes Milliseconds tag Allows you to write a number, which can be used as required. Type Default Integer 0 xfrm4_gc_thresh The threshold at which we start garbage collecting for IPv4 destination cache entries. Type Default Integer 1 Notes At twice this value the system refuses new allocations. 2.3.2. Global kernel tunables System administrators are able to configure and monitor general settings on a running system through the global kernel tunables. 
Global kernel tunables are included in the /proc/sys/kernel/ directory either directly as named control files or grouped in further subdirectories for various configuration topics. To adjust the global kernel tunables, system administrators need to modify the control files. Descriptions of the following parameters have been adopted from the kernel documentation sites. [2] dmesg_restrict Indicates whether unprivileged users are prevented from using the dmesg command to view messages from the kernel's log buffer. For further information, see Kernel sysctl documentation . core_pattern Specifies a core dumpfile pattern name. Max length Default 128 characters "core" For further information, see Kernel sysctl documentation . hardlockup_panic Controls the kernel panic when a hard lockup is detected. Type Value Effect Integer 0 kernel does not panic on hard lockup Integer 1 kernel panics on hard lockup In order to panic, the system needs to detect a hard lockup first. The detection is controlled by the nmi_watchdog parameter. Further Resources Kernel sysctl documentation Softlockup detector and hardlockup detector softlockup_panic Controls the kernel panic when a soft lockup is detected. Type Value Effect Integer 0 kernel does not panic on soft lockup Integer 1 kernel panics on soft lockup By default, on RHEL7 this value is 0. For more information about softlockup_panic , see kernel_parameters . kptr_restrict Indicates whether restrictions are placed on exposing kernel addresses via /proc and other interfaces. Type Default Integer 0 Value Effect 0 hashes the kernel address before printing 1 replaces printed kernel pointers with 0's under certain conditions 2 replaces printed kernel pointers with 0's unconditionally To learn more, see Kernel sysctl documentation . nmi_watchdog Controls the hard lockup detector on x86 systems. Type Default Integer 0 Value Effect 0 disables the lockup detector 1 enables the lockup detector The hard lockup detector monitors each CPU for its ability to respond to interrupts. For more details, see Kernel sysctl documentation . watchdog_thresh Controls frequency of watchdog hrtimer , NMI events, and soft/hard lockup thresholds. Default threshold Soft lockup threshold 10 seconds 2 * watchdog_thresh Setting this tunable to zero disables lockup detection altogether. For more info, consult Kernel sysctl documentation . panic, panic_on_oops, panic_on_stackoverflow, panic_on_unrecovered_nmi, panic_on_warn, panic_on_rcu_stall, hung_task_panic These tunables specify under what circumstances the kernel should panic. To see more details about a group of panic parameters, see Kernel sysctl documentation . printk, printk_delay, printk_ratelimit, printk_ratelimit_burst, printk_devkmsg These tunables control logging or printing of kernel error messages. For more details about a group of printk parameters, see Kernel sysctl documentation . shmall, shmmax, shm_rmid_forced These tunables control limits for shared memory. For more information about a group of shm parameters, see Kernel sysctl documentation . threads-max Controls the maximum number of threads created by the fork() system call. Min value Max value 20 Given by FUTEX_TID_MASK (0x3fffffff) The threads-max value is checked against the available RAM pages. If the thread structures occupy too much of the available RAM pages, threads-max is reduced accordingly. For more details, see Kernel sysctl documentation . pid_max PID allocation wrap value. To see more information, refer to Kernel sysctl documentation . 
numa_balancing This parameter enables or disables automatic NUMA memory balancing. On NUMA machines, there is a performance penalty if remote memory is accessed by a CPU. For more details, see Kernel sysctl documentation . numa_balancing_scan_period_min_ms, numa_balancing_scan_delay_ms, numa_balancing_scan_period_max_ms, numa_balancing_scan_size_mb These tunables detect if pages are properly placed or if the data should be migrated to a memory node local to where the task is running. For more details about a group of numa_balancing_scan parameters, see Kernel sysctl documentation . [1] https://www.kernel.org/doc/Documentation/ [2] https://www.kernel.org/doc/Documentation/ | [
"sysctl -a",
"sysctl kernel.version kernel.version = #1 SMP Fri Jan 19 13:19:54 UTC 2018",
"sysctl <tunable class>.<tunable>=<value>",
"sysctl -w <tunable class>.<tunable>=<value> >> /etc/sysctl.conf",
"vim /etc/sysctl.d/99-custom.conf",
"<tunable class>.<tunable> = <value> + <tunable class>.<tunable> = <value>"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/kernel_administration_guide/working_with_sysctl_and_kernel_tunables |
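As a quick, practical illustration of the IPv4 tunables described above, the standard sysctl interface can be used to read and set them. The following is a minimal sketch: the keys are the usual net.ipv4 paths, while the chosen values and the drop-in file name are only examples.

# Inspect the current reverse-path filtering and martian-logging settings
sysctl net.ipv4.conf.all.rp_filter net.ipv4.conf.all.log_martians

# Enable strict reverse-path filtering and martian logging at runtime
sysctl -w net.ipv4.conf.all.rp_filter=1
sysctl -w net.ipv4.conf.all.log_martians=1

# Persist the setting across reboots and load it immediately
echo "net.ipv4.conf.all.rp_filter = 1" >> /etc/sysctl.d/99-custom.conf
sysctl -p /etc/sysctl.d/99-custom.conf

Per-interface variants of the same keys live under net.ipv4.conf.<interface_name>, and for several tunables (for example rp_filter and log_martians) the effective behavior combines the all and per-interface values as described above.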
20.23. Adding Multifunction PCI Devices to KVM Guest Virtual Machines | 20.23. Adding Multifunction PCI Devices to KVM Guest Virtual Machines To add a multi-function PCI device to a KVM guest virtual machine: Run the virsh edit guestname command to edit the XML configuration file for the guest virtual machine. In the <address> element, add a multifunction='on' attribute. This enables the use of other functions for the particular multifunction PCI device. For a PCI device with two functions, amend the XML configuration file to include a second device with the same slot number as the first device and a different function number, such as function='0x1' . For Example: Run the lspci command. The output from the KVM guest virtual machine shows the virtio block device: Note The SeaBIOS application runs in real mode for compatibility with BIOS interfaces. This limits the amount of memory available. As a consequence, SeaBIOS is only able to handle a limited number of disks. Currently, the supported number of disks is: virtio-scsi - 64 virtio-blk - 4 ahci/sata - 24 (4 controllers with all 6 ports connected) usb-storage - 4 As a workaround for this problem, when attaching a large number of disks to your virtual machine, make sure that your system disk has a small pci slot number, so SeaBIOS sees it first when scanning the pci bus. It is also recommended to use the virtio-scsi device instead of virtio-blk as the per-disk memory overhead is smaller. | [
"<disk type='file' device='disk'> <driver name='qemu' type='raw' cache='none'/> <source file='/var/lib/libvirt/images/rhel62-1.img'/> <target dev='vda' bus='virtio'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0' multifunction='on'/> </disk>",
"<disk type='file' device='disk'> <driver name='qemu' type='raw' cache='none'/> <source file='/var/lib/libvirt/images/rhel62-1.img'/> <target dev='vda' bus='virtio'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0' multifunction='on'/> </disk> <disk type='file' device='disk'> <driver name='qemu' type='raw' cache='none'/> <source file='/var/lib/libvirt/images/rhel62-2.img'/> <target dev='vdb' bus='virtio'/> <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x1'/> </disk>",
"lspci 00:05.0 SCSI storage controller: Red Hat, Inc Virtio block device 00:05.1 SCSI storage controller: Red Hat, Inc Virtio block device"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/virtualization_deployment_and_administration_guide/sect-editing_a_guest_virtual_machines_configuration_file-adding_multifunction_pci_devices_to_kvm_guest_virtual_machines |
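As a small follow-up check, the result of the XML edit above can be verified from both sides. This is a hedged sketch, where guestname and the 00:05 address follow the example configuration shown in this section:

# On the host: confirm that both functions share bus 0x00 and slot 0x05
virsh dumpxml guestname | grep -A1 "slot='0x05'"

# Inside the guest: list every function present in slot 05 of bus 00
lspci -s 00:05

If the multifunction layout is correct, the guest shows one line per function (00:05.0, 00:05.1, and so on), matching the lspci output quoted above.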
Chapter 1. About disconnected installation mirroring | Chapter 1. About disconnected installation mirroring You can use a mirror registry to ensure that your clusters only use container images that satisfy your organizational controls on external content. Before you install a cluster on infrastructure that you provision in a restricted network, you must mirror the required container images into that environment. To mirror container images, you must have a registry for mirroring. 1.1. Creating a mirror registry If you already have a container image registry, such as Red Hat Quay, you can use it as your mirror registry. If you do not already have a registry, you can create a mirror registry using the mirror registry for Red Hat OpenShift . 1.2. Mirroring images for a disconnected installation You can use one of the following procedures to mirror your OpenShift Container Platform image repository to your mirror registry: Mirroring images for a disconnected installation Mirroring images for a disconnected installation using the oc-mirror plugin | null | https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.14/html/disconnected_installation_mirroring/installing-mirroring-disconnected-about |
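If you use the oc-mirror plugin, the mirroring run is driven by an ImageSetConfiguration file. The following is only a rough sketch under stated assumptions: the registry hostname, metadata path, and channel name are placeholders, and the exact apiVersion and fields should be taken from the linked oc-mirror procedure for your OpenShift Container Platform version.

# Minimal, illustrative ImageSetConfiguration for mirroring a release channel
cat > imageset-config.yaml <<'EOF'
kind: ImageSetConfiguration
apiVersion: mirror.openshift.io/v1alpha2
storageConfig:
  registry:
    imageURL: registry.example.com/mirror/oc-mirror-metadata
mirror:
  platform:
    channels:
    - name: stable-4.14
      type: ocp
EOF

# Mirror the image set to the disconnected registry (hostname is a placeholder)
oc mirror --config=imageset-config.yaml docker://registry.example.com/mirror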
Chapter 3. Migrating to Apache Camel 3 | Chapter 3. Migrating to Apache Camel 3 This guide provides information on migrating from Red Hat Fuse 7 to Camel 3 on Spring Boot. NOTE There are important differences between Fuse 7 and Camel 3 in the components, such as modularization and XML Schema changes. See each component section for details. docs/modules/camel-spring-boot/camel-spring-boot-migration-guide/ref-migrating-to-camel-spring-boot-3.adoc == Java versions Camel 3 supports Java 17 and Java 11 but not Java 8. In Java 11 the JAXB modules have been removed from the JDK, therefore you will need to add them as Maven dependencies (if you use JAXB such as when using XML DSL or the camel-jaxb component): <dependency> <groupId>javax.xml.bind</groupId> <artifactId>jaxb-api</artifactId> <version>2.3.1</version> </dependency> <dependency> <groupId>com.sun.xml.bind</groupId> <artifactId>jaxb-core</artifactId> <version>2.3.0.1</version> </dependency> <dependency> <groupId>com.sun.xml.bind</groupId> <artifactId>jaxb-impl</artifactId> <version>2.3.2</version> </dependency> NOTE : The Java Platform, Standard Edition 11 Development Kit (JDK 11) is deprecated in Camel Spring Boot 3.x release version and not supported from the further 4.x release versions. 3.1. Modularization of camel-core In Camel 3.x, camel-core has been split into many JARs as follows: camel-api camel-base camel-caffeine-lrucache camel-cloud camel-core camel-jaxp camel-main camel-management-api camel-management camel-support camel-util camel-util-json Maven users of Apache Camel can keep using the dependency camel-core which has transitive dependencies on all of its modules, except for camel-main , and therefore no migration is needed. 3.2. Modularization of Components In Camel 3.x, some of the camel-core components are moved into individual components. camel-attachments camel-bean camel-browse camel-controlbus camel-dataformat camel-dataset camel-direct camel-directvm camel-file camel-language camel-log camel-mock camel-ref camel-rest camel-saga camel-scheduler camel-seda camel-stub camel-timer camel-validator camel-vm camel-xpath camel-xslt camel-xslt-saxon camel-zip-deflater 3.3. Default Shutdown Strategy Red Hat build of Apache Camel supports a shutdown strategy using org.apache.camel.spi.ShutdownStrategy which is responsible for shutting down routes in a graceful manner. Red Hat build of Apache Camel provides a default strategy in the org.apache.camel.impl.engine.DefaultShutdownStrategy to handle the graceful shutdown of the routes. Note The DefaultShutdownStrategy class has been moved from package org.apache.camel.impl to org.apache.camel.impl.engine in Apache Camel 3.x. When you configure a simple scheduled route policy to stop a route, the route stopping algorithm is automatically integrated with the graceful shutdown procedure. This means that the task waits until the current exchange has finished processing before shutting down the route. You can set a timeout, however, that forces the route to stop after the specified time, irrespective of whether or not the route has finished processing the exchange. During graceful shutdown, If you enable the DEBUG logging level on org.apache.camel.impl.engine.DefaultShutdownStrategy , then it logs the same inflight exchange information. If you do not want to see these logs, you can turn this off by setting the option logInflightExchangesOnTimeout to false. 3.4. 
Multiple CamelContexts per application not supported Support for multiple CamelContexts has been removed and only one CamelContext per deployment is recommended and supported. The context attribute on the various Camel annotations such as @EndpointInject , @Produce , @Consume etc. has therefore been removed. 3.5. Deprecated APIs and Components All deprecated APIs and components from Camel 2.x have been removed in Camel 3. 3.5.1. Removed components All deprecated components from Camel 2.x are removed in Camel 3.x, including the old camel-http , camel-hdfs , camel-mina , camel-mongodb , camel-netty , camel-netty-http , camel-quartz , camel-restlet and camel-rx components. Removed camel-jibx component. Removed camel-boon dataformat. Removed the camel-linkedin component as the Linkedin API 1.0 is no longer supported . Support for the new 2.0 API is tracked by CAMEL-13813 . The camel-zookeeper has its route policy functionality removed, instead use ZooKeeperClusterService or the camel-zookeeper-master component. The camel-jetty component no longer supports producer (which has been removed), use camel-http component instead. The twitter-streaming component has been removed as it relied on the deprecated Twitter Streaming API and is no longer functional. 3.5.2. Renamed components Following components are renamed in Camel 3.x. The Camel-microprofile-metrics has been renamed to camel-micrometer The test component has been renamed to dataset-test and moved out of camel-core into camel-dataset JAR. The http4 component has been renamed to http , and it's corresponding component package from org.apache.camel.component.http4 to org.apache.camel.component.http . The supported schemes are now only http and https . The hdfs2 component has been renamed to hdfs , and it's corresponding component package from org.apache.camel.component.hdfs2 to org.apache.camel.component.hdfs . The supported scheme is now hdfs . The mina2 component has been renamed to mina , and it's corresponding component package from org.apache.camel.component.mina2 to org.apache.camel.component.mina . The supported scheme is now mina . The mongodb3 component has been renamed to mongodb , and it's corresponding component package from org.apache.camel.component.mongodb3 to org.apache.camel.component.mongodb . The supported scheme is now mongodb . The netty4-http component has been renamed to netty-http , and it's corresponding component package from org.apache.camel.component.netty4.http to org.apache.camel.component.netty.http . The supported scheme is now netty-http . The netty4 component has been renamed to netty , and it's corresponding component package from org.apache.camel.component.netty4 to org.apache.camel.component.netty . The supported scheme is now netty . The quartz2 component has been renamed to quartz , and it's corresponding component package from org.apache.camel.component.quartz2 to org.apache.camel.component.quartz . The supported scheme is now quartz . The rxjava2 component has been renamed to rxjava , and it's corresponding component package from org.apache.camel.component.rxjava2 to org.apache.camel.component.rxjava . Renamed camel-jetty9 to camel-jetty . The supported scheme is now jetty . 3.6. Changes to Camel components 3.6.1. Mock component The mock component has been moved out of camel-core . Because of this a number of methods on its assertion clause builder are removed. 3.6.2. 
ActiveMQ If you are using the activemq-camel component, then you should migrate to the camel-activemq component, where the component name has changed from org.apache.activemq.camel.component.ActiveMQComponent to org.apache.camel.component.activemq.ActiveMQComponent . 3.6.3. AWS The component camel-aws has been split into multiple components: camel-aws-cw camel-aws-ddb (which contains both ddb and ddbstreams components) camel-aws-ec2 camel-aws-iam camel-aws-kinesis (which contains both kinesis and kinesis-firehose components) camel-aws-kms camel-aws-lambda camel-aws-mq camel-aws-s3 camel-aws-sdb camel-aws-ses camel-aws-sns camel-aws-sqs camel-aws-swf Note It is recommended to add specific dependencies for these components. 3.6.4. Camel CXF The camel-cxf JAR has been divided into separate SOAP and REST JARs, with Spring and non-Spring variants. It is recommended to choose the specific JAR from the following list when migrating from camel-cxf . camel-cxf-soap camel-cxf-spring-soap camel-cxf-rest camel-cxf-spring-rest camel-cxf-transport camel-cxf-spring-transport For example, if you were using CXF for SOAP and with Spring XML, then select camel-cxf-spring-soap and camel-cxf-spring-transport when migrating from camel-cxf . When using Spring Boot, choose from the following starters when you migrate from camel-cxf-starter to SOAP or REST: camel-cxf-soap-starter camel-cxf-rest-starter 3.6.4.1. Camel CXF changed namespaces The camel-cxf XML XSD schemas have also changed namespaces. Table 3.1. Changes to namespaces Old Namespace New Namespace http://camel.apache.org/schema/cxf http://camel.apache.org/schema/cxf/jaxws http://camel.apache.org/schema/cxf/camel-cxf.xsd http://camel.apache.org/schema/cxf/jaxws/camel-cxf.xsd http://camel.apache.org/schema/cxf http://camel.apache.org/schema/cxf/jaxrs http://camel.apache.org/schema/cxf/camel-cxf.xsd http://camel.apache.org/schema/cxf/jaxrs/camel-cxf.xsd The camel-cxf SOAP component is moved to a new jaxws sub-package, that is, org.apache.camel.component.cxf is now org.apache.camel.component.cxf.jaxws . For example, the CxfComponent class is now located in org.apache.camel.component.cxf.jaxws . 3.6.5. FHIR The camel-fhir component has upgraded its hapi-fhir dependency to 4.1.0. The default FHIR version has been changed to R4. Therefore, if DSTU3 is desired it has to be explicitly set. 3.6.6. Kafka The camel-kafka component has removed the options bridgeEndpoint and circularTopicDetection because they are no longer needed; the component now behaves the way bridging worked on Camel 2.x. In other words, camel-kafka will send messages to the topic from the endpoint URI. To override this, use the KafkaConstants.OVERRIDE_TOPIC header with the new topic. See more details in the camel-kafka component documentation. 3.6.7. Telegram The camel-telegram component has moved the authorization token from uri-path to a query parameter instead, e.g. migrate to 3.6.8. JMX If you run Camel standalone with just camel-core as a dependency, and you want JMX enabled out of the box, then you need to add camel-management as a dependency. For using ManagedCamelContext you now need to get this extension from CamelContext as follows: 3.6.9. XSLT The XSLT component has moved out of camel-core into camel-xslt and camel-xslt-saxon . The component is separated so camel-xslt is for using the JDK XSLT engine (Xalan), and camel-xslt-saxon is when you use Saxon. This means that you should use xslt and xslt-saxon as the component name in your Camel endpoint URIs.
If you are using XSLT aggregation strategy, then use org.apache.camel.component.xslt.saxon.XsltSaxonAggregationStrategy for Saxon support. And use org.apache.camel.component.xslt.saxon.XsltSaxonBuilder for Saxon support if using xslt builder. Also notice that allowStax is also only supported in camel-xslt-saxon as this is not supported by the JDK XSLT. 3.6.10. XML DSL Migration The XML DSL has been changed slightly. The custom load balancer EIP has changed from <custom> to <customLoadBalancer> The XMLSecurity data format has renamed the attribute keyOrTrustStoreParametersId to keyOrTrustStoreParametersRef in the <secureXML> tag. The <zipFile> data format has been renamed to <zipfile> . 3.7. Migrating Camel Maven Plugins The camel-maven-plugin has been split up into two maven plugins: camel-maven-plugin camel-maven-plugin has the run goal, which is intended for quickly running Camel applications standalone. See https://camel.apache.org/manual/camel-maven-plugin.html for more information. camel-report-maven-plugin The camel-report-maven-plugin has the validate and route-coverage goals which is used for generating reports of your Camel projects such as validating Camel endpoint URIs and route coverage reports, etc. See https://camel.apache.org/manual/camel-report-maven-plugin.html for more information. | [
"<dependency> <groupId>javax.xml.bind</groupId> <artifactId>jaxb-api</artifactId> <version>2.3.1</version> </dependency> <dependency> <groupId>com.sun.xml.bind</groupId> <artifactId>jaxb-core</artifactId> <version>2.3.0.1</version> </dependency> <dependency> <groupId>com.sun.xml.bind</groupId> <artifactId>jaxb-impl</artifactId> <version>2.3.2</version> </dependency>",
"2015-01-12 13:23:23,656 [- ShutdownTask] INFO DefaultShutdownStrategy - There are 1 inflight exchanges: InflightExchange: [exchangeId=ID-test-air-62213-1421065401253-0-3, fromRouteId=route1, routeId=route1, nodeId=delay1, elapsed=2007, duration=2017]",
"context.getShutdownStrategegy().setLogInflightExchangesOnTimeout(false);",
"telegram:bots/myTokenHere",
"telegram:bots?authorizationToken=myTokenHere",
"ManagedCamelContext managed = camelContext.getExtension(ManagedCamelContext.class);"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/migrating_to_red_hat_build_of_apache_camel_for_spring_boot/migrating-to-camel-spring-boot3 |
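To make the camel-kafka change in section 3.6.6 concrete, the sketch below shows a route that overrides the destination topic per message with the KafkaConstants.OVERRIDE_TOPIC header; the class, topic, and broker names are hypothetical:

import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.component.kafka.KafkaConstants;

public class OrderRoute extends RouteBuilder {
    @Override
    public void configure() {
        from("direct:orders")
            // The header takes precedence over the topic named in the endpoint URI
            .setHeader(KafkaConstants.OVERRIDE_TOPIC, constant("audit-orders"))
            // Without the header, messages go to the "orders" topic from the URI
            .to("kafka:orders?brokers=localhost:9092");
    }
}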
Chapter 4. Project deployment options with Red Hat Decision Manager | Chapter 4. Project deployment options with Red Hat Decision Manager After you develop, test, and build your Red Hat Decision Manager project, you can deploy the project to begin using the business assets you have created. You can deploy a Red Hat Decision Manager project to a configured KIE Server, to an embedded Java application, or into a Red Hat OpenShift Container Platform environment for an enhanced containerized implementation. The following options are the main methods for Red Hat Decision Manager project deployment: Table 4.1. Project deployment options Deployment option Description Documentation Deployment to an OpenShift environment Red Hat OpenShift Container Platform combines Docker and Kubernetes and enables you to create and manage containers. You can install both Business Central and KIE Server on OpenShift. Red Hat Decision Manager provides templates that you can use to deploy a Red Hat Decision Manager authoring environment, managed server environment, immutable server environment, or trial environment on OpenShift. With OpenShift, components of Red Hat Decision Manager are deployed as separate OpenShift pods. You can scale each of the pods up and down individually, providing as few or as many containers as necessary for a particular component. You can use standard OpenShift methods to manage the pods and balance the load. Deploying a Red Hat Decision Manager environment on Red Hat OpenShift Container Platform 4 using Operators Deploying a Red Hat Decision Manager environment on Red Hat OpenShift Container Platform 3 using templates Deployment to KIE Server KIE Server is the server provided with Red Hat Decision Manager that runs the decision services, process applications, and other deployable assets from a packaged and deployed Red Hat Decision Manager project (KJAR file). These services are consumed at run time through an instantiated KIE container, or deployment unit . You can deploy and maintain deployment units in KIE Server using Business Central or using a headless Process Automation Manager controller with its associated REST API (considered a managed KIE Server instance). You can also deploy and maintain deployment units using the KIE Server REST API or Java client API from a standalone Maven project, an embedded Java application, or other custom environment (considered an unmanaged KIE Server instance). Packaging and deploying a Red Hat Decision Manager project Interacting with Red Hat Decision Manager using KIE APIs Managing and monitoring KIE Server Deployment to an embedded Java application If you want to deploy Red Hat Decision Manager projects to your own Java virtual machine (JVM) environment, microservice, or application server, you can bundle the application resources in the project WAR files to create a deployment unit similar to a KIE container. You can also use the core KIE APIs (not KIE Server APIs) to configure a KIE scanner to periodically update KIE containers. KIE Public API | null | https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/designing_your_decision_management_architecture_for_red_hat_decision_manager/project-deployment-options-ref_decision-management-architecture
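For the unmanaged KIE Server option, a deployment unit is typically created through the KIE Server REST API. The request below is a hedged sketch only: the host, port, credentials, container ID, and Maven coordinates are placeholders, and the authoritative request format is described in Interacting with Red Hat Decision Manager using KIE APIs.

# Create (deploy) a KIE container from a KJAR available to the server's Maven repository
curl -X PUT -u kieserver-user:password \
  -H "Content-Type: application/json" \
  -d '{"container-id":"mortgages_1.0.0","release-id":{"group-id":"com.example","artifact-id":"mortgages","version":"1.0.0"}}' \
  http://localhost:8080/kie-server/services/rest/server/containers/mortgages_1.0.0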
Providing feedback on JBoss EAP documentation | Providing feedback on JBoss EAP documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, you are prompted to create one. Procedure Click the following link to create a ticket . Include the document URL and the section number, and describe the issue. Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Submit creates and routes the issue to the appropriate documentation team. | null | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/how_to_set_up_sso_with_saml_v2/proc_providing-feedback-on-red-hat-documentation_default
4.12. augeas | 4.12. augeas 4.12.1. RHEA-2011:1659 - augeas bug fix and enhancement update Updated augeas packages that fix multiple bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. Augeas is a configuration editing tool. Augeas parses configuration files in their native formats and transforms them into a tree. Configuration changes are made by manipulating this tree and saving it back into native config files. The augeas packages have been upgraded to upstream version 0.9.0, which provides a number of bug fixes and enhancements over the previous version. (BZ# 691483 ) Bug Fix BZ# 693539 Previously, due to a bug in the source code, parsing invalid files failed silently without any error message. With this update, error messages are provided to inform users about the problem. All users of Augeas are advised to upgrade to these updated packages, which fix these bugs and add these enhancements. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/augeas
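As a short illustration of the tree-based editing model described in this advisory, the augtool command that ships with Augeas can print and modify a file through its lens. This is a minimal sketch, assuming the stock sshd lens is installed; the path and value are only examples.

# Print the tree Augeas builds for a configuration file
augtool print /files/etc/ssh/sshd_config

# Change a value through the tree and save it back to the native file
augtool <<'EOF'
set /files/etc/ssh/sshd_config/PermitRootLogin no
save
EOF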
Chapter 3. Monitoring a Ceph storage cluster | Chapter 3. Monitoring a Ceph storage cluster As a storage administrator, you can monitor the overall health of the Red Hat Ceph Storage cluster, along with monitoring the health of the individual components of Ceph. Once you have a running Red Hat Ceph Storage cluster, you might begin monitoring the storage cluster to ensure that the Ceph Monitor and Ceph OSD daemons are running, at a high-level. Ceph storage cluster clients connect to a Ceph Monitor and receive the latest version of the storage cluster map before they can read and write data to the Ceph pools within the storage cluster. So the monitor cluster must have agreement on the state of the cluster before Ceph clients can read and write data. Ceph OSDs must peer the placement groups on the primary OSD with the copies of the placement groups on secondary OSDs. If faults arise, peering will reflect something other than the active + clean state. 3.1. High-level monitoring of a Ceph storage cluster As a storage administrator, you can monitor the health of the Ceph daemons to ensure that they are up and running. High level monitoring also involves checking the storage cluster capacity to ensure that the storage cluster does not exceed its full ratio . The Red Hat Ceph Storage Dashboard is the most common way to conduct high-level monitoring. However, you can also use the command-line interface, the Ceph admin socket or the Ceph API to monitor the storage cluster. 3.1.1. Checking the storage cluster health After you start the Ceph storage cluster, and before you start reading or writing data, check the storage cluster's health first. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure Log into the Cephadm shell: Example You can check on the health of the Ceph storage cluster with the following command: Example You can check the status of the Ceph storage cluster by running ceph status command: Example The output provides the following information: Cluster ID Cluster health status The monitor map epoch and the status of the monitor quorum. The OSD map epoch and the status of OSDs. The status of Ceph Managers. The status of Object Gateways. The placement group map version. The number of placement groups and pools. The notional amount of data stored and the number of objects stored. The total amount of data stored. The IO client operations. An update on the upgrade process if the cluster is upgrading. Upon starting the Ceph cluster, you will likely encounter a health warning such as HEALTH_WARN XXX num placement groups stale . Wait a few moments and check it again. When the storage cluster is ready, ceph health should return a message such as HEALTH_OK . At that point, it is okay to begin using the cluster. 3.1.2. Watching storage cluster events You can watch events that are happening with the Ceph storage cluster using the command-line interface. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure Log into the Cephadm shell: Example To watch the cluster's ongoing events, run the following command: Example 3.1.3. How Ceph calculates data usage The used value reflects the actual amount of raw storage used. The xxx GB / xxx GB value means the amount available, the lesser of the two numbers, of the overall storage capacity of the cluster. The notional number reflects the size of the stored data before it is replicated, cloned or snapshotted. 
Therefore, the amount of data actually stored typically exceeds the notional amount stored, because Ceph creates replicas of the data and may also use storage capacity for cloning and snapshotting. 3.1.4. Understanding the storage clusters usage stats To check a cluster's data usage and data distribution among pools, use the df option. It is similar to the Linux df command. The SIZE / AVAIL / RAW USED in the ceph df and ceph status command output are different if some OSDs are marked OUT of the cluster compared to when all OSDs are IN . The SIZE / AVAIL / RAW USED is calculated from sum of SIZE (osd disk size), RAW USE (total used space on disk), and AVAIL of all OSDs which are in IN state. You can see the total of SIZE / AVAIL / RAW USED for all OSDs in ceph osd df tree command output. Example The ceph df detail command gives more details about other pool statistics such as quota objects, quota bytes, used compression, and under compression. The RAW STORAGE section of the output provides an overview of the amount of storage the storage cluster manages for data. CLASS: The class of OSD device. SIZE: The amount of storage capacity managed by the storage cluster. In the above example, if the SIZE is 90 GiB, it is the total size without the replication factor, which is three by default. The total available capacity with the replication factor is 90 GiB/3 = 30 GiB. Based on the full ratio, which is 0.85% by default, the maximum available space is 30 GiB * 0.85 = 25.5 GiB AVAIL: The amount of free space available in the storage cluster. In the above example, if the SIZE is 90 GiB and the USED space is 6 GiB, then the AVAIL space is 84 GiB. The total available space with the replication factor, which is three by default, is 84 GiB/3 = 28 GiB USED: The amount of raw storage consumed by user data. In the above example, 100 MiB is the total space available after considering the replication factor. The actual available size is 33 MiB. RAW USED: The amount of raw storage consumed by user data, internal overhead, or reserved capacity. % RAW USED: The percentage of RAW USED . Use this number in conjunction with the full ratio and near full ratio to ensure that you are not reaching the storage cluster's capacity. The POOLS section of the output provides a list of pools and the notional usage of each pool. The output from this section DOES NOT reflect replicas, clones or snapshots. For example, if you store an object with 1 MB of data, the notional usage will be 1 MB, but the actual usage may be 3 MB or more depending on the number of replicas for example, size = 3 , clones and snapshots. POOL: The name of the pool. ID: The pool ID. STORED: The actual amount of data stored by the user in the pool. This value changes based on the raw usage data based on (k+M)/K values, number of object copies, and the number of objects degraded at the time of pool stats calculation. OBJECTS: The notional number of objects stored per pool. It is STORED size * replication factor. USED: The notional amount of data stored in kilobytes, unless the number appends M for megabytes or G for gigabytes. %USED: The notional percentage of storage used per pool. MAX AVAIL: An estimate of the notional amount of data that can be written to this pool. It is the amount of data that can be used before the first OSD becomes full. It considers the projected distribution of data across disks from the CRUSH map and uses the first OSD to fill up as the target. 
In the above example, MAX AVAIL is 153.85 MB without considering the replication factor, which is three by default. See the Red Hat Knowledgebase article titled ceph df MAX AVAIL is incorrect for simple replicated pool to calculate the value of MAX AVAIL . QUOTA OBJECTS: The number of quota objects. QUOTA BYTES: The number of bytes in the quota objects. USED COMPR: The amount of space allocated for compressed data including his includes compressed data, allocation, replication and erasure coding overhead. UNDER COMPR: The amount of data passed through compression and beneficial enough to be stored in a compressed form. Note The numbers in the POOLS section are notional. They are not inclusive of the number of replicas, snapshots or clones. As a result, the sum of the USED and %USED amounts will not add up to the RAW USED and %RAW USED amounts in the GLOBAL section of the output. Note The MAX AVAIL value is a complicated function of the replication or erasure code used, the CRUSH rule that maps storage to devices, the utilization of those devices, and the configured mon_osd_full_ratio . Additional Resources See How Ceph calculates data usage for details. See Understanding the OSD usage stats for details. 3.1.5. Understanding the OSD usage stats Use the ceph osd df command to view OSD utilization stats. Example ID: The name of the OSD. CLASS: The type of devices the OSD uses. WEIGHT: The weight of the OSD in the CRUSH map. REWEIGHT: The default reweight value. SIZE: The overall storage capacity of the OSD. USE: The OSD capacity. DATA: The amount of OSD capacity that is used by user data. OMAP: An estimate value of the bluefs storage that is being used to store object map ( omap ) data (key value pairs stored in rocksdb ). META: The bluefs space allocated, or the value set in the bluestore_bluefs_min parameter, whichever is larger, for internal metadata which is calculated as the total space allocated in bluefs minus the estimated omap data size. AVAIL: The amount of free space available on the OSD. %USE: The notional percentage of storage used by the OSD VAR: The variation above or below average utilization. PGS: The number of placement groups in the OSD. MIN/MAX VAR: The minimum and maximum variation across all OSDs. Additional Resources See How Ceph calculates data usage for details. See Understanding the OSD usage stats for details. See CRUSH Weights in Red Hat Ceph Storage Storage Strategies Guide for details. 3.1.6. Checking the storage cluster status You can check the status of the Red Hat Ceph Storage cluster from the command-line interface. The status sub command or the -s argument will display the current status of the storage cluster. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure Log into the Cephadm shell: Example To check a storage cluster's status, execute the following: Example Or Example In interactive mode, type ceph and press Enter : Example 3.1.7. Checking the Ceph Monitor status If the storage cluster has multiple Ceph Monitors, which is a requirement for a production Red Hat Ceph Storage cluster, then you can check the Ceph Monitor quorum status after starting the storage cluster, and before doing any reading or writing of data. A quorum must be present when multiple Ceph Monitors are running. Check the Ceph Monitor status periodically to ensure that they are running. 
If there is a problem with the Ceph Monitor, that prevents an agreement on the state of the storage cluster, the fault can prevent Ceph clients from reading and writing data. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure Log into the Cephadm shell: Example To display the Ceph Monitor map, execute the following: Example or Example To check the quorum status for the storage cluster, execute the following: Ceph returns the quorum status. Example 3.1.8. Using the Ceph administration socket Use the administration socket to interact with a given daemon directly by using a UNIX socket file. For example, the socket enables you to: List the Ceph configuration at runtime Set configuration values at runtime directly without relying on Monitors. This is useful when Monitors are down . Dump historic operations Dump the operation priority queue state Dump operations without rebooting Dump performance counters In addition, using the socket is helpful when troubleshooting problems related to Ceph Monitors or OSDs. Regardless, if the daemon is not running, a following error is returned when attempting to use the administration socket: Important The administration socket is only available while a daemon is running. When you shut down the daemon properly, the administration socket is removed. However, if the daemon terminates unexpectedly, the administration socket might persist. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure Log into the Cephadm shell: Example To use the socket: Syntax Replace: MONITOR_ID of the daemon COMMAND with the command to run. Use help to list the available commands for a given daemon. To view the status of a Ceph Monitor: Example Example Alternatively, specify the Ceph daemon by using its socket file: Syntax To view the status of a Ceph OSD named osd.0 on the specific host: Example Note You can use help instead of status for the various options that are available for the specific daemon. To list all socket files for the Ceph processes: Example Additional Resources See the Red Hat Ceph Storage Troubleshooting Guide for more information. 3.1.9. Understanding the Ceph OSD status A Ceph OSD's status is either in the storage cluster, or out of the storage cluster. It is either up and running, or it is down and not running. If a Ceph OSD is up , it can be either in the storage cluster, where data can be read and written, or it is out of the storage cluster. If it was in the storage cluster and recently moved out of the storage cluster, Ceph starts migrating placement groups to other Ceph OSDs. If a Ceph OSD is out of the storage cluster, CRUSH will not assign placement groups to the Ceph OSD. If a Ceph OSD is down , it should also be out . Note If a Ceph OSD is down and in , there is a problem, and the storage cluster will not be in a healthy state. If you execute a command such as ceph health , ceph -s or ceph -w , you might notice that the storage cluster does not always echo back HEALTH OK . Do not panic. With respect to Ceph OSDs, you can expect that the storage cluster will NOT echo HEALTH OK in a few expected circumstances: You have not started the storage cluster yet, and it is not responding. You have just started or restarted the storage cluster, and it is not ready yet, because the placement groups are getting created and the Ceph OSDs are in the process of peering. You just added or removed a Ceph OSD. You just modified the storage cluster map. 
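In the situations listed above, the warning normally clears on its own once peering finishes. A quick, hedged way to confirm the OSD side from the command line is shown below (standard Ceph CLI commands; output omitted):

# Summarize how many OSDs are up and in
ceph osd stat

# Show the CRUSH tree, optionally filtered to daemons that are not running
ceph osd tree
ceph osd tree down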
An important aspect of monitoring Ceph OSDs is to ensure that when the storage cluster is up and running that all Ceph OSDs that are in the storage cluster are up and running, too. To see if all OSDs are running, execute: Example or Example The result should tell you the map epoch, eNNNN , the total number of OSDs, x , how many, y , are up , and how many, z , are in : If the number of Ceph OSDs that are in the storage cluster are more than the number of Ceph OSDs that are up . Execute the following command to identify the ceph-osd daemons that are not running: Example Tip The ability to search through a well-designed CRUSH hierarchy can help you troubleshoot the storage cluster by identifying the physical locations faster. If a Ceph OSD is down , connect to the node and start it. You can use Red Hat Storage Console to restart the Ceph OSD daemon, or you can use the command line. Syntax Example Additional Resources See the Red Hat Ceph Storage Dashboard Guide for more details. 3.2. Low-level monitoring of a Ceph storage cluster As a storage administrator, you can monitor the health of a Red Hat Ceph Storage cluster from a low-level perspective. Low-level monitoring typically involves ensuring that Ceph OSDs are peering properly. When peering faults occur, placement groups operate in a degraded state. This degraded state can be the result of many different things, such as hardware failure, a hung or crashed Ceph daemon, network latency, or a complete site outage. 3.2.1. Monitoring Placement Group Sets When CRUSH assigns placement groups to Ceph OSDs, it looks at the number of replicas for the pool and assigns the placement group to Ceph OSDs such that each replica of the placement group gets assigned to a different Ceph OSD. For example, if the pool requires three replicas of a placement group, CRUSH may assign them to osd.1 , osd.2 and osd.3 respectively. CRUSH actually seeks a pseudo-random placement that will take into account failure domains you set in the CRUSH map, so you will rarely see placement groups assigned to nearest neighbor Ceph OSDs in a large cluster. We refer to the set of Ceph OSDs that should contain the replicas of a particular placement group as the Acting Set . In some cases, an OSD in the Acting Set is down or otherwise not able to service requests for objects in the placement group. When these situations arise, do not panic. Common examples include: You added or removed an OSD. Then, CRUSH reassigned the placement group to other Ceph OSDs, thereby changing the composition of the acting set and spawning the migration of data with a "backfill" process. A Ceph OSD was down , was restarted and is now recovering . A Ceph OSD in the acting set is down or unable to service requests, and another Ceph OSD has temporarily assumed its duties. Ceph processes a client request using the Up Set , which is the set of Ceph OSDs that actually handle the requests. In most cases, the up set and the Acting Set are virtually identical. When they are not, it can indicate that Ceph is migrating data, an Ceph OSD is recovering, or that there is a problem, that is, Ceph usually echoes a HEALTH WARN state with a "stuck stale" message in such scenarios. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. 
Procedure Log into the Cephadm shell: Example To retrieve a list of placement groups: Example View which Ceph OSDs are in the Acting Set or in the Up Set for a given placement group: Syntax Example Note If the Up Set and Acting Set do not match, this may be an indicator that the storage cluster rebalancing itself or of a potential problem with the storage cluster. 3.2.2. Ceph OSD peering Before you can write data to a placement group, it must be in an active state, and it should be in a clean state. For Ceph to determine the current state of a placement group, the primary OSD of the placement group that is, the first OSD in the acting set, peers with the secondary and tertiary OSDs to establish agreement on the current state of the placement group. Assuming a pool with three replicas of the PG. Figure 3.1. Peering 3.2.3. Placement Group States If you execute a command such as ceph health , ceph -s or ceph -w , you may notice that the cluster does not always echo back HEALTH OK . After you check to see if the OSDs are running, you should also check placement group states. You should expect that the cluster will NOT echo HEALTH OK in a number of placement group peering-related circumstances: You have just created a pool and placement groups haven't peered yet. The placement groups are recovering. You have just added an OSD to or removed an OSD from the cluster. You have just modified the CRUSH map and the placement groups are migrating. There is inconsistent data in different replicas of a placement group. Ceph is scrubbing a placement group's replicas. Ceph doesn't have enough storage capacity to complete backfilling operations. If one of the foregoing circumstances causes Ceph to echo HEALTH WARN , don't panic. In many cases, the cluster will recover on its own. In some cases, you may need to take action. An important aspect of monitoring placement groups is to ensure that when the cluster is up and running that all placement groups are active , and preferably in the clean state. To see the status of all placement groups, execute: Example The result should tell you the placement group map version, vNNNNNN , the total number of placement groups, x , and how many placement groups, y , are in a particular state such as active+clean : Note It is common for Ceph to report multiple states for placement groups. Snapshot Trimming PG States When snapshots exist, two additional PG states will be reported. snaptrim : The PGs are currently being trimmed snaptrim_wait : The PGs are waiting to be trimmed Example Output: In addition to the placement group states, Ceph will also echo back the amount of data used, aa , the amount of storage capacity remaining, bb , and the total storage capacity for the placement group. These numbers can be important in a few cases: You are reaching the near full ratio or full ratio . Your data isn't getting distributed across the cluster due to an error in the CRUSH configuration. Placement Group IDs Placement group IDs consist of the pool number, and not the pool name, followed by a period (.) and the placement group ID- a hexadecimal number. You can view pool numbers and their names from the output of ceph osd lspools . The default pool names data , metadata and rbd correspond to pool numbers 0 , 1 and 2 respectively. 
A fully qualified placement group ID has the following form: Syntax Example output: To retrieve a list of placement groups: Example To format the output in JSON format and save it to a file: Syntax Example Query a particular placement group: Syntax Example Additional Resources See the chapter Object Storage Daemon (OSD) configuration options in the OSD Object storage daemon configuratopn options section in Red Hat Ceph Storage Configuration Guide for more details on the snapshot trimming settings. 3.2.4. Placement Group creating state When you create a pool, it will create the number of placement groups you specified. Ceph will echo creating when it is creating one or more placement groups. Once they are created, the OSDs that are part of a placement group's Acting Set will peer. Once peering is complete, the placement group status should be active+clean , which means a Ceph client can begin writing to the placement group. 3.2.5. Placement group peering state When Ceph is Peering a placement group, Ceph is bringing the OSDs that store the replicas of the placement group into agreement about the state of the objects and metadata in the placement group. When Ceph completes peering, this means that the OSDs that store the placement group agree about the current state of the placement group. However, completion of the peering process does NOT mean that each replica has the latest contents. Authoritative History Ceph will NOT acknowledge a write operation to a client, until all OSDs of the acting set persist the write operation. This practice ensures that at least one member of the acting set will have a record of every acknowledged write operation since the last successful peering operation. With an accurate record of each acknowledged write operation, Ceph can construct and disseminate a new authoritative history of the placement group. A complete, and fully ordered set of operations that, if performed, would bring an OSD's copy of a placement group up to date. 3.2.6. Placement group active state Once Ceph completes the peering process, a placement group may become active . The active state means that the data in the placement group is generally available in the primary placement group and the replicas for read and write operations. 3.2.7. Placement Group clean state When a placement group is in the clean state, the primary OSD and the replica OSDs have successfully peered and there are no stray replicas for the placement group. Ceph replicated all objects in the placement group the correct number of times. 3.2.8. Placement Group degraded state When a client writes an object to the primary OSD, the primary OSD is responsible for writing the replicas to the replica OSDs. After the primary OSD writes the object to storage, the placement group will remain in a degraded state until the primary OSD has received an acknowledgement from the replica OSDs that Ceph created the replica objects successfully. The reason a placement group can be active+degraded is that an OSD may be active even though it doesn't hold all of the objects yet. If an OSD goes down , Ceph marks each placement group assigned to the OSD as degraded . The Ceph OSDs must peer again when the Ceph OSD comes back online. However, a client can still write a new object to a degraded placement group if it is active . If an OSD is down and the degraded condition persists, Ceph may mark the down OSD as out of the cluster and remap the data from the down OSD to another OSD. 
The time between being marked down and being marked out is controlled by mon_osd_down_out_interval , which is set to 600 seconds by default. A placement group can also be degraded , because Ceph cannot find one or more objects that Ceph thinks should be in the placement group. While you cannot read or write to unfound objects, you can still access all of the other objects in the degraded placement group. For example, if there are nine OSDs in a three way replica pool. If OSD number 9 goes down, the PGs assigned to OSD 9 goes into a degraded state. If OSD 9 does not recover, it goes out of the storage cluster and the storage cluster rebalances. In that scenario, the PGs are degraded and then recover to an active state. 3.2.9. Placement Group recovering state Ceph was designed for fault-tolerance at a scale where hardware and software problems are ongoing. When an OSD goes down , its contents may fall behind the current state of other replicas in the placement groups. When the OSD is back up , the contents of the placement groups must be updated to reflect the current state. During that time period, the OSD may reflect a recovering state. Recovery is not always trivial, because a hardware failure might cause a cascading failure of multiple Ceph OSDs. For example, a network switch for a rack or cabinet may fail, which can cause the OSDs of a number of host machines to fall behind the current state of the storage cluster. Each one of the OSDs must recover once the fault is resolved. Ceph provides a number of settings to balance the resource contention between new service requests and the need to recover data objects and restore the placement groups to the current state. The osd recovery delay start setting allows an OSD to restart, re-peer and even process some replay requests before starting the recovery process. The osd recovery threads setting limits the number of threads for the recovery process, by default one thread. The osd recovery thread timeout sets a thread timeout, because multiple Ceph OSDs can fail, restart and re-peer at staggered rates. The osd recovery max active setting limits the number of recovery requests a Ceph OSD works on simultaneously to prevent the Ceph OSD from failing to serve . The osd recovery max chunk setting limits the size of the recovered data chunks to prevent network congestion. 3.2.10. Back fill state When a new Ceph OSD joins the storage cluster, CRUSH will reassign placement groups from OSDs in the cluster to the newly added Ceph OSD. Forcing the new OSD to accept the reassigned placement groups immediately can put excessive load on the new Ceph OSD. Backfilling the OSD with the placement groups allows this process to begin in the background. Once backfilling is complete, the new OSD will begin serving requests when it is ready. During the backfill operations, you might see one of several states: backfill_wait indicates that a backfill operation is pending, but isn't underway yet backfill indicates that a backfill operation is underway backfill_too_full indicates that a backfill operation was requested, but couldn't be completed due to insufficient storage capacity. When a placement group cannot be backfilled, it can be considered incomplete . Ceph provides a number of settings to manage the load spike associated with reassigning placement groups to a Ceph OSD, especially a new Ceph OSD. By default, osd_max_backfills sets the maximum number of concurrent backfills to or from a Ceph OSD to 10. 
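The recovery and backfill defaults mentioned above can be inspected and adjusted at runtime through the centralized configuration database. The commands below are a sketch only; the values are illustrative and defaults differ between releases, so verify them for your version before changing a production cluster.

# Inspect the current backfill limit for the OSD class
ceph config get osd osd_max_backfills

# Temporarily reduce concurrent backfills and recovery operations to limit client impact
ceph config set osd osd_max_backfills 1
ceph config set osd osd_recovery_max_active 1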
The osd backfill full ratio enables a Ceph OSD to refuse a backfill request if the OSD is approaching its full ratio, by default 85%. If an OSD refuses a backfill request, the osd backfill retry interval enables an OSD to retry the request, by default after 10 seconds. OSDs can also set osd backfill scan min and osd backfill scan max to manage scan intervals, by default 64 and 512. For some workloads, it is beneficial to avoid regular recovery entirely and use backfill instead. Since backfilling occurs in the background, this allows I/O to proceed on the objects in the OSD. You can force a backfill rather than a recovery by setting the osd_min_pg_log_entries option to 1, and setting the osd_max_pg_log_entries option to 2. Contact your Red Hat Support account team for details on when this situation is appropriate for your workload. 3.2.11. Placement Group remapped state When the Acting Set that services a placement group changes, the data migrates from the old acting set to the new acting set. It may take some time for a new primary OSD to service requests, so it may ask the old primary to continue to service requests until the placement group migration is complete. Once data migration completes, the mapping uses the primary OSD of the new acting set. 3.2.12. Placement Group stale state While Ceph uses heartbeats to ensure that hosts and daemons are running, the ceph-osd daemons may also get into a stuck state where they are not reporting statistics in a timely manner, for example, because of a temporary network fault. By default, OSD daemons report their placement group, up thru, boot, and failure statistics every half second (0.5 seconds), which is more frequent than the heartbeat thresholds. If the Primary OSD of a placement group's acting set fails to report to the monitor or if other OSDs have reported the primary OSD down, the monitors will mark the placement group stale. When you start the storage cluster, it is common to see the stale state until the peering process completes. After the storage cluster has been running for a while, seeing placement groups in the stale state indicates that the primary OSD for those placement groups is down or not reporting placement group statistics to the monitor. 3.2.13. Placement Group misplaced state There are some temporary backfilling scenarios where a PG gets mapped temporarily to an OSD. When that temporary situation should no longer be the case, the PGs might still reside in the temporary location and not in the proper location. In that case, they are said to be misplaced. This is because the correct number of copies actually exists, but one or more of the copies is in the wrong place. For example, there are 3 OSDs: 0, 1, and 2, and all PGs map to some permutation of those three. If you add another OSD (OSD 3), some PGs will now map to OSD 3 instead of one of the others. However, until OSD 3 is backfilled, the PG will have a temporary mapping allowing it to continue to serve I/O from the old mapping. During that time, the PG is misplaced, because it has a temporary mapping, but not degraded, since there are 3 copies. Example [0,1,2] is a temporary mapping, so the up set is not equal to the acting set and the PG is misplaced but not degraded since [0,1,2] is still three copies. Example OSD 3 is now backfilled and the temporary mapping is removed, so the PG is not degraded and not misplaced.
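To see which placement groups currently have an up set that differs from their acting set, you can list the placement groups with a brief dump. This is a minimal illustration; both commands are read-only:
ceph pg dump pgs_brief
ceph pg ls remapped
The first command prints the state, up set, and acting set for every placement group, and the second filters the listing to placement groups in the remapped state.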
3.2.14. Placement Group incomplete state A PG goes into an incomplete state when there is incomplete content and peering fails, that is, when there are no complete OSDs which are current enough to perform recovery. Let's say OSD 1, 2, and 3 are the acting OSD set and it switches to OSD 1, 4, and 3, then osd.1 will request a temporary acting set of OSD 1, 2, and 3 while backfilling OSD 4. During this time, if OSD 1, 2, and 3 all go down, osd.4 will be the only OSD left, and it might not have fully backfilled all the data. At this time, the PG will go incomplete indicating that there are no complete OSDs which are current enough to perform recovery. Alternatively, if osd.4 is not involved and the acting set is simply OSD 1, 2, and 3 when OSD 1, 2, and 3 go down, the PG would likely go stale indicating that the monitors have not heard anything on that PG since the acting set changed. This is because there are no OSDs left to notify the new OSDs. 3.2.15. Identifying stuck Placement Groups A placement group is not necessarily problematic just because it is not in an active+clean state. Generally, Ceph's ability to self-repair might not be working when placement groups get stuck. The stuck states include: Unclean: Placement groups contain objects that are not replicated the desired number of times. They should be recovering. Inactive: Placement groups cannot process reads or writes because they are waiting for an OSD with the most up-to-date data to come back up. Stale: Placement groups are in an unknown state, because the OSDs that host them have not reported to the monitor cluster in a while. The reporting threshold is configured with the mon osd report timeout setting. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure To identify stuck placement groups, execute the following: Syntax Example 3.2.16. Finding an object's location The Ceph client retrieves the latest cluster map and the CRUSH algorithm calculates how to map the object to a placement group, and then calculates how to assign the placement group to an OSD dynamically. Prerequisites A running Red Hat Ceph Storage cluster. Root-level access to the node. Procedure To find the object location, all you need is the object name and the pool name: Syntax Example | [
"root@host01 ~]# cephadm shell",
"ceph health HEALTH_OK",
"ceph status",
"root@host01 ~]# cephadm shell",
"ceph -w cluster: id: 8c9b0072-67ca-11eb-af06-001a4a0002a0 health: HEALTH_OK services: mon: 2 daemons, quorum Ceph5-2,Ceph5-adm (age 3d) mgr: Ceph5-1.nqikfh(active, since 3w), standbys: Ceph5-adm.meckej osd: 5 osds: 5 up (since 2d), 5 in (since 8w) rgw: 2 daemons active (test_realm.test_zone.Ceph5-2.bfdwcn, test_realm.test_zone.Ceph5-adm.acndrh) data: pools: 11 pools, 273 pgs objects: 459 objects, 32 KiB usage: 2.6 GiB used, 72 GiB / 75 GiB avail pgs: 273 active+clean io: client: 170 B/s rd, 730 KiB/s wr, 0 op/s rd, 729 op/s wr 2021-06-02 15:45:21.655871 osd.0 [INF] 17.71 deep-scrub ok 2021-06-02 15:45:47.880608 osd.1 [INF] 1.0 scrub ok 2021-06-02 15:45:48.865375 osd.1 [INF] 1.3 scrub ok 2021-06-02 15:45:50.866479 osd.1 [INF] 1.4 scrub ok 2021-06-02 15:45:01.345821 mon.0 [INF] pgmap v41339: 952 pgs: 952 active+clean; 17130 MB data, 115 GB used, 167 GB / 297 GB avail 2021-06-02 15:45:05.718640 mon.0 [INF] pgmap v41340: 952 pgs: 1 active+clean+scrubbing+deep, 951 active+clean; 17130 MB data, 115 GB used, 167 GB / 297 GB avail 2021-06-02 15:45:53.997726 osd.1 [INF] 1.5 scrub ok 2021-06-02 15:45:06.734270 mon.0 [INF] pgmap v41341: 952 pgs: 1 active+clean+scrubbing+deep, 951 active+clean; 17130 MB data, 115 GB used, 167 GB / 297 GB avail 2021-06-02 15:45:15.722456 mon.0 [INF] pgmap v41342: 952 pgs: 952 active+clean; 17130 MB data, 115 GB used, 167 GB / 297 GB avail 2021-06-02 15:46:06.836430 osd.0 [INF] 17.75 deep-scrub ok 2021-06-02 15:45:55.720929 mon.0 [INF] pgmap v41343: 952 pgs: 1 active+clean+scrubbing+deep, 951 active+clean; 17130 MB data, 115 GB used, 167 GB / 297 GB avail",
"ceph df --- RAW STORAGE --- CLASS SIZE AVAIL USED RAW USED %RAW USED hdd 5 TiB 2.9 TiB 2.1 TiB 2.1 TiB 42.98 TOTAL 5 TiB 2.9 TiB 2.1 TiB 2.1 TiB 42.98 --- POOLS --- POOL ID PGS STORED OBJECTS USED %USED MAX AVAIL .mgr 1 1 5.3 MiB 3 16 MiB 0 629 GiB .rgw.root 2 32 1.3 KiB 4 48 KiB 0 629 GiB default.rgw.log 3 32 3.6 KiB 209 408 KiB 0 629 GiB default.rgw.control 4 32 0 B 8 0 B 0 629 GiB default.rgw.meta 5 32 1.7 KiB 10 96 KiB 0 629 GiB default.rgw.buckets.index 7 32 5.5 MiB 22 17 MiB 0 629 GiB default.rgw.buckets.data 8 32 807 KiB 3 2.4 MiB 0 629 GiB default.rgw.buckets.non-ec 9 32 1.0 MiB 1 3.1 MiB 0 629 GiB source-ecpool-86 11 32 1.2 TiB 391.13k 2.1 TiB 53.49 1.1 TiB",
"ceph osd df ID CLASS WEIGHT REWEIGHT SIZE USE DATA OMAP META AVAIL %USE VAR PGS 3 hdd 0.90959 1.00000 931GiB 70.1GiB 69.1GiB 0B 1GiB 861GiB 7.53 2.93 66 4 hdd 0.90959 1.00000 931GiB 1.30GiB 308MiB 0B 1GiB 930GiB 0.14 0.05 59 0 hdd 0.90959 1.00000 931GiB 18.1GiB 17.1GiB 0B 1GiB 913GiB 1.94 0.76 57 MIN/MAX VAR: 0.02/2.98 STDDEV: 2.91",
"cephadm shell",
"ceph status",
"ceph -s",
"ceph ceph> status cluster: id: 499829b4-832f-11eb-8d6d-001a4a000635 health: HEALTH_WARN 1 stray daemon(s) not managed by cephadm 1/3 mons down, quorum host03,host02 too many PGs per OSD (261 > max 250) services: mon: 3 daemons, quorum host03,host02 (age 3d), out of quorum: host01 mgr: host01.hdhzwn(active, since 9d), standbys: host05.eobuuv, host06.wquwpj osd: 12 osds: 11 up (since 2w), 11 in (since 5w) rgw: 2 daemons active (test_realm.test_zone.host04.hgbvnq, test_realm.test_zone.host05.yqqilm) rgw-nfs: 1 daemon active (nfs.foo.host06-rgw) data: pools: 8 pools, 960 pgs objects: 414 objects, 1.0 MiB usage: 5.7 GiB used, 214 GiB / 220 GiB avail pgs: 960 active+clean io: client: 41 KiB/s rd, 0 B/s wr, 41 op/s rd, 27 op/s wr ceph> health HEALTH_WARN 1 stray daemon(s) not managed by cephadm; 1/3 mons down, quorum host03,host02; too many PGs per OSD (261 > max 250) ceph> mon stat e3: 3 mons at {host01=[v2:10.74.255.0:3300/0,v1:10.74.255.0:6789/0],host02=[v2:10.74.249.253:3300/0,v1:10.74.249.253:6789/0],host03=[v2:10.74.251.164:3300/0,v1:10.74.251.164:6789/0]}, election epoch 6688, leader 1 host03, quorum 1,2 host03,host02",
"cephadm shell",
"ceph mon stat",
"ceph mon dump",
"ceph quorum_status -f json-pretty",
"{ \"election_epoch\": 6686, \"quorum\": [ 0, 1, 2 ], \"quorum_names\": [ \"host01\", \"host03\", \"host02\" ], \"quorum_leader_name\": \"host01\", \"quorum_age\": 424884, \"features\": { \"quorum_con\": \"4540138297136906239\", \"quorum_mon\": [ \"kraken\", \"luminous\", \"mimic\", \"osdmap-prune\", \"nautilus\", \"octopus\", \"pacific\", \"elector-pinging\" ] }, \"monmap\": { \"epoch\": 3, \"fsid\": \"499829b4-832f-11eb-8d6d-001a4a000635\", \"modified\": \"2021-03-15T04:51:38.621737Z\", \"created\": \"2021-03-12T12:35:16.911339Z\", \"min_mon_release\": 16, \"min_mon_release_name\": \"pacific\", \"election_strategy\": 1, \"disallowed_leaders: \": \"\", \"stretch_mode\": false, \"features\": { \"persistent\": [ \"kraken\", \"luminous\", \"mimic\", \"osdmap-prune\", \"nautilus\", \"octopus\", \"pacific\", \"elector-pinging\" ], \"optional\": [] }, \"mons\": [ { \"rank\": 0, \"name\": \"host01\", \"public_addrs\": { \"addrvec\": [ { \"type\": \"v2\", \"addr\": \"10.74.255.0:3300\", \"nonce\": 0 }, { \"type\": \"v1\", \"addr\": \"10.74.255.0:6789\", \"nonce\": 0 } ] }, \"addr\": \"10.74.255.0:6789/0\", \"public_addr\": \"10.74.255.0:6789/0\", \"priority\": 0, \"weight\": 0, \"crush_location\": \"{}\" }, { \"rank\": 1, \"name\": \"host03\", \"public_addrs\": { \"addrvec\": [ { \"type\": \"v2\", \"addr\": \"10.74.251.164:3300\", \"nonce\": 0 }, { \"type\": \"v1\", \"addr\": \"10.74.251.164:6789\", \"nonce\": 0 } ] }, \"addr\": \"10.74.251.164:6789/0\", \"public_addr\": \"10.74.251.164:6789/0\", \"priority\": 0, \"weight\": 0, \"crush_location\": \"{}\" }, { \"rank\": 2, \"name\": \"host02\", \"public_addrs\": { \"addrvec\": [ { \"type\": \"v2\", \"addr\": \"10.74.249.253:3300\", \"nonce\": 0 }, { \"type\": \"v1\", \"addr\": \"10.74.249.253:6789\", \"nonce\": 0 } ] }, \"addr\": \"10.74.249.253:6789/0\", \"public_addr\": \"10.74.249.253:6789/0\", \"priority\": 0, \"weight\": 0, \"crush_location\": \"{}\" } ] } }",
"Error 111: Connection Refused",
"cephadm shell",
"ceph daemon MONITOR_ID COMMAND",
"ceph daemon mon.host01 help { \"add_bootstrap_peer_hint\": \"add peer address as potential bootstrap peer for cluster bringup\", \"add_bootstrap_peer_hintv\": \"add peer address vector as potential bootstrap peer for cluster bringup\", \"compact\": \"cause compaction of monitor's leveldb/rocksdb storage\", \"config diff\": \"dump diff of current config and default config\", \"config diff get\": \"dump diff get <field>: dump diff of current and default config setting <field>\", \"config get\": \"config get <field>: get the config value\", \"config help\": \"get config setting schema and descriptions\", \"config set\": \"config set <field> <val> [<val> ...]: set a config variable\", \"config show\": \"dump current config settings\", \"config unset\": \"config unset <field>: unset a config variable\", \"connection scores dump\": \"show the scores used in connectivity-based elections\", \"connection scores reset\": \"reset the scores used in connectivity-based elections\", \"counter dump\": \"dump all labeled and non-labeled counters and their values\", \"counter schema\": \"dump all labeled and non-labeled counters schemas\", \"dump_historic_ops\": \"show recent ops\", \"dump_historic_slow_ops\": \"show recent slow ops\", \"dump_mempools\": \"get mempool stats\", \"get_command_descriptions\": \"list available commands\", \"git_version\": \"get git sha1\", \"heap\": \"show heap usage info (available only if compiled with tcmalloc)\", \"help\": \"list available commands\", \"injectargs\": \"inject configuration arguments into running daemon\", \"log dump\": \"dump recent log entries to log file\", \"log flush\": \"flush log entries to log file\", \"log reopen\": \"reopen log file\", \"mon_status\": \"report status of monitors\", \"ops\": \"show the ops currently in flight\", \"perf dump\": \"dump non-labeled counters and their values\", \"perf histogram dump\": \"dump perf histogram values\", \"perf histogram schema\": \"dump perf histogram schema\", \"perf reset\": \"perf reset <name>: perf reset all or one perfcounter name\", \"perf schema\": \"dump non-labeled counters schemas\", \"quorum enter\": \"force monitor back into quorum\", \"quorum exit\": \"force monitor out of the quorum\", \"sessions\": \"list existing sessions\", \"smart\": \"Query health metrics for underlying device\", \"sync_force\": \"force sync of and clear monitor store\", \"version\": \"get ceph version\" }",
"ceph daemon mon.host01 mon_status { \"name\": \"host01\", \"rank\": 0, \"state\": \"leader\", \"election_epoch\": 120, \"quorum\": [ 0, 1, 2 ], \"quorum_age\": 206358, \"features\": { \"required_con\": \"2449958747317026820\", \"required_mon\": [ \"kraken\", \"luminous\", \"mimic\", \"osdmap-prune\", \"nautilus\", \"octopus\", \"pacific\", \"elector-pinging\" ], \"quorum_con\": \"4540138297136906239\", \"quorum_mon\": [ \"kraken\", \"luminous\", \"mimic\", \"osdmap-prune\", \"nautilus\", \"octopus\", \"pacific\", \"elector-pinging\" ] }, \"outside_quorum\": [], \"extra_probe_peers\": [], \"sync_provider\": [], \"monmap\": { \"epoch\": 3, \"fsid\": \"81a4597a-b711-11eb-8cb8-001a4a000740\", \"modified\": \"2021-05-18T05:50:17.782128Z\", \"created\": \"2021-05-17T13:13:13.383313Z\", \"min_mon_release\": 16, \"min_mon_release_name\": \"pacific\", \"election_strategy\": 1, \"disallowed_leaders: \": \"\", \"stretch_mode\": false, \"features\": { \"persistent\": [ \"kraken\", \"luminous\", \"mimic\", \"osdmap-prune\", \"nautilus\", \"octopus\", \"pacific\", \"elector-pinging\" ], \"optional\": [] }, \"mons\": [ { \"rank\": 0, \"name\": \"host01\", \"public_addrs\": { \"addrvec\": [ { \"type\": \"v2\", \"addr\": \"10.74.249.41:3300\", \"nonce\": 0 }, { \"type\": \"v1\", \"addr\": \"10.74.249.41:6789\", \"nonce\": 0 } ] }, \"addr\": \"10.74.249.41:6789/0\", \"public_addr\": \"10.74.249.41:6789/0\", \"priority\": 0, \"weight\": 0, \"crush_location\": \"{}\" }, { \"rank\": 1, \"name\": \"host02\", \"public_addrs\": { \"addrvec\": [ { \"type\": \"v2\", \"addr\": \"10.74.249.55:3300\", \"nonce\": 0 }, { \"type\": \"v1\", \"addr\": \"10.74.249.55:6789\", \"nonce\": 0 } ] }, \"addr\": \"10.74.249.55:6789/0\", \"public_addr\": \"10.74.249.55:6789/0\", \"priority\": 0, \"weight\": 0, \"crush_location\": \"{}\" }, { \"rank\": 2, \"name\": \"host03\", \"public_addrs\": { \"addrvec\": [ { \"type\": \"v2\", \"addr\": \"10.74.249.49:3300\", \"nonce\": 0 }, { \"type\": \"v1\", \"addr\": \"10.74.249.49:6789\", \"nonce\": 0 } ] }, \"addr\": \"10.74.249.49:6789/0\", \"public_addr\": \"10.74.249.49:6789/0\", \"priority\": 0, \"weight\": 0, \"crush_location\": \"{}\" } ] }, \"feature_map\": { \"mon\": [ { \"features\": \"0x3f01cfb9fffdffff\", \"release\": \"luminous\", \"num\": 1 } ], \"osd\": [ { \"features\": \"0x3f01cfb9fffdffff\", \"release\": \"luminous\", \"num\": 3 } ] }, \"stretch_mode\": false }",
"ceph daemon /var/run/ceph/ SOCKET_FILE COMMAND",
"ceph daemon /var/run/ceph/ceph-osd.0.asok status { \"cluster_fsid\": \"9029b252-1668-11ee-9399-001a4a000429\", \"osd_fsid\": \"1de9b064-b7a5-4c54-9395-02ccda637d21\", \"whoami\": 0, \"state\": \"active\", \"oldest_map\": 1, \"newest_map\": 58, \"num_pgs\": 33 }",
"ls /var/run/ceph",
"ceph osd stat",
"ceph osd dump",
"eNNNN: x osds: y up, z in",
"ceph osd tree id weight type name up/down reweight -1 3 pool default -3 3 rack mainrack -2 3 host osd-host 0 1 osd.0 up 1 1 1 osd.1 up 1 2 1 osd.2 up 1",
"systemctl start CEPH_OSD_SERVICE_ID",
"systemctl start [email protected]",
"cephadm shell",
"ceph pg dump",
"ceph pg map PG_NUM",
"ceph pg map 128",
"ceph pg stat",
"vNNNNNN: x pgs: y active+clean; z bytes data, aa MB used, bb GB / cc GB avail",
"244 active+clean+snaptrim_wait 32 active+clean+snaptrim",
"POOL_NUM . PG_ID",
"0.1f",
"ceph pg dump",
"ceph pg dump -o FILE_NAME --format=json",
"ceph pg dump -o test --format=json",
"ceph pg POOL_NUM . PG_ID query",
"ceph pg 5.fe query { \"snap_trimq\": \"[]\", \"snap_trimq_len\": 0, \"state\": \"active+clean\", \"epoch\": 2449, \"up\": [ 3, 8, 10 ], \"acting\": [ 3, 8, 10 ], \"acting_recovery_backfill\": [ \"3\", \"8\", \"10\" ], \"info\": { \"pgid\": \"5.ff\", \"last_update\": \"0'0\", \"last_complete\": \"0'0\", \"log_tail\": \"0'0\", \"last_user_version\": 0, \"last_backfill\": \"MAX\", \"purged_snaps\": [], \"history\": { \"epoch_created\": 114, \"epoch_pool_created\": 82, \"last_epoch_started\": 2402, \"last_interval_started\": 2401, \"last_epoch_clean\": 2402, \"last_interval_clean\": 2401, \"last_epoch_split\": 114, \"last_epoch_marked_full\": 0, \"same_up_since\": 2401, \"same_interval_since\": 2401, \"same_primary_since\": 2086, \"last_scrub\": \"0'0\", \"last_scrub_stamp\": \"2021-06-17T01:32:03.763988+0000\", \"last_deep_scrub\": \"0'0\", \"last_deep_scrub_stamp\": \"2021-06-17T01:32:03.763988+0000\", \"last_clean_scrub_stamp\": \"2021-06-17T01:32:03.763988+0000\", \"prior_readable_until_ub\": 0 }, \"stats\": { \"version\": \"0'0\", \"reported_seq\": \"2989\", \"reported_epoch\": \"2449\", \"state\": \"active+clean\", \"last_fresh\": \"2021-06-18T05:16:59.401080+0000\", \"last_change\": \"2021-06-17T01:32:03.764162+0000\", \"last_active\": \"2021-06-18T05:16:59.401080+0000\", .",
"pg 1.5: up=acting: [0,1,2] ADD_OSD_3 pg 1.5: up: [0,3,1] acting: [0,1,2]",
"pg 1.5: up=acting: [0,3,1]",
"ceph pg dump_stuck {inactive|unclean|stale|undersized|degraded [inactive|unclean|stale|undersized|degraded...]} {<int>}",
"ceph pg dump_stuck stale OK",
"ceph osd map POOL_NAME OBJECT_NAME",
"ceph osd map mypool myobject"
]
| https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html/administration_guide/monitoring-a-ceph-storage-cluster |
Appendix A. Reference material | Appendix A. Reference material A.1. About rule story points A.1.1. What are story points? Story points are an abstract metric commonly used in Agile software development to estimate the level of effort needed to implement a feature or change. The Migration Toolkit for Applications uses story points to express the level of effort needed to migrate particular application constructs, and the application as a whole. It does not necessarily translate to man-hours, but the value should be consistent across tasks. A.1.2. How story points are estimated in rules Estimating the level of effort for the story points for a rule can be tricky. The following are the general guidelines MTA uses when estimating the level of effort required for a rule. Level of Effort Story Points Description Information 0 An informational warning with very low or no priority for migration. Trivial 1 The migration is a trivial change or a simple library swap with no or minimal API changes. Complex 3 The changes required for the migration task are complex, but have a documented solution. Redesign 5 The migration task requires a redesign or a complete library change, with significant API changes. Rearchitecture 7 The migration requires a complete rearchitecture of the component or subsystem. Unknown 13 The migration solution is not known and may need a complete rewrite. A.1.3. Task category In addition to the level of effort, you can categorize migration tasks to indicate the severity of the task. The following categories are used to group issues to help prioritize the migration effort. Mandatory The task must be completed for a successful migration. If the changes are not made, the resulting application will not build or run successfully. Examples include replacement of proprietary APIs that are not supported in the target platform. Optional If the migration task is not completed, the application should work, but the results may not be optimal. If the change is not made at the time of migration, it is recommended to put it on the schedule soon after your migration is completed. Potential The task should be examined during the migration process, but there is not enough detailed information to determine if the task is mandatory for the migration to succeed. An example of this would be migrating a third-party proprietary type where there is no directly compatible type. Information The task is included to inform you of the existence of certain files. These may need to be examined or modified as part of the modernization effort, but changes are typically not required. A.2. Additional resources A.2.1. Additional resources MTA Jira issue tracker: https://issues.redhat.com/projects/MTA/issues MTA mailing list: [email protected] Revised on 2025-02-26 19:49:46 UTC | null | https://docs.redhat.com/en/documentation/migration_toolkit_for_applications/7.2/html/rules_development_guide/reference_material |
Chapter 1. JBoss EAP 8.0 installation methods | Chapter 1. JBoss EAP 8.0 installation methods You can install JBoss EAP 8.0 using the following methods: Archive installation JBoss EAP Installation Manager GUI installer RPM installation From JBoss EAP 8.0 onward, the JBoss EAP Installation Manager and Graphical (GUI) installer methods support both online and offline installation modes. Online installation: You can install JBoss EAP 8.x directly from an online repository. You must have access to the Red Hat repositories or their mirrors to use the online option. When using this mode, the resulting server will always be the latest available JBoss EAP 8.0 update. Offline installation: You can install JBoss EAP 8.x from a local file-system. Use the offline installation mode if you do not have access to the Red Hat repositories or their mirrors or want to install a specific JBoss EAP update. After the installation, you may need to perform an update step to use the latest version of JBoss EAP 8.x. Depending on your requirements, choose the installation method. The following table provides a brief overview of each type of installation method. Table 1.1. Installation Methods Method Description Online/offline options JBoss EAP Installation Manager You can run the installer in a terminal. You must configure JBoss EAP after the installation. Online Offline Graphical (GUI) installer You can run the installer as a graphical wizard. It provides step-by-step instructions for installing JBoss EAP. You can configure some basic options for the server instance during the installation. The installer has additional setup options, that includes installing Quickstarts and configuring the Maven repository. Online Offline Archive Installation You can download the archive file and extract the instance manually. You must configure JBoss EAP after the installation. Offline RPM Installation You can install JBoss EAP using RPM packages on supported installations of Red Hat Enterprise Linux 8 or later. Online Offline (using Red Hat Satellite) You can run JBoss EAP on the following cloud platforms. This documentation does not cover provisioning on other cloud platforms. See the related documentation. JBoss EAP on OpenShift. Additional resources JBoss EAP on OpenShift . 1.1. Understanding channels in JBoss EAP The jboss-eap-installation-manager provides a streamlined and controlled pathway to access the most recent supported versions of JBoss EAP components. These streamlined and controlled pathways are called channels. A channel consists of a curated list of component versions (called channel manifest) and a collection of repositories used to resolve and retrieve those components. Each repository has a unique name (id) and a default Maven repository URL. The jboss-eap-installation-manager allows you to manage these channels effectively in both stand-alone and managed domain configurations. | null | https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/red_hat_jboss_enterprise_application_platform_installation_methods/jboss-eap-8-installation-methods_default |
Chapter 6. (Preview) Deploying the Streams for Apache Kafka Console | Chapter 6. (Preview) Deploying the Streams for Apache Kafka Console After you have deployed a Kafka cluster that's managed by Streams for Apache Kafka, you can deploy the Streams for Apache Kafka Console and connect your cluster. The Streams for Apache Kafka Console facilitates the administration of Kafka clusters, providing real-time insights for monitoring, managing, and optimizing each cluster from its user interface. For more information on connecting to and using the Streams for Apache Kafka Console, see the console guide in the Streams for Apache Kafka documentation . Note The Streams for Apache Kafka Console is currently available as a technology preview. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/getting_started_with_streams_for_apache_kafka_on_openshift/deploy-options-console-str |
4.28. cjkuni-fonts | 4.28. cjkuni-fonts 4.28.1. RHBA-2011:0922 - cjkuni-fonts bug fix update Updated cjkuni-fonts packages that fix one bug are now available for Red Hat Enterprise Linux 6. CJK Unifonts are Unicode TrueType fonts derived from original fonts made available by Arphic Technology under the Arphic Public License and extended by the CJK Unifonts project. Bug Fix BZ# 682650 Prior to this update, when viewing the U+4190 CJK character with the AR PL UMing font and the font size 10, this character was not displayed properly. This bug has been corrected in this update so that the character is now correctly displayed as expected. All users of cjkuni-fonts are advised to upgrade to these updated packages, which fix this bug. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/cjkuni-fonts |
13.16. Storage Devices | 13.16. Storage Devices You can install Red Hat Enterprise Linux on a large variety of storage devices. You can see basic, locally accessible storage devices in the Installation Destination page, as described in Section 13.15, "Installation Destination". To add a specialized storage device, click the Add a disk button in the Specialized & Network Disks section of the screen. Figure 13.28. Storage Space Overview 13.16.1. The Storage Devices Selection Screen The storage device selection screen displays all storage devices to which the Anaconda installation program has access. The devices are grouped under the following tabs: Multipath Devices Storage devices accessible through more than one path, such as through multiple SCSI controllers or Fibre Channel ports on the same system. The installation program only detects multipath storage devices with serial numbers that are 16 or 32 characters long. Other SAN Devices Devices available on a Storage Area Network (SAN). Firmware RAID Storage devices attached to a firmware RAID controller. Figure 13.29. Tabbed Overview of Specialized Storage Devices A set of buttons is available in the bottom right corner of the screen. Use these buttons to add additional storage devices. The available buttons are: Add iSCSI Target - use to attach iSCSI devices; continue with Section 13.16.1.1.1, "Configure iSCSI Parameters" Add FCoE SAN - use to configure a Fibre Channel over Ethernet (FCoE) storage device; continue with Section 13.16.1.1.2, "Configure FCoE Parameters" The overview page also contains the Search tab that allows you to filter storage devices either by their World Wide Identifier (WWID) or by the port, target, or logical unit number (LUN) at which they are accessed. Figure 13.30. The Storage Devices Search Tab The Search tab contains the Search By drop-down menu to select searching by port, target, LUN, or WWID. Searching by WWID or LUN requires additional values in the corresponding input text fields. Click the Find button to start the search. Each device is presented on a separate row, with a check box to its left. Click the check box to make the device available during the installation process. Later in the installation process, you can choose to install Red Hat Enterprise Linux onto any of the devices selected here, and can choose to automatically mount any of the other devices selected here as part of the installed system. Note that the devices that you select here are not automatically erased by the installation process. Selecting a device on this screen does not, in itself, place data stored on the device at risk. Also note that any devices that you do not select here to form part of the installed system can be added to the system after installation by modifying the /etc/fstab file. Important Any storage devices that you do not select on this screen are hidden from Anaconda entirely. To chain load the Red Hat Enterprise Linux boot loader from a different boot loader, select all the devices presented in this screen. When you have selected the storage devices to make available during installation, click Done to return to the Installation Destination screen. 13.16.1.1. Advanced Storage Options To use an advanced storage device, you can configure an iSCSI (SCSI over TCP/IP) target or FCoE (Fibre Channel over Ethernet) SAN (Storage Area Network) by clicking the appropriate button in the lower right corner of the Installation Destination screen. See Appendix B, iSCSI Disks for an introduction to iSCSI. Figure 13.31.
Advanced Storage Options 13.16.1.1.1. Configure iSCSI Parameters When you click the Add iSCSI target... button, the Add iSCSI Storage Target dialog appears. Figure 13.32. The iSCSI Discovery Details Dialog To use iSCSI storage devices for the installation, Anaconda must be able to discover them as iSCSI targets and be able to create an iSCSI session to access them. Each of these steps might require a user name and password for CHAP (Challenge Handshake Authentication Protocol) authentication. Additionally, you can configure an iSCSI target to authenticate the iSCSI initiator on the system to which the target is attached ( reverse CHAP ), both for discovery and for the session. Used together, CHAP and reverse CHAP are called mutual CHAP or two-way CHAP . Mutual CHAP provides the greatest level of security for iSCSI connections, particularly if the user name and password are different for CHAP authentication and reverse CHAP authentication. Note Repeat the iSCSI discovery and iSCSI login steps as many times as necessary to add all required iSCSI storage. However, you cannot change the name of the iSCSI initiator after you attempt discovery for the first time. To change the iSCSI initiator name, you must restart the installation. Procedure 13.1. iSCSI Discovery and Starting an iSCSI Session Use the Add iSCSI Storage Target dialog to provide Anaconda with the information necessary to discover the iSCSI target. Enter the IP address of the iSCSI target in the Target IP Address field. Provide a name in the iSCSI Initiator Name field for the iSCSI initiator in iSCSI qualified name (IQN) format. A valid IQN entry contains: the string iqn. (note the period) a date code that specifies the year and month in which your organization's Internet domain or subdomain name was registered, represented as four digits for the year, a dash, and two digits for the month, followed by a period. For example, represent September 2010 as 2010-09. your organization's Internet domain or subdomain name, presented in reverse order with the top-level domain first. For example, represent the subdomain storage.example.com as com.example.storage a colon followed by a string that uniquely identifies this particular iSCSI initiator within your domain or subdomain. For example, :diskarrays-sn-a8675309 A complete IQN can therefore look as follows: iqn.2010-09.storage.example.com:diskarrays-sn-a8675309 . Anaconda prepopulates the iSCSI Initiator Name field with a name in this format to help you with the structure. For more information on IQNs , see 3.2.6. iSCSI Names in RFC 3720 - Internet Small Computer Systems Interface (iSCSI) available from http://tools.ietf.org/html/rfc3720#section-3.2.6 and 1. iSCSI Names and Addresses in RFC 3721 - Internet Small Computer Systems Interface (iSCSI) Naming and Discovery available from http://tools.ietf.org/html/rfc3721#section-1 . Use the Discovery Authentication Type drop-down menu to specify the type of authentication to use for iSCSI discovery. The following options are available: no credentials CHAP pair CHAP pair and a reverse pair If you selected CHAP pair as the authentication type, provide the user name and password for the iSCSI target in the CHAP Username and CHAP Password fields. If you selected CHAP pair and a reverse pair as the authentication type, provide the user name and password for the iSCSI target in the CHAP Username and CHAP Password field and the user name and password for the iSCSI initiator in the Reverse CHAP Username and Reverse CHAP Password fields. 
Optionally check the box labeled Bind targets to network interfaces. Click the Start Discovery button. Anaconda attempts to discover an iSCSI target based on the information that you provided. If discovery succeeds, the dialog displays a list of all iSCSI nodes discovered on the target. Each node is presented with a check box beside it. Click the check boxes to select the nodes to use for installation. Figure 13.33. The Dialog of Discovered iSCSI Nodes The Node login authentication type menu provides the same options as the Discovery Authentication Type menu described in step 3. However, if you needed credentials for discovery authentication, it is typical to use the same credentials to log into a discovered node. To do that, use the additional Use the credentials from discovery option from the menu. When the proper credentials have been provided, the Log In button becomes available. Click Log In to initiate an iSCSI session. 13.16.1.1.2. Configure FCoE Parameters When you click the Add FCoE SAN... button, a dialog appears for you to configure network interfaces for discovering FCoE storage devices. First, select a network interface that is connected to an FCoE switch in the NIC drop-down menu and click the Add FCoE disk(s) button to scan the network for SAN devices. Figure 13.34. Configure FCoE Parameters There are check boxes with additional options to consider: Use DCB Data Center Bridging (DCB) is a set of enhancements to the Ethernet protocols designed to increase the efficiency of Ethernet connections in storage networks and clusters. Enable or disable the installation program's awareness of DCB with the check box in this dialog. This option should only be enabled for network interfaces that require a host-based DCBX client. Configurations on interfaces that implement a hardware DCBX client should leave this check box empty. Use auto vlan Auto VLAN indicates whether VLAN discovery should be performed. If this box is checked, then the FIP (FCoE Initialization Protocol) VLAN discovery protocol will run on the Ethernet interface once the link configuration has been validated. If they are not already configured, network interfaces for any discovered FCoE VLANs will be automatically created and FCoE instances will be created on the VLAN interfaces. This option is enabled by default. Discovered FCoE devices will be displayed under the Other SAN Devices tab in the Installation Destination screen. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/installation_guide/sect-storage-devices-ppc
5.218. openjpeg | 5.218. openjpeg 5.218.1. RHSA-2012:1068 - Important: openjpeg security update Updated openjpeg packages that fix two security issues are now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having important security impact. Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. OpenJPEG is an open source library for reading and writing image files in JPEG 2000 format. Security Fixes CVE-2012-3358 An input validation flaw, leading to a heap-based buffer overflow, was found in the way OpenJPEG handled the tile number and size in an image tile header. A remote attacker could provide a specially-crafted image file that, when decoded using an application linked against OpenJPEG, would cause the application to crash or, potentially, execute arbitrary code with the privileges of the user running the application. CVE-2009-5030 OpenJPEG allocated insufficient memory when encoding JPEG 2000 files from input images that have certain color depths. A remote attacker could provide a specially-crafted image file that, when opened in an application linked against OpenJPEG (such as image_to_j2k), would cause the application to crash or, potentially, execute arbitrary code with the privileges of the user running the application. Users of OpenJPEG should upgrade to these updated packages, which contain patches to correct these issues. All running applications using OpenJPEG must be restarted for the update to take effect. 5.218.2. RHSA-2012:1283 - Important: openjpeg security update Updated openjpeg packages that fix one security issue are now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having important security impact. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available from the CVE link(s) associated with each description below. OpenJPEG is an open source library for reading and writing image files in JPEG 2000 format. Security Fix CVE-2012-3535 It was found that OpenJPEG failed to sanity-check an image header field before using it. A remote attacker could provide a specially-crafted image file that could cause an application linked against OpenJPEG to crash or, possibly, execute arbitrary code. This issue was discovered by Huzaifa Sidhpurwala of the Red Hat Security Response Team. Users of OpenJPEG should upgrade to these updated packages, which contain a patch to correct this issue. All running applications using OpenJPEG must be restarted for the update to take effect. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/openjpeg |
Chapter 2. Enhancements | Chapter 2. Enhancements The enhancements added in this release are outlined below. 2.1. Kafka enhancements For an overview of the enhancements introduced with: Kafka 2.6.2, refer to the Kafka 2.6.2 Release Notes (applies only to AMQ Streams 1.6.4) Kafka 2.6.1, refer to the Kafka 2.6.1 Release Notes (applies only to AMQ Streams 1.6.4) Kafka 2.6.0, refer to the Kafka 2.6.0 Release Notes 2.2. Kafka Bridge enhancements This release includes the following enhancements to the Kafka Bridge component of AMQ Streams. Retrieve partitions and metadata The Kafka Bridge now supports the following operations: Retrieve a list of partitions for a given topic: GET /topics/{topicname}/partitions Retrieve metadata for a given partition, such as the partition ID, the leader broker, and the number of replicas: GET /topics/{topicname}/partitions/{partitionid} See the Kafka Bridge API reference . Support for Kafka message headers Messages sent using the Kafka Bridge can now include Kafka message headers. In a POST request to the /topics endpoint, you can optionally specify headers in the message payload, which is contained in the request body. Message header values must be in binary format and encoded as Base64. Example request with Kafka message header curl -X POST \ http://localhost:8080/topics/my-topic \ -H 'content-type: application/vnd.kafka.json.v2+json' \ -d '{ "records": [ { "key": "my-key", "value": "sales-lead-0001" "partition": 2 "headers": [ { "key": "key1", "value": "QXBhY2hlIEthZmthIGlzIHRoZSBib21iIQ==" } ] }, ] }' See Requests to the Kafka Bridge . 2.3. MirrorMaker 2.0 topic renaming update The MirrorMaker 2.0 architecture supports bidirectional replication by automatically renaming remote topics to represent the source cluster. The name of the originating cluster is prepended to the name of the topic. Optionally, you can now override automatic renaming by adding IdentityReplicationPolicy to the source connector configuration. With this configuration applied, topics retain their original names. replication.policy.class= io.strimzi.kafka.connect.mirror.IdentityReplicationPolicy 1 1 Adds a policy that overrides the automatic renaming of remote topics. Instead of prepending the name with the name of the source cluster, the topic retains its original name. The override is useful, for example, in an active/passive cluster configuration where you want to make backups or migrate data to another cluster. In either situation, you might not want automatic renaming of remote topics. See Using AMQ Streams with MirrorMaker 2.0 2.4. OAuth 2.0 authentication and authorization This release includes the following enhancements to OAuth 2.0 token-based authentication and authorization. Session re-authentication OAuth 2.0 authentication in AMQ Streams now supports session re-authentication for Kafka brokers. This defines the maximum duration of an authenticated OAuth 2.0 session between a Kafka client and a Kafka broker. Session re-authentication is supported for both types of token validation: fast local JWT and introspection endpoint. You configure session re-authentication in the OAuth 2.0 configuration for Kafka brokers, in the server.properties file. To apply to all listeners, set the connections.max.reauth.ms property in milliseconds. To apply to a specific listener, set the listener.name. LISTENER-NAME .oauthbearer.connections.max.reauth.ms property in milliseconds. LISTENER-NAME is the case-insensitive name of the listener. 
An authenticated session is closed if it exceeds the configured maximum session re-authentication time, or if the access token expiry time is reached. Then, the client must log in to the authorization server again, obtain a new access token, and then re-authenticate to the Kafka broker. This will establish a new authenticated session over the existing connection. When re-authentication is required, any operation that is attempted by the client (apart from re-authentication) will cause the broker to terminate the connection. Example listener configuration for session re-authentication after 6 minutes sasl.enabled.mechanisms=OAUTHBEARER listeners=CLIENT://0.0.0.0:9092 # ... listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \ oauth.valid.issuer.uri="https:// AUTH-SERVER-ADDRESS " \ oauth.jwks.endpoint.uri="https:// AUTH-SERVER-ADDRESS /jwks" \ oauth.username.claim="preferred_username" \ oauth.client.id="kafka-broker" \ oauth.client.secret="kafka-secret" \ oauth.token.endpoint.uri="https:// AUTH-SERVER-ADDRESS /token" ; listener.name.client.oauthbearer.sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler listener.name.client.oauthbearer.connections.max.reauth.ms=3600000 See: Session re-authentication for Kafka brokers and Configuring OAuth 2.0 support for Kafka brokers . JWKS keys refresh interval When configuring Kafka brokers to use fast local JWT token validation, you can now set the oauth.jwks.refresh.min.pause.seconds option in the listener configuration (in the server.properties file). This defines the minimum interval between attempts by the broker to refresh JSON Web Key Set (JWKS) public keys issued by the authorization server. With this release, if the Kafka broker detects an unknown signing key, it attempts to refresh JWKS keys immediately and ignores the regular refresh schedule. Example configuration for a 2-minute pause between attempts to refresh JWKS keys listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required \ oauth.valid.issuer.uri="https://AUTH-SERVER-ADDRESS" \ oauth.jwks.endpoint.uri="https://AUTH-SERVER-ADDRESS/jwks" \ oauth.jwks.refresh.seconds="300" \ oauth.jwks.refresh.min.pause.seconds="120" \ # ... oauth.ssl.truststore.type="PKCS12" ; The refresh schedule for JWKS keys is set in the oauth.jwks.refresh.seconds option. When an unknown signing key is encountered, a JWKS keys refresh is scheduled outside of the refresh schedule. The refresh will not start until the time since the last refresh reaches the interval specified in oauth.jwks.refresh.min.pause.seconds . The default value is 1 . See Configuring OAuth 2.0 support for Kafka brokers . Refreshing grants from Red Hat Single Sign-On New configuration options have been added for OAuth 2.0 token-based authorization through Red Hat Single Sign-On. When configuring Kafka brokers, you can now define the following options related to refreshing grants from Red Hat SSO Authorization Services: strimzi.authorization.grants.refresh.period.seconds : The time between two consecutive grants refresh runs. The default value is 60 . If set to 0 or less, refreshing of grants is disabled. strimzi.authorization.grants.refresh.pool.size : The number of threads that can fetch grants for the active session in parallel. The default value is 5 . 
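For example, a Kafka broker that already uses the Red Hat Single Sign-On authorizer might add the grants refresh options to its server.properties file. This is a minimal sketch that assumes the rest of the Keycloak authorizer configuration, such as the token endpoint and client credentials, is already in place; the values shown are illustrative only:
authorizer.class.name=io.strimzi.kafka.oauth.server.authorizer.KeycloakRBACAuthorizer
strimzi.authorization.grants.refresh.period.seconds=120
strimzi.authorization.grants.refresh.pool.size=10
With this configuration, grants for active sessions are refreshed every two minutes using up to ten threads in parallel.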
See Using OAuth 2.0 token-based authorization and Configuring OAuth 2.0 authorization support Detection of permission changes in Red Hat Single Sign-On With this release, the KeycloakRBACAuthorizer (Red Hat SSO) authorization regularly checks for changes in permissions for the active sessions. Central user and permissions management changes are now detected in real time. 2.5. Deprecation of ZooKeeper option in Kafka administrative tools The --zookeeper option was deprecated in the following Kafka administrative tools: bin/kafka-configs.sh bin/kafka-leader-election.sh bin/kafka-topics.sh When using these tools, you should now use the --bootstrap-server option to specify the Kafka broker to connect to. For example: /bin/kafka-topics.sh --bootstrap-server localhost:9092 --list Although the --zookeeper option still works, it will be removed from all the administrative tools in a future Kafka release. This is part of ongoing work in the Apache Kafka project to remove Kafka's dependency on ZooKeeper. The Using AMQ Streams on RHEL guide has been updated to use the --bootstrap-server option in several procedures. | [
"GET /topics/{topicname}/partitions",
"GET /topics/{topicname}/partitions/{partitionid}",
"curl -X POST http://localhost:8080/topics/my-topic -H 'content-type: application/vnd.kafka.json.v2+json' -d '{ \"records\": [ { \"key\": \"my-key\", \"value\": \"sales-lead-0001\" \"partition\": 2 \"headers\": [ { \"key\": \"key1\", \"value\": \"QXBhY2hlIEthZmthIGlzIHRoZSBib21iIQ==\" } ] }, ] }'",
"replication.policy.class= io.strimzi.kafka.connect.mirror.IdentityReplicationPolicy 1",
"sasl.enabled.mechanisms=OAUTHBEARER listeners=CLIENT://0.0.0.0:9092 listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required oauth.valid.issuer.uri=\"https:// AUTH-SERVER-ADDRESS \" oauth.jwks.endpoint.uri=\"https:// AUTH-SERVER-ADDRESS /jwks\" oauth.username.claim=\"preferred_username\" oauth.client.id=\"kafka-broker\" oauth.client.secret=\"kafka-secret\" oauth.token.endpoint.uri=\"https:// AUTH-SERVER-ADDRESS /token\" ; listener.name.client.oauthbearer.sasl.login.callback.handler.class=io.strimzi.kafka.oauth.client.JaasClientOauthLoginCallbackHandler listener.name.client.oauthbearer.connections.max.reauth.ms=3600000",
"listener.name.client.oauthbearer.sasl.jaas.config=org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required oauth.valid.issuer.uri=\"https://AUTH-SERVER-ADDRESS\" oauth.jwks.endpoint.uri=\"https://AUTH-SERVER-ADDRESS/jwks\" oauth.jwks.refresh.seconds=\"300\" oauth.jwks.refresh.min.pause.seconds=\"120\" # oauth.ssl.truststore.type=\"PKCS12\" ;",
"/bin/kafka-topics.sh --bootstrap-server localhost:9092 --list"
]
| https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/release_notes_for_amq_streams_1.6_on_rhel/enhancements-str |
Release notes | Release notes OpenShift Container Platform 4.13 Highlights of what is new and what has changed with this OpenShift Container Platform release Red Hat OpenShift Documentation Team | [
"Networking control plane is degraded. Networking configuration updates applied to the cluster will not be implemented while there is no OVN Kubernetes leader. Existing workloads should continue to have connectivity. OVN-Kubernetes control plane is not functional.",
"Netlink messages dropped by OVS vSwitch daemon due to netlink socket buffer overflow. This will result in packet loss.",
"Netlink messages dropped by OVS kernel module due to netlink socket buffer overflow. This will result in packet loss.",
"oc describe packagemanifests <operator_name> -n <catalog_namespace>",
"oc get packagemanifests <operator_name> -n <catalog_namespace> -o <output_format>",
"## Snippet to remove unauthenticated group from all the cluster role bindings for clusterrolebinding in cluster-status-binding discovery system:basic-user system:discovery system:openshift:discovery ; do ### Find the index of unauthenticated group in list of subjects index=USD(oc get clusterrolebinding USD{clusterrolebinding} -o json | jq 'select(.subjects!=null) | .subjects | map(.name==\"system:unauthenticated\") | index(true)'); ### Remove the element at index from subjects array patch clusterrolebinding USD{clusterrolebinding} --type=json --patch \"[{'op': 'remove','path': '/subjects/USDindex'}]\"; done",
"deviceType: netdevice nicSelector: deviceID: \"101d\" pfNames: - ens7f0 - ens7f0np0 vendor: '15b3' nodeSelector: feature.node.kubernetes.io/sriov-capable: 'true' numVfs: 4 #",
"2023-01-15T19:26:33.017221334+00:00 stdout F phc2sys[359186.957]: [ptp4l.0.config] nothing to synchronize",
"mount: /var/lib/kubelet/plugins/kubernetes.io/local-volume/mounts/local-pv-bc42d358: mount(2) system call failed: Structure needs cleaning.",
"oc adm taint nodes <node_name> node.cloudprovider.kubernetes.io/uninitialized:NoSchedule-",
"cd ~/clusterconfigs/openshift vim openshift-worker-0.yaml",
"apiVersion: metal3.io/v1alpha1 kind: BareMetalHost metadata: annotations: bmac.agent-install.openshift.io/installer-args: '[\"--append-karg\", \"ip=<static_ip>::<gateway>:<netmask>:<hostname_1>:<interface>:none\", \"--save-partindex\", \"1\", \"-n\"]' 1 2 3 4 5 inspect.metal3.io: disabled bmac.agent-install.openshift.io/hostname: <fqdn> 6 bmac.agent-install.openshift.io/role: <role> 7 generation: 1 name: openshift-worker-0 namespace: mynamespace spec: automatedCleaningMode: disabled bmc: address: idrac-virtualmedia://<bmc_ip>/redfish/v1/Systems/System.Embedded.1 8 credentialsName: bmc-secret-openshift-worker-0 disableCertificateVerification: true bootMACAddress: 94:6D:AE:AB:EE:E8 bootMode: \"UEFI\" rootDeviceHints: deviceName: /dev/sda",
"curl https://api.openshift.com/api/assisted-install/v2/infra-envs/USD{infra_env_id}/hosts/USD{host_id}/installer-args -X PATCH -H \"Authorization: Bearer USD{API_TOKEN}\" -H \"Content-Type: application/json\" -d ' { \"args\": [ \"--append-karg\", \"ip=<static_ip>::<gateway>:<netmask>:<hostname_1>:<interface>:none\", 1 2 3 4 5 \"--save-partindex\", \"1\", \"-n\" ] } ' | jq",
"oc adm release info 4.13.0 --pullspecs",
"oc adm release info 4.13.1 --pullspecs",
"oc adm release info 4.13.2 --pullspecs",
"oc adm release info 4.13.3 --pullspecs",
"oc adm release info 4.13.4 --pullspecs",
"oc adm release info 4.13.5 --pullspecs",
"oc adm release info 4.13.6 --pullspecs",
"oc adm release info 4.13.8 --pullspecs",
"oc adm release info 4.13.9 --pullspecs",
"oc adm release info 4.13.10 --pullspecs",
"oc adm release info 4.13.11 --pullspecs",
"oc adm release info 4.13.12 --pullspecs",
"oc adm release info 4.13.13 --pullspecs",
"oc adm release info 4.13.14 --pullspecs",
"oc adm release info 4.13.15 --pullspecs",
"oc adm release info 4.13.17 --pullspecs",
"oc adm release info 4.13.18 --pullspecs",
"oc adm release info 4.13.19 --pullspecs",
"oc adm release info 4.13.21 --pullspecs",
"oc adm release info 4.13.22 --pullspecs",
"oc adm release info 4.13.23 --pullspecs",
"oc adm release info 4.13.24 --pullspecs",
"oc adm release info 4.13.25 --pullspecs",
"oc adm release info 4.13.26 --pullspecs",
"oc adm release info 4.13.27 --pullspecs",
"oc adm release info 4.13.28 --pullspecs",
"oc adm release info 4.13.29 --pullspecs",
"oc adm release info 4.13.30 --pullspecs",
"oc adm release info 4.13.31 --pullspecs",
"oc adm release info 4.13.32 --pullspecs",
"oc adm release info 4.13.33 --pullspecs",
"oc adm release info 4.13.34 --pullspecs",
"oc adm release info 4.13.35 --pullspecs",
"oc adm release info 4.13.36 --pullspecs",
"oc adm release info 4.13.37 --pullspecs",
"oc adm release info 4.13.38 --pullspecs",
"oc adm release info 4.13.39 --pullspecs",
"oc adm release info 4.13.40 --pullspecs",
"oc adm release info 4.13.41 --pullspecs",
"oc adm release info 4.13.42 --pullspecs",
"oc patch console.operator.openshift.io/cluster --type='merge' -p='{\"status\":{\"conditions\":null}}'",
"oc adm release info 4.13.43 --pullspecs",
"oc adm release info 4.13.44 --pullspecs",
"oc adm release info 4.13.45 --pullspecs",
"oc adm release info 4.13.46 --pullspecs",
"oc adm release info 4.13.48 --pullspecs",
"oc adm release info 4.13.49 --pullspecs",
"oc adm release info 4.13.50 --pullspecs",
"oc adm release info 4.13.51 --pullspecs",
"oc adm release info 4.13.52 --pullspecs",
"oc adm release info 4.13.53 --pullspecs",
"oc adm release info 4.13.55 --pullspecs"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html-single/release_notes/index |
Chapter 296. XML Security DataFormat | Chapter 296. XML Security DataFormat Available as of Camel version 2.0 The XMLSecurity Data Format facilitates encryption and decryption of XML payloads at the Document, Element, and Element Content levels (including simultaneous multi-node encryption/decryption using XPath). To sign messages using the XML Signature specification, please see the Camel XML Security component. The encryption capability is based on formats supported using the Apache XML Security (Santuario) project. Symmetric encryption/decryption is currently supported using Triple-DES and AES (128, 192, and 256) encryption formats. Additional formats can be easily added later as needed. This capability allows Camel users to encrypt/decrypt payloads while they are being dispatched or received along a route. Available as of Camel 2.9 The XMLSecurity Data Format supports asymmetric key encryption. In this encryption model a symmetric key is generated and used to perform XML content encryption or decryption. This "content encryption key" is then itself encrypted using an asymmetric encryption algorithm that leverages the recipient's public key as the "key encryption key". Use of an asymmetric key encryption algorithm ensures that only the holder of the recipient's private key can access the generated symmetric encryption key. Thus, only the private key holder can decode the message. The XMLSecurity Data Format handles all of the logic required to encrypt and decrypt the message content and encryption key(s) using asymmetric key encryption. The XMLSecurity Data Format also has improved support for namespaces when processing the XPath queries that select content for encryption. A namespace definition mapping can be included as part of the data format configuration. This enables true namespace matching, even if the prefix values in the XPath query and the target XML document are not equivalent strings. 296.1. XMLSecurity Options The XML Security data format supports 13 options, which are listed below. Name Default Java Type Description xmlCipherAlgorithm TRIPLEDES String The cipher algorithm to be used for encryption/decryption of the XML message content. The available choices are: XMLCipher.TRIPLEDES XMLCipher.AES_128 XMLCipher.AES_128_GCM XMLCipher.AES_192 XMLCipher.AES_192_GCM XMLCipher.AES_256 XMLCipher.AES_256_GCM XMLCipher.SEED_128 XMLCipher.CAMELLIA_128 XMLCipher.CAMELLIA_192 XMLCipher.CAMELLIA_256 The default value is XMLCipher.TRIPLEDES passPhrase String A String used as passPhrase to encrypt/decrypt content. The passPhrase has to be provided. If no passPhrase is specified, a default passPhrase is used. The passPhrase needs to be put together in conjunction with the appropriate encryption algorithm. For example, using TRIPLEDES the passPhrase can be a 24-byte key such as Only another 24 Byte key passPhraseByte byte[] A byte array used as passPhrase to encrypt/decrypt content. The passPhrase has to be provided. If no passPhrase is specified, a default passPhrase is used. The passPhrase needs to be put together in conjunction with the appropriate encryption algorithm. For example, using TRIPLEDES the passPhrase can be a 24-byte key such as Only another 24 Byte key secureTag String The XPath reference to the XML Element selected for encryption/decryption. If no tag is specified, the entire payload is encrypted/decrypted.
secureTagContents false Boolean A boolean value to specify whether the XML Element is to be encrypted or the contents of the XML Element false = Element Level true = Element Content Level keyCipherAlgorithm RSA_OAEP String The cipher algorithm to be used for encryption/decryption of the asymmetric key. The available choices are: XMLCipher.RSA_v1dot5 XMLCipher.RSA_OAEP XMLCipher.RSA_OAEP_11 The default value is XMLCipher.RSA_OAEP recipientKeyAlias String The key alias to be used when retrieving the recipient's public or private key from a KeyStore when performing asymmetric key encryption or decryption. keyOrTrustStoreParametersId String Refers to a KeyStore instance to lookup in the registry, which is used for configuration options for creating and loading a KeyStore instance that represents the sender's trustStore or recipient's keyStore. keyPassword String The password to be used for retrieving the private key from the KeyStore. This key is used for asymmetric decryption. digestAlgorithm SHA1 String The digest algorithm to use with the RSA OAEP algorithm. The available choices are: XMLCipher.SHA1 XMLCipher.SHA256 XMLCipher.SHA512 The default value is XMLCipher.SHA1 mgfAlgorithm MGF1_SHA1 String The MGF Algorithm to use with the RSA OAEP algorithm. The available choices are: EncryptionConstants.MGF1_SHA1 EncryptionConstants.MGF1_SHA256 EncryptionConstants.MGF1_SHA512 The default value is EncryptionConstants.MGF1_SHA1 addKeyValueForEncryptedKey true Boolean Whether to add the public key used to encrypt the session key as a KeyValue in the EncryptedKey structure or not. contentTypeHeader false Boolean Whether the data format should set the Content-Type header with the type from the data format if the data format is capable of doing so. For example application/xml for data formats marshalling to XML, or application/json for data formats marshalling to JSon etc. 296.1.1. Key Cipher Algorithm As of Camel 2.12.0, the default Key Cipher Algorithm is now XMLCipher.RSA_OAEP instead of XMLCipher.RSA_v1dot5. Usage of XMLCipher.RSA_v1dot5 is discouraged due to various attacks. Requests that use RSA v1.5 as the key cipher algorithm will be rejected unless it has been explicitly configured as the key cipher algorithm. 296.2. Marshal In order to encrypt the payload, the marshal processor needs to be applied on the route followed by the secureXML() tag. 296.3. Unmarshal In order to decrypt the payload, the unmarshal processor needs to be applied on the route followed by the secureXML() tag. 296.4. Examples Given below are several examples of how marshalling could be performed at the Document, Element, and Content levels. 296.4.1. Full Payload encryption/decryption from("direct:start") .marshal().secureXML() .unmarshal().secureXML() .to("direct:end"); 296.4.2. Partial Payload Content Only encryption/decryption String tagXPATH = "//cheesesites/italy/cheese"; boolean secureTagContent = true; ... from("direct:start") .marshal().secureXML(tagXPATH, secureTagContent) .unmarshal().secureXML(tagXPATH, secureTagContent) .to("direct:end"); 296.4.3. Partial Multi Node Payload Content Only encryption/decryption String tagXPATH = "//cheesesites/*/cheese"; boolean secureTagContent = true; ... from("direct:start") .marshal().secureXML(tagXPATH, secureTagContent) .unmarshal().secureXML(tagXPATH, secureTagContent) .to("direct:end"); 296.4.4. Partial Payload Content Only encryption/decryption with choice of passPhrase(password) String tagXPATH = "//cheesesites/italy/cheese"; boolean secureTagContent = true; ... 
String passPhrase = "Just another 24 Byte key"; from("direct:start") .marshal().secureXML(tagXPATH, secureTagContent, passPhrase) .unmarshal().secureXML(tagXPATH, secureTagContent, passPhrase) .to("direct:end"); 296.4.5. Partial Payload Content Only encryption/decryption with passPhrase(password) and Algorithm import org.apache.xml.security.encryption.XMLCipher; .... String tagXPATH = "//cheesesites/italy/cheese"; boolean secureTagContent = true; String passPhrase = "Just another 24 Byte key"; String algorithm= XMLCipher.TRIPLEDES; from("direct:start") .marshal().secureXML(tagXPATH, secureTagContent, passPhrase, algorithm) .unmarshal().secureXML(tagXPATH, secureTagContent, passPhrase, algorithm) .to("direct:end"); 296.4.6. Partial Payload Content with Namespace support Java DSL final Map<String, String> namespaces = new HashMap<String, String>(); namespaces.put("cust", "http://cheese.xmlsecurity.camel.apache.org/"); final KeyStoreParameters tsParameters = new KeyStoreParameters(); tsParameters.setPassword("password"); tsParameters.setResource("sender.ts"); context.addRoutes(new RouteBuilder() { public void configure() { from("direct:start") .marshal().secureXML("//cust:cheesesites/italy", namespaces, true, "recipient", testCypherAlgorithm, XMLCipher.RSA_v1dot5, tsParameters) .to("mock:encrypted"); } } Spring XML A namespace prefix that is defined as part of the camelContext definition can be re-used in context within the data format secureTag attribute of the secureXML element. <camelContext id="springXmlSecurityDataFormatTestCamelContext" xmlns="http://camel.apache.org/schema/spring" xmlns:cheese="http://cheese.xmlsecurity.camel.apache.org/"> <route> <from uri="direct://start"/> <marshal> <secureXML secureTag="//cheese:cheesesites/italy" secureTagContents="true"/> </marshal> ... 296.4.7. Asymmetric Key Encryption Spring XML Sender <!-- trust store configuration --> <camel:keyStoreParameters id="trustStoreParams" resource="./sender.ts" password="password"/> <camelContext id="springXmlSecurityDataFormatTestCamelContext" xmlns="http://camel.apache.org/schema/spring" xmlns:cheese="http://cheese.xmlsecurity.camel.apache.org/"> <route> <from uri="direct://start"/> <marshal> <secureXML secureTag="//cheese:cheesesites/italy" secureTagContents="true" xmlCipherAlgorithm="http://www.w3.org/2001/04/xmlenc#aes128-cbc" keyCipherAlgorithm="http://www.w3.org/2001/04/xmlenc#rsa-1_5" recipientKeyAlias="recipient" keyOrTrustStoreParametersId="trustStoreParams"/> </marshal> ... Spring XML Recipient <!-- key store configuration --> <camel:keyStoreParameters id="keyStoreParams" resource="./recipient.ks" password="password" /> <camelContext id="springXmlSecurityDataFormatTestCamelContext" xmlns="http://camel.apache.org/schema/spring" xmlns:cheese="http://cheese.xmlsecurity.camel.apache.org/"> <route> <from uri="direct://encrypted"/> <unmarshal> <secureXML secureTag="//cheese:cheesesites/italy" secureTagContents="true" xmlCipherAlgorithm="http://www.w3.org/2001/04/xmlenc#aes128-cbc" keyCipherAlgorithm="http://www.w3.org/2001/04/xmlenc#rsa-1_5" recipientKeyAlias="recipient" keyOrTrustStoreParametersId="keyStoreParams" keyPassword="privateKeyPassword" /> </unmarshal> ... 296.5. Dependencies This data format is provided within the camel-xmlsecurity component. | [
"from(\"direct:start\") .marshal().secureXML() .unmarshal().secureXML() .to(\"direct:end\");",
"String tagXPATH = \"//cheesesites/italy/cheese\"; boolean secureTagContent = true; from(\"direct:start\") .marshal().secureXML(tagXPATH, secureTagContent) .unmarshal().secureXML(tagXPATH, secureTagContent) .to(\"direct:end\");",
"String tagXPATH = \"//cheesesites/*/cheese\"; boolean secureTagContent = true; from(\"direct:start\") .marshal().secureXML(tagXPATH, secureTagContent) .unmarshal().secureXML(tagXPATH, secureTagContent) .to(\"direct:end\");",
"String tagXPATH = \"//cheesesites/italy/cheese\"; boolean secureTagContent = true; String passPhrase = \"Just another 24 Byte key\"; from(\"direct:start\") .marshal().secureXML(tagXPATH, secureTagContent, passPhrase) .unmarshal().secureXML(tagXPATH, secureTagContent, passPhrase) .to(\"direct:end\");",
"import org.apache.xml.security.encryption.XMLCipher; . String tagXPATH = \"//cheesesites/italy/cheese\"; boolean secureTagContent = true; String passPhrase = \"Just another 24 Byte key\"; String algorithm= XMLCipher.TRIPLEDES; from(\"direct:start\") .marshal().secureXML(tagXPATH, secureTagContent, passPhrase, algorithm) .unmarshal().secureXML(tagXPATH, secureTagContent, passPhrase, algorithm) .to(\"direct:end\");",
"final Map<String, String> namespaces = new HashMap<String, String>(); namespaces.put(\"cust\", \"http://cheese.xmlsecurity.camel.apache.org/\"); final KeyStoreParameters tsParameters = new KeyStoreParameters(); tsParameters.setPassword(\"password\"); tsParameters.setResource(\"sender.ts\"); context.addRoutes(new RouteBuilder() { public void configure() { from(\"direct:start\") .marshal().secureXML(\"//cust:cheesesites/italy\", namespaces, true, \"recipient\", testCypherAlgorithm, XMLCipher.RSA_v1dot5, tsParameters) .to(\"mock:encrypted\"); } }",
"<camelContext id=\"springXmlSecurityDataFormatTestCamelContext\" xmlns=\"http://camel.apache.org/schema/spring\" xmlns:cheese=\"http://cheese.xmlsecurity.camel.apache.org/\"> <route> <from uri=\"direct://start\"/> <marshal> <secureXML secureTag=\"//cheese:cheesesites/italy\" secureTagContents=\"true\"/> </marshal>",
"<!-- trust store configuration --> <camel:keyStoreParameters id=\"trustStoreParams\" resource=\"./sender.ts\" password=\"password\"/> <camelContext id=\"springXmlSecurityDataFormatTestCamelContext\" xmlns=\"http://camel.apache.org/schema/spring\" xmlns:cheese=\"http://cheese.xmlsecurity.camel.apache.org/\"> <route> <from uri=\"direct://start\"/> <marshal> <secureXML secureTag=\"//cheese:cheesesites/italy\" secureTagContents=\"true\" xmlCipherAlgorithm=\"http://www.w3.org/2001/04/xmlenc#aes128-cbc\" keyCipherAlgorithm=\"http://www.w3.org/2001/04/xmlenc#rsa-1_5\" recipientKeyAlias=\"recipient\" keyOrTrustStoreParametersId=\"trustStoreParams\"/> </marshal>",
"<!-- key store configuration --> <camel:keyStoreParameters id=\"keyStoreParams\" resource=\"./recipient.ks\" password=\"password\" /> <camelContext id=\"springXmlSecurityDataFormatTestCamelContext\" xmlns=\"http://camel.apache.org/schema/spring\" xmlns:cheese=\"http://cheese.xmlsecurity.camel.apache.org/\"> <route> <from uri=\"direct://encrypted\"/> <unmarshal> <secureXML secureTag=\"//cheese:cheesesites/italy\" secureTagContents=\"true\" xmlCipherAlgorithm=\"http://www.w3.org/2001/04/xmlenc#aes128-cbc\" keyCipherAlgorithm=\"http://www.w3.org/2001/04/xmlenc#rsa-1_5\" recipientKeyAlias=\"recipient\" keyOrTrustStoreParametersId=\"keyStoreParams\" keyPassword=\"privateKeyPassword\" /> </unmarshal>"
]
| https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/securexml-dataformat |
Chapter 3. Recovering a Red Hat Ansible Automation Platform deployment | Chapter 3. Recovering a Red Hat Ansible Automation Platform deployment If you lose information on your system or experience issues with an upgrade, you can use the backup resources of your deployment instances. Use the following procedures to recover your Ansible Automation Platform deployment files. 3.1. Recovering your Ansible Automation Platform deployment Ansible Automation Platform manages any enabled components (such as automation controller, automation hub, and Event-Driven Ansible); when you recover Ansible Automation Platform, you also restore these components. In previous versions of the Ansible Automation Platform Operator, it was necessary to create a restore object for each component of the platform. Now, you create a single AnsibleAutomationPlatformRestore resource, which creates and manages the other restore objects: AutomationControllerRestore AutomationHubRestore EDARestore Prerequisites You must be authenticated with an OpenShift cluster. You have installed the Ansible Automation Platform Operator on the cluster. The AnsibleAutomationPlatformBackups deployment is available in your cluster. Procedure Log in to Red Hat OpenShift Container Platform. Navigate to Operators Installed Operators . Select your Ansible Automation Platform Operator deployment. Go to your All Instances tab, and click Create New . Select Ansible Automation Platform Restore from the list. For Name enter the name for the recovery deployment. For New Ansible Automation Platform Name enter the new name for your Ansible Automation Platform instance. Backup Source defaults to CR . For Backup name enter the name you chose when creating the backup. Click Create . Your backup starts restoring under the AnsibleAutomationPlatformRestores tab. Note The recovery is not complete until all the resources are successfully restored. Depending on the size of your database this can take some time. Verification To verify that your recovery was successful you can: Go to Workloads Pods . Confirm that all pods are in a Running or Completed state. 3.2. Recovering the Automation controller deployment Use this procedure to restore a controller deployment from an AutomationControllerBackup. The deployment name you provide will be the name of the new AutomationController custom resource that will be created. Note The name specified for the new AutomationController custom resource must not match an existing deployment. If the backup custom resource being restored is a backup of a currently running AutomationController custom resource the recovery process will fail. See Troubleshooting for steps to resolve the issue. Prerequisites You must be authenticated with an OpenShift cluster. You have deployed automation controller on the cluster. An AutomationControllerBackup is available on a PVC in your cluster. Procedure Log in to Red Hat OpenShift Container Platform. Navigate to Operators Installed Operators . Select your Ansible Automation Platform Operator deployment. Select the Automation Controller Restore tab. Click Create AutomationControllerRestore . Enter a Name for the recovery deployment. Enter a New Deployment name for the restored deployment. Note This must be different from the original deployment name. Select the Backup source to restore from . Backup CR is recommended. Enter the Backup Name of the AutomationControllerBackup object. Click Create . A new deployment is created and your backup is restored to it.
This can take approximately 5 to 15 minutes depending on the size of your database. Verification Log in to Red Hat OpenShift Container Platform. Navigate to Operators Installed Operators . Select your Ansible Automation Platform Operator deployment. Select the AutomationControllerRestore tab. Select the restore resource you want to verify. Scroll to Conditions and check that the Successful status is True . Note If Successful is False , the recovery has failed. Check the automation controller operator logs for the error to fix the issue. 3.3. Using YAML to recover the Automation controller deployment See the following procedure for how to restore a deployment of the automation controller using YAML. Prerequisite The external database must be a PostgreSQL database that is the version supported by the current release of Ansible Automation Platform. Note Ansible Automation Platform 2.5 supports PostgreSQL 15. Procedure The external postgres instance credentials and connection information must be stored in a secret, which is then set on the automation controller spec. Create an external-postgres-configuration-secret YAML file, following the template below: apiVersion: v1 kind: Secret metadata: name: external-restore-postgres-configuration namespace: <target_namespace> 1 stringData: host: "<external_ip_or_url_resolvable_by_the_cluster>" 2 port: "<external_port>" 3 database: "<desired_database_name>" username: "<username_to_connect_as>" password: "<password_to_connect_with>" 4 sslmode: "prefer" 5 type: "unmanaged" type: Opaque 1 Namespace to create the secret in. This should be the same namespace you want to deploy to. 2 The resolvable hostname for your database node. 3 External port defaults to 5432 . 4 Value for variable password should not contain single or double quotes (', ") or backslashes (\) to avoid any issues during deployment, backup or restoration. 5 The variable sslmode is valid for external databases only. The allowed values are: prefer , disable , allow , require , verify-ca , and verify-full . Apply external-postgres-configuration-secret.yml to your cluster using the oc create command. $ oc create -f external-postgres-configuration-secret.yml When creating your AutomationControllerRestore custom resource object, specify the secret on your spec, following the example below: kind: AutomationControllerRestore apiVersion: automationcontroller.ansible.com/v1beta1 metadata: namespace: my-namespace name: AutomationControllerRestore-2024-07-15 spec: deployment_name: restored_controller backup_name: AutomationControllerBackup-2024-07-15 postgres_configuration_secret: 'external-restore-postgres-configuration' 3.4. Recovering the Automation hub deployment Use this procedure to restore a hub deployment into the namespace. The deployment name you provide will be the name of the new AutomationHub custom resource that will be created. Note The name specified for the new AutomationHub custom resource must not match an existing deployment or the recovery process will fail. Prerequisites You must be authenticated with an OpenShift cluster. You have deployed automation hub on the cluster. An AutomationHubBackup is available on a PVC in your cluster. Procedure Log in to Red Hat OpenShift Container Platform. Navigate to Operators Installed Operators . Select your Ansible Automation Platform Operator deployment. Select the Automation Hub Restore tab. Click Create AutomationHubRestore . Enter a Name for the recovery deployment. Select the Backup source to restore from. Backup CR is recommended.
Enter the Backup Name of the AutomationHubBackup object. Click Create . This creates a new deployment and restores your backup to it. | [
"apiVersion: v1 kind: Secret metadata: name: external-restore-postgres-configuration namespace: <target_namespace> 1 stringData: host: \"<external_ip_or_url_resolvable_by_the_cluster>\" 2 port: \"<external_port>\" 3 database: \"<desired_database_name>\" username: \"<username_to_connect_as>\" password: \"<password_to_connect_with>\" 4 sslmode: \"prefer\" 5 type: \"unmanaged\" type: Opaque",
"oc create -f external-postgres-configuration-secret.yml",
"kind: AutomationControllerRestore apiVersion: automationcontroller.ansible.com/v1beta1 metadata: namespace: my-namespace name: AutomationControllerRestore-2024-07-15 spec: deployment_name: restored_controller backup_name: AutomationControllerBackup-2024-07-15 postgres_configuration_secret: 'external-restore-postgres-configuration'"
]
| https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/backup_and_recovery_for_operator_environments/aap-recovery |
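A note on working with the YAML-based controller restore shown above from the command line: after saving the AutomationControllerRestore manifest to a file, it can be created and monitored with standard oc commands. This is only a sketch; the file name restore.yaml and the namespace my-namespace are placeholders for your own values, not names taken from the product documentation.

oc create -f restore.yaml
oc get automationcontrollerrestore -n my-namespace
oc get pods -n my-namespace --watch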
function::cmdline_str | function::cmdline_str Name function::cmdline_str - Fetch all command line arguments from current process Synopsis Arguments None Description Returns all arguments from the current process delimited by spaces. Returns the empty string when the arguments cannot be retrieved. | [
"cmdline_str:string()"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-cmdline-str |
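As a minimal illustration of how this tapset function can be used, the following SystemTap one-liner prints the executable name and full command line of every process that issues an openat() system call. It is only a sketch and assumes the kernel debuginfo packages required by SystemTap are installed on the host.

stap -e 'probe syscall.openat { printf("%s: %s\n", execname(), cmdline_str()) }'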
Chapter 15. Responding to violations | Chapter 15. Responding to violations Using Red Hat Advanced Cluster Security for Kubernetes (RHACS) you can view policy violations, navigate to the actual cause of the violation, and take corrective actions. RHACS's built-in policies identify a variety of security findings, including vulnerabilities (CVEs), violations of DevOps best practices, high-risk build and deployment practices, and suspicious runtime behaviors. Whether you use the default out-of-box security policies or use your own custom policies, RHACS reports a violation when an enabled policy fails. 15.1. Namespace conditions for platform components By understanding the namespace conditions for platform components, you can identify and manage the namespaces that fall under OpenShift Container Platform, layered products, and third party partners in your environment. Table 15.1. Namespace conditions for platform components Platform component Namespace condition OpenShift Container Platform Namespace starts with openshift- Namespace starts with kube- Layered products namespace = stackrox Namespace starts with rhacs-operator Namespace starts with open-cluster-management namespace = multicluster-engine namespace = aap namespace = hive Third party partners namespace = nvidia-gpu-operator Red Hat Advanced Cluster Security for Kubernetes (RHACS) identifies the workloads belonging to platform components by using the following regex pattern: ^kube-.*|^openshift-.*|^stackrox$|^rhacs-operator$|^open-cluster-management$|^multicluster-engine$|^aap$|^hive$|^nvidia-gpu-operator$ 15.2. Analyzing all violations By viewing the Violations page, you can analyze all violations and take corrective action. Procedure In the RHACS portal, click Violations . To navigate through the different types of violations, do any of the following tasks: To view the list of active violations, click the Active tab. To view the list of resolved violations, click the Resolved tab. To view the list of violations that RHACS attempted to resolve, click the Attempted tab. Choose the appropriate method to view the violations from the drop-down list, which is in the upper left of the page: To display the findings for application workloads, select Applications view . To display the findings for platform components in OpenShift Container Platform, select Platform view . To display the findings for application workloads and platform components simultaneously, select Full view . Optional: Choose the appropriate method to re-organize the information in the Violations page: To sort the violations in ascending or descending order, select a column heading. To filter violations, use the filter bar. To see more details about the violation, select a violation in the Violations page. Optional: Choose the appropriate method to exclude a deployment from the policy: If you selected a single deployment, click the overflow menu and then select Exclude deployment from policy . If you selected multiple deployments, from the Row actions drop-down list, select Exclude deployments from policy . 15.3. Violations page overview The Violations page shows a list of violations and organizes information into the following groups: Policy : The name of the violated policy. Entity : The entity where the violation occurred. Type : The type of entity. For workload violations, the type indicates the workload type, for example, Deployment , Pod , DaemonSet , and so on.
For other resource violations, the type indicates the resource type, for example, Secrets , ConfigMaps , ClusterRoles , and so on. Enforced : Indicates if the policy was enforced when the violation occurred. Severity : The severity of the violation. The following values are associated with the severity of the violation: Low Medium High Critical Categories : The category of the violated policy. To view the policy categories: In the RHACS portal, click the Platform Configuration Policy Management Policy categories tab. Lifecycle : The lifecycle stages to which the policy applies. The following values are associated with the lifecycle stages: Build Deploy Runtime Time : The date and time when the violation occurred. 15.4. Viewing violation details When you select a violation in the Violations view, a window opens with more information about the violation. It provides detailed information grouped by multiple tabs. 15.4.1. Violation tab The Violation tab of the Violation Details panel explains how the policy was violated. If the policy targets deploy-phase attributes, you can view the specific values that violated the policies, such as violation names. If the policy targets runtime activity, you can view detailed information about the process that violated the policy, including its arguments and the ancestor processes that created it. 15.4.2. Deployment tab The Deployment tab of the Details panel displays details of the deployment to which the violation applies. Overview section The Deployment overview section lists the following information: Deployment ID : The alphanumeric identifier for the deployment. Deployment name : The name of the deployment. Deployment type : The type of the deployment. Cluster : The name of the cluster where the container is deployed. Namespace : The unique identifier for the deployed cluster. Replicas : The number of the replicated deployments. Created : The time and date when the deployment was created. Updated : The time and date when the deployment was updated. Labels : The labels that apply to the selected deployment. Annotations : The annotations that apply to the selected deployment. Service Account : The name of the service account for the selected deployment. Container configuration section The Container configuration section lists the following information: containers : For each container, provides the following information: Image name : The name of the image for the selected deployment. Click the name to view more information about the image. Resources : This section provides information for the following fields: CPU request (cores) : The number of cores requested by the container. CPU limit (cores) : The maximum number of cores that can be requested by the container. Memory request (MB) : The memory size requested by the container. Memory limit (MB) : The maximum memory that can be requested by the container. volumes : Volumes mounted in the container, if any. secrets : Secrets associated with the selected deployment. For each secret, provides information for the following fields: Name : Name of the secret. Container path : Location where the secret is stored. Name : The name of the location where the service will be mounted. Source : The data source path. Destination : The path where the data is stored. Type : The type of the volume. 
Port configuration section The Port configuration section provides information about the ports in the deployment, including the following fields: ports : All ports exposed by the deployment and any Kubernetes services associated with this deployment and port if they exist. For each port, the following fields are listed: containerPort : The port number exposed by the deployment. protocol : Protocol, such as, TCP or UDP, that is used by the port. exposure : Exposure method of the service, for example, load balancer or node port. exposureInfo : This section provides information for the following fields: level : Indicates if the service exposing the port internally or externally. serviceName : Name of the Kubernetes service. serviceID : ID of the Kubernetes service as stored in RHACS. serviceClusterIp : The IP address that another deployment or service within the cluster can use to reach the service. This is not the external IP address. servicePort : The port used by the service. nodePort : The port on the node where external traffic comes into the node. externalIps : The IP addresses that can be used to access the service externally, from outside the cluster, if any exist. This field is not available for an internal service. Security context section The Security context section lists whether the container is running as a privileged container. Privileged : true if it is privileged . false if it is not privileged . Network policy section The Network policy section lists the namespace and all network policies in the namespace containing the violation. Click on a network policy name to view the full YAML file of the network policy. 15.4.3. Policy tab The Policy tab of the Details panel displays details of the policy that caused the violation. Policy overview section The Policy overview section lists the following information: Severity : A ranking of the policy (critical, high, medium, or low) for the amount of attention required. Categories : The policy category of the policy. Policy categories are listed in Platform Configuration Policy Management in the Policy categories tab. Type : Whether the policy is user generated (policies created by a user) or a system policy (policies built into RHACS by default). Description : A detailed explanation of what the policy alert is about. Rationale : Information about the reasoning behind the establishment of the policy and why it matters. Guidance : Suggestions on how to address the violation. MITRE ATT&CK : Indicates if there are MITRE tactics and techniques that apply to this policy. Policy behavior The Policy behavior section provides the following information: Lifecycle Stage : Lifecycle stages that the policy belongs to, Build , Deploy , or Runtime . Event source : This field is only applicable if the lifecycle stage is Runtime . It can be one of the following: Deployment : RHACS triggers policy violations when event sources include process and network activity, pod execution, and pod port forwarding. Audit logs : RHACS triggers policy violations when event sources match Kubernetes audit log records. Response : The response can be one of the following: Inform : Policy violations generate a violation in the violations list. Inform and enforce : The violation is enforced. Enforcement : If the response is set to Inform and enforce , lists the type of enforcement that is set for the following stages: Build : RHACS fails your continuous integration (CI) builds when images match the criteria of the policy. 
Deploy : For the Deploy stage, RHACS blocks the creation and update of deployments that match the conditions of the policy if the RHACS admission controller is configured and running. In clusters with admission controller enforcement, the Kubernetes or OpenShift Container Platform API server blocks all noncompliant deployments. In other clusters, RHACS edits noncompliant deployments to prevent pods from being scheduled. For existing deployments, policy changes only result in enforcement at the detection of the criteria, when a Kubernetes event occurs. For more information about enforcement, see "Security policy enforcement for the deploy stage". Runtime : RHACS deletes all pods when an event in the pods matches the criteria of the policy. Policy criteria section The Policy criteria section lists the policy criteria for the policy. 15.4.3.1. Security policy enforcement for the deploy stage Red Hat Advanced Cluster Security for Kubernetes supports two forms of security policy enforcement for deploy-time policies: hard enforcement through the admission controller and soft enforcement by RHACS Sensor. The admission controller blocks creation or updating of deployments that violate policy. If the admission controller is disabled or unavailable, Sensor can perform enforcement by scaling down replicas for deployments that violate policy to 0 . Warning Policy enforcement can impact running applications or development processes. Before you enable enforcement options, inform all stakeholders and plan how to respond to the automated enforcement actions. 15.4.3.1.1. Hard enforcement Hard enforcement is performed by the RHACS admission controller. In clusters with admission controller enforcement, the Kubernetes or OpenShift Container Platform API server blocks all noncompliant deployments. The admission controller blocks CREATE and UPDATE operations. Any pod create or update request that satisfies a policy configured with deploy-time enforcement enabled will fail. Note Kubernetes admission webhooks support only CREATE , UPDATE , DELETE , or CONNECT operations. The RHACS admission controller supports only CREATE and UPDATE operations. Operations such as kubectl patch , kubectl set , and kubectl scale are PATCH operations, not UPDATE operations. Because PATCH operations are not supported in Kubernetes, RHACS cannot perform enforcement on PATCH operations. For blocking enforcement, you must enable the following settings for the cluster in RHACS: Enforce on Object Creates : This toggle in the Dynamic Configuration section controls the behavior of the admission control service. You must have the Configure Admission Controller Webhook to listen on Object Creates toggle in the Static Configuration section turned on for this to work. Enforce on Object Updates : This toggle in the Dynamic Configuration section controls the behavior of the admission control service. You must have the Configure Admission Controller Webhook to listen on Object Updates toggle in the Static Configuration section turned on for this to work. If you make changes to settings in the Static Configuration setting, you must redeploy the secured cluster for those changes to take effect. 15.4.3.1.2. Soft enforcement Soft enforcement is performed by RHACS Sensor. This enforcement prevents an operation from being initiated. With soft enforcement, Sensor scales the replicas to 0, and prevents pods from being scheduled. In this enforcement, a non-ready deployment is available in the cluster. 
If soft enforcement is configured, and Sensor is down, then RHACS cannot perform enforcement. 15.4.3.1.3. Namespace exclusions By default, RHACS excludes certain administrative namespaces, such as the stackrox , kube-system , and istio-system namespaces, from enforcement blocking. The reason for this is that some items in these namespaces must be deployed for RHACS to work correctly. 15.4.3.1.4. Enforcement on existing deployments For existing deployments, policy changes only result in enforcement at the detection of the criteria, when a Kubernetes event occurs. If you make changes to a policy, you must reassess policies by selecting Policy Management and clicking Reassess All . This action applies deploy policies on all existing deployments regardless of whether there are any new incoming Kubernetes events. If a policy is violated, then RHACS performs enforcement. Additional resources Using admission controller enforcement 15.4.4. Network policies tab The Network policies section lists the network policies associated with a namespace. | [
"^kube-.*|^openshift-.*|^stackroxUSD|^rhacs-operatorUSD|^open-cluster-managementUSD|^multicluster-engineUSD|^aapUSD|^hiveUSD|^nvidia-gpu-operatorUSD"
]
| https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.6/html/operating/respond-to-violations |
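To see which namespaces in a given cluster would be treated as platform components under the pattern shown at the start of this chapter, the same regular expression can be applied with grep. The oc client is assumed here; kubectl works identically.

oc get namespaces -o custom-columns=NAME:.metadata.name --no-headers | grep -E '^kube-.*|^openshift-.*|^stackrox$|^rhacs-operator$|^open-cluster-management$|^multicluster-engine$|^aap$|^hive$|^nvidia-gpu-operator$'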
Chapter 5. Fencing: Configuring STONITH | Chapter 5. Fencing: Configuring STONITH STONITH is an acronym for "Shoot The Other Node In The Head" and it protects your data from being corrupted by rogue nodes or concurrent access. Just because a node is unresponsive does not mean it is not accessing your data. The only way to be 100% sure that your data is safe is to fence the node using STONITH so we can be certain that the node is truly offline, before allowing the data to be accessed from another node. STONITH also has a role to play in the event that a clustered service cannot be stopped. In this case, the cluster uses STONITH to force the whole node offline, thereby making it safe to start the service elsewhere. For more complete general information on fencing and its importance in a Red Hat High Availability cluster, see Fencing in a Red Hat High Availability Cluster . 5.1. Available STONITH (Fencing) Agents Use the following command to view a list of all available STONITH agents. If you specify a filter, this command displays only the STONITH agents that match that filter. | [
"pcs stonith list [ filter ]"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/high_availability_add-on_reference/ch-fencing-HAAR |
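As an example of filtering, the following commands narrow the listing to IPMI-based agents and then display the parameters accepted by one of them. The fence_ipmilan agent is used purely as an illustration; which agents appear depends on the fence-agents packages installed on the cluster nodes.

pcs stonith list fence_ipmi
pcs stonith describe fence_ipmilan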
Chapter 3. Distribution of content in RHEL 8 | Chapter 3. Distribution of content in RHEL 8 3.1. Installation Red Hat Enterprise Linux 8 is installed using ISO images. Two types of ISO image are available for the AMD64, Intel 64-bit, 64-bit ARM, IBM Power Systems, and IBM Z architectures: Binary DVD ISO: A full installation image that contains the BaseOS and AppStream repositories and allows you to complete the installation without additional repositories. Note The Installation ISO image is in multiple GB size, and as a result, it might not fit on optical media formats. A USB key or USB hard drive is recommended when using the Installation ISO image to create bootable installation media. You can also use the Image Builder tool to create customized RHEL images. For more information about Image Builder, see the Composing a customized RHEL system image document. Boot ISO: A minimal boot ISO image that is used to boot into the installation program. This option requires access to the BaseOS and AppStream repositories to install software packages. The repositories are part of the Binary DVD ISO image. See the Interactively installing RHEL from installation media document for instructions on downloading ISO images, creating installation media, and completing a RHEL installation. For automated Kickstart installations and other advanced topics, see the Automatically installing RHEL document. 3.2. Repositories Red Hat Enterprise Linux 8 is distributed through two main repositories: BaseOS AppStream Both repositories are required for a basic RHEL installation, and are available with all RHEL subscriptions. Content in the BaseOS repository is intended to provide the core set of the underlying OS functionality that provides the foundation for all installations. This content is available in the RPM format and is subject to support terms similar to those in releases of RHEL. For a list of packages distributed through BaseOS, see the Package manifest . Content in the Application Stream repository includes additional user space applications, runtime languages, and databases in support of the varied workloads and use cases. Application Streams are available in the familiar RPM format, as an extension to the RPM format called modules , or as Software Collections. For a list of packages available in AppStream, see the Package manifest . In addition, the CodeReady Linux Builder repository is available with all RHEL subscriptions. It provides additional packages for use by developers. Packages included in the CodeReady Linux Builder repository are unsupported. For more information about RHEL 8 repositories, see the Package manifest . 3.3. Application Streams Red Hat Enterprise Linux 8 introduces the concept of Application Streams. Multiple versions of user space components are now delivered and updated more frequently than the core operating system packages. This provides greater flexibility to customize Red Hat Enterprise Linux without impacting the underlying stability of the platform or specific deployments. Components made available as Application Streams can be packaged as modules or RPM packages and are delivered through the AppStream repository in RHEL 8. Each Application Stream component has a given life cycle, either the same as RHEL 8 or shorter. For details, see Red Hat Enterprise Linux Life Cycle . Modules are collections of packages representing a logical unit: an application, a language stack, a database, or a set of tools. These packages are built, tested, and released together. 
Module streams represent versions of the Application Stream components. For example, several streams (versions) of the PostgreSQL database server are available in the postgresql module with the default postgresql:10 stream. Only one module stream can be installed on the system. Different versions can be used in separate containers. Detailed module commands are described in the Installing, managing, and removing user-space components document. For a list of modules available in AppStream, see the Package manifest . 3.4. Package management with YUM/DNF On Red Hat Enterprise Linux 8, installing software is ensured by the YUM tool, which is based on the DNF technology. We deliberately adhere to usage of the yum term for consistency with previous major versions of RHEL. However, if you type dnf instead of yum , the command works as expected because yum is an alias to dnf for compatibility. For more details, see the following documentation: Installing, managing, and removing user-space components Considerations in adopting RHEL 8 | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/8.8_release_notes/distribution-of-content-in-rhel-8
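To make the module workflow concrete, the postgresql module mentioned above can be inspected and installed with the usual yum module subcommands; the streams actually offered depend on the repositories enabled on the system.

yum module list postgresql
yum module install postgresql:10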
16.7. PAM and Device Ownership | 16.7. PAM and Device Ownership Red Hat Enterprise Linux grants the first user to log in on the physical console of the machine the ability to manipulate some devices and perform some tasks normally reserved for the root user. This is controlled by a PAM module called pam_console.so . 16.7.1. Device Ownership When a user logs into a Red Hat Enterprise Linux system, the pam_console.so module is called by login or the graphical login programs, gdm and kdm . If this user is the first user to log in at the physical console - called the console user - the module grants the user ownership of a variety of devices normally owned by root. The console user owns these devices until the last local session for that user ends. Once the user has logged out, ownership of the devices reverts to the root user. The devices affected include, but are not limited to, sound cards, diskette drives, and CD-ROM drives. This allows a local user to manipulate these devices without attaining root access, thus simplifying common tasks for the console user. By modifying the file /etc/security/console.perms , the administrator can edit the list of devices controlled by pam_console.so . Warning If the gdm , kdm , or xdm display manager configuration file has been altered to allow remote users to log in and the host is configured to run at runlevel 5, it is advisable to change the <console> and <xconsole> directives within the /etc/security/console.perms to the following values: Doing this prevents remote users from gaining access to devices and restricted applications on the machine. If the gdm , kdm , or xdm display manager configuration file has been altered to allow remote users to log in and the host is configured to run at any multiple user runlevel other than 5, it is advisable to remove the <xconsole> directive entirely and change the <console> directive to the following value: | [
"<console>=tty[0-9][0-9]* vc/[0-9][0-9]* :0\\.[0-9] :0 <xconsole>=:0\\.[0-9] :0",
"<console>=tty[0-9][0-9]* vc/[0-9][0-9]*"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s1-pam-console |
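Before editing /etc/security/console.perms as described above, the current <console> and <xconsole> definitions can be listed with a quick grep so that only the intended directives are changed:

grep -E '^<x?console>' /etc/security/console.perms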
Chapter 9. Tuning SSSD performance for large IdM-AD trust deployments | Chapter 9. Tuning SSSD performance for large IdM-AD trust deployments Retrieving user and group information is a very data-intensive operation for the System Security Services Daemon (SSSD), especially in an IdM deployment with a trust to a large Active Directory (AD) domain. You can improve this performance by adjusting which information SSSD retrieves from identity providers and for how long. 9.1. Tuning SSSD in IdM servers for large IdM-AD trust deployments This procedure applies tuning options to the configuration of the SSSD service in an IdM server to improve its response time when retrieving information from a large AD environment. Prerequisites You need root permissions to edit the /etc/sssd/sssd.conf configuration file. Procedure Open the /etc/sssd/sssd.conf configuration file in a text editor. Add the following options to the [domain] section for your Identity Management (IdM) domain: Note Settings listed in the subdomain_inherit options apply to both the main (IdM) domain as well as the trusted AD domain(s). Save and close the /etc/sssd/sssd.conf file on the server. Restart the SSSD service to load the configuration changes. Additional resources Options for tuning SSSD in IdM servers and clients for large IdM-AD trust deployments 9.2. Tuning the config timeout for the ipa-extdom plugin on IdM servers IdM clients cannot receive information about users and groups from Active Directory (AD) directly, so IdM servers use the ipa-extdom plugin to receive information about AD users and groups, and that information is forwarded to the requesting client. The ipa-extdom plug-in sends a request to SSSD for the data about AD users. If the information is not in the SSSD cache, SSSD requests the data from an AD domain controller (DC). You can adjust the config timeout value, which defines how long the ipa-extdom plug-in waits for a reply from SSSD before the plug-in cancels the connection and returns a timeout error to the caller. The default value is 10000 milliseconds (10 seconds). The following example adjusts the config timeout to 20 seconds (20000 milliseconds). Warning Exercise caution when adjusting the config timeout: If you set a value that is too small, such as 500 milliseconds, SSSD might not have enough time to reply and requests will always return a timeout. If you set a value that is too large, such as 30000 milliseconds (30 seconds), a single request might block the connection to SSSD for this amount of time. Because only one thread can connect to SSSD at a time, all other requests from the plug-in have to wait. If there are many requests sent by IdM clients, they can block all available workers configured for the Directory Server on the IdM server. As a consequence, the server might not be able to reply to any kind of request for some time. Only change the config timeout in the following situations: If IdM clients frequently receive timeout errors before their own search timeout is reached when requesting information about AD users and groups, the config timeout value is too small . If the Directory Server on the IdM server is often locked and the pstack utility reports that many or all worker threads are handling ipa-extdom requests at this time, the value is too large . Prerequisites The LDAP Directory Manager password Procedure Use the following command to adjust the config timeout to 20000 milliseconds: 9.3. 
Tuning the maximum buffer size for the ipa-extdom plugin on IdM servers IdM clients cannot receive information about users and groups from Active Directory (AD) directly, so IdM servers use the ipa-extdom plugin to receive information about AD users and groups, and that information is forwarded to the requesting client. You can tune the maximum buffer size for the ipa-extdom plugin, which adjusts the size of the buffer where SSSD can store the data it receives. If the buffer is too small, SSSD returns an ERANGE error and the plug-in retries the request with a larger buffer. The default buffer size is 134217728 bytes (128 MB). The following example adjusts the maximum buffer size to 256 MB (268435456 bytes). Prerequisites The LDAP Directory Manager password Procedure Use the following command to set the maximum buffer size to 268435456 bytes: 9.4. Tuning the maximum number of instances for the ipa-extdom plugin on IdM servers As IdM clients cannot receive information about users and groups from Active Directory (AD) directly, IdM servers use the ipa-extdom plugin to receive information about AD users and groups and then forward this information to the requesting client. By default, the ipa-extdom plugin is configured to use up to 80% of the LDAP worker threads to handle requests from IdM clients. If the SSSD service on an IdM client has requested a large amount of information about AD trust users and groups, this operation can halt the LDAP service if it uses most of the LDAP threads. If you experience these issues, you might see similar errors in the SSSD log file for your AD domain, /var/log/sssd/sssd__your-ad-domain-name.com_.log : You can adjust the maximum number of ipa-extdom instances by setting the value for the ipaExtdomMaxInstances option, which must be an integer larger than 0 and less than the total number of worker threads. Prerequisites The LDAP Directory Manager password Procedure Retrieve the total number of worker threads: This means that the current value for ipaExtdomMaxInstances is 13. Adjust the maximum number of instances. This example changes the value to 14: Retrieve the current value of ipaExtdomMaxInstances : Monitor the IdM directory server's performance and if it does not improve, repeat this procedure and adjust the value of the ipaExtdomMaxInstances variable. 9.5. Tuning SSSD in IdM clients for large IdM-AD trust deployments This procedure applies tuning options to SSSD service configuration in an IdM client to improve its response time when retrieving information from a large AD environment. Prerequisites You need root permissions to edit the /etc/sssd/sssd.conf configuration file. Procedure Determine the number of seconds a single un-cached login takes. Clear the SSSD cache on the IdM client client.example.com . Measure how long it takes to log in as an AD user with the time command. In this example, from the IdM client client.example.com , log into the same host as the user ad-user from the ad.example.com AD domain. Type in the password as soon as possible. Log out as soon as possible to display elapsed time. In this example, a single un-cached login takes about 9 seconds. Open the /etc/sssd/sssd.conf configuration file in a text editor. Add the following options to the [domain] section for your Active Directory domain. Set the pam_id_timeout and krb5_auth_timeout options to the number of seconds an un-cached login takes. If you do not already have a domain section for your AD domain, create one. 
Add the following option to the [pam] section: Save and close the /etc/sssd/sssd.conf file on the server. Restart the SSSD service to load the configuration changes. Additional resources Options for tuning SSSD in IdM servers and clients for large IdM-AD trust deployments 9.6. Mounting the SSSD cache in tmpfs The System Security Services Daemon (SSSD) constantly writes LDAP objects to its cache. These internal SSSD transactions write data to disk, which is much slower than reading and writing from Random-Access Memory (RAM). To improve this performance, mount the SSSD cache in RAM. Considerations Cached information does not persist after a reboot if the SSSD cache is in RAM. It is safe to perform this change on IdM servers, as the SSSD instance on an IdM server cannot lose connectivity with the Directory Server on the same host. If you perform this adjustment on an IdM client and it loses connectivity to IdM servers, users will not be able to authenticate after a reboot until you reestablish connectivity. Prerequisites You need root permissions to edit the /etc/fstab configuration file. Procedure Create a tmpfs temporary filesystem: Confirm that the SSSD user owns the config.ldb file: Add the following entry to the /etc/fstab file as a single line: This example creates a 300MB cache. Tune the size parameter according to your IdM and AD directory size, estimating 100 MBs per 10,000 LDAP entries. Mount the new SSSD cache directory. Restart SSSD to reflect this configuration change. 9.7. Options in sssd.conf for tuning IdM servers and clients for large IdM-AD trust deployments You can use the following options in the /etc/sssd/sssd.conf configuration file to tune the performance of SSSD in IdM servers and clients when you have a large IdM-AD trust deployment. 9.7.1. Tuning options for IdM servers ignore_group_members Knowing which groups a user belongs to, as opposed to all the users that belong to a group, is important when authenticating and authorizing a user. When ignore_group_members is set to true , SSSD only retrieves information about the group objects themselves and not their members, providing a significant performance boost. Note The id ad-user@ad.example.com command still returns the correct list of groups, but getent group ad-group@ad.example.com returns an empty list. Default value false Recommended value true Note You should not set this option to true when the deployment involves an IdM server with the compat tree. subdomain_inherit With the subdomain_inherit option, you can apply the ignore_group_members setting to the trusted AD domains' configuration. Settings listed in the subdomain_inherit options apply to both the main (IdM) domain as well as the AD subdomain. Default value none Recommended value subdomain_inherit = ignore_group_members 9.7.2. Tuning options for IdM clients pam_id_timeout This parameter controls how long results from a PAM session are cached, to avoid excessive round-trips to the identity provider during an identity lookup. The default value of 5 seconds might not be enough in environments where complex group memberships are populated on the IdM Server and IdM client side. Red Hat recommends setting pam_id_timeout to the number of seconds a single un-cached login takes. Default value 5 Recommended value the number of seconds a single un-cached login takes krb5_auth_timeout Increasing krb5_auth_timeout allows more time to process complex group information in environments where users are members of a large number of groups.
Red Hat recommends setting this value to the number of seconds a single un-cached login takes. Default value 6 Recommended value the number of seconds a single un-cached login takes ldap_deref_threshold A dereference lookup is a means of fetching all group members in a single LDAP call. The ldap_deref_threshold value specifies the number of group members that must be missing from the internal cache to trigger a dereference lookup. If less members are missing, they are looked up individually. Dereference lookups may take a long time in large environments and decrease performance. To disable dereference lookups, set this option to 0 . Default value 10 Recommended value 0 9.8. Additional resources Performance tuning SSSD for large IdM-AD trust deployments | [
"[domain/ idm.example.com ] ignore_group_members = true subdomain_inherit = ignore_group_members",
"systemctl restart sssd",
"ldapmodify -D \"cn=directory manager\" -W dn: cn=ipa_extdom_extop,cn=plugins,cn=config changetype: modify replace: ipaExtdomMaxNssTimeout ipaExtdomMaxNssTimeout: 20000",
"ldapmodify -D \"cn=directory manager\" -W dn: cn=ipa_extdom_extop,cn=plugins,cn=config changetype: modify replace: ipaExtdomMaxNssBufSize ipaExtdomMaxNssBufSize: 268435456",
"(2022-05-22 5:00:13): [be[ad.example.com]] [ipa_s2n_get_user_done] (0x0040): s2n exop request failed. (2022-05-22 5:00:13): [be[ad.example.com]] [ipa_s2n_get_user_done] (0x0040): s2n exop request failed. (2022-05-22 5:00:13): [be[ad.example.com]] [ipa_s2n_exop_done] (0x0040): ldap_extended_operation result: Server is busy(51), Too many extdom instances running.",
"ldapsearch -xLLLD cn=directory\\ manager -W -b cn=config -s base nsslapd-threadnumber Enter LDAP Password: dn: cn=config nsslapd-threadnumber: 16",
"ldapmodify -D \"cn=directory manager\" -W dn: cn=ipa_extdom_extop,cn=plugins,cn=config changetype: modify replace: ipaExtdomMaxInstances ipaExtdomMaxInstances: 14",
"ldapsearch -xLLLD \"cn=directory manager\" -W -b \"cn=ipa_extdom_extop,cn=plugins,cn=config\" |grep ipaextdommaxinstances Enter LDAP Password: ipaextdommaxinstances: 14",
"sss_cache -E",
"time ssh ad-user @ ad.example.com @client.example.com",
"Password: Last login: Sat Jan 23 06:29:54 2021 from 10.0.2.15 [[email protected]@client ~]USD",
"[[email protected]@client /]USD exit logout Connection to client.example.com closed. real 0m8.755s user 0m0.017s sys 0m0.013s",
"[domain/ example.com / ad.example.com ] krb5_auth_timeout = 9 ldap_deref_threshold = 0",
"[pam] pam_id_timeout = 9",
"systemctl restart sssd",
"ls -al /var/lib/sss/db/config.ldb -rw-------. 1 sssd sssd 1286144 Jun 8 16:41 /var/lib/sss/db/config.ldb",
"tmpfs /var/lib/sss/db/ tmpfs size= 300M ,mode=0700, uid=sssd,gid=sssd ,rootcontext=system_u:object_r:sssd_var_lib_t:s0 0 0",
"mount /var/lib/sss/db/",
"systemctl restart sssd"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/tuning_performance_in_identity_management/assembly_tuning-sssd-performance-for-large-idm-ad-trust-deployments_tuning-performance-in-idm |
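After applying the ldapmodify changes from this chapter, the plug-in configuration entry can be read back to confirm the new values, using the same style of search the chapter uses for ipaExtdomMaxInstances:

ldapsearch -xLLLD "cn=directory manager" -W -b "cn=ipa_extdom_extop,cn=plugins,cn=config" ipaExtdomMaxNssTimeout ipaExtdomMaxNssBufSize ipaExtdomMaxInstances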
34.5. Configuring Locations | 34.5. Configuring Locations A location is a set of maps, which are all stored in auto.master , and a location can store multiple maps. The location entry only works as a container for map entries; it is not an automount configuration in and of itself. Important Identity Management does not set up or configure autofs. That must be done separately. Identity Management works with an existing autofs deployment. 34.5.1. Configuring Locations through the Web UI Click the Policy tab. Click the Automount subtab. Click the Add link at the top of the list of automount locations. Enter the name for the new location. Click the Add and Edit button to go to the map configuration for the new location. Create maps, as described in Section 34.6.1.1, "Configuring Direct Maps from the Web UI" and Section 34.6.2.1, "Configuring Indirect Maps from the Web UI" . 34.5.2. Configuring Locations through the Command Line To create a location, use the automountlocation-add command and give the location name. For example: When a new location is created, two maps are automatically created for it, auto.master and auto.direct . auto.master is the root map for all automount maps for the location. auto.direct is the default map for direct mounts and is mounted on /- . To view all of the maps configured for a location as if they were deployed on a filesystem, use the automountlocation-tofiles command: | [
"ipa automountlocation-add location",
"ipa automountlocation-add raleigh ---------------------------------- Added automount location \"raleigh\" ---------------------------------- Location: raleigh",
"ipa automountlocation-tofiles raleigh /etc/auto.master: /- /etc/auto.direct --------------------------- /etc/auto.direct:"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/adding-locations |
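The command-line steps above can be combined into a short helper script. This is a sketch, assuming an IdM admin Kerberos ticket is already in place and defaulting to the location name used in the example:

#!/usr/bin/env bash
# Sketch: create an automount location and review the maps IdM generates for it.
set -euo pipefail

location=${1:-raleigh}   # location name; defaults to the example's value

ipa automountlocation-add "$location"       # also creates auto.master and auto.direct
ipa automountlocation-show "$location"      # confirm the location entry exists
ipa automountlocation-tofiles "$location"   # render the maps as they would appear on a filesystem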
Examples | Examples Red Hat Service Interconnect 1.8 Service network tutorials with the CLI and YAML | [
"sudo dnf install skupper-cli",
"export KUBECONFIG=~/.kube/config-west Enter your provider-specific login command create namespace west config set-context --current --namespace west",
"export KUBECONFIG=~/.kube/config-east Enter your provider-specific login command create namespace east config set-context --current --namespace east",
"create deployment frontend --image quay.io/skupper/hello-world-frontend",
"create deployment backend --image quay.io/skupper/hello-world-backend --replicas 3",
"skupper init skupper status",
"skupper init Waiting for LoadBalancer IP or hostname Waiting for status Skupper is now installed in namespace 'west'. Use 'skupper status' to get more information. skupper status Skupper is enabled for namespace \"west\". It is not connected to any other sites. It has no exposed services.",
"skupper init skupper status",
"skupper init Waiting for LoadBalancer IP or hostname Waiting for status Skupper is now installed in namespace 'east'. Use 'skupper status' to get more information. skupper status Skupper is enabled for namespace \"east\". It is not connected to any other sites. It has no exposed services.",
"skupper token create ~/secret.token",
"skupper token create ~/secret.token Token written to ~/secret.token",
"skupper link create ~/secret.token",
"skupper link create ~/secret.token Site configured to link to https://10.105.193.154:8081/ed9c37f6-d78a-11ec-a8c7-04421a4c5042 (name=link1) Check the status of the link using 'skupper link status'.",
"skupper expose deployment/backend --port 8080",
"skupper expose deployment/backend --port 8080 deployment backend exposed as backend",
"port-forward deployment/frontend 8080:8080",
"sudo dnf install skupper-cli",
"export KUBECONFIG=~/.kube/config-public Enter your provider-specific login command create namespace public config set-context --current --namespace public",
"export KUBECONFIG=~/.kube/config-private Enter your provider-specific login command create namespace private config set-context --current --namespace private",
"apply -f server",
"kubectl apply -f server deployment.apps/broker created",
"skupper init skupper status",
"skupper init Waiting for LoadBalancer IP or hostname Waiting for status Skupper is now installed in namespace 'public'. Use 'skupper status' to get more information. skupper status Skupper is enabled for namespace \"public\". It is not connected to any other sites. It has no exposed services.",
"skupper init skupper status",
"skupper init Waiting for LoadBalancer IP or hostname Waiting for status Skupper is now installed in namespace 'private'. Use 'skupper status' to get more information. skupper status Skupper is enabled for namespace \"private\". It is not connected to any other sites. It has no exposed services.",
"skupper token create ~/secret.token",
"skupper token create ~/secret.token Token written to ~/secret.token",
"skupper link create ~/secret.token",
"skupper link create ~/secret.token Site configured to link to https://10.105.193.154:8081/ed9c37f6-d78a-11ec-a8c7-04421a4c5042 (name=link1) Check the status of the link using 'skupper link status'.",
"skupper expose deployment/broker --port 5672",
"skupper expose deployment/broker --port 5672 deployment broker exposed as broker",
"get service/broker",
"kubectl get service/broker NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE broker ClusterIP 10.100.58.95 <none> 5672/TCP 2s",
"run client --attach --rm --restart Never --image quay.io/skupper/activemq-example-client --env SERVER=broker",
"kubectl run client --attach --rm --restart Never --image quay.io/skupper/activemq-example-client --env SERVER=broker ____ __ _____ ___ __ ____ ____ --/ __ \\/ / / / _ | / _ \\/ //_/ / / / / -/ /_/ / /_/ / __ |/ , / ,< / // /\\ --\\___\\_\\____/_/ |_/_/|_/_/|_|\\____/_/ 2022-05-27 11:19:07,149 INFO [io.sma.rea.mes.amqp] (main) SRMSG16201: AMQP broker configured to broker:5672 for channel incoming-messages 2022-05-27 11:19:07,170 INFO [io.sma.rea.mes.amqp] (main) SRMSG16201: AMQP broker configured to broker:5672 for channel outgoing-messages 2022-05-27 11:19:07,198 INFO [io.sma.rea.mes.amqp] (main) SRMSG16212: Establishing connection with AMQP broker 2022-05-27 11:19:07,212 INFO [io.sma.rea.mes.amqp] (main) SRMSG16212: Establishing connection with AMQP broker 2022-05-27 11:19:07,215 INFO [io.quarkus] (main) client 1.0.0-SNAPSHOT on JVM (powered by Quarkus 2.9.2.Final) started in 0.397s. 2022-05-27 11:19:07,215 INFO [io.quarkus] (main) Profile prod activated. 2022-05-27 11:19:07,215 INFO [io.quarkus] (main) Installed features: [cdi, smallrye-context-propagation, smallrye-reactive-messaging, smallrye-reactive-messaging-amqp, vertx] Sent message 1 Sent message 2 Sent message 3 Sent message 4 Sent message 5 Sent message 6 Sent message 7 Sent message 8 Sent message 9 Sent message 10 2022-05-27 11:19:07,434 INFO [io.sma.rea.mes.amqp] (vert.x-eventloop-thread-0) SRMSG16213: Connection with AMQP broker established 2022-05-27 11:19:07,442 INFO [io.sma.rea.mes.amqp] (vert.x-eventloop-thread-0) SRMSG16213: Connection with AMQP broker established 2022-05-27 11:19:07,468 INFO [io.sma.rea.mes.amqp] (vert.x-eventloop-thread-0) SRMSG16203: AMQP Receiver listening address notifications Received message 1 Received message 2 Received message 3 Received message 4 Received message 5 Received message 6 Received message 7 Received message 8 Received message 9 Received message 10 Result: OK",
"kamel install",
"export KUBECONFIG=~/.kube/config-private1",
"export KUBECONFIG=~/.kube/config-public1",
"export KUBECONFIG=~/.kube/config-public2",
"create namespace private1 config set-context --current --namespace private1",
"create namespace public1 config set-context --current --namespace public1",
"create namespace public2 config set-context --current --namespace public2",
"skupper init",
"skupper init",
"skupper init",
"skupper status",
"skupper status",
"skupper status",
"Skupper is enabled for namespace \"<namespace>\" in interior mode. It is not connected to any other sites. It has no exposed services. The site console url is: http://<address>:8080 The credentials for internal console-auth mode are held in secret: 'skupper-console-users'",
"skupper token create ~/public1.token --uses 2",
"skupper link create ~/public1.token skupper link status --wait 30 skupper token create ~/public2.token",
"skupper link create ~/public1.token skupper link create ~/public2.token skupper link status --wait 30",
"create -f src/main/resources/database/postgres-svc.yaml skupper expose deployment postgres --address postgres --port 5432 -n private1",
"run pg-shell -i --tty --image quay.io/skupper/simple-pg --env=\"PGUSER=postgresadmin\" --env=\"PGPASSWORD=admin123\" --env=\"PGHOST=USD(kubectl get service postgres -o=jsonpath='{.spec.clusterIP}')\" -- bash psql --dbname=postgresdb CREATE EXTENSION IF NOT EXISTS \"uuid-ossp\"; CREATE TABLE tw_feedback (id uuid DEFAULT uuid_generatev4 (),sigthning VARCHAR(255),created TIMESTAMP default CURRENTTIMESTAMP,PRIMARY KEY(id));",
"src/main/resources/scripts/setUpPublic1Cluster.sh",
"src/main/resources/scripts/setUpPublic2Cluster.sh",
"attach pg-shell -c pg-shell -i -t psql --dbname=postgresdb SELECT * FROM twfeedback;",
"id | sigthning | created --------------------------------------+-----------------+---------------------------- 95655229-747a-4787-8133-923ef0a1b2ca | Testing skupper | 2022-03-10 19:35:08.412542",
"kamel logs twitter-route",
"\"[1] 2022-03-10 19:35:08,397 INFO [postgresql-sink-1] (Camel (camel-1) thread #0 - twitter-search://skupper) Testing skupper\"",
"sudo dnf install skupper-cli",
"export KUBECONFIG=~/.kube/config-public Enter your provider-specific login command create namespace public config set-context --current --namespace public",
"export KUBECONFIG=~/.kube/config-private Enter your provider-specific login command create namespace private config set-context --current --namespace private",
"apply -f server",
"kubectl apply -f server deployment.apps/ftp-server created",
"skupper init skupper status",
"skupper init Waiting for LoadBalancer IP or hostname Waiting for status Skupper is now installed in namespace 'public'. Use 'skupper status' to get more information. skupper status Skupper is enabled for namespace \"public\". It is not connected to any other sites. It has no exposed services.",
"skupper init skupper status",
"skupper init Waiting for LoadBalancer IP or hostname Waiting for status Skupper is now installed in namespace 'private'. Use 'skupper status' to get more information. skupper status Skupper is enabled for namespace \"private\". It is not connected to any other sites. It has no exposed services.",
"skupper token create ~/secret.token",
"skupper token create ~/secret.token Token written to ~/secret.token",
"skupper link create ~/secret.token",
"skupper link create ~/secret.token Site configured to link to https://10.105.193.154:8081/ed9c37f6-d78a-11ec-a8c7-04421a4c5042 (name=link1) Check the status of the link using 'skupper link status'.",
"skupper expose deployment/ftp-server --port 21100 --port 21",
"skupper expose deployment/ftp-server --port 21100 --port 21 deployment ftp-server exposed as ftp-server",
"echo \"Hello!\" | kubectl run ftp-client --stdin --rm --image=docker.io/curlimages/curl --restart=Never -- -s -T - ftp://example:example@ftp-server/greeting run ftp-client --attach --rm --image=docker.io/curlimages/curl --restart=Never -- -s ftp://example:example@ftp-server/greeting",
"echo \"Hello!\" | kubectl run ftp-client --stdin --rm --image=docker.io/curlimages/curl --restart=Never -- -s -T - ftp://example:example@ftp-server/greeting pod \"ftp-client\" deleted kubectl run ftp-client --attach --rm --image=docker.io/curlimages/curl --restart=Never -- -s ftp://example:example@ftp-server/greeting Hello! pod \"ftp-client\" deleted",
"sudo dnf install skupper-cli",
"export KUBECONFIG=~/.kube/config-public1",
"export KUBECONFIG=~/.kube/config-public2",
"export KUBECONFIG=~/.kube/config-private1",
"create namespace public1 config set-context --current --namespace public1",
"create namespace public2 config set-context --current --namespace public2",
"create namespace private1 config set-context --current --namespace private1",
"skupper init --enable-console --enable-flow-collector",
"skupper init",
"skupper init",
"skupper init Waiting for LoadBalancer IP or hostname Waiting for status Skupper is now installed in namespace '<namespace>'. Use 'skupper status' to get more information.",
"skupper status",
"skupper status",
"skupper status",
"Skupper is enabled for namespace \"<namespace>\" in interior mode. It is connected to 1 other site. It has 1 exposed service. The site console url is: <console-url> The credentials for internal console-auth mode are held in secret: 'skupper-console-users'",
"skupper token create ~/private1-to-public1-token.yaml skupper token create ~/public2-to-public1-token.yaml",
"skupper token create ~/private1-to-public2-token.yaml skupper link create ~/public2-to-public1-token.yaml skupper link status --wait 60",
"skupper link create ~/private1-to-public1-token.yaml skupper link create ~/private1-to-public2-token.yaml skupper link status --wait 60",
"apply -f deployment-iperf3-a.yaml",
"apply -f deployment-iperf3-b.yaml",
"apply -f deployment-iperf3-c.yaml",
"skupper expose deployment/iperf3-server-a --port 5201",
"skupper expose deployment/iperf3-server-b --port 5201",
"skupper expose deployment/iperf3-server-c --port 5201",
"exec USD(kubectl get pod -l application=iperf3-server-a -o=jsonpath='{.items[0].metadata.name}') -- iperf3 -c iperf3-server-a exec USD(kubectl get pod -l application=iperf3-server-a -o=jsonpath='{.items[0].metadata.name}') -- iperf3 -c iperf3-server-b exec USD(kubectl get pod -l application=iperf3-server-a -o=jsonpath='{.items[0].metadata.name}') -- iperf3 -c iperf3-server-c",
"exec USD(kubectl get pod -l application=iperf3-server-b -o=jsonpath='{.items[0].metadata.name}') -- iperf3 -c iperf3-server-a exec USD(kubectl get pod -l application=iperf3-server-b -o=jsonpath='{.items[0].metadata.name}') -- iperf3 -c iperf3-server-b exec USD(kubectl get pod -l application=iperf3-server-b -o=jsonpath='{.items[0].metadata.name}') -- iperf3 -c iperf3-server-c",
"exec USD(kubectl get pod -l application=iperf3-server-c -o=jsonpath='{.items[0].metadata.name}') -- iperf3 -c iperf3-server-a exec USD(kubectl get pod -l application=iperf3-server-c -o=jsonpath='{.items[0].metadata.name}') -- iperf3 -c iperf3-server-b exec USD(kubectl get pod -l application=iperf3-server-c -o=jsonpath='{.items[0].metadata.name}') -- iperf3 -c iperf3-server-c",
"sudo dnf install skupper-cli",
"export KUBECONFIG=~/.kube/config-public Enter your provider-specific login command create namespace public config set-context --current --namespace public",
"export KUBECONFIG=~/.kube/config-private Enter your provider-specific login command create namespace private config set-context --current --namespace private",
"create -f server/strimzi.yaml apply -f server/cluster1.yaml wait --for condition=ready --timeout 900s kafka/cluster1",
"kubectl create -f server/strimzi.yaml customresourcedefinition.apiextensions.k8s.io/kafkas.kafka.strimzi.io created rolebinding.rbac.authorization.k8s.io/strimzi-cluster-operator-entity-operator-delegation created clusterrolebinding.rbac.authorization.k8s.io/strimzi-cluster-operator created rolebinding.rbac.authorization.k8s.io/strimzi-cluster-operator-topic-operator-delegation created customresourcedefinition.apiextensions.k8s.io/kafkausers.kafka.strimzi.io created customresourcedefinition.apiextensions.k8s.io/kafkarebalances.kafka.strimzi.io created deployment.apps/strimzi-cluster-operator created customresourcedefinition.apiextensions.k8s.io/kafkamirrormaker2s.kafka.strimzi.io created clusterrole.rbac.authorization.k8s.io/strimzi-entity-operator created clusterrole.rbac.authorization.k8s.io/strimzi-cluster-operator-global created clusterrolebinding.rbac.authorization.k8s.io/strimzi-cluster-operator-kafka-broker-delegation created rolebinding.rbac.authorization.k8s.io/strimzi-cluster-operator created clusterrole.rbac.authorization.k8s.io/strimzi-cluster-operator-namespaced created clusterrole.rbac.authorization.k8s.io/strimzi-topic-operator created clusterrolebinding.rbac.authorization.k8s.io/strimzi-cluster-operator-kafka-client-delegation created clusterrole.rbac.authorization.k8s.io/strimzi-kafka-client created serviceaccount/strimzi-cluster-operator created clusterrole.rbac.authorization.k8s.io/strimzi-kafka-broker created customresourcedefinition.apiextensions.k8s.io/kafkatopics.kafka.strimzi.io created customresourcedefinition.apiextensions.k8s.io/kafkabridges.kafka.strimzi.io created customresourcedefinition.apiextensions.k8s.io/kafkaconnectors.kafka.strimzi.io created customresourcedefinition.apiextensions.k8s.io/kafkaconnects2is.kafka.strimzi.io created customresourcedefinition.apiextensions.k8s.io/kafkaconnects.kafka.strimzi.io created customresourcedefinition.apiextensions.k8s.io/kafkamirrormakers.kafka.strimzi.io created configmap/strimzi-cluster-operator created kubectl apply -f server/cluster1.yaml kafka.kafka.strimzi.io/cluster1 created kafkatopic.kafka.strimzi.io/topic1 created kubectl wait --for condition=ready --timeout 900s kafka/cluster1 kafka.kafka.strimzi.io/cluster1 condition met",
"spec: kafka: listeners: - name: plain port: 9092 type: internal tls: false configuration: brokers: - broker: 0 advertisedHost: cluster1-kafka-0.cluster1-kafka-brokers",
"skupper init skupper status",
"skupper init Waiting for LoadBalancer IP or hostname Waiting for status Skupper is now installed in namespace 'public'. Use 'skupper status' to get more information. skupper status Skupper is enabled for namespace \"public\". It is not connected to any other sites. It has no exposed services.",
"skupper init skupper status",
"skupper init Waiting for LoadBalancer IP or hostname Waiting for status Skupper is now installed in namespace 'private'. Use 'skupper status' to get more information. skupper status Skupper is enabled for namespace \"private\". It is not connected to any other sites. It has no exposed services.",
"skupper token create ~/secret.token",
"skupper token create ~/secret.token Token written to ~/secret.token",
"skupper link create ~/secret.token",
"skupper link create ~/secret.token Site configured to link to https://10.105.193.154:8081/ed9c37f6-d78a-11ec-a8c7-04421a4c5042 (name=link1) Check the status of the link using 'skupper link status'.",
"skupper expose statefulset/cluster1-kafka --headless --port 9092",
"skupper expose statefulset/cluster1-kafka --headless --port 9092 statefulset cluster1-kafka exposed as cluster1-kafka-brokers",
"get service/cluster1-kafka-brokers",
"kubectl get service/cluster1-kafka-brokers NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE cluster1-kafka-brokers ClusterIP None <none> 9092/TCP 2s",
"run client --attach --rm --restart Never --image quay.io/skupper/kafka-example-client --env BOOTSTRAPSERVERS=cluster1-kafka-brokers:9092",
"kubectl run client --attach --rm --restart Never --image quay.io/skupper/kafka-example-client --env BOOTSTRAPSERVERS=cluster1-kafka-brokers:9092 [...] Received message 1 Received message 2 Received message 3 Received message 4 Received message 5 Received message 6 Received message 7 Received message 8 Received message 9 Received message 10 Result: OK [...]",
"sudo dnf install skupper-cli",
"export KUBECONFIG=~/.kube/config-public Enter your provider-specific login command create namespace public config set-context --current --namespace public",
"export KUBECONFIG=~/.kube/config-private Enter your provider-specific login command create namespace private config set-context --current --namespace private",
"export SKUPPERPLATFORM=podman network create skupper systemctl --user enable --now podman.socket",
"system service --time=0 unix://USDXDGRUNTIMEDIR/podman/podman.sock &",
"apply -f frontend/kubernetes.yaml",
"apply -f payment-processor/kubernetes.yaml",
"run --name database-target --network skupper --detach --rm -p 5432:5432 quay.io/skupper/patient-portal-database",
"skupper init",
"skupper init --ingress none",
"skupper init --ingress none",
"skupper token create --uses 2 ~/secret.token",
"skupper link create ~/secret.token",
"skupper link create ~/secret.token",
"skupper expose deployment/payment-processor --port 8080",
"skupper service create database 5432 skupper service bind database host database-target --target-port 5432",
"skupper service create database 5432",
"expose deployment/frontend --port 8080 --type LoadBalancer get service/frontend curl http://<external-ip>:8080/api/health",
"kubectl expose deployment/frontend --port 8080 --type LoadBalancer service/frontend exposed kubectl get service/frontend NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE frontend LoadBalancer 10.103.232.28 <external-ip> 8080:30407/TCP 15s curl http://<external-ip>:8080/api/health OK",
"sudo dnf install skupper-cli",
"export KUBECONFIG=~/.kube/config-public Enter your provider-specific login command create namespace public config set-context --current --namespace public",
"export KUBECONFIG=~/.kube/config-private Enter your provider-specific login command create namespace private config set-context --current --namespace private",
"create -f kafka-cluster/strimzi.yaml apply -f kafka-cluster/cluster1.yaml wait --for condition=ready --timeout 900s kafka/cluster1",
"spec: kafka: listeners: - name: plain port: 9092 type: internal tls: false configuration: brokers: - broker: 0 advertisedHost: cluster1-kafka-0.cluster1-kafka-brokers",
"apply -f order-processor/kubernetes.yaml apply -f market-data/kubernetes.yaml apply -f frontend/kubernetes.yaml",
"skupper init skupper status",
"skupper init Waiting for LoadBalancer IP or hostname Waiting for status Skupper is now installed in namespace 'public'. Use 'skupper status' to get more information. skupper status Skupper is enabled for namespace \"public\". It is not connected to any other sites. It has no exposed services.",
"skupper init skupper status",
"skupper init Waiting for LoadBalancer IP or hostname Waiting for status Skupper is now installed in namespace 'private'. Use 'skupper status' to get more information. skupper status Skupper is enabled for namespace \"private\". It is not connected to any other sites. It has no exposed services.",
"skupper token create ~/secret.token",
"skupper token create ~/secret.token Token written to ~/secret.token",
"skupper link create ~/secret.token",
"skupper link create ~/secret.token Site configured to link to https://10.105.193.154:8081/ed9c37f6-d78a-11ec-a8c7-04421a4c5042 (name=link1) Check the status of the link using 'skupper link status'.",
"skupper expose statefulset/cluster1-kafka --headless --port 9092",
"get service/cluster1-kafka-brokers",
"expose deployment/frontend --port 8080 --type LoadBalancer get service/frontend curl http://<external-ip>:8080/api/health",
"kubectl expose deployment/frontend --port 8080 --type LoadBalancer service/frontend exposed kubectl get service/frontend NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE frontend LoadBalancer 10.103.232.28 <external-ip> 8080:30407/TCP 15s curl http://<external-ip>:8080/api/health OK"
]
| https://docs.redhat.com/en/documentation/red_hat_service_interconnect/1.8/html-single/examples/index |
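For orientation, the first (hello world) example above condenses into a single script. This is a sketch only, assuming the same kubeconfig paths, namespaces, and frontend/backend deployments created earlier in that example:

#!/usr/bin/env bash
# Sketch: two-site hello world with Skupper, condensed from the example above.
set -euo pipefail

# Site 1: west (runs the frontend)
export KUBECONFIG=~/.kube/config-west
skupper init
skupper token create ~/secret.token            # token grants one link to this site

# Site 2: east (runs the backend)
export KUBECONFIG=~/.kube/config-east
skupper init
skupper link create ~/secret.token             # link east -> west
skupper expose deployment/backend --port 8080  # make the backend reachable across the link

# Back on west, the backend service now resolves; test through the frontend.
export KUBECONFIG=~/.kube/config-west
kubectl port-forward deployment/frontend 8080:8080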
Chapter 2. About migrating from OpenShift Container Platform 3 to 4 | Chapter 2. About migrating from OpenShift Container Platform 3 to 4 OpenShift Container Platform 4 contains new technologies and functionality that result in a cluster that is self-managing, flexible, and automated. OpenShift Container Platform 4 clusters are deployed and managed very differently from OpenShift Container Platform 3. The most effective way to migrate from OpenShift Container Platform 3 to 4 is by using a CI/CD pipeline to automate deployments in an application lifecycle management framework. If you do not have a CI/CD pipeline or if you are migrating stateful applications, you can use the Migration Toolkit for Containers (MTC) to migrate your application workloads. You can use Red Hat Advanced Cluster Management for Kubernetes to help you import and manage your OpenShift Container Platform 3 clusters easily, enforce policies, and redeploy your applications. Take advantage of the free subscription to use Red Hat Advanced Cluster Management to simplify your migration process. To successfully transition to OpenShift Container Platform 4, review the following information: Differences between OpenShift Container Platform 3 and 4 Architecture Installation and upgrade Storage, network, logging, security, and monitoring considerations About the Migration Toolkit for Containers Workflow File system and snapshot copy methods for persistent volumes (PVs) Direct volume migration Direct image migration Advanced migration options Automating your migration with migration hooks Using the MTC API Excluding resources from a migration plan Configuring the MigrationController custom resource for large-scale migrations Enabling automatic PV resizing for direct volume migration Enabling cached Kubernetes clients for improved performance For new features and enhancements, technical changes, and known issues, see the MTC release notes . | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/migrating_from_version_3_to_4/about-migrating-from-3-to-4 |
Chapter 14. Changing resources for the OpenShift Data Foundation components | Chapter 14. Changing resources for the OpenShift Data Foundation components When you install OpenShift Data Foundation, it comes with pre-defined resources that the OpenShift Data Foundation pods can consume. In some situations with higher I/O load, it might be required to increase these limits. To change the CPU and memory resources on the rook-ceph pods, see Section 14.1, "Changing the CPU and memory resources on the rook-ceph pods" . To tune the resources for the Multicloud Object Gateway (MCG), see Section 14.2, "Tuning the resources for the MCG" . 14.1. Changing the CPU and memory resources on the rook-ceph pods When you install OpenShift Data Foundation, it comes with pre-defined CPU and memory resources for the rook-ceph pods. You can manually increase these values according to the requirements. You can change the CPU and memory resources on the following pods: mgr mds rgw The following example illustrates how to change the CPU and memory resources on the rook-ceph pods. In this example, the existing MDS pod values of cpu and memory are increased from 1 and 4Gi to 2 and 8Gi respectively. Edit the storage cluster: <storagecluster_name> Specify the name of the storage cluster. For example: Add the following lines to the storage cluster Custom Resource (CR): Save the changes and exit the editor. Alternatively, run the oc patch command to change the CPU and memory value of the mds pod: <storagecluster_name> Specify the name of the storage cluster. For example: 14.2. Tuning the resources for the MCG The default configuration for the Multicloud Object Gateway (MCG) is optimized for low resource consumption and not performance. For more information on how to tune the resources for the MCG, see the Red Hat Knowledgebase solution Performance tuning guide for Multicloud Object Gateway (NooBaa) . | [
"oc edit storagecluster -n openshift-storage <storagecluster_name>",
"oc edit storagecluster -n openshift-storage ocs-storagecluster",
"spec: resources: mds: limits: cpu: 2 memory: 8Gi requests: cpu: 2 memory: 8Gi",
"oc patch -n openshift-storage storagecluster <storagecluster_name> --type merge --patch '{\"spec\": {\"resources\": {\"mds\": {\"limits\": {\"cpu\": \"2\",\"memory\": \"8Gi\"},\"requests\": {\"cpu\": \"2\",\"memory\": \"8Gi\"}}}}}'",
"oc patch -n openshift-storage storagecluster ocs-storagecluster --type merge --patch ' {\"spec\": {\"resources\": {\"mds\": {\"limits\": {\"cpu\": \"2\",\"memory\": \"8Gi\"},\"requests\": {\"cpu\": \"2\",\"memory\": \"8Gi\"}}}}} '"
]
| https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/troubleshooting_openshift_data_foundation/changing-resources-for-the-openshift-data-foundation-components_rhodf |
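The same patch pattern applies to the other rook-ceph pods listed above (mgr and rgw). A sketch for the rgw pods follows; the 2 CPU / 8Gi figures are reused from the mds example for illustration only, not as recommended sizes:

# Sketch: apply the analogous resource change to the rgw pods.
oc patch -n openshift-storage storagecluster ocs-storagecluster \
  --type merge \
  --patch '{"spec": {"resources": {"rgw": {"limits": {"cpu": "2","memory": "8Gi"},"requests": {"cpu": "2","memory": "8Gi"}}}}}'

# Confirm the values landed in the StorageCluster CR.
oc get -n openshift-storage storagecluster ocs-storagecluster -o jsonpath='{.spec.resources.rgw}'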
Chapter 10. Network configuration | Chapter 10. Network configuration The following sections describe the basics of network configuration with the Assisted Installer. 10.1. Cluster networking There are various network types and addresses used by OpenShift and listed in the following table. Important IPv6 is not currently supported in the following configurations: Single stack Primary within dual stack Type DNS Description clusterNetwork The IP address pools from which pod IP addresses are allocated. serviceNetwork The IP address pool for services. machineNetwork The IP address blocks for machines forming the cluster. apiVIP api.<clustername.clusterdomain> The VIP to use for API communication. You must provide this setting or preconfigure the address in the DNS so that the default name resolves correctly. If you are deploying with dual-stack networking, this must be the IPv4 address. apiVIPs api.<clustername.clusterdomain> The VIPs to use for API communication. You must provide this setting or preconfigure the address in the DNS so that the default name resolves correctly. If using dual stack networking, the first address must be the IPv4 address and the second address must be the IPv6 address. You must also set the apiVIP setting. ingressVIP *.apps.<clustername.clusterdomain> The VIP to use for ingress traffic. If you are deploying with dual-stack networking, this must be the IPv4 address. ingressVIPs *.apps.<clustername.clusterdomain> The VIPs to use for ingress traffic. If you are deploying with dual-stack networking, the first address must be the IPv4 address and the second address must be the IPv6 address. You must also set the ingressVIP setting. Note OpenShift Container Platform 4.12 introduces the new apiVIPs and ingressVIPs settings to accept many IP addresses for dual-stack networking. When using dual-stack networking, the first IP address must be the IPv4 address and the second IP address must be the IPv6 address. The new settings will replace apiVIP and IngressVIP , but you must set both the new and old settings when modifying the configuration by using the API. Currently, the Assisted Service can deploy OpenShift Container Platform clusters by using one of the following configurations: IPv4 Dual-stack (IPv4 + IPv6 with IPv4 as primary) Note OVN is the default Container Network Interface (CNI) in OpenShift Container Platform 4.12 and later releases. SDN is supported up to OpenShift Container Platform 4.14, but not for OpenShift Container Platform 4.15 and later releases. 10.1.1. Limitations 10.1.1.1. SDN The SDN controller is not supported with single-node OpenShift. The SDN controller does not support dual-stack networking. The SDN controller is not supported for OpenShift Container Platform 4.15 and later releases. For more information, see Deprecation of the OpenShift SDN network plugin in the OpenShift Container Platform release notes. 10.1.1.2. OVN-Kubernetes For more information, see About the OVN-Kubernetes network plugin . 10.1.2. Cluster network The cluster network is a network from which every pod deployed in the cluster gets its IP address. Given that the workload might live across many nodes forming the cluster, it is important for the network provider to be able to easily find an individual node based on the pod's IP address. To do this, clusterNetwork.cidr is further split into subnets of the size defined in clusterNetwork.hostPrefix . The host prefix specifies a length of the subnet assigned to each individual node in the cluster. 
An example of how a cluster might assign addresses for the multi-node cluster: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 Creating a 3-node cluster by using this snippet might create the following network topology: Pods scheduled in node #1 get IPs from 10.128.0.0/23 Pods scheduled in node #2 get IPs from 10.128.2.0/23 Pods scheduled in node #3 get IPs from 10.128.4.0/23 Explaining OVN-Kubernetes internals is out of scope for this document, but the pattern previously described provides a way to route Pod-to-Pod traffic between different nodes without keeping a big list of mapping between Pods and their corresponding nodes. 10.1.3. Machine network The machine network is a network used by all the hosts forming the cluster to communicate with each other. This is also the subnet that must include the API and Ingress VIPs. For iSCSI boot volumes, the hosts are connected over two machine networks: one designated for the OpenShift Container Platform installation and the other for iSCSI traffic. During the installation process, ensure that you specify the OpenShift Container Platform network. Using the iSCSI network will result in an error for the host. 10.1.4. Single-node OpenShift compared to multi-node cluster Depending on whether you are deploying single-node OpenShift or a multi-node cluster, different values are mandatory. The following table explains this in more detail. Parameter Single-node OpenShift Multi-node cluster with DHCP mode Multi-node cluster without DHCP mode clusterNetwork Required Required Required serviceNetwork Required Required Required machineNetwork Auto-assign possible (*) Auto-assign possible (*) Auto-assign possible (*) apiVIP Forbidden Forbidden Required apiVIPs Forbidden Forbidden Required in 4.12 and later releases ingressVIP Forbidden Forbidden Required ingressVIPs Forbidden Forbidden Required in 4.12 and later releases (*) Auto assignment of the machine network CIDR happens if there is only a single host network. Otherwise you need to specify it explicitly. 10.1.5. Air-gapped environments The workflow for deploying a cluster without Internet access has some prerequisites, which are out of scope of this document. You can consult the Zero Touch Provisioning the hard way Git repository for some insights. 10.2. VIP DHCP allocation The VIP DHCP allocation is a feature allowing users to skip the requirement of manually providing virtual IPs for API and Ingress by leveraging the ability of a service to automatically assign those IP addresses from the DHCP server. If you enable the feature, instead of using api_vips and ingress_vips from the cluster configuration, the service will send a lease allocation request and based on the reply it will use VIPs accordingly. The service will allocate the IP addresses from the Machine Network. Please note this is not an OpenShift Container Platform feature and it has been implemented in the Assisted Service to make the configuration easier. Important VIP DHCP allocation is currently limited to the OpenShift Container Platform SDN network type. SDN is not supported from OpenShift Container Platform version 4.15 and later. Therefore, support for VIP DHCP allocation is also ending from OpenShift Container Platform 4.15 and later. 10.2.1. 
Example payload to enable autoallocation { "vip_dhcp_allocation": true, "network_type": "OVNKubernetes", "user_managed_networking": false, "cluster_networks": [ { "cidr": "10.128.0.0/14", "host_prefix": 23 } ], "service_networks": [ { "cidr": "172.30.0.0/16" } ], "machine_networks": [ { "cidr": "192.168.127.0/24" } ] } 10.2.2. Example payload to disable autoallocation { "api_vips": [ { "ip": "192.168.127.100" } ], "ingress_vips": [ { "ip": "192.168.127.101" } ], "vip_dhcp_allocation": false, "network_type": "OVNKubernetes", "user_managed_networking": false, "cluster_networks": [ { "cidr": "10.128.0.0/14", "host_prefix": 23 } ], "service_networks": [ { "cidr": "172.30.0.0/16" } ] } 10.3. Additional resources Bare metal IPI documentation provides additional explanation of the syntax for the VIP addresses. 10.4. Understanding differences between user- and cluster-managed networking User managed networking is a feature in the Assisted Installer that allows customers with non-standard network topologies to deploy OpenShift Container Platform clusters. Examples include: Customers with an external load balancer who do not want to use keepalived and VRRP for handling VIP addresses. Deployments with cluster nodes distributed across many distinct L2 network segments. 10.4.1. Validations There are various network validations happening in the Assisted Installer before it allows the installation to start. When you enable User Managed Networking, the following validations change: The L3 connectivity check (ICMP) is performed instead of the L2 check (ARP). The MTU validation verifies the maximum transmission unit (MTU) value for all interfaces and not only for the machine network. 10.5. Static network configuration You may use static network configurations when generating or updating the discovery ISO. 10.5.1. Prerequisites You are familiar with NMState . 10.5.2. NMState configuration The NMState file in YAML format specifies the desired network configuration for the host. It has the logical names of the interfaces that will be replaced with the actual name of the interface at discovery time. 10.5.2.1. Example of NMState configuration dns-resolver: config: server: - 192.168.126.1 interfaces: - ipv4: address: - ip: 192.168.126.30 prefix-length: 24 dhcp: false enabled: true name: eth0 state: up type: ethernet - ipv4: address: - ip: 192.168.141.30 prefix-length: 24 dhcp: false enabled: true name: eth1 state: up type: ethernet routes: config: - destination: 0.0.0.0/0 next-hop-address: 192.168.126.1 next-hop-interface: eth0 table-id: 254 10.5.3. MAC interface mapping MAC interface map is an attribute that maps logical interfaces defined in the NMState configuration with the actual interfaces present on the host. The mapping should always use physical interfaces present on the host. For example, when the NMState configuration defines a bond or VLAN, the mapping should only contain an entry for parent interfaces. 10.5.3.1. Example of MAC interface mapping mac_interface_map: [ { mac_address: 02:00:00:2c:23:a5, logical_nic_name: eth0 }, { mac_address: 02:00:00:68:73:dc, logical_nic_name: eth1 } ] 10.5.4. Additional NMState configuration examples The following examples are only meant to show a partial configuration. They are not meant for use as-is, and you should always adjust to the environment where they will be used. If used incorrectly, they can leave your machines with no network connectivity. 10.5.4.1.
Tagged VLAN interfaces: - ipv4: address: - ip: 192.168.143.15 prefix-length: 24 dhcp: false enabled: true ipv6: enabled: false name: eth0.404 state: up type: vlan vlan: base-iface: eth0 id: 404 reorder-headers: true 10.5.4.2. Network bond interfaces: - ipv4: address: - ip: 192.168.138.15 prefix-length: 24 dhcp: false enabled: true ipv6: enabled: false link-aggregation: mode: active-backup options: miimon: "140" port: - eth0 - eth1 name: bond0 state: up type: bond 10.6. Applying a static network configuration with the API You can apply a static network configuration by using the Assisted Installer API. Important A static IP configuration is not supported in the following scenarios: OpenShift Container Platform installations on Oracle Cloud Infrastructure. OpenShift Container Platform installations on iSCSI boot volumes. Prerequisites You have created an infrastructure environment using the API or have created a cluster using the web console. You have your infrastructure environment ID exported in your shell as USDINFRA_ENV_ID . You have credentials to use when accessing the API and have exported a token as USDAPI_TOKEN in your shell. You have YAML files with a static network configuration available as server-a.yaml and server-b.yaml . Procedure Create a temporary file /tmp/request-body.txt with the API request: jq -n --arg NMSTATE_YAML1 "USD(cat server-a.yaml)" --arg NMSTATE_YAML2 "USD(cat server-b.yaml)" \ '{ "static_network_config": [ { "network_yaml": USDNMSTATE_YAML1, "mac_interface_map": [{"mac_address": "02:00:00:2c:23:a5", "logical_nic_name": "eth0"}, {"mac_address": "02:00:00:68:73:dc", "logical_nic_name": "eth1"}] }, { "network_yaml": USDNMSTATE_YAML2, "mac_interface_map": [{"mac_address": "02:00:00:9f:85:eb", "logical_nic_name": "eth1"}, {"mac_address": "02:00:00:c8:be:9b", "logical_nic_name": "eth0"}] } ] }' >> /tmp/request-body.txt Refresh the API token: USD source refresh-token Send the request to the Assisted Service API endpoint: USD curl -H "Content-Type: application/json" \ -X PATCH -d @/tmp/request-body.txt \ -H "Authorization: Bearer USD{API_TOKEN}" \ https://api.openshift.com/api/assisted-install/v2/infra-envs/USDINFRA_ENV_ID 10.7. Additional resources Applying a static network configuration with the web console 10.8. Converting to dual-stack networking Dual-stack IPv4/IPv6 configuration allows deployment of a cluster with pods residing in both IPv4 and IPv6 subnets. 10.8.1. Prerequisites You are familiar with OVN-K8s documentation 10.8.2. Example payload for single-node OpenShift { "network_type": "OVNKubernetes", "user_managed_networking": false, "cluster_networks": [ { "cidr": "10.128.0.0/14", "host_prefix": 23 }, { "cidr": "fd01::/48", "host_prefix": 64 } ], "service_networks": [ {"cidr": "172.30.0.0/16"}, {"cidr": "fd02::/112"} ], "machine_networks": [ {"cidr": "192.168.127.0/24"},{"cidr": "1001:db8::/120"} ] } 10.8.3. 
Example payload for an OpenShift Container Platform cluster consisting of many nodes { "vip_dhcp_allocation": false, "network_type": "OVNKubernetes", "user_managed_networking": false, "api_vips": [ { "ip": "192.168.127.100" }, { "ip": "2001:0db8:85a3:0000:0000:8a2e:0370:7334" } ], "ingress_vips": [ { "ip": "192.168.127.101" }, { "ip": "2001:0db8:85a3:0000:0000:8a2e:0370:7335" } ], "cluster_networks": [ { "cidr": "10.128.0.0/14", "host_prefix": 23 }, { "cidr": "fd01::/48", "host_prefix": 64 } ], "service_networks": [ {"cidr": "172.30.0.0/16"}, {"cidr": "fd02::/112"} ], "machine_networks": [ {"cidr": "192.168.127.0/24"},{"cidr": "1001:db8::/120"} ] } 10.8.4. Limitations The api_vips IP address and ingress_vips IP address settings must be of the primary IP address family when using dual-stack networking, which must be IPv4 addresses. Currently, Red Hat does not support dual-stack VIPs or dual-stack networking with IPv6 as the primary IP address family. Red Hat supports dual-stack networking with IPv4 as the primary IP address family and IPv6 as the secondary IP address family. Therefore, you must place the IPv4 entries before the IPv6 entries when entering the IP address values. 10.9. Additional resources Understanding OpenShift networking About the OpenShift SDN network plugin OVN-Kubernetes - CNI network provider Dual-stack Service configuration scenarios Installing a user-provisioned bare metal cluster with network customizations . Cluster Network Operator configuration object | [
"clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23",
"{ \"vip_dhcp_allocation\": true, \"network_type\": \"OVNKubernetes\", \"user_managed_networking\": false, \"cluster_networks\": [ { \"cidr\": \"10.128.0.0/14\", \"host_prefix\": 23 } ], \"service_networks\": [ { \"cidr\": \"172.30.0.0/16\" } ], \"machine_networks\": [ { \"cidr\": \"192.168.127.0/24\" } ] }",
"{ \"api_vips\": [ { \"ip\": \"192.168.127.100\" } ], \"ingress_vips\": [ { \"ip\": \"192.168.127.101\" } ], \"vip_dhcp_allocation\": false, \"network_type\": \"OVNKubernetes\", \"user_managed_networking\": false, \"cluster_networks\": [ { \"cidr\": \"10.128.0.0/14\", \"host_prefix\": 23 } ], \"service_networks\": [ { \"cidr\": \"172.30.0.0/16\" } ] }",
"dns-resolver: config: server: - 192.168.126.1 interfaces: - ipv4: address: - ip: 192.168.126.30 prefix-length: 24 dhcp: false enabled: true name: eth0 state: up type: ethernet - ipv4: address: - ip: 192.168.141.30 prefix-length: 24 dhcp: false enabled: true name: eth1 state: up type: ethernet routes: config: - destination: 0.0.0.0/0 next-hop-address: 192.168.126.1 next-hop-interface: eth0 table-id: 254",
"mac_interface_map: [ { mac_address: 02:00:00:2c:23:a5, logical_nic_name: eth0 }, { mac_address: 02:00:00:68:73:dc, logical_nic_name: eth1 } ]",
"interfaces: - ipv4: address: - ip: 192.168.143.15 prefix-length: 24 dhcp: false enabled: true ipv6: enabled: false name: eth0.404 state: up type: vlan vlan: base-iface: eth0 id: 404 reorder-headers: true",
"interfaces: - ipv4: address: - ip: 192.168.138.15 prefix-length: 24 dhcp: false enabled: true ipv6: enabled: false link-aggregation: mode: active-backup options: miimon: \"140\" port: - eth0 - eth1 name: bond0 state: up type: bond",
"jq -n --arg NMSTATE_YAML1 \"USD(cat server-a.yaml)\" --arg NMSTATE_YAML2 \"USD(cat server-b.yaml)\" '{ \"static_network_config\": [ { \"network_yaml\": USDNMSTATE_YAML1, \"mac_interface_map\": [{\"mac_address\": \"02:00:00:2c:23:a5\", \"logical_nic_name\": \"eth0\"}, {\"mac_address\": \"02:00:00:68:73:dc\", \"logical_nic_name\": \"eth1\"}] }, { \"network_yaml\": USDNMSTATE_YAML2, \"mac_interface_map\": [{\"mac_address\": \"02:00:00:9f:85:eb\", \"logical_nic_name\": \"eth1\"}, {\"mac_address\": \"02:00:00:c8:be:9b\", \"logical_nic_name\": \"eth0\"}] } ] }' >> /tmp/request-body.txt",
"source refresh-token",
"curl -H \"Content-Type: application/json\" -X PATCH -d @/tmp/request-body.txt -H \"Authorization: Bearer USD{API_TOKEN}\" https://api.openshift.com/api/assisted-install/v2/infra-envs/USDINFRA_ENV_ID",
"{ \"network_type\": \"OVNKubernetes\", \"user_managed_networking\": false, \"cluster_networks\": [ { \"cidr\": \"10.128.0.0/14\", \"host_prefix\": 23 }, { \"cidr\": \"fd01::/48\", \"host_prefix\": 64 } ], \"service_networks\": [ {\"cidr\": \"172.30.0.0/16\"}, {\"cidr\": \"fd02::/112\"} ], \"machine_networks\": [ {\"cidr\": \"192.168.127.0/24\"},{\"cidr\": \"1001:db8::/120\"} ] }",
"{ \"vip_dhcp_allocation\": false, \"network_type\": \"OVNKubernetes\", \"user_managed_networking\": false, \"api_vips\": [ { \"ip\": \"192.168.127.100\" }, { \"ip\": \"2001:0db8:85a3:0000:0000:8a2e:0370:7334\" } ], \"ingress_vips\": [ { \"ip\": \"192.168.127.101\" }, { \"ip\": \"2001:0db8:85a3:0000:0000:8a2e:0370:7335\" } ], \"cluster_networks\": [ { \"cidr\": \"10.128.0.0/14\", \"host_prefix\": 23 }, { \"cidr\": \"fd01::/48\", \"host_prefix\": 64 } ], \"service_networks\": [ {\"cidr\": \"172.30.0.0/16\"}, {\"cidr\": \"fd02::/112\"} ], \"machine_networks\": [ {\"cidr\": \"192.168.127.0/24\"},{\"cidr\": \"1001:db8::/120\"} ] }"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.16/html/installing_openshift_container_platform_with_the_assisted_installer/assembly_network-configuration |
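The dual-stack payloads above are applied to a cluster definition in the same way as the static network configuration, by patching the Assisted Service API. The following sketch assumes API_TOKEN and CLUSTER_ID are already exported, and that the /v2/clusters endpoint follows the same pattern as the /v2/infra-envs call shown earlier:

# Sketch: patch an existing cluster definition with the dual-stack networks above.
curl -s -X PATCH \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer ${API_TOKEN}" \
  -d '{
        "network_type": "OVNKubernetes",
        "cluster_networks": [
          {"cidr": "10.128.0.0/14", "host_prefix": 23},
          {"cidr": "fd01::/48", "host_prefix": 64}
        ],
        "service_networks": [
          {"cidr": "172.30.0.0/16"},
          {"cidr": "fd02::/112"}
        ],
        "machine_networks": [
          {"cidr": "192.168.127.0/24"},
          {"cidr": "1001:db8::/120"}
        ]
      }' \
  https://api.openshift.com/api/assisted-install/v2/clusters/${CLUSTER_ID}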
Chapter 10. Setting a custom cryptographic policy by using RHEL system roles | Chapter 10. Setting a custom cryptographic policy by using RHEL system roles Custom cryptographic policies are a set of rules and configurations that manage the use of cryptographic algorithms and protocols. These policies help you to maintain a protected, consistent, and manageable security environment across multiple systems and applications. By using the crypto_policies RHEL system role, you can quickly and consistently configure custom cryptographic policies across many operating systems in an automated fashion. 10.1. Enhancing security with the FUTURE cryptographic policy using the crypto_policies RHEL system role You can use the crypto_policies RHEL system role to configure the FUTURE policy on your managed nodes. This policy helps to achieve for example: Future-proofing against emerging threats: anticipates advancements in computational power. Enhanced security: stronger encryption standards require longer key lengths and more secure algorithms. Compliance with high-security standards: for example in healthcare, telco, and finance the data sensitivity is high, and availability of strong cryptography is critical. Typically, FUTURE is suitable for environments handling highly sensitive data, preparing for future regulations, or adopting long-term security strategies. Warning Legacy systems or software does not have to support the more modern and stricter algorithms and protocols enforced by the FUTURE policy. For example, older systems might not support TLS 1.3 or larger key sizes. This could lead to compatibility problems. Also, using strong algorithms usually increases the computational workload, which could negatively affect your system performance. Prerequisites You have prepared the control node and the managed nodes . You are logged in to the control node as a user who can run playbooks on the managed nodes. The account you use to connect to the managed nodes has sudo permissions on them. Procedure Create a playbook file, for example ~/playbook.yml , with the following content: --- - name: Configure cryptographic policies hosts: managed-node-01.example.com tasks: - name: Configure the FUTURE cryptographic security policy on the managed node ansible.builtin.include_role: name: rhel-system-roles.crypto_policies vars: - crypto_policies_policy: FUTURE - crypto_policies_reboot_ok: true The settings specified in the example playbook include the following: crypto_policies_policy: FUTURE Configures the required cryptographic policy ( FUTURE ) on the managed node. It can be either the base policy or a base policy with some sub-policies. The specified base policy and sub-policies have to be available on the managed node. The default value is null . It means that the configuration is not changed and the crypto_policies RHEL system role will only collect the Ansible facts. crypto_policies_reboot_ok: true Causes the system to reboot after the cryptographic policy change to make sure all of the services and applications will read the new configuration files. The default value is false . For details about all variables used in the playbook, see the /usr/share/ansible/roles/rhel-system-roles.crypto_policies/README.md file on the control node. Validate the playbook syntax: Note that this command only validates the syntax and does not protect against a wrong but valid configuration. 
Run the playbook: Warning Because the FIPS:OSPP system-wide subpolicy contains further restrictions for cryptographic algorithms required by the Common Criteria (CC) certification, the system is less interoperable after you set it. For example, you cannot use RSA and DH keys shorter than 3072 bits, additional SSH algorithms, and several TLS groups. Setting FIPS:OSPP also prevents connecting to Red Hat Content Delivery Network (CDN) structure. Furthermore, you cannot integrate Active Directory (AD) into the IdM deployments that use FIPS:OSPP , communication between RHEL hosts using FIPS:OSPP and AD domains might not work, or some AD accounts might not be able to authenticate. Note that your system is not CC-compliant after you set the FIPS:OSPP cryptographic subpolicy. The only correct way to make your RHEL system compliant with the CC standard is by following the guidance provided in the cc-config package. See the Common Criteria section on the Product compliance Red Hat Customer Portal page for a list of certified RHEL versions, validation reports, and links to CC guides hosted at the National Information Assurance Partnership (NIAP) website. Verification On the control node, create another playbook named, for example, verify_playbook.yml : --- - name: Verification hosts: managed-node-01.example.com tasks: - name: Verify active cryptographic policy ansible.builtin.include_role: name: rhel-system-roles.crypto_policies - name: Display the currently active cryptographic policy ansible.builtin.debug: var: crypto_policies_active The settings specified in the example playbook include the following: crypto_policies_active An exported Ansible fact that contains the currently active policy name in the format as accepted by the crypto_policies_policy variable. Validate the playbook syntax: Run the playbook: The crypto_policies_active variable shows the active policy on the managed node. Additional resources /usr/share/ansible/roles/rhel-system-roles.crypto_policies/README.md file /usr/share/doc/rhel-system-roles/crypto_policies/ directory update-crypto-policies(8) and crypto-policies(7) manual pages | [
"--- - name: Configure cryptographic policies hosts: managed-node-01.example.com tasks: - name: Configure the FUTURE cryptographic security policy on the managed node ansible.builtin.include_role: name: rhel-system-roles.crypto_policies vars: - crypto_policies_policy: FUTURE - crypto_policies_reboot_ok: true",
"ansible-playbook --syntax-check ~/playbook.yml",
"ansible-playbook ~/playbook.yml",
"--- - name: Verification hosts: managed-node-01.example.com tasks: - name: Verify active cryptographic policy ansible.builtin.include_role: name: rhel-system-roles.crypto_policies - name: Display the currently active cryptographic policy ansible.builtin.debug: var: crypto_policies_active",
"ansible-playbook --syntax-check ~/verify_playbook.yml",
"ansible-playbook ~/verify_playbook.yml TASK [debug] ************************** ok: [host] => { \"crypto_policies_active\": \"FUTURE\" }"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/automating_system_administration_by_using_rhel_system_roles/setting-a-custom-cryptographic-policy-by-using-the-crypto-policies-rhel-system-role_automating-system-administration-by-using-rhel-system-roles |
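Besides the verification playbook, the result can be spot-checked directly on a managed node with the update-crypto-policies tool referenced above. A sketch, assuming the same host name as in the playbooks:

# Sketch: confirm the active policy on the managed node itself.
ssh managed-node-01.example.com 'update-crypto-policies --show'
# Expected output after the playbook run: FUTURE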
Chapter 5. Configuring the database | Chapter 5. Configuring the database 5.1. Using an existing PostgreSQL database If you are using an externally managed PostgreSQL database, you must manually enable the pg_trgm extension for a successful deployment. Use the following procedure to deploy an existing PostgreSQL database. Procedure Create a config.yaml file with the necessary database fields. For example: Example config.yaml file: DB_URI: postgresql://test-quay-database:postgres@test-quay-database:5432/test-quay-database Create a Secret using the configuration file: Create a QuayRegistry.yaml file which marks the postgres component as unmanaged and references the created Secret . For example: Example quayregistry.yaml file apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: example-registry namespace: quay-enterprise spec: configBundleSecret: config-bundle-secret components: - kind: postgres managed: false steps Continue to the following sections to deploy the registry. 5.1.1. Database configuration This section describes the database configuration fields available for Red Hat Quay deployments. 5.1.1.1. Database URI With Red Hat Quay, connection to the database is configured by using the required DB_URI field. The following table describes the DB_URI configuration field: Table 5.1. Database URI Field Type Description DB_URI (Required) String The URI for accessing the database, including any credentials. Example DB_URI field: postgresql://quayuser:[email protected]:5432/quay 5.1.1.2. Database connection arguments Optional connection arguments are configured by the DB_CONNECTION_ARGS parameter. Some of the key-value pairs defined under DB_CONNECTION_ARGS are generic, while others are database specific. The following table describes database connection arguments: Table 5.2. Database connection arguments Field Type Description DB_CONNECTION_ARGS Object Optional connection arguments for the database, such as timeouts and SSL/TLS. .autorollback Boolean Whether to use thread-local connections. Should always be true .threadlocals Boolean Whether to use auto-rollback connections. Should always be true 5.1.1.2.1. PostgreSQL SSL/TLS connection arguments With SSL/TLS, configuration depends on the database you are deploying. The following example shows a PostgreSQL SSL/TLS configuration: DB_CONNECTION_ARGS: sslmode: verify-ca sslrootcert: /path/to/cacert The sslmode option determines whether, or with, what priority a secure SSL/TLS TCP/IP connection will be negotiated with the server. There are six modes: Table 5.3. SSL/TLS options Mode Description disable Your configuration only tries non-SSL/TLS connections. allow Your configuration first tries a non-SSL/TLS connection. Upon failure, tries an SSL/TLS connection. prefer (Default) Your configuration first tries an SSL/TLS connection. Upon failure, tries a non-SSL/TLS connection. require Your configuration only tries an SSL/TLS connection. If a root CA file is present, it verifies the certificate in the same way as if verify-ca was specified. verify-ca Your configuration only tries an SSL/TLS connection, and verifies that the server certificate is issued by a trusted certificate authority (CA). verify-full Only tries an SSL/TLS connection, and verifies that the server certificate is issued by a trusted CA and that the requested server hostname matches that in the certificate. For more information on the valid arguments for PostgreSQL, see Database Connection Control Functions . 5.1.1.2.2. 
MySQL SSL/TLS connection arguments The following example shows a sample MySQL SSL/TLS configuration: DB_CONNECTION_ARGS: ssl: ca: /path/to/cacert Information on the valid connection arguments for MySQL is available at Connecting to the Server Using URI-Like Strings or Key-Value Pairs . 5.1.2. Using the managed PostgreSQL database With Red Hat Quay 3.9, if your database is managed by the Red Hat Quay Operator, updating from Red Hat Quay 3.8 to 3.9 automatically handles upgrading PostgreSQL 10 to PostgreSQL 13. Important Users with a managed database are required to upgrade their PostgreSQL database from 10 to 13. If your Red Hat Quay and Clair databases are managed by the Operator, the database upgrades for each component must succeed for the 3.9.0 upgrade to be successful. If either of the database upgrades fails, the entire Red Hat Quay version upgrade fails. This behavior is expected. If you do not want the Red Hat Quay Operator to upgrade your PostgreSQL deployment from PostgreSQL 10 to 13, you must set the PostgreSQL parameter to managed: false in your quayregistry.yaml file. For more information about setting your database to unmanaged, see Using an existing Postgres database . Important It is highly recommended that you upgrade to PostgreSQL 13. PostgreSQL 10 had its final release on November 10, 2022 and is no longer supported. For more information, see the PostgreSQL Versioning Policy . If you want your PostgreSQL database to match the same version as your Red Hat Enterprise Linux (RHEL) system, see Migrating to a RHEL 8 version of PostgreSQL for RHEL 8 or Migrating to a RHEL 9 version of PostgreSQL for RHEL 9. For more information about the Red Hat Quay 3.8 to 3.9 procedure, see Upgrading the Red Hat Quay Operator overview . 5.1.2.1. PostgreSQL database recommendations The Red Hat Quay team recommends the following for managing your PostgreSQL database. Database backups should be performed regularly using either the supplied tools on the PostgreSQL image or your own backup infrastructure. The Red Hat Quay Operator does not currently ensure that the PostgreSQL database is backed up. Restoring the PostgreSQL database from a backup must be done using PostgreSQL tools and procedures. Be aware that your Quay pods should not be running while the database restore is in progress. Database disk space is allocated automatically by the Red Hat Quay Operator with 50 GiB. This number represents a usable amount of storage for most small to medium Red Hat Quay installations but might not be sufficient for your use cases. Resizing the database volume is currently not handled by the Red Hat Quay Operator. 5.2. Configuring external Redis Use the content in this section to set up an external Redis deployment. 5.2.1. Using an unmanaged Redis database Use the following procedure to set up an external Redis database. Procedure Create a config.yaml file using the following Redis fields: # ... BUILDLOGS_REDIS: host: <quay-server.example.com> port: 6379 ssl: false # ... USER_EVENTS_REDIS: host: <quay-server.example.com> port: 6379 ssl: false # ... Enter the following command to create a secret using the configuration file: USD oc create secret generic --from-file config.yaml=./config.yaml config-bundle-secret Create a quayregistry.yaml file that sets the Redis component to unmanaged and references the created secret: apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: example-registry namespace: quay-enterprise spec: configBundleSecret: config-bundle-secret components: - kind: redis managed: false # ...
Deploy the Red Hat Quay registry. Additional resources Redis configuration fields 5.2.2. Using unmanaged Horizontal Pod Autoscalers Horizontal Pod Autoscalers (HPAs) are now included with the Clair , Quay , and Mirror pods, so that they now automatically scale during load spikes. As HPA is configured by default to be managed, the number of Clair , Quay , and Mirror pods is set to two. This facilitates the avoidance of downtime when updating or reconfiguring Red Hat Quay through the Operator or during rescheduling events. 5.2.2.1. Disabling the Horizontal Pod Autoscaler To disable autoscaling or create your own HorizontalPodAutoscaler , specify the component as unmanaged in the QuayRegistry instance. For example: apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: example-registry namespace: quay-enterprise spec: components: - kind: horizontalpodautoscaler managed: false # ... 5.2.3. Disabling the Route component Use the following procedure to prevent the Red Hat Quay Operator from creating a route. Procedure Set the component as managed: false in the quayregistry.yaml file: apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: example-registry namespace: quay-enterprise spec: components: - kind: route managed: false Edit the config.yaml file to specify that Red Hat Quay handles SSL/TLS. For example: # ... EXTERNAL_TLS_TERMINATION: false # ... SERVER_HOSTNAME: example-registry-quay-quay-enterprise.apps.user1.example.com # ... PREFERRED_URL_SCHEME: https # ... If you do not configure the unmanaged route correctly, the following error is returned: { { "kind":"QuayRegistry", "namespace":"quay-enterprise", "name":"example-registry", "uid":"d5879ba5-cc92-406c-ba62-8b19cf56d4aa", "apiVersion":"quay.redhat.com/v1", "resourceVersion":"2418527" }, "reason":"ConfigInvalid", "message":"required component `route` marked as unmanaged, but `configBundleSecret` is missing necessary fields" } Note Disabling the default route means you are now responsible for creating a Route , Service , or Ingress in order to access the Red Hat Quay instance. Additionally, whatever DNS you use must match the SERVER_HOSTNAME in the Red Hat Quay config. 5.2.4. Disabling the monitoring component If you install the Red Hat Quay Operator in a single namespace, the monitoring component is automatically set to managed: false . Use the following reference to explicitly disable monitoring. Unmanaged monitoring apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: example-registry namespace: quay-enterprise spec: components: - kind: monitoring managed: false To enable monitoring in this scenario, see Enabling monitoring when the Red Hat Quay Operator is installed in a single namespace . 5.2.5. Disabling the mirroring component To disable mirroring, use the following YAML configuration: Unmanaged mirroring example YAML configuration apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: example-registry namespace: quay-enterprise spec: components: - kind: mirroring managed: false | [
"DB_URI: postgresql://test-quay-database:postgres@test-quay-database:5432/test-quay-database",
"kubectl create secret generic --from-file config.yaml=./config.yaml config-bundle-secret",
"apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: example-registry namespace: quay-enterprise spec: configBundleSecret: config-bundle-secret components: - kind: postgres managed: false",
"DB_CONNECTION_ARGS: sslmode: verify-ca sslrootcert: /path/to/cacert",
"DB_CONNECTION_ARGS: ssl: ca: /path/to/cacert",
"BUILDLOGS_REDIS: host: <quay-server.example.com> port: 6379 ssl: false USER_EVENTS_REDIS: host: <quay-server.example.com> port: 6379 ssl: false",
"oc create secret generic --from-file config.yaml=./config.yaml config-bundle-secret",
"apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: example-registry namespace: quay-enterprise spec: configBundleSecret: config-bundle-secret components: - kind: redis managed: false",
"apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: example-registry namespace: quay-enterprise spec: components: - kind: horizontalpodautoscaler managed: false",
"apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: example-registry namespace: quay-enterprise spec: components: - kind: route managed: false",
"EXTERNAL_TLS_TERMINATION: false SERVER_HOSTNAME: example-registry-quay-quay-enterprise.apps.user1.example.com PREFERRED_URL_SCHEME: https",
"{ { \"kind\":\"QuayRegistry\", \"namespace\":\"quay-enterprise\", \"name\":\"example-registry\", \"uid\":\"d5879ba5-cc92-406c-ba62-8b19cf56d4aa\", \"apiVersion\":\"quay.redhat.com/v1\", \"resourceVersion\":\"2418527\" }, \"reason\":\"ConfigInvalid\", \"message\":\"required component `route` marked as unmanaged, but `configBundleSecret` is missing necessary fields\" }",
"apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: example-registry namespace: quay-enterprise spec: components: - kind: monitoring managed: false",
"apiVersion: quay.redhat.com/v1 kind: QuayRegistry metadata: name: example-registry namespace: quay-enterprise spec: components: - kind: mirroring managed: false"
]
| https://docs.redhat.com/en/documentation/red_hat_quay/3.10/html/deploying_the_red_hat_quay_operator_on_openshift_container_platform/configuring-the-database-poc |
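Addendum to the Red Hat Quay component configuration above (a hedged sketch, not part of the original procedure): after applying a quayregistry.yaml that marks components such as redis, route, monitoring, or mirroring as unmanaged, you can inspect the deployed QuayRegistry resource to confirm what the Operator recorded. The resource name and namespace below are the example-registry and quay-enterprise values used in the examples:
$ oc get quayregistry example-registry -n quay-enterprise -o jsonpath='{.spec.components}'
$ oc get quayregistry example-registry -n quay-enterprise -o jsonpath='{.status.conditions[*].type}'
The first command lists each component with its managed flag; the second surfaces the status condition types the Operator reports while reconciling.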
Chapter 16. DHCP Servers | Chapter 16. DHCP Servers Dynamic Host Configuration Protocol ( DHCP ) is a network protocol that automatically assigns TCP/IP information to client machines. Each DHCP client connects to the centrally located DHCP server, which returns the network configuration (including the IP address, gateway, and DNS servers) of that client. 16.1. Why Use DHCP? DHCP is useful for automatic configuration of client network interfaces. When configuring the client system, you can choose DHCP instead of specifying an IP address, netmask, gateway, or DNS servers. The client retrieves this information from the DHCP server. DHCP is also useful if you want to change the IP addresses of a large number of systems. Instead of reconfiguring all the systems, you can just edit one configuration file on the server for the new set of IP addresses. If the DNS servers for an organization change, the change is made on the DHCP server, not on the DHCP clients. When you restart the network or reboot the clients, the changes go into effect. If an organization has a functional DHCP server correctly connected to a network, users of laptops and other mobile computers can move these devices from office to office. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/ch-dhcp_servers |
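To make the DHCP section's point about editing a single server-side file concrete, the following is a minimal, hypothetical dhcpd.conf fragment for an ISC DHCP server; all addresses, ranges, and lease times are illustrative only and must be adapted to your network:
subnet 192.168.1.0 netmask 255.255.255.0 {
    range 192.168.1.100 192.168.1.200;
    option routers 192.168.1.1;
    option domain-name-servers 192.168.1.10, 192.168.1.11;
    default-lease-time 600;
    max-lease-time 7200;
}
If the organization's DNS servers change, only the option domain-name-servers line needs to be edited and the dhcpd service restarted; clients pick up the new values when they renew their leases or reboot.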
Chapter 7. Working with containers | Chapter 7. Working with containers 7.1. Understanding Containers The basic units of OpenShift Container Platform applications are called containers . Linux container technologies are lightweight mechanisms for isolating running processes so that they are limited to interacting with only their designated resources. Many application instances can be running in containers on a single host without visibility into each other's processes, files, network, and so on. Typically, each container provides a single service (often called a "micro-service"), such as a web server or a database, though containers can be used for arbitrary workloads. The Linux kernel has been incorporating capabilities for container technologies for years. OpenShift Container Platform and Kubernetes add the ability to orchestrate containers across multi-host installations. About containers and RHEL kernel memory Due to Red Hat Enterprise Linux (RHEL) behavior, a container on a node with high CPU usage might seem to consume more memory than expected. The higher memory consumption could be caused by the kmem_cache in the RHEL kernel. The RHEL kernel creates a kmem_cache for each cgroup. For added performance, the kmem_cache contains a cpu_cache , and a node cache for any NUMA nodes. These caches all consume kernel memory. The amount of memory stored in those caches is proportional to the number of CPUs that the system uses. As a result, a higher number of CPUs results in a greater amount of kernel memory being held in these caches. Higher amounts of kernel memory in these caches can cause OpenShift Container Platform containers to exceed the configured memory limits, resulting in the container being killed. To avoid losing containers due to kernel memory issues, ensure that the containers request sufficient memory. You can use the following formula to estimate the amount of memory consumed by the kmem_cache , where nproc is the number of processing units available that are reported by the nproc command. The lower limit of container requests should be this value plus the container memory requirements: USD(nproc) X 1/2 MiB For example, a node that reports 16 processing units would hold an estimated 16 x 1/2 MiB = 8 MiB of kernel memory in these caches, so container requests on that node should allow for roughly 8 MiB on top of the application's own memory needs. 7.2. Using Init Containers to perform tasks before a pod is deployed OpenShift Container Platform provides init containers , which are specialized containers that run before application containers and can contain utilities or setup scripts not present in an app image. 7.2.1. Understanding Init Containers You can use an Init Container resource to perform tasks before the rest of a pod is deployed. A pod can have Init Containers in addition to application containers. Init containers allow you to reorganize setup scripts and binding code. An Init Container can: Contain and run utilities that are not desirable to include in the app Container image for security reasons. Contain utilities or custom code for setup that is not present in an app image. For example, there is no requirement to make an image FROM another image just to use a tool like sed, awk, python, or dig during setup. Use Linux namespaces so that they have different filesystem views from app containers, such as access to secrets that application containers are not able to access. Each Init Container must complete successfully before the next one is started. So, Init Containers provide an easy way to block or delay the startup of app containers until some set of preconditions are met.
For example, the following are some ways you can use Init Containers: Wait for a service to be created with a shell command like: for i in {1..100}; do sleep 1; if dig myservice; then exit 0; fi; done; exit 1 Register this pod with a remote server from the downward API with a command like: USD curl -X POST http://USDMANAGEMENT_SERVICE_HOST:USDMANAGEMENT_SERVICE_PORT/register -d 'instance=USD()&ip=USD()' Wait for some time before starting the app Container with a command like sleep 60 . Clone a git repository into a volume. Place values into a configuration file and run a template tool to dynamically generate a configuration file for the main app Container. For example, place the POD_IP value in a configuration and generate the main app configuration file using Jinja. See the Kubernetes documentation for more information. 7.2.2. Creating Init Containers The following example outlines a simple pod which has two Init Containers. The first waits for myservice and the second waits for mydb . After both Init Containers complete, the pod's application container starts. Procedure Create the pod for the Init Container: Create a YAML file similar to the following: apiVersion: v1 kind: Pod metadata: name: myapp-pod labels: app: myapp spec: containers: - name: myapp-container image: registry.access.redhat.com/ubi8/ubi:latest command: ['sh', '-c', 'echo The app is running! && sleep 3600'] initContainers: - name: init-myservice image: registry.access.redhat.com/ubi8/ubi:latest command: ['sh', '-c', 'until getent hosts myservice; do echo waiting for myservice; sleep 2; done;'] - name: init-mydb image: registry.access.redhat.com/ubi8/ubi:latest command: ['sh', '-c', 'until getent hosts mydb; do echo waiting for mydb; sleep 2; done;'] # ... Create the pod: USD oc create -f myapp.yaml View the status of the pod: USD oc get pods Example output NAME READY STATUS RESTARTS AGE myapp-pod 0/1 Init:0/2 0 5s The pod status, Init:0/2 , indicates it is waiting for the two services. Create the myservice service. Create a YAML file similar to the following: kind: Service apiVersion: v1 metadata: name: myservice spec: ports: - protocol: TCP port: 80 targetPort: 9376 Create the service: USD oc create -f myservice.yaml View the status of the pod: USD oc get pods Example output NAME READY STATUS RESTARTS AGE myapp-pod 0/1 Init:1/2 0 5s The pod status, Init:1/2 , indicates it is waiting for one service, in this case the mydb service. Create the mydb service: Create a YAML file similar to the following: kind: Service apiVersion: v1 metadata: name: mydb spec: ports: - protocol: TCP port: 80 targetPort: 9377 Create the service: USD oc create -f mydb.yaml View the status of the pod: USD oc get pods Example output NAME READY STATUS RESTARTS AGE myapp-pod 1/1 Running 0 2m The pod status indicates that it is no longer waiting for the services and is running. 7.3. Using volumes to persist container data Files in a container are ephemeral. As such, when a container crashes or stops, the data is lost. You can use volumes to persist the data used by the containers in a pod. A volume is a directory, accessible to the Containers in a pod, where data is stored for the life of the pod. 7.3.1. Understanding volumes Volumes are mounted file systems available to pods and their containers which may be backed by a number of host-local or network attached storage endpoints. Containers are not persistent by default; on restart, their contents are cleared.
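As a minimal sketch of the simplest case, the following pod mounts an emptyDir volume into one container; the pod, container, and volume names here are placeholders rather than values from the procedures above:
apiVersion: v1
kind: Pod
metadata:
  name: emptydir-example
spec:
  containers:
  - name: app
    image: registry.access.redhat.com/ubi8/ubi:latest
    command: ['sh', '-c', 'date >> /cache/started.log && sleep 3600']
    volumeMounts:
    - name: cache-volume
      mountPath: /cache
  volumes:
  - name: cache-volume
    emptyDir: {}
Data written to /cache survives container restarts within the pod but is removed when the pod itself is deleted, which matches the volume lifetime described above.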
To ensure that the file system on the volume contains no errors and, if errors are present, to repair them when possible, OpenShift Container Platform invokes the fsck utility prior to the mount utility. This occurs when either adding a volume or updating an existing volume. The simplest volume type is emptyDir , which is a temporary directory on a single machine. Administrators may also allow you to request a persistent volume that is automatically attached to your pods. Note emptyDir volume storage may be restricted by a quota based on the pod's FSGroup, if the FSGroup parameter is enabled by your cluster administrator. 7.3.2. Working with volumes using the OpenShift Container Platform CLI You can use the CLI command oc set volume to add and remove volumes and volume mounts for any object that has a pod template like replication controllers or deployment configs. You can also list volumes in pods or any object that has a pod template. The oc set volume command uses the following general syntax: USD oc set volume <object_selection> <operation> <mandatory_parameters> <options> Object selection Specify one of the following for the object_selection parameter in the oc set volume command: Table 7.1. Object Selection Syntax Description Example <object_type> <name> Selects <name> of type <object_type> . deploymentConfig registry <object_type> / <name> Selects <name> of type <object_type> . deploymentConfig/registry <object_type> --selector= <object_label_selector> Selects resources of type <object_type> that matched the given label selector. deploymentConfig --selector="name=registry" <object_type> --all Selects all resources of type <object_type> . deploymentConfig --all -f or --filename= <file_name> File name, directory, or URL to file to use to edit the resource. -f registry-deployment-config.json Operation Specify --add or --remove for the operation parameter in the oc set volume command. Mandatory parameters Any mandatory parameters are specific to the selected operation and are discussed in later sections. Options Any options are specific to the selected operation and are discussed in later sections. 7.3.3. Listing volumes and volume mounts in a pod You can list volumes and volume mounts in pods or pod templates: Procedure To list volumes: USD oc set volume <object_type>/<name> [options] List volume supported options: Option Description Default --name Name of the volume. -c, --containers Select containers by name. It can also take wildcard '*' that matches any character. '*' For example: To list all volumes for pod p1 : USD oc set volume pod/p1 To list volume v1 defined on all deployment configs: USD oc set volume dc --all --name=v1 7.3.4. Adding volumes to a pod You can add volumes and volume mounts to a pod. Procedure To add a volume, a volume mount, or both to pod templates: USD oc set volume <object_type>/<name> --add [options] Table 7.2. Supported Options for Adding Volumes Option Description Default --name Name of the volume. Automatically generated, if not specified. -t, --type Name of the volume source. Supported values: emptyDir , hostPath , secret , configmap , persistentVolumeClaim or projected . emptyDir -c, --containers Select containers by name. It can also take wildcard '*' that matches any character. '*' -m, --mount-path Mount path inside the selected containers. Do not mount to the container root, / , or any path that is the same in the host and the container. This can corrupt your host system if the container is sufficiently privileged, such as the host /dev/pts files. 
It is safe to mount the host by using /host . --path Host path. Mandatory parameter for --type=hostPath . Do not mount to the container root, / , or any path that is the same in the host and the container. This can corrupt your host system if the container is sufficiently privileged, such as the host /dev/pts files. It is safe to mount the host by using /host . --secret-name Name of the secret. Mandatory parameter for --type=secret . --configmap-name Name of the configmap. Mandatory parameter for --type=configmap . --claim-name Name of the persistent volume claim. Mandatory parameter for --type=persistentVolumeClaim . --source Details of volume source as a JSON string. Recommended if the desired volume source is not supported by --type . -o, --output Display the modified objects instead of updating them on the server. Supported values: json , yaml . --output-version Output the modified objects with the given version. api-version For example: To add a new volume source emptyDir to the registry DeploymentConfig object: USD oc set volume dc/registry --add Tip You can alternatively apply the following YAML to add the volume: Example 7.1. Sample deployment config with an added volume kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: registry namespace: registry spec: replicas: 3 selector: app: httpd template: metadata: labels: app: httpd spec: volumes: 1 - name: volume-pppsw emptyDir: {} containers: - name: httpd image: >- image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest ports: - containerPort: 8080 protocol: TCP 1 Add the volume source emptyDir . To add volume v1 with secret secret1 for replication controller r1 and mount inside the containers at /data : USD oc set volume rc/r1 --add --name=v1 --type=secret --secret-name='secret1' --mount-path=/data Tip You can alternatively apply the following YAML to add the volume: Example 7.2. Sample replication controller with added volume and secret kind: ReplicationController apiVersion: v1 metadata: name: example-1 namespace: example spec: replicas: 0 selector: app: httpd deployment: example-1 deploymentconfig: example template: metadata: creationTimestamp: null labels: app: httpd deployment: example-1 deploymentconfig: example spec: volumes: 1 - name: v1 secret: secretName: secret1 defaultMode: 420 containers: - name: httpd image: >- image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest volumeMounts: 2 - name: v1 mountPath: /data 1 Add the volume and secret. 2 Add the container mount path. To add existing persistent volume v1 with claim name pvc1 to deployment configuration dc.json on disk, mount the volume on container c1 at /data , and update the DeploymentConfig object on the server: USD oc set volume -f dc.json --add --name=v1 --type=persistentVolumeClaim \ --claim-name=pvc1 --mount-path=/data --containers=c1 Tip You can alternatively apply the following YAML to add the volume: Example 7.3. Sample deployment config with persistent volume added kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example namespace: example spec: replicas: 3 selector: app: httpd template: metadata: labels: app: httpd spec: volumes: - name: volume-pppsw emptyDir: {} - name: v1 1 persistentVolumeClaim: claimName: pvc1 containers: - name: httpd image: >- image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest ports: - containerPort: 8080 protocol: TCP volumeMounts: 2 - name: v1 mountPath: /data 1 Add the persistent volume claim named `pvc1. 2 Add the container mount path. 
To add a volume v1 based on Git repository https://github.com/namespace1/project1 with revision 5125c45f9f563 for all replication controllers: USD oc set volume rc --all --add --name=v1 \ --source='{"gitRepo": { "repository": "https://github.com/namespace1/project1", "revision": "5125c45f9f563" }}' 7.3.5. Updating volumes and volume mounts in a pod You can modify the volumes and volume mounts in a pod. Procedure Updating existing volumes using the --overwrite option: USD oc set volume <object_type>/<name> --add --overwrite [options] For example: To replace existing volume v1 for replication controller r1 with existing persistent volume claim pvc1 : USD oc set volume rc/r1 --add --overwrite --name=v1 --type=persistentVolumeClaim --claim-name=pvc1 Tip You can alternatively apply the following YAML to replace the volume: Example 7.4. Sample replication controller with persistent volume claim named pvc1 kind: ReplicationController apiVersion: v1 metadata: name: example-1 namespace: example spec: replicas: 0 selector: app: httpd deployment: example-1 deploymentconfig: example template: metadata: labels: app: httpd deployment: example-1 deploymentconfig: example spec: volumes: - name: v1 1 persistentVolumeClaim: claimName: pvc1 containers: - name: httpd image: >- image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest ports: - containerPort: 8080 protocol: TCP volumeMounts: - name: v1 mountPath: /data 1 Set persistent volume claim to pvc1 . To change the DeploymentConfig object d1 mount point to /opt for volume v1 : USD oc set volume dc/d1 --add --overwrite --name=v1 --mount-path=/opt Tip You can alternatively apply the following YAML to change the mount point: Example 7.5. Sample deployment config with mount point set to opt . kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example namespace: example spec: replicas: 3 selector: app: httpd template: metadata: labels: app: httpd spec: volumes: - name: volume-pppsw emptyDir: {} - name: v2 persistentVolumeClaim: claimName: pvc1 - name: v1 persistentVolumeClaim: claimName: pvc1 containers: - name: httpd image: >- image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest ports: - containerPort: 8080 protocol: TCP volumeMounts: 1 - name: v1 mountPath: /opt 1 Set the mount point to /opt . 7.3.6. Removing volumes and volume mounts from a pod You can remove a volume or volume mount from a pod. Procedure To remove a volume from pod templates: USD oc set volume <object_type>/<name> --remove [options] Table 7.3. Supported options for removing volumes Option Description Default --name Name of the volume. -c, --containers Select containers by name. It can also take wildcard '*' that matches any character. '*' --confirm Indicate that you want to remove multiple volumes at once. -o, --output Display the modified objects instead of updating them on the server. Supported values: json , yaml . --output-version Output the modified objects with the given version. api-version For example: To remove a volume v1 from the DeploymentConfig object d1 : USD oc set volume dc/d1 --remove --name=v1 To unmount volume v1 from container c1 for the DeploymentConfig object d1 and remove the volume v1 if it is not referenced by any containers on d1 : USD oc set volume dc/d1 --remove --name=v1 --containers=c1 To remove all volumes for replication controller r1 : USD oc set volume rc/r1 --remove --confirm 7.3.7. 
Configuring volumes for multiple uses in a pod You can configure a volume so that you can share one volume for multiple uses in a single pod, by using the volumeMounts.subPath property to specify a subPath value inside the volume instead of the volume's root. Note You cannot add a subPath parameter to an existing scheduled pod. Procedure To view the list of files in the volume, run the oc rsh command: USD oc rsh <pod> Example output sh-4.2USD ls /path/to/volume/subpath/mount example_file1 example_file2 example_file3 Specify the subPath : Example Pod spec with subPath parameter apiVersion: v1 kind: Pod metadata: name: my-site spec: containers: - name: mysql image: mysql volumeMounts: - mountPath: /var/lib/mysql name: site-data subPath: mysql 1 - name: php image: php volumeMounts: - mountPath: /var/www/html name: site-data subPath: html 2 volumes: - name: site-data persistentVolumeClaim: claimName: my-site-data 1 Databases are stored in the mysql folder. 2 HTML content is stored in the html folder. 7.4. Mapping volumes using projected volumes A projected volume maps several existing volume sources into the same directory. The following types of volume sources can be projected: Secrets Config Maps Downward API Note All sources are required to be in the same namespace as the pod. 7.4.1. Understanding projected volumes Projected volumes can map any combination of these volume sources into a single directory, allowing the user to: automatically populate a single volume with the keys from multiple secrets, config maps, and with downward API information, so that you can synthesize a single directory with various sources of information; populate a single volume with the keys from multiple secrets, config maps, and with downward API information, explicitly specifying paths for each item, so that you can have full control over the contents of that volume. Important When the RunAsUser permission is set in the security context of a Linux-based pod, the projected files have the correct permissions set, including container user ownership. However, when the Windows equivalent RunAsUsername permission is set in a Windows pod, the kubelet is unable to correctly set ownership on the files in the projected volume. Therefore, the RunAsUsername permission set in the security context of a Windows pod is not honored for Windows projected volumes running in OpenShift Container Platform. The following general scenarios show how you can use projected volumes. Config map, secrets, Downward API. Projected volumes allow you to deploy containers with configuration data that includes passwords. An application using these resources could be deploying Red Hat OpenStack Platform (RHOSP) on Kubernetes. The configuration data might have to be assembled differently depending on whether the services are going to be used for production or for testing. If a pod is labeled with production or testing, the downward API selector metadata.labels can be used to produce the correct RHOSP configs. Config map + secrets. Projected volumes allow you to deploy containers involving configuration data and passwords. For example, you might execute a config map with some sensitive encrypted tasks that are decrypted using a vault password file. ConfigMap + Downward API. Projected volumes allow you to generate a config including the pod name (available via the metadata.name selector). This application can then pass the pod name along with requests to easily determine the source without using IP tracking. Secrets + Downward API.
Projected volumes allow you to use a secret as a public key to encrypt the namespace of the pod (available via the metadata.namespace selector). This example allows the Operator to use the application to deliver the namespace information securely without using an encrypted transport. 7.4.1.1. Example Pod specs The following are examples of Pod specs for creating projected volumes. Pod with a secret, a Downward API, and a config map apiVersion: v1 kind: Pod metadata: name: volume-test spec: containers: - name: container-test image: busybox volumeMounts: 1 - name: all-in-one mountPath: "/projected-volume" 2 readOnly: true 3 volumes: 4 - name: all-in-one 5 projected: defaultMode: 0400 6 sources: - secret: name: mysecret 7 items: - key: username path: my-group/my-username 8 - downwardAPI: 9 items: - path: "labels" fieldRef: fieldPath: metadata.labels - path: "cpu_limit" resourceFieldRef: containerName: container-test resource: limits.cpu - configMap: 10 name: myconfigmap items: - key: config path: my-group/my-config mode: 0777 11 1 Add a volumeMounts section for each container that needs the secret. 2 Specify a path to an unused directory where the secret will appear. 3 Set readOnly to true . 4 Add a volumes block to list each projected volume source. 5 Specify any name for the volume. 6 Set the execute permission on the files. 7 Add a secret. Enter the name of the secret object. Each secret you want to use must be listed. 8 Specify the path to the secrets file under the mountPath . Here, the secrets file is in /projected-volume/my-group/my-username . 9 Add a Downward API source. 10 Add a ConfigMap source. 11 Set the mode for the specific projection Note If there are multiple containers in the pod, each container needs a volumeMounts section, but only one volumes section is needed. Pod with multiple secrets with a non-default permission mode set apiVersion: v1 kind: Pod metadata: name: volume-test spec: containers: - name: container-test image: busybox volumeMounts: - name: all-in-one mountPath: "/projected-volume" readOnly: true volumes: - name: all-in-one projected: defaultMode: 0755 sources: - secret: name: mysecret items: - key: username path: my-group/my-username - secret: name: mysecret2 items: - key: password path: my-group/my-password mode: 511 Note The defaultMode can only be specified at the projected level and not for each volume source. However, as illustrated above, you can explicitly set the mode for each individual projection. 7.4.1.2. Pathing Considerations Collisions Between Keys when Configured Paths are Identical If you configure any keys with the same path, the pod spec will not be accepted as valid. In the following example, the specified path for mysecret and myconfigmap are the same: apiVersion: v1 kind: Pod metadata: name: volume-test spec: containers: - name: container-test image: busybox volumeMounts: - name: all-in-one mountPath: "/projected-volume" readOnly: true volumes: - name: all-in-one projected: sources: - secret: name: mysecret items: - key: username path: my-group/data - configMap: name: myconfigmap items: - key: config path: my-group/data Consider the following situations related to the volume file paths. Collisions Between Keys without Configured Paths The only run-time validation that can occur is when all the paths are known at pod creation, similar to the above scenario. Otherwise, when a conflict occurs the most recent specified resource will overwrite anything preceding it (this is true for resources that are updated after pod creation as well). 
Collisions when One Path is Explicit and the Other is Automatically Projected In the event that there is a collision due to a user specified path matching data that is automatically projected, the latter resource will overwrite anything preceding it as before 7.4.2. Configuring a Projected Volume for a Pod When creating projected volumes, consider the volume file path situations described in Understanding projected volumes . The following example shows how to use a projected volume to mount an existing secret volume source. The steps can be used to create a user name and password secrets from local files. You then create a pod that runs one container, using a projected volume to mount the secrets into the same shared directory. The user name and password values can be any valid string that is base64 encoded. The following example shows admin in base64: USD echo -n "admin" | base64 Example output YWRtaW4= The following example shows the password 1f2d1e2e67df in base64: USD echo -n "1f2d1e2e67df" | base64 Example output MWYyZDFlMmU2N2Rm Procedure To use a projected volume to mount an existing secret volume source. Create the secret: Create a YAML file similar to the following, replacing the password and user information as appropriate: apiVersion: v1 kind: Secret metadata: name: mysecret type: Opaque data: pass: MWYyZDFlMmU2N2Rm user: YWRtaW4= Use the following command to create the secret: USD oc create -f <secrets-filename> For example: USD oc create -f secret.yaml Example output secret "mysecret" created You can check that the secret was created using the following commands: USD oc get secret <secret-name> For example: USD oc get secret mysecret Example output NAME TYPE DATA AGE mysecret Opaque 2 17h USD oc get secret <secret-name> -o yaml For example: USD oc get secret mysecret -o yaml apiVersion: v1 data: pass: MWYyZDFlMmU2N2Rm user: YWRtaW4= kind: Secret metadata: creationTimestamp: 2017-05-30T20:21:38Z name: mysecret namespace: default resourceVersion: "2107" selfLink: /api/v1/namespaces/default/secrets/mysecret uid: 959e0424-4575-11e7-9f97-fa163e4bd54c type: Opaque Create a pod with a projected volume. Create a YAML file similar to the following, including a volumes section: kind: Pod metadata: name: test-projected-volume spec: containers: - name: test-projected-volume image: busybox args: - sleep - "86400" volumeMounts: - name: all-in-one mountPath: "/projected-volume" readOnly: true securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL volumes: - name: all-in-one projected: sources: - secret: name: mysecret 1 1 The name of the secret you created. Create the pod from the configuration file: USD oc create -f <your_yaml_file>.yaml For example: USD oc create -f secret-pod.yaml Example output pod "test-projected-volume" created Verify that the pod container is running, and then watch for changes to the pod: USD oc get pod <name> For example: USD oc get pod test-projected-volume The output should appear similar to the following: Example output NAME READY STATUS RESTARTS AGE test-projected-volume 1/1 Running 0 14s In another terminal, use the oc exec command to open a shell to the running container: USD oc exec -it <pod> <command> For example: USD oc exec -it test-projected-volume -- /bin/sh In your shell, verify that the projected-volumes directory contains your projected sources: / # ls Example output bin home root tmp dev proc run usr etc projected-volume sys var 7.5. 
Allowing containers to consume API objects The Downward API is a mechanism that allows containers to consume information about API objects without coupling to OpenShift Container Platform. Such information includes the pod's name, namespace, and resource values. Containers can consume information from the downward API using environment variables or a volume plugin. 7.5.1. Expose pod information to Containers using the Downward API The Downward API contains such information as the pod's name, project, and resource values. Containers can consume information from the downward API using environment variables or a volume plugin. Fields within the pod are selected using the FieldRef API type. FieldRef has two fields: Field Description fieldPath The path of the field to select, relative to the pod. apiVersion The API version to interpret the fieldPath selector within. Currently, the valid selectors in the v1 API include: Selector Description metadata.name The pod's name. This is supported in both environment variables and volumes. metadata.namespace The pod's namespace.This is supported in both environment variables and volumes. metadata.labels The pod's labels. This is only supported in volumes and not in environment variables. metadata.annotations The pod's annotations. This is only supported in volumes and not in environment variables. status.podIP The pod's IP. This is only supported in environment variables and not volumes. The apiVersion field, if not specified, defaults to the API version of the enclosing pod template. 7.5.2. Understanding how to consume container values using the downward API You containers can consume API values using environment variables or a volume plugin. Depending on the method you choose, containers can consume: Pod name Pod project/namespace Pod annotations Pod labels Annotations and labels are available using only a volume plugin. 7.5.2.1. Consuming container values using environment variables When using a container's environment variables, use the EnvVar type's valueFrom field (of type EnvVarSource ) to specify that the variable's value should come from a FieldRef source instead of the literal value specified by the value field. Only constant attributes of the pod can be consumed this way, as environment variables cannot be updated once a process is started in a way that allows the process to be notified that the value of a variable has changed. The fields supported using environment variables are: Pod name Pod project/namespace Procedure Create a new pod spec that contains the environment variables you want the container to consume: Create a pod.yaml file similar to the following: apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: containers: - name: env-test-container image: gcr.io/google_containers/busybox command: [ "/bin/sh", "-c", "env" ] env: - name: MY_POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: MY_POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace restartPolicy: Never # ... Create the pod from the pod.yaml file: USD oc create -f pod.yaml Verification Check the container's logs for the MY_POD_NAME and MY_POD_NAMESPACE values: USD oc logs -p dapi-env-test-pod 7.5.2.2. Consuming container values using a volume plugin You containers can consume API values using a volume plugin. 
Containers can consume: Pod name Pod project/namespace Pod annotations Pod labels Procedure To use the volume plugin: Create a new pod spec that contains the environment variables you want the container to consume: Create a volume-pod.yaml file similar to the following: kind: Pod apiVersion: v1 metadata: labels: zone: us-east-coast cluster: downward-api-test-cluster1 rack: rack-123 name: dapi-volume-test-pod annotations: annotation1: "345" annotation2: "456" spec: containers: - name: volume-test-container image: gcr.io/google_containers/busybox command: ["sh", "-c", "cat /tmp/etc/pod_labels /tmp/etc/pod_annotations"] volumeMounts: - name: podinfo mountPath: /tmp/etc readOnly: false volumes: - name: podinfo downwardAPI: defaultMode: 420 items: - fieldRef: fieldPath: metadata.name path: pod_name - fieldRef: fieldPath: metadata.namespace path: pod_namespace - fieldRef: fieldPath: metadata.labels path: pod_labels - fieldRef: fieldPath: metadata.annotations path: pod_annotations restartPolicy: Never # ... Create the pod from the volume-pod.yaml file: USD oc create -f volume-pod.yaml Verification Check the container's logs and verify the presence of the configured fields: USD oc logs -p dapi-volume-test-pod Example output cluster=downward-api-test-cluster1 rack=rack-123 zone=us-east-coast annotation1=345 annotation2=456 kubernetes.io/config.source=api 7.5.3. Understanding how to consume container resources using the Downward API When creating pods, you can use the Downward API to inject information about computing resource requests and limits so that image and application authors can correctly create an image for specific environments. You can do this using environment variable or a volume plugin. 7.5.3.1. Consuming container resources using environment variables When creating pods, you can use the Downward API to inject information about computing resource requests and limits using environment variables. When creating the pod configuration, specify environment variables that correspond to the contents of the resources field in the spec.container field. Note If the resource limits are not included in the container configuration, the downward API defaults to the node's CPU and memory allocatable values. Procedure Create a new pod spec that contains the resources you want to inject: Create a pod.yaml file similar to the following: apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: containers: - name: test-container image: gcr.io/google_containers/busybox:1.24 command: [ "/bin/sh", "-c", "env" ] resources: requests: memory: "32Mi" cpu: "125m" limits: memory: "64Mi" cpu: "250m" env: - name: MY_CPU_REQUEST valueFrom: resourceFieldRef: resource: requests.cpu - name: MY_CPU_LIMIT valueFrom: resourceFieldRef: resource: limits.cpu - name: MY_MEM_REQUEST valueFrom: resourceFieldRef: resource: requests.memory - name: MY_MEM_LIMIT valueFrom: resourceFieldRef: resource: limits.memory # ... Create the pod from the pod.yaml file: USD oc create -f pod.yaml 7.5.3.2. Consuming container resources using a volume plugin When creating pods, you can use the Downward API to inject information about computing resource requests and limits using a volume plugin. When creating the pod configuration, use the spec.volumes.downwardAPI.items field to describe the desired resources that correspond to the spec.resources field. Note If the resource limits are not included in the container configuration, the Downward API defaults to the node's CPU and memory allocatable values. 
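To see the node allocatable values that the downward API falls back to when limits are not set, you can query the node directly. This is a small sketch; the node name is a placeholder:
$ oc get node <node_name> -o jsonpath='{.status.allocatable}'
$ oc describe node <node_name> | grep -A 6 Allocatable
Either command shows the node's allocatable CPU and memory, which are the values injected in place of missing limits.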
Procedure Create a new pod spec that contains the resources you want to inject: Create a pod.yaml file similar to the following: apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: containers: - name: client-container image: gcr.io/google_containers/busybox:1.24 command: ["sh", "-c", "while true; do echo; if [[ -e /etc/cpu_limit ]]; then cat /etc/cpu_limit; fi; if [[ -e /etc/cpu_request ]]; then cat /etc/cpu_request; fi; if [[ -e /etc/mem_limit ]]; then cat /etc/mem_limit; fi; if [[ -e /etc/mem_request ]]; then cat /etc/mem_request; fi; sleep 5; done"] resources: requests: memory: "32Mi" cpu: "125m" limits: memory: "64Mi" cpu: "250m" volumeMounts: - name: podinfo mountPath: /etc readOnly: false volumes: - name: podinfo downwardAPI: items: - path: "cpu_limit" resourceFieldRef: containerName: client-container resource: limits.cpu - path: "cpu_request" resourceFieldRef: containerName: client-container resource: requests.cpu - path: "mem_limit" resourceFieldRef: containerName: client-container resource: limits.memory - path: "mem_request" resourceFieldRef: containerName: client-container resource: requests.memory # ... Create the pod from the volume-pod.yaml file: USD oc create -f volume-pod.yaml 7.5.4. Consuming secrets using the Downward API When creating pods, you can use the downward API to inject secrets so image and application authors can create an image for specific environments. Procedure Create a secret to inject: Create a secret.yaml file similar to the following: apiVersion: v1 kind: Secret metadata: name: mysecret data: password: <password> username: <username> type: kubernetes.io/basic-auth Create the secret object from the secret.yaml file: USD oc create -f secret.yaml Create a pod that references the username field from the above Secret object: Create a pod.yaml file similar to the following: apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: containers: - name: env-test-container image: gcr.io/google_containers/busybox command: [ "/bin/sh", "-c", "env" ] env: - name: MY_SECRET_USERNAME valueFrom: secretKeyRef: name: mysecret key: username restartPolicy: Never # ... Create the pod from the pod.yaml file: USD oc create -f pod.yaml Verification Check the container's logs for the MY_SECRET_USERNAME value: USD oc logs -p dapi-env-test-pod 7.5.5. Consuming configuration maps using the Downward API When creating pods, you can use the Downward API to inject configuration map values so image and application authors can create an image for specific environments. Procedure Create a config map with the values to inject: Create a configmap.yaml file similar to the following: apiVersion: v1 kind: ConfigMap metadata: name: myconfigmap data: mykey: myvalue Create the config map from the configmap.yaml file: USD oc create -f configmap.yaml Create a pod that references the above config map: Create a pod.yaml file similar to the following: apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: containers: - name: env-test-container image: gcr.io/google_containers/busybox command: [ "/bin/sh", "-c", "env" ] env: - name: MY_CONFIGMAP_VALUE valueFrom: configMapKeyRef: name: myconfigmap key: mykey restartPolicy: Always # ... Create the pod from the pod.yaml file: USD oc create -f pod.yaml Verification Check the container's logs for the MY_CONFIGMAP_VALUE value: USD oc logs -p dapi-env-test-pod 7.5.6. Referencing environment variables When creating pods, you can reference the value of a previously defined environment variable by using the USD() syntax. 
If the environment variable reference can not be resolved, the value will be left as the provided string. Procedure Create a pod that references an existing environment variable: Create a pod.yaml file similar to the following: apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: containers: - name: env-test-container image: gcr.io/google_containers/busybox command: [ "/bin/sh", "-c", "env" ] env: - name: MY_EXISTING_ENV value: my_value - name: MY_ENV_VAR_REF_ENV value: USD(MY_EXISTING_ENV) restartPolicy: Never # ... Create the pod from the pod.yaml file: USD oc create -f pod.yaml Verification Check the container's logs for the MY_ENV_VAR_REF_ENV value: USD oc logs -p dapi-env-test-pod 7.5.7. Escaping environment variable references When creating a pod, you can escape an environment variable reference by using a double dollar sign. The value will then be set to a single dollar sign version of the provided value. Procedure Create a pod that references an existing environment variable: Create a pod.yaml file similar to the following: apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: containers: - name: env-test-container image: gcr.io/google_containers/busybox command: [ "/bin/sh", "-c", "env" ] env: - name: MY_NEW_ENV value: USDUSD(SOME_OTHER_ENV) restartPolicy: Never # ... Create the pod from the pod.yaml file: USD oc create -f pod.yaml Verification Check the container's logs for the MY_NEW_ENV value: USD oc logs -p dapi-env-test-pod 7.6. Copying files to or from an OpenShift Container Platform container You can use the CLI to copy local files to or from a remote directory in a container using the rsync command. 7.6.1. Understanding how to copy files The oc rsync command, or remote sync, is a useful tool for copying database archives to and from your pods for backup and restore purposes. You can also use oc rsync to copy source code changes into a running pod for development debugging, when the running pod supports hot reload of source files. USD oc rsync <source> <destination> [-c <container>] 7.6.1.1. Requirements Specifying the Copy Source The source argument of the oc rsync command must point to either a local directory or a pod directory. Individual files are not supported. When specifying a pod directory the directory name must be prefixed with the pod name: <pod name>:<dir> If the directory name ends in a path separator ( / ), only the contents of the directory are copied to the destination. Otherwise, the directory and its contents are copied to the destination. Specifying the Copy Destination The destination argument of the oc rsync command must point to a directory. If the directory does not exist, but rsync is used for copy, the directory is created for you. Deleting Files at the Destination The --delete flag may be used to delete any files in the remote directory that are not in the local directory. Continuous Syncing on File Change Using the --watch option causes the command to monitor the source path for any file system changes, and synchronizes changes when they occur. With this argument, the command runs forever. Synchronization occurs after short quiet periods to ensure a rapidly changing file system does not result in continuous synchronization calls. When using the --watch option, the behavior is effectively the same as manually invoking oc rsync repeatedly, including any arguments normally passed to oc rsync . Therefore, you can control the behavior via the same flags used with manual invocations of oc rsync , such as --delete . 7.6.2. 
Copying files to and from containers Support for copying local files to or from a container is built into the CLI. Prerequisites When working with oc rsync , note the following: rsync must be installed. The oc rsync command uses the local rsync tool, if present on the client machine and the remote container. If rsync is not found locally or in the remote container, a tar archive is created locally and sent to the container where the tar utility is used to extract the files. If tar is not available in the remote container, the copy will fail. The tar copy method does not provide the same functionality as oc rsync . For example, oc rsync creates the destination directory if it does not exist and only sends files that are different between the source and the destination. Note In Windows, the cwRsync client should be installed and added to the PATH for use with the oc rsync command. Procedure To copy a local directory to a pod directory: USD oc rsync <local-dir> <pod-name>:/<remote-dir> -c <container-name> For example: USD oc rsync /home/user/source devpod1234:/src -c user-container To copy a pod directory to a local directory: USD oc rsync devpod1234:/src /home/user/source Example output USD oc rsync devpod1234:/src/status.txt /home/user/ 7.6.3. Using advanced Rsync features The oc rsync command exposes fewer command line options than standard rsync . In the case that you want to use a standard rsync command line option that is not available in oc rsync , for example the --exclude-from=FILE option, it might be possible to use standard rsync 's --rsh ( -e ) option or RSYNC_RSH environment variable as a workaround, as follows: USD rsync --rsh='oc rsh' --exclude-from=<file_name> <local-dir> <pod-name>:/<remote-dir> or: Export the RSYNC_RSH variable: USD export RSYNC_RSH='oc rsh' Then, run the rsync command: USD rsync --exclude-from=<file_name> <local-dir> <pod-name>:/<remote-dir> Both of the above examples configure standard rsync to use oc rsh as its remote shell program to enable it to connect to the remote pod, and are an alternative to running oc rsync . 7.7. Executing remote commands in an OpenShift Container Platform container You can use the CLI to execute remote commands in an OpenShift Container Platform container. 7.7.1. Executing remote commands in containers Support for remote container command execution is built into the CLI. Procedure To run a command in a container: USD oc exec <pod> [-c <container>] -- <command> [<arg_1> ... <arg_n>] For example: USD oc exec mypod date Example output Thu Apr 9 02:21:53 UTC 2015 Important For security purposes , the oc exec command does not work when accessing privileged containers except when the command is executed by a cluster-admin user. 7.7.2. Protocol for initiating a remote command from a client Clients initiate the execution of a remote command in a container by issuing a request to the Kubernetes API server: /proxy/nodes/<node_name>/exec/<namespace>/<pod>/<container>?command=<command> In the above URL: <node_name> is the FQDN of the node. <namespace> is the project of the target pod. <pod> is the name of the target pod. <container> is the name of the target container. <command> is the desired command to be executed. For example: /proxy/nodes/node123.openshift.com/exec/myns/mypod/mycontainer?command=date Additionally, the client can add parameters to the request to indicate if: the client should send input to the remote container's command (stdin). the client's terminal is a TTY. 
the remote container's command should send output from stdout to the client. the remote container's command should send output from stderr to the client. After sending an exec request to the API server, the client upgrades the connection to one that supports multiplexed streams; the current implementation uses HTTP/2 . The client creates one stream each for stdin, stdout, and stderr. To distinguish among the streams, the client sets the streamType header on the stream to one of stdin , stdout , or stderr . The client closes all streams, the upgraded connection, and the underlying connection when it is finished with the remote command execution request. 7.8. Using port forwarding to access applications in a container OpenShift Container Platform supports port forwarding to pods. 7.8.1. Understanding port forwarding You can use the CLI to forward one or more local ports to a pod. This allows you to listen on a given or random port locally, and have data forwarded to and from given ports in the pod. Support for port forwarding is built into the CLI: USD oc port-forward <pod> [<local_port>:]<remote_port> [...[<local_port_n>:]<remote_port_n>] The CLI listens on each local port specified by the user, forwarding using the protocol described below. Ports may be specified using the following formats: 5000 The client listens on port 5000 locally and forwards to 5000 in the pod. 6000:5000 The client listens on port 6000 locally and forwards to 5000 in the pod. :5000 or 0:5000 The client selects a free local port and forwards to 5000 in the pod. OpenShift Container Platform handles port-forward requests from clients. Upon receiving a request, OpenShift Container Platform upgrades the response and waits for the client to create port-forwarding streams. When OpenShift Container Platform receives a new stream, it copies data between the stream and the pod's port. Architecturally, there are options for forwarding to a pod's port. The supported OpenShift Container Platform implementation invokes nsenter directly on the node host to enter the pod's network namespace, then invokes socat to copy data between the stream and the pod's port. However, a custom implementation could include running a helper pod that then runs nsenter and socat , so that those binaries are not required to be installed on the host. 7.8.2. Using port forwarding You can use the CLI to port-forward one or more local ports to a pod. Procedure Use the following command to listen on the specified port in a pod: USD oc port-forward <pod> [<local_port>:]<remote_port> [...[<local_port_n>:]<remote_port_n>] For example: Use the following command to listen on ports 5000 and 6000 locally and forward data to and from ports 5000 and 6000 in the pod: USD oc port-forward <pod> 5000 6000 Example output Forwarding from 127.0.0.1:5000 -> 5000 Forwarding from [::1]:5000 -> 5000 Forwarding from 127.0.0.1:6000 -> 6000 Forwarding from [::1]:6000 -> 6000 Use the following command to listen on port 8888 locally and forward to 5000 in the pod: USD oc port-forward <pod> 8888:5000 Example output Forwarding from 127.0.0.1:8888 -> 5000 Forwarding from [::1]:8888 -> 5000 Use the following command to listen on a free port locally and forward to 5000 in the pod: USD oc port-forward <pod> :5000 Example output Forwarding from 127.0.0.1:42390 -> 5000 Forwarding from [::1]:42390 -> 5000 Or: USD oc port-forward <pod> 0:5000 7.8.3. 
Protocol for initiating port forwarding from a client Clients initiate port forwarding to a pod by issuing a request to the Kubernetes API server: In the above URL: <node_name> is the FQDN of the node. <namespace> is the namespace of the target pod. <pod> is the name of the target pod. For example: After sending a port forward request to the API server, the client upgrades the connection to one that supports multiplexed streams; the current implementation uses Hyptertext Transfer Protocol Version 2 (HTTP/2) . The client creates a stream with the port header containing the target port in the pod. All data written to the stream is delivered via the kubelet to the target pod and port. Similarly, all data sent from the pod for that forwarded connection is delivered back to the same stream in the client. The client closes all streams, the upgraded connection, and the underlying connection when it is finished with the port forwarding request. 7.9. Using sysctls in containers Sysctl settings are exposed via Kubernetes, allowing users to modify certain kernel parameters at runtime for namespaces within a container. Only sysctls that are namespaced can be set independently on pods. If a sysctl is not namespaced, called node-level , you must use another method of setting the sysctl, such as the Node Tuning Operator . Moreover, only those sysctls considered safe are whitelisted by default; you can manually enable other unsafe sysctls on the node to be available to the user. 7.9.1. About sysctls In Linux, the sysctl interface allows an administrator to modify kernel parameters at runtime. Parameters are available via the /proc/sys/ virtual process file system. The parameters cover various subsystems, such as: kernel (common prefix: kernel. ) networking (common prefix: net. ) virtual memory (common prefix: vm. ) MDADM (common prefix: dev. ) More subsystems are described in Kernel documentation . To get a list of all parameters, run: USD sudo sysctl -a 7.9.1.1. Namespaced versus node-level sysctls A number of sysctls are namespaced in the Linux kernels. This means that you can set them independently for each pod on a node. Being namespaced is a requirement for sysctls to be accessible in a pod context within Kubernetes. The following sysctls are known to be namespaced: kernel.shm* kernel.msg* kernel.sem fs.mqueue.* Additionally, most of the sysctls in the net. * group are known to be namespaced. Their namespace adoption differs based on the kernel version and distributor. Sysctls that are not namespaced are called node-level and must be set manually by the cluster administrator, either by means of the underlying Linux distribution of the nodes, such as by modifying the /etc/sysctls.conf file, or by using a daemon set with privileged containers. You can use the Node Tuning Operator to set node-level sysctls. Note Consider marking nodes with special sysctls as tainted. Only schedule pods onto them that need those sysctl settings. Use the taints and toleration feature to mark the nodes. 7.9.1.2. Safe versus unsafe sysctls Sysctls are grouped into safe and unsafe sysctls. For a sysctl to be considered safe, it must use proper namespacing and must be properly isolated between pods on the same node. 
This means that if you set a sysctl for one pod it must not: Influence any other pod on the node Harm the node's health Gain CPU or memory resources outside of the resource limits of a pod OpenShift Container Platform supports, or whitelists, the following sysctls in the safe set: kernel.shm_rmid_forced net.ipv4.ip_local_port_range net.ipv4.tcp_syncookies net.ipv4.ping_group_range All safe sysctls are enabled by default. You can use a sysctl in a pod by modifying the Pod spec. Any sysctl not whitelisted by OpenShift Container Platform is considered unsafe for OpenShift Container Platform. Note that being namespaced alone is not sufficient for the sysctl to be considered safe. All unsafe sysctls are disabled by default, and the cluster administrator must manually enable them on a per-node basis. Pods with disabled unsafe sysctls are scheduled but do not launch. USD oc get pod Example output NAME READY STATUS RESTARTS AGE hello-pod 0/1 SysctlForbidden 0 14s 7.9.2. Setting sysctls for a pod You can set sysctls on pods using the pod's securityContext . The securityContext applies to all containers in the same pod. Safe sysctls are allowed by default. A pod with unsafe sysctls fails to launch on any node unless the cluster administrator explicitly enables unsafe sysctls for that node. As with node-level sysctls, use the taints and toleration feature or labels on nodes to schedule those pods onto the right nodes. The following example uses the pod securityContext to set a safe sysctl kernel.shm_rmid_forced and two unsafe sysctls, net.core.somaxconn and kernel.msgmax . There is no distinction between safe and unsafe sysctls in the specification. Warning To avoid destabilizing your operating system, modify sysctl parameters only after you understand their effects. Procedure To use safe and unsafe sysctls: Modify the YAML file that defines the pod and add the securityContext spec, as shown in the following example: apiVersion: v1 kind: Pod metadata: name: sysctl-example spec: securityContext: sysctls: - name: kernel.shm_rmid_forced value: "0" - name: net.core.somaxconn value: "1024" - name: kernel.msgmax value: "65536" ... Create the pod: USD oc apply -f <file-name>.yaml If the unsafe sysctls are not allowed for the node, the pod is scheduled, but does not deploy: USD oc get pod Example output NAME READY STATUS RESTARTS AGE hello-pod 0/1 SysctlForbidden 0 14s 7.9.3. Enabling unsafe sysctls A cluster administrator can allow certain unsafe sysctls for very special situations such as high performance or real-time application tuning. If you want to use unsafe sysctls, a cluster administrator must enable them individually for a specific type of node. The sysctls must be namespaced. You can further control which sysctls are set in pods by specifying lists of sysctls or sysctl patterns in the allowedUnsafeSysctls field of the Security Context Constraints. The allowedUnsafeSysctls option controls specific needs such as high performance or real-time application tuning. Warning Due to their nature of being unsafe, the use of unsafe sysctls is at-your-own-risk and can lead to severe problems, such as improper behavior of containers, resource shortage, or breaking a node. 
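For reference, the allowedUnsafeSysctls field mentioned above can also appear on a custom security context constraint to limit which unsafe sysctls pods are permitted to request; it complements, rather than replaces, the per-node kubelet allowlist configured in the procedure that follows. The following is a minimal sketch only: the SCC name, the empty user and group bindings, and the sysctl patterns are illustrative assumptions, not values required by this procedure.
apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
  name: sysctl-allowlist          # hypothetical SCC name
allowPrivilegedContainer: false
runAsUser:
  type: MustRunAsRange
seLinuxContext:
  type: MustRunAs
fsGroup:
  type: MustRunAs
supplementalGroups:
  type: RunAsAny
users: []                         # bind specific users or service accounts as needed
groups: []
allowedUnsafeSysctls:             # sysctl names or patterns that pods may request
- "kernel.msg*"
- "net.core.somaxconn"
forbiddenSysctls: []              # optionally block specific sysctls outright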
Procedure Add a label to the machine config pool where the containers with the unsafe sysctls will run: USD oc edit machineconfigpool worker apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: 2019-02-08T14:52:39Z generation: 1 labels: custom-kubelet: sysctl 1 1 Add a key: value pair label. Create a KubeletConfig custom resource (CR): apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: custom-kubelet spec: machineConfigPoolSelector: matchLabels: custom-kubelet: sysctl 1 kubeletConfig: allowedUnsafeSysctls: 2 - "kernel.msg*" - "net.core.somaxconn" 1 Specify the label from the machine config pool. 2 List the unsafe sysctls you want to allow. Create the object: USD oc apply -f set-sysctl-worker.yaml A new MachineConfig object named in the 99-worker-XXXXXX-XXXXX-XXXX-XXXXX-kubelet format is created. Wait for the cluster to reboot, using the machineconfigpool object status fields. For example: status: conditions: - lastTransitionTime: '2019-08-11T15:32:00Z' message: >- All nodes are updating to rendered-worker-ccbfb5d2838d65013ab36300b7b3dc13 reason: '' status: 'True' type: Updating A message similar to the following appears when the cluster is ready: - lastTransitionTime: '2019-08-11T16:00:00Z' message: >- All nodes are updated with rendered-worker-ccbfb5d2838d65013ab36300b7b3dc13 reason: '' status: 'True' type: Updated When the cluster is ready, check for the merged KubeletConfig object in the new MachineConfig object: USD oc get machineconfig 99-worker-XXXXXX-XXXXX-XXXX-XXXXX-kubelet -o json | grep ownerReference -A7 "ownerReferences": [ { "apiVersion": "machineconfiguration.openshift.io/v1", "blockOwnerDeletion": true, "controller": true, "kind": "KubeletConfig", "name": "custom-kubelet", "uid": "3f64a766-bae8-11e9-abe8-0a1a2a4813f2" } ] You can now add unsafe sysctls to pods as needed. | [
"USD(nproc) X 1/2 MiB",
"for i in {1..100}; do sleep 1; if dig myservice; then exit 0; fi; done; exit 1",
"curl -X POST http://USDMANAGEMENT_SERVICE_HOST:USDMANAGEMENT_SERVICE_PORT/register -d 'instance=USD()&ip=USD()'",
"apiVersion: v1 kind: Pod metadata: name: myapp-pod labels: app: myapp spec: containers: - name: myapp-container image: registry.access.redhat.com/ubi8/ubi:latest command: ['sh', '-c', 'echo The app is running! && sleep 3600'] initContainers: - name: init-myservice image: registry.access.redhat.com/ubi8/ubi:latest command: ['sh', '-c', 'until getent hosts myservice; do echo waiting for myservice; sleep 2; done;'] - name: init-mydb image: registry.access.redhat.com/ubi8/ubi:latest command: ['sh', '-c', 'until getent hosts mydb; do echo waiting for mydb; sleep 2; done;']",
"oc create -f myapp.yaml",
"oc get pods",
"NAME READY STATUS RESTARTS AGE myapp-pod 0/1 Init:0/2 0 5s",
"kind: Service apiVersion: v1 metadata: name: myservice spec: ports: - protocol: TCP port: 80 targetPort: 9376",
"oc create -f myservice.yaml",
"oc get pods",
"NAME READY STATUS RESTARTS AGE myapp-pod 0/1 Init:1/2 0 5s",
"kind: Service apiVersion: v1 metadata: name: mydb spec: ports: - protocol: TCP port: 80 targetPort: 9377",
"oc create -f mydb.yaml",
"oc get pods",
"NAME READY STATUS RESTARTS AGE myapp-pod 1/1 Running 0 2m",
"oc set volume <object_selection> <operation> <mandatory_parameters> <options>",
"oc set volume <object_type>/<name> [options]",
"oc set volume pod/p1",
"oc set volume dc --all --name=v1",
"oc set volume <object_type>/<name> --add [options]",
"oc set volume dc/registry --add",
"kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: registry namespace: registry spec: replicas: 3 selector: app: httpd template: metadata: labels: app: httpd spec: volumes: 1 - name: volume-pppsw emptyDir: {} containers: - name: httpd image: >- image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest ports: - containerPort: 8080 protocol: TCP",
"oc set volume rc/r1 --add --name=v1 --type=secret --secret-name='secret1' --mount-path=/data",
"kind: ReplicationController apiVersion: v1 metadata: name: example-1 namespace: example spec: replicas: 0 selector: app: httpd deployment: example-1 deploymentconfig: example template: metadata: creationTimestamp: null labels: app: httpd deployment: example-1 deploymentconfig: example spec: volumes: 1 - name: v1 secret: secretName: secret1 defaultMode: 420 containers: - name: httpd image: >- image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest volumeMounts: 2 - name: v1 mountPath: /data",
"oc set volume -f dc.json --add --name=v1 --type=persistentVolumeClaim --claim-name=pvc1 --mount-path=/data --containers=c1",
"kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example namespace: example spec: replicas: 3 selector: app: httpd template: metadata: labels: app: httpd spec: volumes: - name: volume-pppsw emptyDir: {} - name: v1 1 persistentVolumeClaim: claimName: pvc1 containers: - name: httpd image: >- image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest ports: - containerPort: 8080 protocol: TCP volumeMounts: 2 - name: v1 mountPath: /data",
"oc set volume rc --all --add --name=v1 --source='{\"gitRepo\": { \"repository\": \"https://github.com/namespace1/project1\", \"revision\": \"5125c45f9f563\" }}'",
"oc set volume <object_type>/<name> --add --overwrite [options]",
"oc set volume rc/r1 --add --overwrite --name=v1 --type=persistentVolumeClaim --claim-name=pvc1",
"kind: ReplicationController apiVersion: v1 metadata: name: example-1 namespace: example spec: replicas: 0 selector: app: httpd deployment: example-1 deploymentconfig: example template: metadata: labels: app: httpd deployment: example-1 deploymentconfig: example spec: volumes: - name: v1 1 persistentVolumeClaim: claimName: pvc1 containers: - name: httpd image: >- image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest ports: - containerPort: 8080 protocol: TCP volumeMounts: - name: v1 mountPath: /data",
"oc set volume dc/d1 --add --overwrite --name=v1 --mount-path=/opt",
"kind: DeploymentConfig apiVersion: apps.openshift.io/v1 metadata: name: example namespace: example spec: replicas: 3 selector: app: httpd template: metadata: labels: app: httpd spec: volumes: - name: volume-pppsw emptyDir: {} - name: v2 persistentVolumeClaim: claimName: pvc1 - name: v1 persistentVolumeClaim: claimName: pvc1 containers: - name: httpd image: >- image-registry.openshift-image-registry.svc:5000/openshift/httpd:latest ports: - containerPort: 8080 protocol: TCP volumeMounts: 1 - name: v1 mountPath: /opt",
"oc set volume <object_type>/<name> --remove [options]",
"oc set volume dc/d1 --remove --name=v1",
"oc set volume dc/d1 --remove --name=v1 --containers=c1",
"oc set volume rc/r1 --remove --confirm",
"oc rsh <pod>",
"sh-4.2USD ls /path/to/volume/subpath/mount example_file1 example_file2 example_file3",
"apiVersion: v1 kind: Pod metadata: name: my-site spec: containers: - name: mysql image: mysql volumeMounts: - mountPath: /var/lib/mysql name: site-data subPath: mysql 1 - name: php image: php volumeMounts: - mountPath: /var/www/html name: site-data subPath: html 2 volumes: - name: site-data persistentVolumeClaim: claimName: my-site-data",
"apiVersion: v1 kind: Pod metadata: name: volume-test spec: containers: - name: container-test image: busybox volumeMounts: 1 - name: all-in-one mountPath: \"/projected-volume\" 2 readOnly: true 3 volumes: 4 - name: all-in-one 5 projected: defaultMode: 0400 6 sources: - secret: name: mysecret 7 items: - key: username path: my-group/my-username 8 - downwardAPI: 9 items: - path: \"labels\" fieldRef: fieldPath: metadata.labels - path: \"cpu_limit\" resourceFieldRef: containerName: container-test resource: limits.cpu - configMap: 10 name: myconfigmap items: - key: config path: my-group/my-config mode: 0777 11",
"apiVersion: v1 kind: Pod metadata: name: volume-test spec: containers: - name: container-test image: busybox volumeMounts: - name: all-in-one mountPath: \"/projected-volume\" readOnly: true volumes: - name: all-in-one projected: defaultMode: 0755 sources: - secret: name: mysecret items: - key: username path: my-group/my-username - secret: name: mysecret2 items: - key: password path: my-group/my-password mode: 511",
"apiVersion: v1 kind: Pod metadata: name: volume-test spec: containers: - name: container-test image: busybox volumeMounts: - name: all-in-one mountPath: \"/projected-volume\" readOnly: true volumes: - name: all-in-one projected: sources: - secret: name: mysecret items: - key: username path: my-group/data - configMap: name: myconfigmap items: - key: config path: my-group/data",
"echo -n \"admin\" | base64",
"YWRtaW4=",
"echo -n \"1f2d1e2e67df\" | base64",
"MWYyZDFlMmU2N2Rm",
"apiVersion: v1 kind: Secret metadata: name: mysecret type: Opaque data: pass: MWYyZDFlMmU2N2Rm user: YWRtaW4=",
"oc create -f <secrets-filename>",
"oc create -f secret.yaml",
"secret \"mysecret\" created",
"oc get secret <secret-name>",
"oc get secret mysecret",
"NAME TYPE DATA AGE mysecret Opaque 2 17h",
"oc get secret <secret-name> -o yaml",
"oc get secret mysecret -o yaml",
"apiVersion: v1 data: pass: MWYyZDFlMmU2N2Rm user: YWRtaW4= kind: Secret metadata: creationTimestamp: 2017-05-30T20:21:38Z name: mysecret namespace: default resourceVersion: \"2107\" selfLink: /api/v1/namespaces/default/secrets/mysecret uid: 959e0424-4575-11e7-9f97-fa163e4bd54c type: Opaque",
"kind: Pod metadata: name: test-projected-volume spec: containers: - name: test-projected-volume image: busybox args: - sleep - \"86400\" volumeMounts: - name: all-in-one mountPath: \"/projected-volume\" readOnly: true securityContext: allowPrivilegeEscalation: false capabilities: drop: - ALL volumes: - name: all-in-one projected: sources: - secret: name: mysecret 1",
"oc create -f <your_yaml_file>.yaml",
"oc create -f secret-pod.yaml",
"pod \"test-projected-volume\" created",
"oc get pod <name>",
"oc get pod test-projected-volume",
"NAME READY STATUS RESTARTS AGE test-projected-volume 1/1 Running 0 14s",
"oc exec -it <pod> <command>",
"oc exec -it test-projected-volume -- /bin/sh",
"/ # ls",
"bin home root tmp dev proc run usr etc projected-volume sys var",
"apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: containers: - name: env-test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"env\" ] env: - name: MY_POD_NAME valueFrom: fieldRef: fieldPath: metadata.name - name: MY_POD_NAMESPACE valueFrom: fieldRef: fieldPath: metadata.namespace restartPolicy: Never",
"oc create -f pod.yaml",
"oc logs -p dapi-env-test-pod",
"kind: Pod apiVersion: v1 metadata: labels: zone: us-east-coast cluster: downward-api-test-cluster1 rack: rack-123 name: dapi-volume-test-pod annotations: annotation1: \"345\" annotation2: \"456\" spec: containers: - name: volume-test-container image: gcr.io/google_containers/busybox command: [\"sh\", \"-c\", \"cat /tmp/etc/pod_labels /tmp/etc/pod_annotations\"] volumeMounts: - name: podinfo mountPath: /tmp/etc readOnly: false volumes: - name: podinfo downwardAPI: defaultMode: 420 items: - fieldRef: fieldPath: metadata.name path: pod_name - fieldRef: fieldPath: metadata.namespace path: pod_namespace - fieldRef: fieldPath: metadata.labels path: pod_labels - fieldRef: fieldPath: metadata.annotations path: pod_annotations restartPolicy: Never",
"oc create -f volume-pod.yaml",
"oc logs -p dapi-volume-test-pod",
"cluster=downward-api-test-cluster1 rack=rack-123 zone=us-east-coast annotation1=345 annotation2=456 kubernetes.io/config.source=api",
"apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: containers: - name: test-container image: gcr.io/google_containers/busybox:1.24 command: [ \"/bin/sh\", \"-c\", \"env\" ] resources: requests: memory: \"32Mi\" cpu: \"125m\" limits: memory: \"64Mi\" cpu: \"250m\" env: - name: MY_CPU_REQUEST valueFrom: resourceFieldRef: resource: requests.cpu - name: MY_CPU_LIMIT valueFrom: resourceFieldRef: resource: limits.cpu - name: MY_MEM_REQUEST valueFrom: resourceFieldRef: resource: requests.memory - name: MY_MEM_LIMIT valueFrom: resourceFieldRef: resource: limits.memory",
"oc create -f pod.yaml",
"apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: containers: - name: client-container image: gcr.io/google_containers/busybox:1.24 command: [\"sh\", \"-c\", \"while true; do echo; if [[ -e /etc/cpu_limit ]]; then cat /etc/cpu_limit; fi; if [[ -e /etc/cpu_request ]]; then cat /etc/cpu_request; fi; if [[ -e /etc/mem_limit ]]; then cat /etc/mem_limit; fi; if [[ -e /etc/mem_request ]]; then cat /etc/mem_request; fi; sleep 5; done\"] resources: requests: memory: \"32Mi\" cpu: \"125m\" limits: memory: \"64Mi\" cpu: \"250m\" volumeMounts: - name: podinfo mountPath: /etc readOnly: false volumes: - name: podinfo downwardAPI: items: - path: \"cpu_limit\" resourceFieldRef: containerName: client-container resource: limits.cpu - path: \"cpu_request\" resourceFieldRef: containerName: client-container resource: requests.cpu - path: \"mem_limit\" resourceFieldRef: containerName: client-container resource: limits.memory - path: \"mem_request\" resourceFieldRef: containerName: client-container resource: requests.memory",
"oc create -f volume-pod.yaml",
"apiVersion: v1 kind: Secret metadata: name: mysecret data: password: <password> username: <username> type: kubernetes.io/basic-auth",
"oc create -f secret.yaml",
"apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: containers: - name: env-test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"env\" ] env: - name: MY_SECRET_USERNAME valueFrom: secretKeyRef: name: mysecret key: username restartPolicy: Never",
"oc create -f pod.yaml",
"oc logs -p dapi-env-test-pod",
"apiVersion: v1 kind: ConfigMap metadata: name: myconfigmap data: mykey: myvalue",
"oc create -f configmap.yaml",
"apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: containers: - name: env-test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"env\" ] env: - name: MY_CONFIGMAP_VALUE valueFrom: configMapKeyRef: name: myconfigmap key: mykey restartPolicy: Always",
"oc create -f pod.yaml",
"oc logs -p dapi-env-test-pod",
"apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: containers: - name: env-test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"env\" ] env: - name: MY_EXISTING_ENV value: my_value - name: MY_ENV_VAR_REF_ENV value: USD(MY_EXISTING_ENV) restartPolicy: Never",
"oc create -f pod.yaml",
"oc logs -p dapi-env-test-pod",
"apiVersion: v1 kind: Pod metadata: name: dapi-env-test-pod spec: containers: - name: env-test-container image: gcr.io/google_containers/busybox command: [ \"/bin/sh\", \"-c\", \"env\" ] env: - name: MY_NEW_ENV value: USDUSD(SOME_OTHER_ENV) restartPolicy: Never",
"oc create -f pod.yaml",
"oc logs -p dapi-env-test-pod",
"oc rsync <source> <destination> [-c <container>]",
"<pod name>:<dir>",
"oc rsync <local-dir> <pod-name>:/<remote-dir> -c <container-name>",
"oc rsync /home/user/source devpod1234:/src -c user-container",
"oc rsync devpod1234:/src /home/user/source",
"oc rsync devpod1234:/src/status.txt /home/user/",
"rsync --rsh='oc rsh' --exclude-from=<file_name> <local-dir> <pod-name>:/<remote-dir>",
"export RSYNC_RSH='oc rsh'",
"rsync --exclude-from=<file_name> <local-dir> <pod-name>:/<remote-dir>",
"oc exec <pod> [-c <container>] -- <command> [<arg_1> ... <arg_n>]",
"oc exec mypod date",
"Thu Apr 9 02:21:53 UTC 2015",
"/proxy/nodes/<node_name>/exec/<namespace>/<pod>/<container>?command=<command>",
"/proxy/nodes/node123.openshift.com/exec/myns/mypod/mycontainer?command=date",
"oc port-forward <pod> [<local_port>:]<remote_port> [...[<local_port_n>:]<remote_port_n>]",
"oc port-forward <pod> [<local_port>:]<remote_port> [...[<local_port_n>:]<remote_port_n>]",
"oc port-forward <pod> 5000 6000",
"Forwarding from 127.0.0.1:5000 -> 5000 Forwarding from [::1]:5000 -> 5000 Forwarding from 127.0.0.1:6000 -> 6000 Forwarding from [::1]:6000 -> 6000",
"oc port-forward <pod> 8888:5000",
"Forwarding from 127.0.0.1:8888 -> 5000 Forwarding from [::1]:8888 -> 5000",
"oc port-forward <pod> :5000",
"Forwarding from 127.0.0.1:42390 -> 5000 Forwarding from [::1]:42390 -> 5000",
"oc port-forward <pod> 0:5000",
"/proxy/nodes/<node_name>/portForward/<namespace>/<pod>",
"/proxy/nodes/node123.openshift.com/portForward/myns/mypod",
"sudo sysctl -a",
"oc get pod",
"NAME READY STATUS RESTARTS AGE hello-pod 0/1 SysctlForbidden 0 14s",
"apiVersion: v1 kind: Pod metadata: name: sysctl-example spec: securityContext: sysctls: - name: kernel.shm_rmid_forced value: \"0\" - name: net.core.somaxconn value: \"1024\" - name: kernel.msgmax value: \"65536\"",
"oc apply -f <file-name>.yaml",
"oc get pod",
"NAME READY STATUS RESTARTS AGE hello-pod 0/1 SysctlForbidden 0 14s",
"oc edit machineconfigpool worker",
"apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfigPool metadata: creationTimestamp: 2019-02-08T14:52:39Z generation: 1 labels: custom-kubelet: sysctl 1",
"apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: custom-kubelet spec: machineConfigPoolSelector: matchLabels: custom-kubelet: sysctl 1 kubeletConfig: allowedUnsafeSysctls: 2 - \"kernel.msg*\" - \"net.core.somaxconn\"",
"oc apply -f set-sysctl-worker.yaml",
"status: conditions: - lastTransitionTime: '2019-08-11T15:32:00Z' message: >- All nodes are updating to rendered-worker-ccbfb5d2838d65013ab36300b7b3dc13 reason: '' status: 'True' type: Updating",
"- lastTransitionTime: '2019-08-11T16:00:00Z' message: >- All nodes are updated with rendered-worker-ccbfb5d2838d65013ab36300b7b3dc13 reason: '' status: 'True' type: Updated",
"oc get machineconfig 99-worker-XXXXXX-XXXXX-XXXX-XXXXX-kubelet -o json | grep ownerReference -A7",
"\"ownerReferences\": [ { \"apiVersion\": \"machineconfiguration.openshift.io/v1\", \"blockOwnerDeletion\": true, \"controller\": true, \"kind\": \"KubeletConfig\", \"name\": \"custom-kubelet\", \"uid\": \"3f64a766-bae8-11e9-abe8-0a1a2a4813f2\" } ]"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/nodes/working-with-containers |
B.2.6. Querying | B.2.6. Querying The RPM database stores information about all RPM packages installed in your system. It is stored in the directory /var/lib/rpm/ , and is used to query what packages are installed, what version each package is, and to calculate any changes to any files in the package since installation, among other use cases. To query this database, use the -q option. The rpm -q <package_name> command displays the package name, version, and release number of the installed package <package_name> . For example, using rpm -q tree to query the installed package tree might generate the following output: You can also use the following Package Selection Options (which is a subheading in the RPM man page: see man rpm for details) to further refine or qualify your query: -a - queries all currently installed packages. -f <file_name> - queries the RPM database for which package owns <file_name> . Specify the absolute path of the file (for example, rpm -qf /bin/ls instead of rpm -qf ls ). -p <package_file> - queries the uninstalled package <package_file> . There are a number of ways to specify what information to display about queried packages. The following options are used to select the type of information for which you are searching. These are called the Package Query Options . -i displays package information including name, description, release, size, build date, install date, vendor, and other miscellaneous information. -l displays the list of files that the package contains. -s displays the state of all the files in the package. -d displays a list of files marked as documentation (man pages, info pages, READMEs, etc.) in the package. -c displays a list of files marked as configuration files. These are the files you edit after installation to adapt and customize the package to your system (for example, sendmail.cf , passwd , inittab , etc.). For options that display lists of files, add -v to the command to display the lists in a familiar ls -l format. | [
"tree-1.5.2.2-4.el6.x86_64"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s2-rpm-querying |
Chapter 5. Advisories related to this release | Chapter 5. Advisories related to this release The following advisories are issued to document bug fixes and CVE fixes included in this release: RHSA-2023:4159 RHSA-2023:4169 RHSA-2023:4170 RHSA-2023:4171 RHSA-2023:4177 RHSA-2023:4210 RHSA-2023:4211 Revised on 2024-05-03 15:37:05 UTC | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/release_notes_for_red_hat_build_of_openjdk_17.0.8/openjdk-1708-advisory_openjdk |
1.4. Materialized Views | 1.4. Materialized Views Materialized views are just like other views, but their transformations are pre-computed and stored like a regular table. When queries are issued against the views through the Red Hat JBoss Data Virtualization Server, the cached results are used. This saves the cost of accessing all the underlying data sources and re-computing the view transformations each time a query is executed. Materialized views are appropriate when the underlying data does not change rapidly, or when it is acceptable to retrieve older data within a specified period of time, or when it is preferred for end-user queries to access staged data rather than placing additional query load on operational sources. | null | https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/development_guide_volume_5_caching_guide/materialized_views |
B.2.5. Freshening | B.2.5. Freshening Freshening is similar to upgrading, except that only existing packages are upgraded. Type the following command at a shell prompt: RPM's freshen option checks the versions of the packages specified on the command line against the versions of packages that have already been installed on your system. When a newer version of an already-installed package is processed by RPM's freshen option, it is upgraded to the newer version. However, RPM's freshen option does not install a package if no previously-installed package of the same name exists. This differs from RPM's upgrade option, as an upgrade does install packages whether or not an older version of the package was already installed. Freshening works for single packages or package groups. If you have just downloaded a large number of different packages, and you only want to upgrade those packages that are already installed on your system, freshening does the job. Thus, you do not have to delete any unwanted packages from the group that you downloaded before using RPM. In this case, issue the following with the *.rpm glob: RPM then automatically upgrades only those packages that are already installed. | [
"rpm -Fvh foo-2.0-1.el6.x86_64.rpm",
"rpm -Fvh *.rpm"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s2-rpm-freshening |
3.2. Setting up Certificate Profiles | 3.2. Setting up Certificate Profiles In Certificate System, you can add, delete, and modify enrollment profiles: Using the PKI command-line interface Using the Java-based administration console This section provides information on each method. 3.2.1. Managing Certificate Enrollment Profiles Using the PKI Command-line Interface This section describes how to manage certificate profiles using the pki utility. For further details, see the pki-ca-profile (1) man page. Note Using the raw format is recommended. For details on each attribute and field of the profile, see the section Creating and Editing Certificate Profiles Directly on the File System in Red Hat Certificate System Planning, Installation and Deployment Guide. 3.2.1.1. Enabling and Disabling a Certificate Profile Before you can edit a certificate profile, you must disable it. After the modification is complete, you can re-enable the profile. Note Only CA agents can enable and disable certificate profiles. For example, to disable the caCMCECserverCert certificate profile: For example, to enable the caCMCECserverCert certificate profile: 3.2.1.2. Creating a Certificate Profile in Raw Format To create a new profile in raw format: Note In raw format, specify the new profile ID as follows: 3.2.1.3. Editing a Certificate Profile in Raw Format CA administrators can edit a certificate profile in raw format without manually downloading the configuration file. For example, to edit the caCMCECserverCert profile: This command automatically downloads the profile configuration in raw format and opens it in the VI editor. When you close the editor, the profile configuration is updated on the server. You do not need to restart the CA after editing a profile. Important Before you can edit a profile, disable the profile. For details, see Section 3.2.1.1, "Enabling and Disabling a Certificate Profile" . Example 3.2. Editing a Certificate Profile in RAW Format For example, to edit the caCMCserverCert profile to accept multiple user-supplied extensions: Disable the profile as a CA agent: Edit the profile as a CA administrator: Download and open the profile in the VI editor: Update the configuration to accept the extensions. For details, see Example B.3, "Multiple User Supplied Extensions in CSR" . Enable the profile as a CA agent: 3.2.1.4. Deleting a Certificate Profile To delete a certificate profile: Important Before you can delete a profile, disable the profile. For details, see Section 3.2.1.1, "Enabling and Disabling a Certificate Profile" . 3.2.2. Managing Certificate Enrollment Profiles Using the Java-based Administration Console Important pkiconsole is being deprecated. 3.2.2.1. Creating Certificate Profiles through the CA Console For security reasons, the Certificate System enforces separation of roles whereby an existing certificate profile can only be edited by an administrator after it was allowed by an agent. To add a new certificate profile or modify an existing certificate profile, perform the following steps as the administrator: Log in to the Certificate System CA subsystem console. In the Configuration tab, select Certificate Manager , and then select Certificate Profiles . The Certificate Profile Instances Management tab, which lists configured certificate profiles, opens. To create a new certificate profile, click Add . In the Select Certificate Profile Plugin Implementation window, select the type of certificate for which the profile is being created.
Fill in the profile information in the Certificate Profile Instance Editor . Certificate Profile Instance ID . This is the ID used by the system to identify the profile. Certificate Profile Name . This is the user-friendly name for the profile. Certificate Profile Description . End User Certificate Profile . This sets whether the request must be made through the input form for the profile. This is usually set to true . Setting this to false allows a signed request to be processed through the Certificate Manager's certificate profile framework, rather than through the input page for the certificate profile. Certificate Profile Authentication . This sets the authentication method. An automated authentication is set by providing the instance ID for the authentication instance. If this field is blank, the authentication method is agent-approved enrollment; the request is submitted to the request queue of the agent services interface. Unless it is for a TMS subsystem, administrators must select one of the following authentication plug-ins: CMCAuth : Use this plug-in when a CA agent must approve and submit the enrollment request. CMCUserSignedAuth : Use this plug-in to enable non-agent users to enroll own certificates. Click OK . The plug-in editor closes, and the new profile is listed in the profiles tab. Configure the policies, inputs, and outputs for the new profile. Select the new profile from the list, and click Edit/View . Set up policies in the Policies tab of the Certificate Profile Rule Editor window. The Policies tab lists policies that are already set by default for the profile type. To add a policy, click Add . Choose the default from the Default field, choose the constraints associated with that policy in the Constraints field, and click OK . Fill in the policy set ID. When issuing dual key pairs, separate policy sets define the policies associated with each certificate. Then fill in the certificate profile policy ID, a name or identifier for the certificate profile policy. Configure any parameters in the Defaults and Constraints tabs. Defaults defines attributes that populate the certificate request, which in turn determines the content of the certificate. These can be extensions, validity periods, or other fields contained in the certificates. Constraints defines valid values for the defaults. See Section B.1, "Defaults Reference" and Section B.2, "Constraints Reference" for complete details for each default or constraint. To modify an existing policy, select a policy, and click Edit . Then edit the default and constraints for that policy. To delete a policy, select the policy, and click Delete . Set inputs in the Inputs tab of the Certificate Profile Rule Editor window. There can be more than one input type for a profile. Note Unless you configure the profile for a TMS subsystem, select only cmcCertReqInput and delete other profiles by selecting them and clicking the Delete button. To add an input, click Add . Choose the input from the list, and click OK . See Section A.1, "Input Reference" for complete details of the default inputs. The New Certificate Profile Editor window opens. Set the input ID, and click OK . Inputs can be added and deleted. It is possible to select edit for an input, but since inputs have no parameters or other settings, there is nothing to configure. To delete an input, select the input, and click Delete . Set up outputs in the Outputs tab of the Certificate Profile Rule Editor window. 
Outputs must be set for any certificate profile that uses an automated authentication method; no output needs to be set for any certificate profile that uses agent-approved authentication. The Certificate Output type is set by default for all profiles and is added automatically to custom profiles. Unless you configure the profile for a TMS subsystem, select only certOutput . Outputs can be added and deleted. It is possible to select edit for an output, but since outputs have no parameters or other settings, there is nothing to configure. To add an output, click Add . Choose the output from the list, and click OK . Give a name or identifier for the output, and click OK . This output will be listed in the output tab. You can edit it to provide values to the parameters in this output. To delete an output, select the output from the list, and click Delete . Restart the CA to apply the new profile. After creating the profile as an administrator, a CA agent has to approve the profile in the agent services pages to enable the profile. Open the CA's services page. Click the Manage Certificate Profiles link. This page lists all of the certificate profiles that have been set up by an administrator, both active and inactive. Click the name of the certificate profile to approve. At the bottom of the page, click the Enable button. Note If this profile will be used with a TPS, then the TPS must be configured to recognize the profile type. This is described in 11.1.4. Managing Smart Card CA Profiles in Red Hat Certificate System's Planning, Installation, and Deployment Guide. Authorization methods for the profiles can only be added to the profile using the command line, as described in the section Creating and Editing Certificate Profiles Directly on the File System in Red Hat Certificate System Planning, Installation and Deployment Guide. 3.2.2.2. Editing Certificate Profiles in the Console To modify an existing certificate profile: Log into the agent services pages and disable the profile. Once a certificate profile is enabled by an agent, that certificate profile is marked enabled in the Certificate Profile Instance Management tab, and the certificate profile cannot be edited in any way through the console. Log in to the Certificate System CA subsystem console. In the Configuration tab, select Certificate Manager , and then select Certificate Profiles . Select the certificate profile, and click Edit/View . The Certificate Profile Rule Editor window appears. Make any changes to the defaults, constraints, inputs, or outputs. Note The profile instance ID cannot be modified. If necessary, enlarge the window by pulling out one of the corners of the window. Restart the CA to apply the changes. In the agent services page, re-enable the profile. Note Delete any certificate profiles that will not be approved by an agent. Any certificate profile that appears in the Certificate Profile Instance Management tab also appears in the agent services interface. If a profile has already been enabled, it must be disabled by the agent before it can be deleted from the profile list. 3.2.3. Listing Certificate Enrollment Profiles The following pre-defined certificate profiles are ready to use and set up in this environment when the Certificate System CA is installed. These certificate profiles have been designed for the most common types of certificates, and they provide common defaults, constraints, authentication methods, inputs, and outputs. To list the available profiles on the command line, use the pki utility.
For example: For further details, see the pki-ca-profile (1) man page. Additional information can also be found at Red Hat Certificate System Planning, Installation, and Deployment Guide . 3.2.4. Displaying Details of a Certificate Enrollment Profile For example, to display a specific certificate profile, such as caECFullCMCUserSignedCert : For example, to display a specific certificate profile, such as caECFullCMCUserSignedCert , in raw format: For further details, see the pki-ca-profile (1) man page. | [
"pki -c password -n caagent ca-profile-disable caCMCECserverCert",
"pki -c password -n caagent ca-profile-enable caCMCECserverCert",
"pki -c password -n caadmin ca-profile-add profile_name .cfg --raw",
"profileId= profile_name",
"pki -c password -n caadmin ca-profile-edit caCMCECserverCert",
"pki -c password -n caagent ca-profile-disable caCMCserverCert",
"pki -c password -n caadmin ca-profile-edit caCMCserverCert",
"pki -c password -n caagent ca-profile-enable caCMCserverCert",
"pki -c password -n caadmin ca-profile-del profile_name",
"pkiconsole https://server.example.com:8443/ca",
"systemctl restart pki-tomcatd-nuxwdog@ instance_name .service",
"https://server.example.com:8443/ca/services",
"pkiconsole https://server.example.com:8443/ca",
"pki -c password -n caadmin ca-profile-find ------------------ 59 entries matched ------------------ Profile ID: caCMCserverCert Name: Server Certificate Enrollment using CMC Description: This certificate profile is for enrolling server certificates using CMC. Profile ID: caCMCECserverCert Name: Server Certificate wth ECC keys Enrollment using CMC Description: This certificate profile is for enrolling server certificates with ECC keys using CMC. Profile ID: caCMCECsubsystemCert Name: Subsystem Certificate Enrollment with ECC keys using CMC Description: This certificate profile is for enrolling subsystem certificates with ECC keys using CMC. Profile ID: caCMCsubsystemCert Name: Subsystem Certificate Enrollment using CMC Description: This certificate profile is for enrolling subsystem certificates using CMC. ----------------------------- Number of entries returned 20",
"pki -c password -n caadmin ca-profile-show caECFullCMCUserSignedCert ----------------------------------- Profile \"caECFullCMCUserSignedCert\" ----------------------------------- Profile ID: caECFullCMCUserSignedCert Name: User-Signed CMC-Authenticated User Certificate Enrollment Description: This certificate profile is for enrolling user certificates with EC keys by using the CMC certificate request with non-agent user CMC authentication. Name: Certificate Request Input Class: cmcCertReqInputImpl Attribute Name: cert_request Attribute Description: Certificate Request Attribute Syntax: cert_request Name: Certificate Output Class: certOutputImpl Attribute Name: pretty_cert Attribute Description: Certificate Pretty Print Attribute Syntax: pretty_print Attribute Name: b64_cert Attribute Description: Certificate Base-64 Encoded Attribute Syntax: pretty_print",
"pki -c password -n caadmin ca-profile-show caECFullCMCUserSignedCert --raw #Wed Jul 25 14:41:35 PDT 2018 auth.instance_id=CMCUserSignedAuth policyset.cmcUserCertSet.1.default.params.name= policyset.cmcUserCertSet.4.default.class_id=authorityKeyIdentifierExtDefaultImpl policyset.cmcUserCertSet.6.default.params.keyUsageKeyCertSign=false policyset.cmcUserCertSet.10.default.class_id=noDefaultImpl policyset.cmcUserCertSet.10.constraint.name=Renewal Grace Period Constraint output.o1.class_id=certOutputImpl"
]
| https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide/Setting_up_Certificate_Profiles |
Chapter 4. Installing a cluster on Azure Stack Hub with network customizations | Chapter 4. Installing a cluster on Azure Stack Hub with network customizations In OpenShift Container Platform version 4.15, you can install a cluster with a customized network configuration on infrastructure that the installation program provisions on Azure Stack Hub. By customizing your network configuration, your cluster can coexist with existing IP address allocations in your environment and integrate with existing MTU and VXLAN configurations. Note While you can select azure when using the installation program to deploy a cluster using installer-provisioned infrastructure, this option is only supported for the Azure Public Cloud. 4.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an Azure Stack Hub account to host the cluster. If you use a firewall, you configured it to allow the sites that your cluster requires access to. You verified that you have approximately 16 GB of local disk space. Installing the cluster requires that you download the RHCOS virtual hard disk (VHD) cluster image and upload it to your Azure Stack Hub environment so that it is accessible during deployment. Decompressing the VHD files requires this amount of local disk space. 4.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.15, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 4.3. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. 
Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in the your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 4.4. Uploading the RHCOS cluster image You must download the RHCOS virtual hard disk (VHD) cluster image and upload it to your Azure Stack Hub environment so that it is accessible during deployment. Prerequisites Configure an Azure account. Procedure Obtain the RHCOS VHD cluster image: Export the URL of the RHCOS VHD to an environment variable. USD export COMPRESSED_VHD_URL=USD(openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.artifacts.azurestack.formats."vhd.gz".disk.location') Download the compressed RHCOS VHD file locally. USD curl -O -L USD{COMPRESSED_VHD_URL} Decompress the VHD file. Note The decompressed VHD file is approximately 16 GB, so be sure that your host system has 16 GB of free space available. The VHD file can be deleted once you have uploaded it. Upload the local VHD to the Azure Stack Hub environment, making sure that the blob is publicly available. For example, you can upload the VHD to a blob using the az cli or the web portal. 4.5. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with at least 1.2 GB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. 
Tip You can also download the binaries for a specific OpenShift Container Platform release . Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 4.6. Manually creating the installation configuration file Installing the cluster requires that you manually create the installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Make the following modifications: Specify the required installation parameters. Update the platform.azure section to specify the parameters that are specific to Azure Stack Hub. Optional: Update one or more of the default configuration parameters to customize the installation. For more information about the parameters, see "Installation configuration parameters". Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the next step of the installation process. You must back it up now.
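As a simple illustration of the backup step, copying the file under another name before running the installation program is sufficient; the backup file name shown below is arbitrary and assumed for this example: USD cp <installation_directory>/install-config.yaml <installation_directory>/install-config.yaml.backup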
Additional resources Installation configuration parameters for Azure Stack Hub 4.6.1. Sample customized install-config.yaml file for Azure Stack Hub You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. Use it as a resource to enter parameter values into the installation configuration file that you created manually. apiVersion: v1 baseDomain: example.com 1 credentialsMode: Manual controlPlane: 2 3 name: master platform: azure: osDisk: diskSizeGB: 1024 4 diskType: premium_LRS replicas: 3 compute: 5 - name: worker platform: azure: osDisk: diskSizeGB: 512 6 diskType: premium_LRS replicas: 3 metadata: name: test-cluster 7 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 9 serviceNetwork: - 172.30.0.0/16 platform: azure: armEndpoint: azurestack_arm_endpoint 10 11 baseDomainResourceGroupName: resource_group 12 13 region: azure_stack_local_region 14 15 resourceGroupName: existing_resource_group 16 outboundType: Loadbalancer cloudName: AzureStackCloud 17 clusterOSimage: https://vhdsa.blob.example.example.com/vhd/rhcos-410.84.202112040202-0-azurestack.x86_64.vhd 18 19 pullSecret: '{"auths": ...}' 20 21 fips: false 22 sshKey: ssh-ed25519 AAAA... 23 additionalTrustBundle: | 24 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- 1 7 10 12 14 17 18 20 Required. 2 5 If you do not provide these parameters and values, the installation program provides the default value. 3 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Although both sections currently define a single machine pool, it is possible that future versions of OpenShift Container Platform will support defining multiple compute pools during installation. Only one control plane pool is used. 4 6 You can specify the size of the disk to use in GB. Minimum recommendation for control plane nodes is 1024 GB. 8 The name of the cluster. 9 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 11 The Azure Resource Manager endpoint that your Azure Stack Hub operator provides. 13 The name of the resource group that contains the DNS zone for your base domain. 15 The name of your Azure Stack Hub local region. 16 The name of an existing resource group to install your cluster to. If undefined, a new resource group is created for the cluster. 19 The URL of a storage blob in the Azure Stack environment that contains an RHCOS VHD. 21 The pull secret required to authenticate your cluster. 22 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. 
Important When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 23 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 24 If the Azure Stack Hub environment is using an internal Certificate Authority (CA), adding the CA certificate is required. 4.7. Manually manage cloud credentials The Cloud Credential Operator (CCO) only supports your cloud provider in manual mode. As a result, you must specify the identity and access management (IAM) secrets for your cloud provider. Procedure If you have not previously created installation manifest files, do so by running the following command: USD openshift-install create manifests --dir <installation_directory> where <installation_directory> is the directory in which the installation program creates files. Set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object. Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor ... Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object. Sample CredentialsRequest object with secrets apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor ... secretRef: name: <component_secret> namespace: <component_namespace> ... 
Sample Secret object apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: azure_subscription_id: <base64_encoded_azure_subscription_id> azure_client_id: <base64_encoded_azure_client_id> azure_client_secret: <base64_encoded_azure_client_secret> azure_tenant_id: <base64_encoded_azure_tenant_id> azure_resource_prefix: <base64_encoded_azure_resource_prefix> azure_resourcegroup: <base64_encoded_azure_resourcegroup> azure_region: <base64_encoded_azure_region> Important Before upgrading a cluster that uses manually maintained credentials, you must ensure that the CCO is in an upgradeable state. Additional resources Updating a cluster using the web console Updating a cluster using the CLI 4.8. Configuring the cluster to use an internal CA If the Azure Stack Hub environment is using an internal Certificate Authority (CA), update the cluster-proxy-01-config.yaml file to configure the cluster to use the internal CA. Prerequisites Create the install-config.yaml file and specify the certificate trust bundle in .pem format. Create the cluster manifests. Procedure From the directory in which the installation program creates files, go to the manifests directory. Add user-ca-bundle to the spec.trustedCA.name field. Example cluster-proxy-01-config.yaml file apiVersion: config.openshift.io/v1 kind: Proxy metadata: creationTimestamp: null name: cluster spec: trustedCA: name: user-ca-bundle status: {} Optional: Back up the manifests/ cluster-proxy-01-config.yaml file. The installation program consumes the manifests/ directory when you deploy the cluster. 4.9. Network configuration phases There are two phases prior to OpenShift Container Platform installation where you can customize the network configuration. Phase 1 You can customize the following network-related fields in the install-config.yaml file before you create the manifest files: networking.networkType networking.clusterNetwork networking.serviceNetwork networking.machineNetwork For more information, see "Installation configuration parameters". Note Set the networking.machineNetwork to match the Classless Inter-Domain Routing (CIDR) where the preferred subnet is located. Important The CIDR range 172.17.0.0/16 is reserved by libVirt . You cannot use any other CIDR range that overlaps with the 172.17.0.0/16 CIDR range for networks in your cluster. Phase 2 After creating the manifest files by running openshift-install create manifests , you can define a customized Cluster Network Operator manifest with only the fields you want to modify. You can use the manifest to specify an advanced network configuration. During phase 2, you cannot override the values that you specified in phase 1 in the install-config.yaml file. However, you can customize the network plugin during phase 2. 4.10. Specifying advanced network configuration You can use advanced network configuration for your network plugin to integrate your cluster into your existing network environment. You can specify advanced network configuration only before you install the cluster. Important Customizing your network configuration by modifying the OpenShift Container Platform manifest files created by the installation program is not supported. Applying a manifest file that you create, as in the following procedure, is supported. Prerequisites You have created the install-config.yaml file and completed any modifications to it. 
Procedure Change to the directory that contains the installation program and create the manifests: USD ./openshift-install create manifests --dir <installation_directory> 1 1 <installation_directory> specifies the name of the directory that contains the install-config.yaml file for your cluster. Create a stub manifest file for the advanced network configuration that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory: apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: Specify the advanced network configuration for your cluster in the cluster-network-03-config.yml file, such as in the following example: Enable IPsec for the OVN-Kubernetes network provider apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: mode: Full Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program consumes the manifests/ directory when you create the Ignition config files. 4.11. Cluster Network Operator configuration The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster . The CR specifies the fields for the Network API in the operator.openshift.io API group. The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group: clusterNetwork IP address pools from which pod IP addresses are allocated. serviceNetwork IP address pool for services. defaultNetwork.type Cluster network plugin. OVNKubernetes is the only supported plugin during installation. You can specify the cluster network plugin configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster . 4.11.1. Cluster Network Operator configuration object The fields for the Cluster Network Operator (CNO) are described in the following table: Table 4.1. Cluster Network Operator configuration object Field Type Description metadata.name string The name of the CNO object. This name is always cluster . spec.clusterNetwork array A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. For example: spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23 spec.serviceNetwork array A block of IP addresses for services. The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. For example: spec: serviceNetwork: - 172.30.0.0/14 You can customize this field only in the install-config.yaml file before you create the manifests. The value is read-only in the manifest file. spec.defaultNetwork object Configures the network plugin for the cluster network. spec.kubeProxyConfig object The fields for this object specify the kube-proxy configuration. If you are using the OVN-Kubernetes cluster network plugin, the kube-proxy configuration has no effect. Important For a cluster that needs to deploy objects across multiple networks, ensure that you specify the same value for the clusterNetwork.hostPrefix parameter for each network type that is defined in the install-config.yaml file. 
Setting a different value for each clusterNetwork.hostPrefix parameter can impact the OVN-Kubernetes network plugin, where the plugin cannot effectively route object traffic among different nodes. defaultNetwork object configuration The values for the defaultNetwork object are defined in the following table: Table 4.2. defaultNetwork object Field Type Description type string OVNKubernetes . The Red Hat OpenShift Networking network plugin is selected during installation. This value cannot be changed after cluster installation. Note OpenShift Container Platform uses the OVN-Kubernetes network plugin by default. OpenShift SDN is no longer available as an installation choice for new clusters. ovnKubernetesConfig object This object is only valid for the OVN-Kubernetes network plugin. Configuration for the OVN-Kubernetes network plugin The following table describes the configuration fields for the OVN-Kubernetes network plugin: Table 4.3. ovnKubernetesConfig object Field Type Description mtu integer The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This is detected automatically based on the MTU of the primary network interface. You do not normally need to override the detected MTU. If the auto-detected value is not what you expect it to be, confirm that the MTU on the primary network interface on your nodes is correct. You cannot use this option to change the MTU value of the primary network interface on the nodes. If your cluster requires different MTU values for different nodes, you must set this value to 100 less than the lowest MTU value in your cluster. For example, if some nodes in your cluster have an MTU of 9001 , and some have an MTU of 1500 , you must set this value to 1400 . genevePort integer The port to use for all Geneve packets. The default value is 6081 . This value cannot be changed after cluster installation. ipsecConfig object Specify a configuration object for customizing the IPsec configuration. ipv4 object Specifies a configuration object for IPv4 settings. ipv6 object Specifies a configuration object for IPv6 settings. policyAuditConfig object Specify a configuration object for customizing network policy audit logging. If unset, the defaults audit log settings are used. gatewayConfig object Optional: Specify a configuration object for customizing how egress traffic is sent to the node gateway. Note While migrating egress traffic, you can expect some disruption to workloads and service traffic until the Cluster Network Operator (CNO) successfully rolls out the changes. Table 4.4. ovnKubernetesConfig.ipv4 object Field Type Description internalTransitSwitchSubnet string If your existing network infrastructure overlaps with the 100.88.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. The subnet for the distributed transit switch that enables east-west traffic. This subnet cannot overlap with any other subnets used by OVN-Kubernetes or on the host itself. It must be large enough to accommodate one IP address per node in your cluster. The default value is 100.88.0.0/16 . internalJoinSubnet string If your existing network infrastructure overlaps with the 100.64.0.0/16 IPv4 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. 
The IP address range must be larger than the maximum number of nodes that can be added to the cluster. For example, if the clusterNetwork.cidr value is 10.128.0.0/14 and the clusterNetwork.hostPrefix value is /23 , then the maximum number of nodes is 2^(23-14)=512 . The default value is 100.64.0.0/16 . Table 4.5. ovnKubernetesConfig.ipv6 object Field Type Description internalTransitSwitchSubnet string If your existing network infrastructure overlaps with the fd98::/48 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. This field cannot be changed after installation. The default value is fd98::/48 . internalJoinSubnet string If your existing network infrastructure overlaps with the fd98::/64 IPv6 subnet, you can specify a different IP address range for internal use by OVN-Kubernetes. You must ensure that the IP address range does not overlap with any other subnet used by your OpenShift Container Platform installation. The IP address range must be larger than the maximum number of nodes that can be added to the cluster. The default value is fd98::/64 . Table 4.6. policyAuditConfig object Field Type Description rateLimit integer The maximum number of messages to generate every second per node. The default value is 20 messages per second. maxFileSize integer The maximum size for the audit log in bytes. The default value is 50000000 or 50 MB. maxLogFiles integer The maximum number of log files that are retained. destination string One of the following additional audit log targets: libc The libc syslog() function of the journald process on the host. udp:<host>:<port> A syslog server. Replace <host>:<port> with the host and port of the syslog server. unix:<file> A Unix Domain Socket file specified by <file> . null Do not send the audit logs to any additional target. syslogFacility string The syslog facility, such as kern , as defined by RFC5424. The default value is local0 . Table 4.7. gatewayConfig object Field Type Description routingViaHost boolean Set this field to true to send egress traffic from pods to the host networking stack. For highly-specialized installations and applications that rely on manually configured routes in the kernel routing table, you might want to route egress traffic to the host networking stack. By default, egress traffic is processed in OVN to exit the cluster and is not affected by specialized routes in the kernel routing table. The default value is false . This field has an interaction with the Open vSwitch hardware offloading feature. If you set this field to true , you do not receive the performance benefits of the offloading because egress traffic is processed by the host networking stack. ipForwarding object You can control IP forwarding for all traffic on OVN-Kubernetes managed interfaces by using the ipForwarding specification in the Network resource. Specify Restricted to only allow IP forwarding for Kubernetes related traffic. Specify Global to allow forwarding of all IP traffic. For new installations, the default is Restricted . For updates to OpenShift Container Platform 4.14 or later, the default is Global . ipv4 object Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv4 addresses. 
ipv6 object Optional: Specify an object to configure the internal OVN-Kubernetes masquerade address for host to service traffic for IPv6 addresses. Table 4.8. gatewayConfig.ipv4 object Field Type Description internalMasqueradeSubnet string The masquerade IPv4 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is 169.254.169.0/29 . Table 4.9. gatewayConfig.ipv6 object Field Type Description internalMasqueradeSubnet string The masquerade IPv6 addresses that are used internally to enable host to service traffic. The host is configured with these IP addresses as well as the shared gateway bridge interface. The default value is fd69::/125 . Table 4.10. ipsecConfig object Field Type Description mode string Specifies the behavior of the IPsec implementation. Must be one of the following values: Disabled : IPsec is not enabled on cluster nodes. External : IPsec is enabled for network traffic with external hosts. Full : IPsec is enabled for pod traffic and network traffic with external hosts. Example OVN-Kubernetes configuration with IPSec enabled defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full Important Using OVNKubernetes can lead to a stack exhaustion problem on IBM Power(R). kubeProxyConfig object configuration (OpenShiftSDN container network interface only) The values for the kubeProxyConfig object are defined in the following table: Table 4.11. kubeProxyConfig object Field Type Description iptablesSyncPeriod string The refresh period for iptables rules. The default value is 30s . Valid suffixes include s , m , and h and are described in the Go time package documentation. Note Because of performance improvements introduced in OpenShift Container Platform 4.3 and greater, adjusting the iptablesSyncPeriod parameter is no longer necessary. proxyArguments.iptables-min-sync-period array The minimum duration before refreshing iptables rules. This field ensures that the refresh does not happen too frequently. Valid suffixes include s , m , and h and are described in the Go time package . The default value is: kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s 4.12. Configuring hybrid networking with OVN-Kubernetes You can configure your cluster to use hybrid networking with the OVN-Kubernetes network plugin. This allows a hybrid cluster that supports different node networking configurations. Note This configuration is necessary to run both Linux and Windows nodes in the same cluster. Prerequisites You defined OVNKubernetes for the networking.networkType parameter in the install-config.yaml file. See the installation documentation for configuring OpenShift Container Platform network customizations on your chosen cloud provider for more information. Procedure Change to the directory that contains the installation program and create the manifests: USD ./openshift-install create manifests --dir <installation_directory> where: <installation_directory> Specifies the name of the directory that contains the install-config.yaml file for your cluster. 
Create a stub manifest file for the advanced network configuration that is named cluster-network-03-config.yml in the <installation_directory>/manifests/ directory: USD cat <<EOF > <installation_directory>/manifests/cluster-network-03-config.yml apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: EOF where: <installation_directory> Specifies the directory name that contains the manifests/ directory for your cluster. Open the cluster-network-03-config.yml file in an editor and configure OVN-Kubernetes with hybrid networking, as in the following example: Specify a hybrid networking configuration apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: hybridOverlayConfig: hybridClusterNetwork: 1 - cidr: 10.132.0.0/14 hostPrefix: 23 hybridOverlayVXLANPort: 9898 2 1 Specify the CIDR configuration used for nodes on the additional overlay network. The hybridClusterNetwork CIDR must not overlap with the clusterNetwork CIDR. 2 Specify a custom VXLAN port for the additional overlay network. This is required for running Windows nodes in a cluster installed on vSphere, and must not be configured for any other cloud provider. The custom port can be any open port excluding the default 4789 port. For more information on this requirement, see the Microsoft documentation on Pod-to-pod connectivity between hosts is broken . Note Windows Server Long-Term Servicing Channel (LTSC): Windows Server 2019 is not supported on clusters with a custom hybridOverlayVXLANPort value because this Windows server version does not support selecting a custom VXLAN port. Save the cluster-network-03-config.yml file and quit the text editor. Optional: Back up the manifests/cluster-network-03-config.yml file. The installation program deletes the manifests/ directory when creating the cluster. Note For more information about using Linux and Windows nodes in the same cluster, see Understanding Windows container workloads . 4.13. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 4.14. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.15. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.15 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.15 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.15 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.15 macOS arm64 Client entry. Unpack and unzip the archive. 
Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 4.15. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 4.16. Logging in to the cluster by using the web console The kubeadmin user exists by default after an OpenShift Container Platform installation. You can log in to your cluster as the kubeadmin user by using the OpenShift Container Platform web console. Prerequisites You have access to the installation host. You completed a cluster installation and all cluster Operators are available. Procedure Obtain the password for the kubeadmin user from the kubeadmin-password file on the installation host: USD cat <installation_directory>/auth/kubeadmin-password Note Alternatively, you can obtain the kubeadmin password from the <installation_directory>/.openshift_install.log log file on the installation host. List the OpenShift Container Platform web console route: USD oc get routes -n openshift-console | grep 'console-openshift' Note Alternatively, you can obtain the OpenShift Container Platform route from the <installation_directory>/.openshift_install.log log file on the installation host. Example output console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None Navigate to the route detailed in the output of the preceding command in a web browser and log in as the kubeadmin user. Additional resources Accessing the web console . 4.17. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.15, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources About remote health monitoring 4.18. steps Validating an installation . Customize your cluster . If necessary, you can opt out of remote health reporting . If necessary, you can remove cloud provider credentials . | [
"ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1",
"cat <path>/<file_name>.pub",
"cat ~/.ssh/id_ed25519.pub",
"eval \"USD(ssh-agent -s)\"",
"Agent pid 31874",
"ssh-add <path>/<file_name> 1",
"Identity added: /home/<you>/<path>/<file_name> (<computer_name>)",
"export COMPRESSED_VHD_URL=USD(openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.artifacts.azurestack.formats.\"vhd.gz\".disk.location')",
"curl -O -L USD{COMPRESSED_VHD_URL}",
"tar -xvf openshift-install-linux.tar.gz",
"mkdir <installation_directory>",
"apiVersion: v1 baseDomain: example.com 1 credentialsMode: Manual controlPlane: 2 3 name: master platform: azure: osDisk: diskSizeGB: 1024 4 diskType: premium_LRS replicas: 3 compute: 5 - name: worker platform: azure: osDisk: diskSizeGB: 512 6 diskType: premium_LRS replicas: 3 metadata: name: test-cluster 7 8 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 9 serviceNetwork: - 172.30.0.0/16 platform: azure: armEndpoint: azurestack_arm_endpoint 10 11 baseDomainResourceGroupName: resource_group 12 13 region: azure_stack_local_region 14 15 resourceGroupName: existing_resource_group 16 outboundType: Loadbalancer cloudName: AzureStackCloud 17 clusterOSimage: https://vhdsa.blob.example.example.com/vhd/rhcos-410.84.202112040202-0-azurestack.x86_64.vhd 18 19 pullSecret: '{\"auths\": ...}' 20 21 fips: false 22 sshKey: ssh-ed25519 AAAA... 23 additionalTrustBundle: | 24 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE-----",
"openshift-install create manifests --dir <installation_directory>",
"RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')",
"oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor",
"apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor secretRef: name: <component_secret> namespace: <component_namespace>",
"apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: azure_subscription_id: <base64_encoded_azure_subscription_id> azure_client_id: <base64_encoded_azure_client_id> azure_client_secret: <base64_encoded_azure_client_secret> azure_tenant_id: <base64_encoded_azure_tenant_id> azure_resource_prefix: <base64_encoded_azure_resource_prefix> azure_resourcegroup: <base64_encoded_azure_resourcegroup> azure_region: <base64_encoded_azure_region>",
"apiVersion: config.openshift.io/v1 kind: Proxy metadata: creationTimestamp: null name: cluster spec: trustedCA: name: user-ca-bundle status: {}",
"./openshift-install create manifests --dir <installation_directory> 1",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec:",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: ipsecConfig: mode: Full",
"spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23",
"spec: serviceNetwork: - 172.30.0.0/14",
"defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: mode: Full",
"kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s",
"./openshift-install create manifests --dir <installation_directory>",
"cat <<EOF > <installation_directory>/manifests/cluster-network-03-config.yml apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: EOF",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: defaultNetwork: ovnKubernetesConfig: hybridOverlayConfig: hybridClusterNetwork: 1 - cidr: 10.132.0.0/14 hostPrefix: 23 hybridOverlayVXLANPort: 9898 2",
"./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2",
"INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s",
"tar xvf <file>",
"echo USDPATH",
"oc <command>",
"C:\\> path",
"C:\\> oc <command>",
"echo USDPATH",
"oc <command>",
"export KUBECONFIG=<installation_directory>/auth/kubeconfig 1",
"oc whoami",
"system:admin",
"cat <installation_directory>/auth/kubeadmin-password",
"oc get routes -n openshift-console | grep 'console-openshift'",
"console console-openshift-console.apps.<cluster_name>.<base_domain> console https reencrypt/Redirect None"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform_installation/4.15/html/installing_on_azure_stack_hub/installing-azure-stack-hub-network-customizations |
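A note on the Sample Secret object in section 4.7 above: each data field of a Kubernetes Secret must be base64 encoded. The following is a minimal sketch of how such values could be produced on a Linux workstation before you save the Secret manifests; the input placeholders are hypothetical and the field names follow the sample shown above.
echo -n '<azure_subscription_id>' | base64 -w0    # value for azure_subscription_id
echo -n '<azure_client_secret>' | base64 -w0      # value for azure_client_secret
Repeat the same encoding for the remaining azure_* fields, then store each result under the corresponding key in the data section of the Secret.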
4.65. firstboot | 4.65. firstboot 4.65.1. RHBA-2011:1742 - firstboot bug fix update An updated firstboot package that fixes two bugs is now available for Red Hat Enterprise Linux 6. The firstboot utility runs after installation and guides the user through a series of steps that allows for easier configuration of the machine. Bug Fixes BZ# 700283 Previously, the Traditional Chinese translation (zh_TW) of the Forward button on the welcome page was different from the action mentioned in the text, on the same page, referring to this button. This update provides the corrected translation. BZ# 700305 Previously, when running firstboot in Japanese locale and the user attempted to continue without setting up an account, an untranslated warning message appeared. With this update, the message is properly translated into Japanese. All users of firstboot are advised to upgrade to this updated package, which fixes these bugs. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/firstboot |
Chapter 7. Installing the Migration Toolkit for Containers in a restricted network environment | Chapter 7. Installing the Migration Toolkit for Containers in a restricted network environment You can install the Migration Toolkit for Containers (MTC) on OpenShift Container Platform 3 and 4 in a restricted network environment by performing the following procedures: Create a mirrored Operator catalog . This process creates a mapping.txt file, which contains the mapping between the registry.redhat.io image and your mirror registry image. The mapping.txt file is required for installing the Operator on the source cluster. Install the Migration Toolkit for Containers Operator on the OpenShift Container Platform 4.7 target cluster by using Operator Lifecycle Manager. By default, the MTC web console and the Migration Controller pod run on the target cluster. You can configure the Migration Controller custom resource manifest to run the MTC web console and the Migration Controller pod on a source cluster or on a remote cluster . Install the legacy Migration Toolkit for Containers Operator on the OpenShift Container Platform 3 source cluster from the command line interface. Configure object storage to use as a replication repository. To uninstall MTC, see Uninstalling MTC and deleting resources . 7.1. Compatibility guidelines You must install the Migration Toolkit for Containers (MTC) Operator that is compatible with your OpenShift Container Platform version. Definitions legacy platform OpenShift Container Platform 4.5 and earlier. modern platform OpenShift Container Platform 4.6 and later. legacy operator The MTC Operator designed for legacy platforms. modern operator The MTC Operator designed for modern platforms. control cluster The cluster that runs the MTC controller and GUI. remote cluster A source or destination cluster for a migration that runs Velero. The Control Cluster communicates with Remote clusters via the Velero API to drive migrations. Table 7.1. MTC compatibility: Migrating from a legacy platform OpenShift Container Platform 4.5 or earlier OpenShift Container Platform 4.6 or later Stable MTC version MTC 1.7. z Legacy 1.7 operator: Install manually with the operator.yml file. Important This cluster cannot be the control cluster. MTC 1.7. z Install with OLM, release channel release-v1.7 Note Edge cases exist in which network restrictions prevent modern clusters from connecting to other clusters involved in the migration. For example, when migrating from an OpenShift Container Platform 3.11 cluster on premises to a modern OpenShift Container Platform cluster in the cloud, where the modern cluster cannot connect to the OpenShift Container Platform 3.11 cluster. With MTC 1.7, if one of the remote clusters is unable to communicate with the control cluster because of network restrictions, use the crane tunnel-api command. With the stable MTC release, although you should always designate the most modern cluster as the control cluster, in this specific case it is possible to designate the legacy cluster as the control cluster and push workloads to the remote cluster. 7.2. Installing the Migration Toolkit for Containers Operator on OpenShift Container Platform 4.7 You install the Migration Toolkit for Containers Operator on OpenShift Container Platform 4.7 by using the Operator Lifecycle Manager. Prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. You must create an Operator catalog from a mirror image in a local registry. 
Procedure In the OpenShift Container Platform web console, click Operators OperatorHub . Use the Filter by keyword field to find the Migration Toolkit for Containers Operator . Select the Migration Toolkit for Containers Operator and click Install . Click Install . On the Installed Operators page, the Migration Toolkit for Containers Operator appears in the openshift-migration project with the status Succeeded . Click Migration Toolkit for Containers Operator . Under Provided APIs , locate the Migration Controller tile, and click Create Instance . Click Create . Click Workloads Pods to verify that the MTC pods are running. 7.3. Installing the legacy Migration Toolkit for Containers Operator on OpenShift Container Platform 3 You can install the legacy Migration Toolkit for Containers Operator manually on OpenShift Container Platform 3. Prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. You must have access to registry.redhat.io . You must have podman installed. You must create an image stream secret and copy it to each node in the cluster. You must have a Linux workstation with network access in order to download files from registry.redhat.io . You must create a mirror image of the Operator catalog. You must install the Migration Toolkit for Containers Operator from the mirrored Operator catalog on OpenShift Container Platform 4.7. Procedure Log in to registry.redhat.io with your Red Hat Customer Portal credentials: USD sudo podman login registry.redhat.io Download the operator.yml file by entering the following command: USD sudo podman cp USD(sudo podman create \ registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/operator.yml ./ Download the controller.yml file by entering the following command: USD sudo podman cp USD(sudo podman create \ registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/controller.yml ./ Obtain the Operator image mapping by running the following command: USD grep openshift-migration-legacy-rhel8-operator ./mapping.txt | grep rhmtc The mapping.txt file was created when you mirrored the Operator catalog. The output shows the mapping between the registry.redhat.io image and your mirror registry image. Example output registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a=<registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator Update the image values for the ansible and operator containers and the REGISTRY value in the operator.yml file: containers: - name: ansible image: <registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:<468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a> 1 ... - name: operator image: <registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:<468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a> 2 ... env: - name: REGISTRY value: <registry.apps.example.com> 3 1 2 Specify your mirror registry and the sha256 value of the Operator image. 3 Specify your mirror registry. Log in to your source cluster. 
Create the Migration Toolkit for Containers Operator object: USD oc create -f operator.yml Example output namespace/openshift-migration created rolebinding.rbac.authorization.k8s.io/system:deployers created serviceaccount/migration-operator created customresourcedefinition.apiextensions.k8s.io/migrationcontrollers.migration.openshift.io created role.rbac.authorization.k8s.io/migration-operator created rolebinding.rbac.authorization.k8s.io/migration-operator created clusterrolebinding.rbac.authorization.k8s.io/migration-operator created deployment.apps/migration-operator created Error from server (AlreadyExists): error when creating "./operator.yml": rolebindings.rbac.authorization.k8s.io "system:image-builders" already exists 1 Error from server (AlreadyExists): error when creating "./operator.yml": rolebindings.rbac.authorization.k8s.io "system:image-pullers" already exists 1 You can ignore Error from server (AlreadyExists) messages. They are caused by the Migration Toolkit for Containers Operator creating resources for earlier versions of OpenShift Container Platform 4 that are provided in later releases. Create the MigrationController object: USD oc create -f controller.yml Verify that the MTC pods are running: USD oc get pods -n openshift-migration 7.4. Proxy configuration For OpenShift Container Platform 4.1 and earlier versions, you must configure proxies in the MigrationController custom resource (CR) manifest after you install the Migration Toolkit for Containers Operator because these versions do not support a cluster-wide proxy object. For OpenShift Container Platform 4.2 to 4.7, the Migration Toolkit for Containers (MTC) inherits the cluster-wide proxy settings. You can change the proxy parameters if you want to override the cluster-wide proxy settings. 7.4.1. Direct volume migration Direct Volume Migration (DVM) was introduced in MTC 1.4.2. DVM supports only one proxy. The source cluster cannot access the route of the target cluster if the target cluster is also behind a proxy. If you want to perform a DVM from a source cluster behind a proxy, you must configure a TCP proxy that works at the transport layer and forwards the SSL connections transparently without decrypting and re-encrypting them with their own SSL certificates. A Stunnel proxy is an example of such a proxy. 7.4.1.1. TCP proxy setup for DVM You can set up a direct connection between the source and the target cluster through a TCP proxy and configure the stunnel_tcp_proxy variable in the MigrationController CR to use the proxy: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] stunnel_tcp_proxy: http://username:password@ip:port Direct volume migration (DVM) supports only basic authentication for the proxy. Moreover, DVM works only from behind proxies that can tunnel a TCP connection transparently. HTTP/HTTPS proxies in man-in-the-middle mode do not work. The existing cluster-wide proxies might not support this behavior. As a result, the proxy settings for DVM are intentionally kept different from the usual proxy configuration in MTC. 7.4.1.2. Why use a TCP proxy instead of an HTTP/HTTPS proxy? You can enable DVM by running Rsync between the source and the target cluster over an OpenShift route. Traffic is encrypted using Stunnel, a TCP proxy. The Stunnel running on the source cluster initiates a TLS connection with the target Stunnel and transfers data over an encrypted channel. 
Cluster-wide HTTP/HTTPS proxies in OpenShift are usually configured in man-in-the-middle mode where they negotiate their own TLS session with the outside servers. However, this does not work with Stunnel. Stunnel requires that its TLS session be untouched by the proxy, essentially making the proxy a transparent tunnel which simply forwards the TCP connection as-is. Therefore, you must use a TCP proxy. 7.4.1.3. Known issue Migration fails with error Upgrade request required The migration Controller uses the SPDY protocol to execute commands within remote pods. If the remote cluster is behind a proxy or a firewall that does not support the SPDY protocol, the migration controller fails to execute remote commands. The migration fails with the error message Upgrade request required . Workaround: Use a proxy that supports the SPDY protocol. In addition to supporting the SPDY protocol, the proxy or firewall also must pass the Upgrade HTTP header to the API server. The client uses this header to open a websocket connection with the API server. If the Upgrade header is blocked by the proxy or firewall, the migration fails with the error message Upgrade request required . Workaround: Ensure that the proxy forwards the Upgrade header. 7.4.2. Tuning network policies for migrations OpenShift supports restricting traffic to or from pods using NetworkPolicy or EgressFirewalls based on the network plugin used by the cluster. If any of the source namespaces involved in a migration use such mechanisms to restrict network traffic to pods, the restrictions might inadvertently stop traffic to Rsync pods during migration. Rsync pods running on both the source and the target clusters must connect to each other over an OpenShift Route. Existing NetworkPolicy or EgressNetworkPolicy objects can be configured to automatically exempt Rsync pods from these traffic restrictions. 7.4.2.1. NetworkPolicy configuration 7.4.2.1.1. Egress traffic from Rsync pods You can use the unique labels of Rsync pods to allow egress traffic to pass from them if the NetworkPolicy configuration in the source or destination namespaces blocks this type of traffic. The following policy allows all egress traffic from Rsync pods in the namespace: apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer egress: - {} policyTypes: - Egress 7.4.2.1.2. Ingress traffic to Rsync pods apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer ingress: - {} policyTypes: - Ingress 7.4.2.2. EgressNetworkPolicy configuration The EgressNetworkPolicy object or Egress Firewalls are OpenShift constructs designed to block egress traffic leaving the cluster. Unlike the NetworkPolicy object, the Egress Firewall works at a project level because it applies to all pods in the namespace. Therefore, the unique labels of Rsync pods do not exempt only Rsync pods from the restrictions. However, you can add the CIDR ranges of the source or target cluster to the Allow rule of the policy so that a direct connection can be setup between two clusters. 
Based on which cluster the Egress Firewall is present in, you can add the CIDR range of the other cluster to allow egress traffic between the two: apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: test-egress-policy namespace: <namespace> spec: egress: - to: cidrSelector: <cidr_of_source_or_target_cluster> type: Deny 7.4.2.3. Configuring supplemental groups for Rsync pods When your PVCs use a shared storage, you can configure the access to that storage by adding supplemental groups to Rsync pod definitions in order for the pods to allow access: Table 7.2. Supplementary groups for Rsync pods Variable Type Default Description src_supplemental_groups string Not set Comma-separated list of supplemental groups for source Rsync pods target_supplemental_groups string Not set Comma-separated list of supplemental groups for target Rsync pods Example usage The MigrationController CR can be updated to set values for these supplemental groups: spec: src_supplemental_groups: "1000,2000" target_supplemental_groups: "2000,3000" 7.4.3. Configuring proxies Prerequisites You must be logged in as a user with cluster-admin privileges on all clusters. Procedure Get the MigrationController CR manifest: USD oc get migrationcontroller <migration_controller> -n openshift-migration Update the proxy parameters: apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: <migration_controller> namespace: openshift-migration ... spec: stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port> 1 noProxy: example.com 2 1 Stunnel proxy URL for direct volume migration. 2 Comma-separated list of destination domain names, domains, IP addresses, or other network CIDRs to exclude proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass proxy for all destinations. If you scale up workers that are not included in the network defined by the networking.machineNetwork[].cidr field from the installation configuration, you must add them to this list to prevent connection issues. This field is ignored if neither the httpProxy nor the httpsProxy field is set. Save the manifest as migration-controller.yaml . Apply the updated manifest: USD oc replace -f migration-controller.yaml -n openshift-migration For more information, see Configuring the cluster-wide proxy . 7.5. Configuring a replication repository The Multicloud Object Gateway is the only supported option for a restricted network environment. MTC supports the file system and snapshot data copy methods for migrating data from the source cluster to the target cluster. You can select a method that is suited for your environment and is supported by your storage provider. 7.5.1. Prerequisites All clusters must have uninterrupted network access to the replication repository. If you use a proxy server with an internally hosted replication repository, you must ensure that the proxy allows access to the replication repository. 7.5.2. Retrieving Multicloud Object Gateway credentials You must retrieve the Multicloud Object Gateway (MCG) credentials in order to create a Secret custom resource (CR) for the OpenShift API for Data Protection (OADP). MCG is a component of OpenShift Container Storage. Prerequisites You must deploy OpenShift Container Storage by using the appropriate OpenShift Container Storage deployment guide . Procedure Obtain the S3 endpoint, AWS_ACCESS_KEY_ID , and AWS_SECRET_ACCESS_KEY by running the describe command on the NooBaa custom resource. 
7.5.3. Additional resources Disconnected environment in the Red Hat OpenShift Container Storage documentation. MTC workflow About data copy methods Adding a replication repository to the MTC web console 7.6. Uninstalling MTC and deleting resources You can uninstall the Migration Toolkit for Containers (MTC) and delete its resources to clean up the cluster. Note Deleting the velero CRDs removes Velero from the cluster. Prerequisites You must be logged in as a user with cluster-admin privileges. Procedure Delete the MigrationController custom resource (CR) on all clusters: USD oc delete migrationcontroller <migration_controller> Uninstall the Migration Toolkit for Containers Operator on OpenShift Container Platform 4 by using the Operator Lifecycle Manager. Delete cluster-scoped resources on all clusters by running the following commands: migration custom resource definitions (CRDs): USD oc delete USD(oc get crds -o name | grep 'migration.openshift.io') velero CRDs: USD oc delete USD(oc get crds -o name | grep 'velero') migration cluster roles: USD oc delete USD(oc get clusterroles -o name | grep 'migration.openshift.io') migration-operator cluster role: USD oc delete clusterrole migration-operator velero cluster roles: USD oc delete USD(oc get clusterroles -o name | grep 'velero') migration cluster role bindings: USD oc delete USD(oc get clusterrolebindings -o name | grep 'migration.openshift.io') migration-operator cluster role bindings: USD oc delete clusterrolebindings migration-operator velero cluster role bindings: USD oc delete USD(oc get clusterrolebindings -o name | grep 'velero') | [
"sudo podman login registry.redhat.io",
"sudo podman cp USD(sudo podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/operator.yml ./",
"sudo podman cp USD(sudo podman create registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator:v1.7):/controller.yml ./",
"grep openshift-migration-legacy-rhel8-operator ./mapping.txt | grep rhmtc",
"registry.redhat.io/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a=<registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator",
"containers: - name: ansible image: <registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:<468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a> 1 - name: operator image: <registry.apps.example.com>/rhmtc/openshift-migration-legacy-rhel8-operator@sha256:<468a6126f73b1ee12085ca53a312d1f96ef5a2ca03442bcb63724af5e2614e8a> 2 env: - name: REGISTRY value: <registry.apps.example.com> 3",
"oc create -f operator.yml",
"namespace/openshift-migration created rolebinding.rbac.authorization.k8s.io/system:deployers created serviceaccount/migration-operator created customresourcedefinition.apiextensions.k8s.io/migrationcontrollers.migration.openshift.io created role.rbac.authorization.k8s.io/migration-operator created rolebinding.rbac.authorization.k8s.io/migration-operator created clusterrolebinding.rbac.authorization.k8s.io/migration-operator created deployment.apps/migration-operator created Error from server (AlreadyExists): error when creating \"./operator.yml\": rolebindings.rbac.authorization.k8s.io \"system:image-builders\" already exists 1 Error from server (AlreadyExists): error when creating \"./operator.yml\": rolebindings.rbac.authorization.k8s.io \"system:image-pullers\" already exists",
"oc create -f controller.yml",
"oc get pods -n openshift-migration",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: migration-controller namespace: openshift-migration spec: [...] stunnel_tcp_proxy: http://username:password@ip:port",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer egress: - {} policyTypes: - Egress",
"apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-all-egress-from-rsync-pods spec: podSelector: matchLabels: owner: directvolumemigration app: directvolumemigration-rsync-transfer ingress: - {} policyTypes: - Ingress",
"apiVersion: network.openshift.io/v1 kind: EgressNetworkPolicy metadata: name: test-egress-policy namespace: <namespace> spec: egress: - to: cidrSelector: <cidr_of_source_or_target_cluster> type: Deny",
"spec: src_supplemental_groups: \"1000,2000\" target_supplemental_groups: \"2000,3000\"",
"oc get migrationcontroller <migration_controller> -n openshift-migration",
"apiVersion: migration.openshift.io/v1alpha1 kind: MigrationController metadata: name: <migration_controller> namespace: openshift-migration spec: stunnel_tcp_proxy: http://<username>:<password>@<ip>:<port> 1 noProxy: example.com 2",
"oc replace -f migration-controller.yaml -n openshift-migration",
"oc delete migrationcontroller <migration_controller>",
"oc delete USD(oc get crds -o name | grep 'migration.openshift.io')",
"oc delete USD(oc get crds -o name | grep 'velero')",
"oc delete USD(oc get clusterroles -o name | grep 'migration.openshift.io')",
"oc delete clusterrole migration-operator",
"oc delete USD(oc get clusterroles -o name | grep 'velero')",
"oc delete USD(oc get clusterrolebindings -o name | grep 'migration.openshift.io')",
"oc delete clusterrolebindings migration-operator",
"oc delete USD(oc get clusterrolebindings -o name | grep 'velero')"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/migrating_from_version_3_to_4/installing-restricted-3-4 |
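For the Multicloud Object Gateway credentials step in section 7.5.2 above, the following is a hypothetical sketch of the commands involved, assuming MCG was deployed by OpenShift Container Storage in the openshift-storage namespace and that the administrator credentials are stored in a secret named noobaa-admin; verify the namespace and secret name in your own deployment before relying on them.
oc describe noobaa -n openshift-storage
oc get secret noobaa-admin -n openshift-storage -o jsonpath='{.data.AWS_ACCESS_KEY_ID}' | base64 -d
oc get secret noobaa-admin -n openshift-storage -o jsonpath='{.data.AWS_SECRET_ACCESS_KEY}' | base64 -d
The describe output reports the S3 endpoint; the two decoded values provide the access key pair that you enter when adding the replication repository in the MTC web console.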
5.10. Enabling and Disabling Cluster Resources | 5.10. Enabling and Disabling Cluster Resources The following command enables the resource specified by resource_id . The following command disables the resource specified by resource_id . | [
"pcs resource enable resource_id",
"pcs resource disable resource_id"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/configuring_the_red_hat_high_availability_add-on_with_pacemaker/s1-starting_stopping_resources-haar |
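As a usage illustration for the two commands above, assume a hypothetical resource named VirtualIP already exists in the cluster:
pcs resource disable VirtualIP
pcs status resources
pcs resource enable VirtualIP
Disabling the resource sets its target-role meta attribute to Stopped rather than removing it, so pcs status resources shows the resource as Stopped until it is enabled again.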
26.8. Installing a CA Into an Existing IdM Domain | 26.8. Installing a CA Into an Existing IdM Domain If an IdM domain was installed without a Certificate Authority (CA), you can install the CA services subsequently. Depending on your environment, you can install the IdM Certificate Server CA or use an external CA. Note For details on the supported CA configurations, see Section 2.3.2, "Determining What CA Configuration to Use" . Installing an IdM Certificate Server Use the following command to install the IdM Certificate Server CA: Run the ipa-certupdate utility on all servers and clients to update them with the information about the new certificate from LDAP. You must run ipa-certupdate on every server and client separately. Important Always run ipa-certupdate after manually installing a certificate. If you do not, the certificate will not be distributed to the other machines. Installing External CA The subsequent installation of an external CA consists of multiple steps: Start the installation: After this step, a message is displayed stating that a certificate signing request (CSR) was saved. Submit the CSR to the external CA and copy the issued certificate to the IdM server. Continue the installation by passing the certificates and the full path to the external CA files to ipa-ca-install : Run the ipa-certupdate utility on all servers and clients to update them with the information about the new certificate from LDAP. You must run ipa-certupdate on every server and client separately. Important Always run ipa-certupdate after manually installing a certificate. If you do not, the certificate will not be distributed to the other machines. The CA installation does not replace the existing service certificates for the LDAP and web server with ones issued by the newly installed CA. For details on how to replace the certificates, see Section 26.9, "Replacing the Web Server's and LDAP Server's Certificate" . | [
"[root@ipa-server ~] ipa-ca-install",
"[root@ipa-server ~] ipa-ca-install --external-ca",
"ipa-ca-install --external-cert-file=/root/ master .crt --external-cert-file=/root/ca.crt"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/ca-less-to-ca |
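A brief sketch of the certificate propagation step described above, assuming one IdM server and one client with hypothetical hostnames; run ipa-certupdate on each machine individually so every host picks up the new CA certificate from LDAP.

# Run separately on every IdM server and client (hostnames are examples only)
ssh root@server.idm.example.com 'ipa-certupdate'
ssh root@client.idm.example.com 'ipa-certupdate'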
10.3.7. Establishing a VLAN Connection | 10.3.7. Establishing a VLAN Connection You can use NetworkManager to create a VLAN using an existing interface. Currently, at time of writing, you can only make VLANs on Ethernet devices. Procedure 10.11. Adding a New VLAN Connection You can configure a VLAN connection by opening the Network Connections window, clicking Add , and selecting VLAN from the list. Right-click on the NetworkManager applet icon in the Notification Area and click Edit Connections . The Network Connections window appears. Click the Add button to open the selection list. Select VLAN and then click Create . The Editing VLAN Connection 1 window appears. On the VLAN tab, select the parent interface from the drop-down list you want to use for the VLAN connection. Enter the VLAN ID Enter a VLAN interface name. This is the name of the VLAN interface that will be created. For example, "eth0.1" or "vlan2". (Normally this is either the parent interface name plus "." and the VLAN ID, or "vlan" plus the VLAN ID.) Review and confirm the settings and then click the Apply button. Edit the VLAN-specific settings by referring to the Configuring the VLAN Tab description below . Procedure 10.12. Editing an Existing VLAN Connection Follow these steps to edit an existing VLAN connection. Right-click on the NetworkManager applet icon in the Notification Area and click Edit Connections . The Network Connections window appears. Select the connection you want to edit and click the Edit button. Select the VLAN tab. Configure the connection name, auto-connect behavior, and availability settings. Three settings in the Editing dialog are common to all connection types: Connection name - Enter a descriptive name for your network connection. This name will be used to list this connection in the VLAN section of the Network Connections window. Connect automatically - Check this box if you want NetworkManager to auto-connect to this connection when it is available. See Section 10.2.3, "Connecting to a Network Automatically" for more information. Available to all users - Check this box to create a connection available to all users on the system. Changing this setting may require root privileges. See Section 10.2.4, "User and System Connections" for details. Edit the VLAN-specific settings by referring to the Configuring the VLAN Tab description below . Saving Your New (or Modified) Connection and Making Further Configurations Once you have finished editing your VLAN connection, click the Apply button and NetworkManager will immediately save your customized configuration. Given a correct configuration, you can connect to your new or customized connection by selecting it from the NetworkManager Notification Area applet. See Section 10.2.1, "Connecting to a Network" for information on using your new or altered connection. You can further configure an existing connection by selecting it in the Network Connections window and clicking Edit to return to the Editing dialog. Then, to configure: IPv4 settings for the connection, click the IPv4 Settings tab and proceed to Section 10.3.9.4, "Configuring IPv4 Settings" . Configuring the VLAN Tab If you have already added a new VLAN connection (see Procedure 10.11, "Adding a New VLAN Connection" for instructions), you can edit the VLAN tab to set the parent interface and the VLAN ID. Parent Interface A previously configured interface can be selected in the drop-down list. VLAN ID The identification number to be used to tag the VLAN network traffic. 
VLAN interface name The name of the VLAN interface that will be created. For example, "eth0.1" or "vlan2". Cloned MAC address Optionally sets an alternate MAC address to use for identifying the VLAN interface. This can be used to change the source MAC address for packets sent on this VLAN. MTU Optionally sets a Maximum Transmission Unit (MTU) size to be used for packets to be sent over the VLAN connection. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sec-Establishing_a_VLAN_Connection |
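For reference only: the GUI procedure above typically results in an interface configuration file similar to the following sketch; the device, VLAN ID, and addressing values are assumptions for illustration, not values taken from this section.

# /etc/sysconfig/network-scripts/ifcfg-eth0.2 (hypothetical VLAN 2 on parent eth0)
DEVICE=eth0.2
VLAN=yes
ONBOOT=yes
BOOTPROTO=none
IPADDR=192.168.2.10
PREFIX=24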
5.3. Virtio and vhost_net | 5.3. Virtio and vhost_net The following diagram demonstrates the involvement of the kernel in the Virtio and vhost_net architectures. Figure 5.1. Virtio and vhost_net architectures vhost_net moves part of the Virtio driver from the user space into the kernel. This reduces copy operations, lowers latency and CPU usage. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_tuning_and_optimization_guide/sect-virtualization_tuning_optimization_guide-networking-virtio_and_vhostnet |
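As a hedged, minimal illustration of how this backend is exposed on a host (not taken from the original section): vhost_net is provided by a kernel module and an in-kernel character device, and a guest's virtio interface can request it in its libvirt XML with <driver name='vhost'/>.

# Check that the vhost_net module is loaded on the host; load it if it is not
lsmod | grep vhost_net || modprobe vhost_net
# Verify that the in-kernel backend device is present
ls -l /dev/vhost-net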
Chapter 2. Image service with multiple stores | Chapter 2. Image service with multiple stores The Red Hat OpenStack Platform Image service (glance) supports using multiple stores with distributed edge architecture so that you can have an image pool at every edge site. You can copy images between the central site, which is also known as the hub site, and the edge sites. The image metadata contains the location of each copy. For example, an image present on two edge sites is exposed as a single UUID with three locations: the central site plus the two edge sites. This means you can have copies of image data that share a single UUID on many stores. For more information about locations, see Understanding the location of images . With an RBD image pool at every edge site, you can boot VMs quickly by using Ceph RBD copy-on-write (COW) and snapshot layering technology. This means that you can boot VMs from volumes and have live migration. For more information about layering with Ceph RBD, see Ceph block device layering in the Block Device Guide . 2.1. Requirements of storage edge architecture A copy of each image must exist in the Image service at the central location. Prior to creating an instance at an edge site, you must have a local copy of the image at that edge site. Images uploaded to an edge site must be copied to the central location before they can be copied to other edge sites. You must use raw images when deploying a DCN architecture with Ceph storage. RBD must be the storage driver for the Image, Compute and Block Storage services. For each site, you must assign the same value to the NovaComputeAvailabilityZone and CinderStorageAvailabilityZone parameters. 2.2. Import an image to multiple stores Use the interoperable image import workflow to import image data into multiple Ceph Storage clusters. You can import images into the Image service that are available on the local file system or through a web server. If you import an image from a web server, the image can be imported into multiple stores at once. If the image is not available on a web server, you can import the image from a local file system into the central store and then copy it to additional stores. For more information, see Copy an existing image to multiple stores . Important Always store an image copy on the central site, even if there are no instances using the image at the central location. For more information about importing images into the Image service, see the Distributed compute node and storage deployment guide. 2.2.1. Manage image import failures You can manage failures of the image import operation by using the --allow-failure parameter: If you set the value of the --allow-failure parameter to true , the image status becomes active after the first store successfully imports the data. This is the default setting. You can view a list of stores that failed to import the image data by using the os_glance_failed_import image property. If you set the value of the --allow-failure parameter to false , the image status only becomes active after all specified stores successfully import the data. Failure of any store to import the image data results in an image status of failed . The image is not imported into any of the specified stores. 2.2.2. Importing image data to multiple stores Because the default setting of the --allow-failure parameter is true , you do not need to include the parameter in the command if it is acceptable for some stores to fail to import the image data. 
Note This procedure does not require all stores to successfully import the image data. Procedure Import image data to multiple, specified stores: Replace IMAGE-NAME with the name of the image you want to import. Replace URI with the URI of the image. Replace STORE1 , STORE2 , and STORE3 with the names of the stores to which you want to import the image data. Alternatively, replace --stores with --all-stores true to upload the image to all the stores. Note The glance image-create-via-import command, which automatically converts the QCOW2 image to RAW format, works only with the web-download method. The glance-direct method is available, but it works only in deployments with a configured shared file system. 2.2.3. Importing image data to multiple stores without failure This procedure requires all stores to successfully import the image data. Procedure Import image data to multiple, specified stores: Replace IMAGE-NAME with the name of the image you want to import. Replace URI with the URI of the image. Replace STORE1 , STORE2 , and STORE3 with the names of stores to which you want to copy the image data. Alternatively, replace --stores with --all-stores true to upload the image to all the stores. Note With the --allow-failure parameter set to false , the Image service does not ignore stores that fail to import the image data. You can view the list of failed stores with the image property os_glance_failed_import . For more information see Checking the progress of image import operation . Verify that the image data was added to specific stores: Replace IMAGE-ID with the ID of the original existing image. The output displays a comma-delimited list of stores. 2.2.4. Importing image data to a single store You can import image data to a single store. Procedure Import image data to a single store: Replace IMAGE-NAME with the name of the image you want to import. Replace URI with the URI of the image. Replace STORE with the name of the store to which you want to copy the image data. Note If you do not include the options of --stores , --all-stores , or --store in the command, the Image service creates the image in the central store. Verify that the image data was added to specific store: Replace IMAGE-ID with the ID of the original existing image. The output displays a comma-delimited list of stores. 2.2.5. Checking the progress of the image import operation The interoperable image import workflow sequentially imports image data into stores. The size of the image, the number of stores, and the network speed between the central site and the edge sites impact how long it takes for the image import operation to complete. You can follow the progress of the image import by looking at two image properties, which appear in notifications sent during the image import operation: The os_glance_importing_to_stores property lists the stores that have not imported the image data. At the beginning of the import, all requested stores show up in the list. Each time a store successfully imports the image data, the Image service removes the store from the list. The os_glance_failed_import property lists the stores that fail to import the image data. This list is empty at the beginning of the image import operation. Note In the following procedure, the environment has three Ceph Storage clusters: the central store and two stores at the edge, dcn0 and dcn1 . Procedure Verify that the image data was added to specific stores: Replace IMAGE-ID with the ID of the original existing image. 
The output displays a comma-delimited list of stores similar to the following example snippet: Monitor the status of the image import operation. When you precede a command with watch , the command output refreshes every two seconds. Replace IMAGE-ID with the ID of the original existing image. The status of the operation changes as the image import operation progresses: Output that shows that an image failed to import resembles the following example: After the operation completes, the status changes to active: 2.3. Copy an existing image to multiple stores This feature enables you to copy the data of existing Red Hat OpenStack Image service (glance) images into multiple Ceph Storage stores at the edge by using the interoperable image import workflow. Note The image must be present at the central site before you copy it to any edge sites. Only the image owner or administrator can copy existing images to newly added stores. You can copy existing image data either by setting --all-stores to true or by specifying specific stores to receive the image data. The default setting for the --all-stores option is false . If --all-stores is false , you must specify which stores receive the image data by using --stores STORE1,STORE2 . If the image data is already present in any of the specified stores, the request fails. If you set --all-stores to true , and the image data already exists in some of the stores, then those stores are excluded from the list. After you specify which stores receive the image data, the Image service copies data from the central site to a staging area. Then the Image service imports the image data by using the interoperable image import workflow. For more information, see Importing an image to multiple stores . Important Red Hat recommends that administrators carefully avoid closely timed image copy requests. Two closely timed copy-image operations for the same image cause race conditions and unexpected results. Existing image data remains as it is, but copying data to new stores fails. 2.3.1. Copying an image to all stores Use the following procedure to copy image data to all available stores. Procedure Copy image data to all available stores: Replace IMAGE-ID with the ID of the image you want to copy. Confirm that the image data successfully replicated to all available stores: For information about how to check the status of the image import operation, see Checking the progress of the image import operation . 2.3.2. Copying an image to specific stores Use the following procedure to copy image data to specific stores. Procedure Copy image data to specific stores: Replace IMAGE-ID with the ID of the image you want to copy. Replace STORE1 and STORE2 with the names of the stores to which you want to copy the image data. Confirm that the image data successfully replicated to the specified stores: For information about how to check the status of the image import operation, see Checking the progress of the image import operation . 2.4. Deleting an image from a specific store This feature enables you to delete an existing image copy on a specific store using Red Hat OpenStack Image service (glance). Procedure Delete an image from a specific store: Replace _STORE_ID with the name of the store on which the image copy should be deleted. Replace IMAGE_ID with the ID of the image you want to delete. Warning Using glance image-delete will permanently delete the image across all the sites. All image copies will be deleted, as well as the image instance and metadata. 2.5. 
Understanding the locations of images Although an image can be present on multiple sites, there is only a single UUID for a given image. The image metadata contains the locations of each copy. For example, an image present on two edge sites is exposed as a single UUID with three locations: the central site plus the two edge sites. Procedure Show the sites on which a copy of the image exists: In the example, the image is present on the central site, the default_backend , and on the two edge sites dcn1 and dcn2 . Alternatively, you can run the glance image-list command with the --include-stores option to see the sites where the images exist: List the image locations properties to show the details of each location: The image properties show the different Ceph RBD URIs for the location of each image. In the example, the central image location URI is: The URI is composed of the following data: 79b70c32-df46-4741-93c0-8118ae2ae284 corresponds to the central Ceph FSID. Each Ceph cluster has a unique FSID. The default value for all sites is images , which corresponds to the Ceph pool on which the images are stored. 2bd882e7-1da0-4078-97fe-f1bb81f61b00 corresponds to the image UUID. The UUID is the same for a given image regardless of its location. The metadata shows the glance store to which this location maps. In this example, it maps to the default_backend , which is the central hub site. | [
"glance image-create-via-import --container-format bare --name IMAGE-NAME --import-method web-download --uri URI --stores STORE1 , STORE2 , STORE3",
"glance image-create-via-import --container-format bare --name IMAGE-NAME --import-method web-download --uri URI --stores STORE1 , STORE2",
"glance image-show IMAGE-ID | grep stores",
"glance image-create-via-import --container-format bare --name IMAGE-NAME --import-method web-download --uri URI --store STORE",
"glance image-show IMAGE-ID | grep stores",
"glance image-show IMAGE-ID",
"| os_glance_failed_import | | os_glance_importing_to_stores | central,dcn0,dcn1 | status | importing",
"watch glance image-show IMAGE-ID",
"| os_glance_failed_import | | os_glance_importing_to_stores | dcn0,dcn1 | status | importing",
"| os_glance_failed_import | dcn0 | os_glance_importing_to_stores | dcn1 | status | importing",
"| os_glance_failed_import | dcn0 | os_glance_importing_to_stores | | status | active",
"glance image-import IMAGE-ID --all-stores true --import-method copy-image",
"glance image-list --include-stores",
"glance image-import IMAGE-ID --stores STORE1 , STORE2 --import-method copy-image",
"glance image-list --include-stores",
"glance stores-delete --store _STORE_ID_ _IMAGE_ID_",
"glance image-show ID | grep \"stores\" | stores | default_backend,dcn1,dcn2",
"glance image-list --include-stores | ID | Name | Stores | 2bd882e7-1da0-4078-97fe-f1bb81f61b00 | cirros | default_backend,dcn1,dcn2",
"openstack image show ID -c properties | properties | (--- cut ---) locations='[{'url': 'rbd://79b70c32-df46-4741-93c0-8118ae2ae284/images/2bd882e7-1da0-4078-97fe-f1bb81f61b00/snap', 'metadata': {'store': 'default_backend'}}, {'url': 'rbd://63df2767-8ddb-4e06-8186-8c155334f487/images/2bd882e7-1da0-4078-97fe-f1bb81f61b00/snap', 'metadata': {'store': 'dcn1'}}, {'url': 'rbd://1b324138-2ef9-4ef9-bd9e-aa7e6d6ead78/images/2bd882e7-1da0-4078-97fe-f1bb81f61b00/snap', 'metadata': {'store': 'dcn2'}}]', (--- cut --)",
"rbd://79b70c32-df46-4741-93c0-8118ae2ae284/images/2bd882e7-1da0-4078-97fe-f1bb81f61b00/snap', 'metadata': {'store': 'default_backend'}}"
]
| https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/creating_and_managing_images/using-image-service-with-mulitple-stores |
4.4. Fencing | 4.4. Fencing In the context of the Red Hat Virtualization environment, fencing is a host reboot initiated by the Manager using a fence agent and performed by a power management device. Fencing allows a cluster to react to unexpected host failures as well as enforce power saving, load balancing, and virtual machine availability policies. Fencing ensures that the role of Storage Pool Manager (SPM) is always assigned to a functional host. If the fenced host was the SPM, the SPM role is relinquished and reassigned to a responsive host. Because the host with the SPM role is the only host that is able to write data domain structure metadata, a non-responsive, un-fenced SPM host causes its environment to lose the ability to create and destroy virtual disks, take snapshots, extend logical volumes, and all other actions that require changes to data domain structure metadata. When a host becomes non-responsive, all of the virtual machines that are currently running on that host can also become non-responsive. However, the non-responsive host retains the lock on the virtual machine hard disk images for virtual machines it is running. Attempting to start a virtual machine on a second host and assign the second host write privileges for the virtual machine hard disk image can cause data corruption. Fencing allows the Red Hat Virtualization Manager to assume that the lock on a virtual machine hard disk image has been released; the Manager can use a fence agent to confirm that the problem host has been rebooted. When this confirmation is received, the Red Hat Virtualization Manager can start a virtual machine from the problem host on another host without risking data corruption. Fencing is the basis for highly-available virtual machines. A virtual machine that has been marked highly-available can not be safely started on an alternate host without the certainty that doing so will not cause data corruption. When a host becomes non-responsive, the Red Hat Virtualization Manager allows a grace period of thirty (30) seconds to pass before any action is taken, to allow the host to recover from any temporary errors. If the host has not become responsive by the time the grace period has passed, the Manager automatically begins to mitigate any negative impact from the non-responsive host. The Manager uses the fencing agent for the power management card on the host to stop the host, confirm it has stopped, start the host, and confirm that the host has been started. When the host finishes booting, it attempts to rejoin the cluster that it was a part of before it was fenced. If the issue that caused the host to become non-responsive has been resolved by the reboot, then the host is automatically set to Up status and is once again capable of starting and hosting virtual machines. | null | https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/technical_reference/fencing |
Chapter 76. KafkaClientAuthenticationScramSha512 schema reference | Chapter 76. KafkaClientAuthenticationScramSha512 schema reference Used in: KafkaBridgeSpec , KafkaConnectSpec , KafkaMirrorMaker2ClusterSpec , KafkaMirrorMakerConsumerSpec , KafkaMirrorMakerProducerSpec Full list of KafkaClientAuthenticationScramSha512 schema properties To configure SASL-based SCRAM-SHA-512 authentication, set the type property to scram-sha-512 . The SCRAM-SHA-512 authentication mechanism requires a username and password. 76.1. username Specify the username in the username property. 76.2. passwordSecret In the passwordSecret property, specify a link to a Secret containing the password. You can use the secrets created by the User Operator. If required, you can create a text file that contains the password, in cleartext, to use for authentication: echo -n PASSWORD > MY-PASSWORD .txt You can then create a Secret from the text file, setting your own field name (key) for the password: oc create secret generic MY-CONNECT-SECRET-NAME --from-file= MY-PASSWORD-FIELD-NAME =./ MY-PASSWORD .txt Example Secret for SCRAM-SHA-512 client authentication for Kafka Connect apiVersion: v1 kind: Secret metadata: name: my-connect-secret-name type: Opaque data: my-connect-password-field: LFTIyFRFlMmU2N2Tm The secretName property contains the name of the Secret , and the password property contains the name of the key under which the password is stored inside the Secret . Important Do not specify the actual password in the password property. Example SASL-based SCRAM-SHA-512 client authentication configuration for Kafka Connect authentication: type: scram-sha-512 username: my-connect-username passwordSecret: secretName: my-connect-secret-name password: my-connect-password-field 76.3. KafkaClientAuthenticationScramSha512 schema properties Property Property type Description passwordSecret PasswordSecretSource Reference to the Secret which holds the password. type string Must be scram-sha-512 . username string Username used for the authentication. | [
"echo -n PASSWORD > MY-PASSWORD .txt",
"create secret generic MY-CONNECT-SECRET-NAME --from-file= MY-PASSWORD-FIELD-NAME =./ MY-PASSWORD .txt",
"apiVersion: v1 kind: Secret metadata: name: my-connect-secret-name type: Opaque data: my-connect-password-field: LFTIyFRFlMmU2N2Tm",
"authentication: type: scram-sha-512 username: my-connect-username passwordSecret: secretName: my-connect-secret-name password: my-connect-password-field"
]
| https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html/streams_for_apache_kafka_api_reference/type-kafkaclientauthenticationscramsha512-reference |
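To show where the authentication fragment above fits in a complete resource, the following is a minimal, hypothetical KafkaConnect example; the bootstrap address, replica count, and all names are assumptions rather than values taken from this chapter, and a production listener would normally also be secured with TLS.

apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: my-connect
spec:
  replicas: 1
  bootstrapServers: my-cluster-kafka-bootstrap:9092   # assumed plain listener for brevity
  authentication:
    type: scram-sha-512
    username: my-connect-username
    passwordSecret:
      secretName: my-connect-secret-name
      password: my-connect-password-field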
Chapter 4. Red Hat build of OpenJDK features | Chapter 4. Red Hat build of OpenJDK features The latest Red Hat build of OpenJDK 17 release might include new features. Additionally, the latest release might enhance, deprecate, or remove features that originated from Red Hat build of OpenJDK 17 releases. Note For all the other changes and security fixes, see OpenJDK 17.0.4 Released . 4.1. Red Hat build of OpenJDK enhancements Red Hat build of OpenJDK 17 provides enhancements to features originally created in releases of Red Hat build of OpenJDK. HTTPS channel binding support for Java Generic Security Services (GSS) or Kerberos The Red Hat build of OpenJDK 17.0.4 release supports TLS channel binding tokens when Negotiate selects Kerberos authentication over HTTPS through javax.net.HttpsURLConnection . Channel binding tokens are increasingly required as an enhanced form of security which can mitigate certain kinds of socially engineered, man in the middle (MITM) attacks. They work by communicating from a client to a server the client's understanding of the binding between connection security (as represented by a TLS server cert) and higher level authentication credentials (such as a username and password). The server can then detect if the client has been fooled by a MITM and shutdown the session/connection. The feature is controlled through the jdk.https.negotiate.cbt system property, which is described fully in Oracle documentation . See, JDK-8285240 (JDK Bug System) Incorrect handling of quoted arguments in ProcessBuilder Before the Red Hat build of OpenJDK 17.0.4 release, arguments to ProcessBuilder on Windows that started with a double quotation mark and ended with a backslash followed by a double quotation mark passed to a command incorrectly, causing the command to fail. For example, the argument "C:\\Program Files\" , was processed as having extra double quotation marks at the end. The Red Hat build of OpenJDK 17.0.4 release resolves this issue by restoring the previously available behavior, in which the backslash (\) before the final double quotation mark is not treated specially. See, JDK-8283137 (JDK Bug System) Default JDK compressor closes when IOException is encountered The DeflaterOutputStream.close() and GZIPOutputStream.finish() methods have been modified to close out the associated default JDK compressor before propagating a Throwable up the stack. The ZIPOutputStream.closeEntry() method has been modified to close out the associated default JDK compressor before propagating an IOException , not of type ZipException , up the stack. See, JDK-8278386 (JDK Bug System) New system property to disable Windows Alternate Data Stream support in java.io.File The Windows implementation of java.io.File allows access to NTFS Alternate Data Streams (ADS) by default. These streams are structured in the format "filename:streamname". The Red Hat build of OpenJDK 17.0.4 release adds a system property that allows you to disable ADS support in java.io.File . To disable ADS support in java.io.File , set the system property jdk.io.File.enableADS to false . Important Disabling ADS support in java.io.File results in stricter path checking that prevents the use of special device files, such as NUL: . See, JDK-8285660 (JDK Bug System) Revised on 2024-05-03 15:36:35 UTC | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/release_notes_for_red_hat_build_of_openjdk_17.0.4/rn_openjdk-1704-features_openjdk |
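A short, hedged illustration of setting the system properties mentioned above on the command line; the application JAR is a placeholder, and the channel-binding value shown is only an example — consult the Oracle documentation referenced above for the supported values of jdk.https.negotiate.cbt.

# Disable NTFS Alternate Data Stream support in java.io.File (enables stricter path checks)
java -Djdk.io.File.enableADS=false -jar my-app.jar
# Opt in to TLS channel binding tokens for Kerberos authentication over HTTPS (example value)
java -Djdk.https.negotiate.cbt=always -jar my-app.jar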
Chapter 17. Managing hosts using Ansible playbooks | Chapter 17. Managing hosts using Ansible playbooks Ansible is an automation tool used to configure systems, deploy software, and perform rolling updates. Ansible includes support for Identity Management (IdM), and you can use Ansible modules to automate host management. The following concepts and operations are performed when managing hosts and host entries using Ansible playbooks: Ensuring the presence of IdM host entries that are only defined by their FQDNs Ensuring the presence of IdM host entries with IP addresses Ensuring the presence of multiple IdM host entries with random passwords Ensuring the presence of an IdM host entry with multiple IP addresses Ensuring the absence of IdM host entries 17.1. Ensuring the presence of an IdM host entry with FQDN using Ansible playbooks Follow this procedure to ensure the presence of host entries in Identity Management (IdM) using Ansible playbooks. The host entries are only defined by their fully-qualified domain names (FQDNs). Specifying the FQDN name of the host is enough if at least one of the following conditions applies: The IdM server is not configured to manage DNS. The host does not have a static IP address or the IP address is not known at the time the host is configured. Adding a host defined only by an FQDN essentially creates a placeholder entry in the IdM DNS service. For example, laptops may be preconfigured as IdM clients, but they do not have IP addresses at the time they are configured. When the DNS service dynamically updates its records, the host's current IP address is detected and its DNS record is updated. Note Without Ansible, host entries are created in IdM using the ipa host-add command. The result of adding a host to IdM is the state of the host being present in IdM. Because of the Ansible reliance on idempotence, to add a host to IdM using Ansible, you must create a playbook in which you define the state of the host as present: state: present . Prerequisites You know the IdM administrator password. You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.14 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure Create an inventory file, for example inventory.file , and define ipaserver in it: Create an Ansible playbook file with the FQDN of the host whose presence in IdM you want to ensure. To simplify this step, you can copy and modify the example in the /usr/share/doc/ansible-freeipa/playbooks/host/add-host.yml file: Run the playbook: Note The procedure results in a host entry in the IdM LDAP server being created but not in enrolling the host into the IdM Kerberos realm. For that, you must deploy the host as an IdM client. For details, see Installing an Identity Management client using an Ansible playbook . Verification Log in to your IdM server as admin: Enter the ipa host-show command and specify the name of the host: The output confirms that host01.idm.example.com exists in IdM. 17.2. 
Ensuring the presence of an IdM host entry with DNS information using Ansible playbooks Follow this procedure to ensure the presence of host entries in Identity Management (IdM) using Ansible playbooks. The host entries are defined by their fully-qualified domain names (FQDNs) and their IP addresses. Note Without Ansible, host entries are created in IdM using the ipa host-add command. The result of adding a host to IdM is the state of the host being present in IdM. Because of the Ansible reliance on idempotence, to add a host to IdM using Ansible, you must create a playbook in which you define the state of the host as present: state: present . Prerequisites You know the IdM administrator password. You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.14 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure Create an inventory file, for example inventory.file , and define ipaserver in it: Create an Ansible playbook file with the fully-qualified domain name (FQDN) of the host whose presence in IdM you want to ensure. In addition, if the IdM server is configured to manage DNS and you know the IP address of the host, specify a value for the ip_address parameter. The IP address is necessary for the host to exist in the DNS resource records. To simplify this step, you can copy and modify the example in the /usr/share/doc/ansible-freeipa/playbooks/host/host-present.yml file. You can also include other, additional information: Run the playbook: Note The procedure results in a host entry in the IdM LDAP server being created but not in enrolling the host into the IdM Kerberos realm. For that, you must deploy the host as an IdM client. For details, see Installing an Identity Management client using an Ansible playbook . Verification Log in to your IdM server as admin: Enter the ipa host-show command and specify the name of the host: The output confirms host01.idm.example.com exists in IdM. 17.3. Ensuring the presence of multiple IdM host entries with random passwords using Ansible playbooks The ipahost module allows the system administrator to ensure the presence or absence of multiple host entries in IdM using just one Ansible task. Follow this procedure to ensure the presence of multiple host entries that are only defined by their fully-qualified domain names (FQDNs). Running the Ansible playbook generates random passwords for the hosts. Note Without Ansible, host entries are created in IdM using the ipa host-add command. The result of adding a host to IdM is the state of the host being present in IdM. Because of the Ansible reliance on idempotence, to add a host to IdM using Ansible, you must create a playbook in which you define the state of the host as present: state: present . Prerequisites You know the IdM administrator password. You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.14 or later. You have installed the ansible-freeipa package. 
The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure Create an inventory file, for example inventory.file , and define ipaserver in it: Create an Ansible playbook file with the fully-qualified domain name (FQDN) of the hosts whose presence in IdM you want to ensure. To make the Ansible playbook generate a random password for each host even when the host already exists in IdM and update_password is limited to on_create , add the random: true and force: true options. To simplify this step, you can copy and modify the example from the /usr/share/doc/ansible-freeipa/README-host.md Markdown file: Run the playbook: Note To deploy the hosts as IdM clients using random, one-time passwords (OTPs), see Authorization options for IdM client enrollment using an Ansible playbook or Installing a client by using a one-time password: Interactive installation . Verification Log in to your IdM server as admin: Enter the ipa host-show command and specify the name of one of the hosts: The output confirms host01.idm.example.com exists in IdM with a random password. 17.4. Ensuring the presence of an IdM host entry with multiple IP addresses using Ansible playbooks Follow this procedure to ensure the presence of a host entry in Identity Management (IdM) using Ansible playbooks. The host entry is defined by its fully-qualified domain name (FQDN) and its multiple IP addresses. Note In contrast to the ipa host utility, the Ansible ipahost module can ensure the presence or absence of several IPv4 and IPv6 addresses for a host. The ipa host-mod command cannot handle IP addresses. Prerequisites You know the IdM administrator password. You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.14 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure Create an inventory file, for example inventory.file , and define ipaserver in it: Create an Ansible playbook file. Specify, as the name of the ipahost variable, the fully-qualified domain name (FQDN) of the host whose presence in IdM you want to ensure. Specify each of the multiple IPv4 and IPv6 ip_address values on a separate line by using the ip_address syntax. To simplify this step, you can copy and modify the example in the /usr/share/doc/ansible-freeipa/playbooks/host/host-member-ipaddresses-present.yml file. You can also include additional information: Run the playbook: Note The procedure creates a host entry in the IdM LDAP server but does not enroll the host into the IdM Kerberos realm. For that, you must deploy the host as an IdM client. For details, see Installing an Identity Management client using an Ansible playbook . 
Verification Log in to your IdM server as admin: Enter the ipa host-show command and specify the name of the host: The output confirms that host01.idm.example.com exists in IdM. To verify that the multiple IP addresses of the host exist in the IdM DNS records, enter the ipa dnsrecord-show command and specify the following information: The name of the IdM domain The name of the host The output confirms that all the IPv4 and IPv6 addresses specified in the playbook are correctly associated with the host01.idm.example.com host entry. 17.5. Ensuring the absence of an IdM host entry using Ansible playbooks Follow this procedure to ensure the absence of host entries in Identity Management (IdM) using Ansible playbooks. Prerequisites IdM administrator credentials Procedure Create an inventory file, for example inventory.file , and define ipaserver in it: Create an Ansible playbook file with the fully-qualified domain name (FQDN) of the host whose absence from IdM you want to ensure. If your IdM domain has integrated DNS, use the updatedns: true option to remove the associated records of any kind for the host from the DNS. To simplify this step, you can copy and modify the example in the /usr/share/doc/ansible-freeipa/playbooks/host/delete-host.yml file: Run the playbook: Note The procedure results in: The host not being present in the IdM Kerberos realm. The host entry not being present in the IdM LDAP server. To remove the specific IdM configuration of system services, such as System Security Services Daemon (SSSD), from the client host itself, you must run the ipa-client-install --uninstall command on the client. For details, see Uninstalling an IdM client . Verification Log into ipaserver as admin: Display information about host01.idm.example.com : The output confirms that the host does not exist in IdM. 17.6. Additional resources See the /usr/share/doc/ansible-freeipa/README-host.md Markdown file. See the additional playbooks in the /usr/share/doc/ansible-freeipa/playbooks/host directory. | [
"[ipaserver] server.idm.example.com",
"--- - name: Host present hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Host host01.idm.example.com present ipahost: ipaadmin_password: \"{{ ipaadmin_password }}\" name: host01.idm.example.com state: present force: true",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-host-is-present.yml",
"ssh [email protected] Password:",
"ipa host-show host01.idm.example.com Host name: host01.idm.example.com Principal name: host/[email protected] Principal alias: host/[email protected] Password: False Keytab: False Managed by: host01.idm.example.com",
"[ipaserver] server.idm.example.com",
"--- - name: Host present hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure host01.idm.example.com is present ipahost: ipaadmin_password: \"{{ ipaadmin_password }}\" name: host01.idm.example.com description: Example host ip_address: 192.168.0.123 locality: Lab ns_host_location: Lab ns_os_version: CentOS 7 ns_hardware_platform: Lenovo T61 mac_address: - \"08:00:27:E3:B1:2D\" - \"52:54:00:BD:97:1E\" state: present",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-host-is-present.yml",
"ssh [email protected] Password:",
"ipa host-show host01.idm.example.com Host name: host01.idm.example.com Description: Example host Locality: Lab Location: Lab Platform: Lenovo T61 Operating system: CentOS 7 Principal name: host/[email protected] Principal alias: host/[email protected] MAC address: 08:00:27:E3:B1:2D, 52:54:00:BD:97:1E Password: False Keytab: False Managed by: host01.idm.example.com",
"[ipaserver] server.idm.example.com",
"--- - name: Ensure hosts with random password hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Hosts host01.idm.example.com and host02.idm.example.com present with random passwords ipahost: ipaadmin_password: \"{{ ipaadmin_password }}\" hosts: - name: host01.idm.example.com random: true force: true - name: host02.idm.example.com random: true force: true register: ipahost",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-hosts-are-present.yml [...] TASK [Hosts host01.idm.example.com and host02.idm.example.com present with random passwords] changed: [r8server.idm.example.com] => {\"changed\": true, \"host\": {\"host01.idm.example.com\": {\"randompassword\": \"0HoIRvjUdH0Ycbf6uYdWTxH\"}, \"host02.idm.example.com\": {\"randompassword\": \"5VdLgrf3wvojmACdHC3uA3s\"}}}",
"ssh [email protected] Password:",
"ipa host-show host01.idm.example.com Host name: host01.idm.example.com Password: True Keytab: False Managed by: host01.idm.example.com",
"[ipaserver] server.idm.example.com",
"--- - name: Host member IP addresses present hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Ensure host101.example.com IP addresses present ipahost: ipaadmin_password: \"{{ ipaadmin_password }}\" name: host01.idm.example.com ip_address: - 192.168.0.123 - fe80::20c:29ff:fe02:a1b3 - 192.168.0.124 - fe80::20c:29ff:fe02:a1b4 force: true",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-host-with-multiple-IP-addreses-is-present.yml",
"ssh [email protected] Password:",
"ipa host-show host01.idm.example.com Principal name: host/[email protected] Principal alias: host/[email protected] Password: False Keytab: False Managed by: host01.idm.example.com",
"ipa dnsrecord-show idm.example.com host01 [...] Record name: host01 A record: 192.168.0.123, 192.168.0.124 AAAA record: fe80::20c:29ff:fe02:a1b3, fe80::20c:29ff:fe02:a1b4",
"[ipaserver] server.idm.example.com",
"--- - name: Host absent hosts: ipaserver vars_files: - /home/user_name/MyPlaybooks/secret.yml tasks: - name: Host host01.idm.example.com absent ipahost: ipaadmin_password: \"{{ ipaadmin_password }}\" name: host01.idm.example.com updatedns: true state: absent",
"ansible-playbook --vault-password-file=password_file -v -i path_to_inventory_directory/inventory.file path_to_playbooks_directory/ensure-host-absent.yml",
"ssh [email protected] Password: [admin@server /]USD",
"ipa host-show host01.idm.example.com ipa: ERROR: host01.idm.example.com: host not found"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/using_ansible_to_install_and_manage_identity_management/managing-hosts-using-Ansible-playbooks_using-ansible-to-install-and-manage-identity-management |
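The playbooks above reference a secret.yml vault and a password_file that the section itself does not show being created; a rough sketch of that one-time setup, with placeholder values only, might look like the following.

# Create and encrypt the vault holding the IdM admin password (values are examples only)
cd ~/MyPlaybooks
echo 'ipaadmin_password: MySecret123' > secret.yml
ansible-vault encrypt secret.yml          # prompts for a vault password
echo 'vault-password' > password_file     # referenced by --vault-password-file
chmod 600 password_file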
F.2.3. The Kernel | F.2.3. The Kernel When the kernel is loaded, it immediately initializes and configures the computer's memory and configures the various hardware attached to the system, including all processors, I/O subsystems, and storage devices. It then looks for the compressed initramfs image(s) in a predetermined location in memory, decompresses it directly to /sysroot/ , and loads all necessary drivers. Next, it initializes virtual devices related to the file system, such as LVM or software RAID, before completing the initramfs processes and freeing up all the memory the disk image once occupied. The kernel then creates a root device, mounts the root partition read-only, and frees any unused memory. At this point, the kernel is loaded into memory and operational. However, since there are no user applications that allow meaningful input to the system, not much can be done with the system. To set up the user environment, the kernel executes the /sbin/init program. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/s2-boot-init-shutdown-kernel
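If you want to inspect what the kernel unpacks from the initramfs at this stage, an optional check on a running system (not part of the original text) is to list the image contents with the dracut tooling.

# List the drivers and early user-space files packed into the current initramfs
lsinitrd /boot/initramfs-$(uname -r).img | less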
Chapter 11. Integrating Red Hat Quay into OpenShift Container Platform with the Quay Bridge Operator | Chapter 11. Integrating Red Hat Quay into OpenShift Container Platform with the Quay Bridge Operator The Quay Bridge Operator duplicates the features of the integrated OpenShift Container Platform registry into the new Red Hat Quay registry. Using the Quay Bridge Operator, you can replace the integrated container registry in OpenShift Container Platform with a Red Hat Quay registry. The features enabled with the Quay Bridge Operator include: Synchronizing OpenShift Container Platform namespaces as Red Hat Quay organizations. Creating robot accounts for each default namespace service account. Creating secrets for each created robot account, and associating each robot secret to a service account as Mountable and Image Pull Secret . Synchronizing OpenShift Container Platform image streams as Red Hat Quay repositories. Automatically rewriting new builds making use of image streams to output to Red Hat Quay. Automatically importing an image stream tag after a build completes. By using the following procedures, you can enable bi-directional communication between your Red Hat Quay and OpenShift Container Platform clusters. 11.1. Setting up Red Hat Quay for the Quay Bridge Operator In this procedure, you will create a dedicated Red Hat Quay organization, and from a new application created within that organization you will generate an OAuth token to be used with the Quay Bridge Operator in OpenShift Container Platform. Procedure Log in to Red Hat Quay through the web UI. Select the organization for which the external application will be configured. On the navigation pane, select Applications . Select Create New Application and enter a name for the new application, for example, openshift . On the OAuth Applications page, select your application, for example, openshift . On the navigation pane, select Generate Token . Select the following fields: Administer Organization Administer Repositories Create Repositories View all visible repositories Read/Write to any accessible repositories Administer User Read User Information Review the assigned permissions. Select Authorize Application and then confirm the authorization by selecting Authorize Application . Save the generated access token. Important Red Hat Quay does not offer token management. You cannot list tokens, delete tokens, or modify tokens. The generated access token is only shown once and cannot be re-obtained after closing the page. 11.2. Installing the Quay Bridge Operator on OpenShift Container Platform In this procedure, you will install the Quay Bridge Operator on OpenShift Container Platform. Prerequisites You have set up Red Hat Quay and obtained an access token. An OpenShift Container Platform 4.6 or greater environment for which you have cluster administrator permissions. Procedure Open the Administrator perspective of the web console and navigate to Operators OperatorHub on the navigation pane. Search for Quay Bridge Operator , click the Quay Bridge Operator title, and then click Install . Select the version to install, for example, stable-3.7 , and then click Install . Click View Operator when the installation finishes to go to the Quay Bridge Operator's Details page. Alternatively, you can click Installed Operators Red Hat Quay Bridge Operator to go to the Details page. 11.3. 
Creating an OpenShift Container Platform secret for the OAuth token In this procedure, you will add the previously obtained access token to communicate with your Red Hat Quay deployment. The access token will be stored within OpenShift Container Platform as a secret. Prerequisites You have set up Red Hat Quay and obtained an access token. You have deployed the Quay Bridge Operator on OpenShift Container Platform. An OpenShift Container Platform 4.6 or greater environment for which you have cluster administrator permissions. You have installed the OpenShift CLI (oc). Procedure Create a secret that contains the access token in the openshift-operators namespace: USD oc create secret -n openshift-operators generic <secret-name> --from-literal=token=<access_token> 11.4. Creating the QuayIntegration custom resource In this procedure, you will create a QuayIntegration custom resource, which can be completed from either the web console or from the command line. Prerequisites You have set up Red Hat Quay and obtained an access token. You have deployed the Quay Bridge Operator on OpenShift Container Platform. An OpenShift Container Platform 4.6 or greater environment for which you have cluster administrator permissions. Optional: You have installed the OpenShift CLI (oc). 11.4.1. Optional: Creating the QuayIntegration custom resource using the CLI Follow this procedure to create the QuayIntegration custom resource using the command line. Procedure Create a quay-integration.yaml : Use the following configuration for a minimal deployment of the QuayIntegration custom resource: apiVersion: quay.redhat.com/v1 kind: QuayIntegration metadata: name: example-quayintegration spec: clusterID: openshift 1 credentialsSecret: namespace: openshift-operators name: quay-integration 2 quayHostname: https://<QUAY_URL> 3 insecureRegistry: false 4 1 The clusterID value should be unique across the entire ecosystem. This value is required and defaults to openshift . 2 The credentialsSecret property refers to the namespace and name of the secret containing the token that was previously created. 3 Replace the QUAY_URL with the hostname of your Red Hat Quay instance. 4 If Red Hat Quay is using self signed certificates, set the property to insecureRegistry: true . For a list of all configuration fields, see "QuayIntegration configuration fields". Create the QuayIntegration custom resource: 11.4.2. Optional: Creating the QuayIntegration custom resource using the web console Follow this procedure to create the QuayIntegration custom resource using the web console. Procedure Open the Administrator perspective of the web console and navigate to Operators Installed Operators . Click Red Hat Quay Bridge Operator . On the Details page of the Quay Bridge Operator, click Create Instance on the Quay Integration API card. On the Create QuayIntegration page, enter the following required information in either Form view or YAML view : Name : The name that will refer to the QuayIntegration custom resource object. Cluster ID : The ID associated with this cluster. This value should be unique across the entire ecosystem. Defaults to openshift if left unspecified. Credentials secret : Refers to the namespace and name of the secret containing the token that was previously created. Quay hostname : The hostname of the Quay registry. For a list of all configuration fields, see " QuayIntegration configuration fields ". 
After the QuayIntegration custom resource is created, your OpenShift Container Platform cluster will be linked to your Red Hat Quay instance. Organizations within your Red Hat Quay registry should be created for the related namespace for the OpenShift Container Platform environment. 11.5. Using Quay Bridge Operator Use the following procedure to use the Quay Bridge Operator. Prerequisites You have installed the Red Hat Quay Operator. You have logged into OpenShift Container Platform as a cluster administrator. You have logged into your Red Hat Quay registry. You have installed the Quay Bridge Operator. You have configured the QuayIntegration custom resource. Procedure Enter the following command to create a new OpenShift Container Platform project called e2e-demo : USD oc new-project e2e-demo After you have created a new project, a new Organization is created in Red Hat Quay. Navigate to the Red Hat Quay registry and confirm that you have created a new Organization named openshift_e2e-demo . Note The openshift value of the Organization might be different if the clusterID in your QuayIntegration resource used a different value. On the Red Hat Quay UI, click the name of the new Organization, for example, openshift_e2e-demo . Click Robot Accounts in the navigation pane. As part of the new project, the following Robot Accounts should have been created: openshift_e2e-demo+deployer openshift_e2e-demo+default openshift_e2e-demo+builder Enter the following command to confirm that three secrets containing Docker configuration associated with the applicable Robot Accounts were created: USD oc get secrets builder-quay-openshift deployer-quay-openshift default-quay-openshift Example output stevsmit@stevsmit ocp-quay USD oc get secrets builder-quay-openshift deployer-quay-openshift default-quay-openshift NAME TYPE DATA AGE builder-quay-openshift kubernetes.io/dockerconfigjson 1 77m deployer-quay-openshift kubernetes.io/dockerconfigjson 1 77m default-quay-openshift kubernetes.io/dockerconfigjson 1 77m Enter the following command to display detailed information about the builder ServiceAccount (SA), including its secrets, token expiration, and associated roles and role bindings. This ensures that the project is integrated via the Quay Bridge Operator. USD oc describe sa builder default deployer Example output ... Name: builder Namespace: e2e-demo Labels: <none> Annotations: <none> Image pull secrets: builder-dockercfg-12345 builder-quay-openshift Mountable secrets: builder-dockercfg-12345 builder-quay-openshift Tokens: builder-token-12345 Events: <none> ... Enter the following command to create and deploy a new application called httpd-template : USD oc new-app --template=httpd-example Example output --> Deploying template "e2e-demo/httpd-example" to project e2e-demo ... --> Creating resources ... service "httpd-example" created route.route.openshift.io "httpd-example" created imagestream.image.openshift.io "httpd-example" created buildconfig.build.openshift.io "httpd-example" created deploymentconfig.apps.openshift.io "httpd-example" created --> Success Access your application via route 'httpd-example-e2e-demo.apps.quay-ocp.gcp.quaydev.org' Build scheduled, use 'oc logs -f buildconfig/httpd-example' to track its progress. Run 'oc status' to view your app. After running this command, BuildConfig , ImageStream , Service, Route , and DeploymentConfig resources are created. When the ImageStream resource is created, an associated repository is created in Red Hat Quay. 
For example: The ImageChangeTrigger for the BuildConfig triggers a new Build when the Apache HTTPD image, located in the openshift namespace, is resolved. As the new Build is created, the MutatingWebhookConfiguration automatically rewrites the output to point at Red Hat Quay. You can confirm that the build is complete by querying the output field of the build by running the following command: USD oc get build httpd-example-1 --template='{{ .spec.output.to.name }}' Example output example-registry-quay-quay-enterprise.apps.quay-ocp.gcp.quaydev.org/openshift_e2e-demo/httpd-example:latest On the Red Hat Quay UI, navigate to the openshift_e2e-demo Organization and select the httpd-example repository. Click Tags in the navigation pane and confirm that the latest tag has been successfully pushed. Enter the following command to ensure that the latest tag has been resolved: USD oc describe is httpd-example Example output Name: httpd-example Namespace: e2e-demo Created: 55 minutes ago Labels: app=httpd-example template=httpd-example Description: Keeps track of changes in the application image Annotations: openshift.io/generated-by=OpenShiftNewApp openshift.io/image.dockerRepositoryCheck=2023-10-02T17:56:45Z Image Repository: image-registry.openshift-image-registry.svc:5000/e2e-demo/httpd-example Image Lookup: local=false Unique Images: 0 Tags: 1 latest tagged from example-registry-quay-quay-enterprise.apps.quay-ocp.gcp.quaydev.org/openshift_e2e-demo/httpd-example:latest After the ImageStream is resolved, a new deployment should have been triggered. Enter the following command to generate a URL output: USD oc get route httpd-example --template='{{ .spec.host }}' Example output httpd-example-e2e-demo.apps.quay-ocp.gcp.quaydev.org Navigate to the URL. If a sample webpage appears, the deployment was successful. Enter the following command to delete the resources and clean up your Red Hat Quay repository: USD oc delete project e2e-demo Note The command waits until the project resources have been removed. This can be bypassed by adding the --wait=false option to the above command. After the command completes, navigate to your Red Hat Quay repository and confirm that the openshift_e2e-demo Organization is no longer available. Additional resources Best practices dictate that all communication between a client and an image registry be facilitated through secure means. Communication should leverage HTTPS/TLS with a certificate trust between the parties. While Red Hat Quay can be configured to serve an insecure configuration, proper certificates should be utilized on the server and configured on the client. Follow the OpenShift Container Platform documentation for adding and managing certificates at the container runtime level. | [
"oc create secret -n openshift-operators generic <secret-name> --from-literal=token=<access_token>",
"touch quay-integration.yaml",
"apiVersion: quay.redhat.com/v1 kind: QuayIntegration metadata: name: example-quayintegration spec: clusterID: openshift 1 credentialsSecret: namespace: openshift-operators name: quay-integration 2 quayHostname: https://<QUAY_URL> 3 insecureRegistry: false 4",
"oc create -f quay-integration.yaml",
"oc new-project e2e-demo",
"oc get secrets builder-quay-openshift deployer-quay-openshift default-quay-openshift",
"stevsmit@stevsmit ocp-quay USD oc get secrets builder-quay-openshift deployer-quay-openshift default-quay-openshift NAME TYPE DATA AGE builder-quay-openshift kubernetes.io/dockerconfigjson 1 77m deployer-quay-openshift kubernetes.io/dockerconfigjson 1 77m default-quay-openshift kubernetes.io/dockerconfigjson 1 77m",
"oc describe sa builder default deployer",
"Name: builder Namespace: e2e-demo Labels: <none> Annotations: <none> Image pull secrets: builder-dockercfg-12345 builder-quay-openshift Mountable secrets: builder-dockercfg-12345 builder-quay-openshift Tokens: builder-token-12345 Events: <none>",
"oc new-app --template=httpd-example",
"--> Deploying template \"e2e-demo/httpd-example\" to project e2e-demo --> Creating resources service \"httpd-example\" created route.route.openshift.io \"httpd-example\" created imagestream.image.openshift.io \"httpd-example\" created buildconfig.build.openshift.io \"httpd-example\" created deploymentconfig.apps.openshift.io \"httpd-example\" created --> Success Access your application via route 'httpd-example-e2e-demo.apps.quay-ocp.gcp.quaydev.org' Build scheduled, use 'oc logs -f buildconfig/httpd-example' to track its progress. Run 'oc status' to view your app.",
"oc get build httpd-example-1 --template='{{ .spec.output.to.name }}'",
"example-registry-quay-quay-enterprise.apps.quay-ocp.gcp.quaydev.org/openshift_e2e-demo/httpd-example:latest",
"oc describe is httpd-example",
"Name: httpd-example Namespace: e2e-demo Created: 55 minutes ago Labels: app=httpd-example template=httpd-example Description: Keeps track of changes in the application image Annotations: openshift.io/generated-by=OpenShiftNewApp openshift.io/image.dockerRepositoryCheck=2023-10-02T17:56:45Z Image Repository: image-registry.openshift-image-registry.svc:5000/e2e-demo/httpd-example Image Lookup: local=false Unique Images: 0 Tags: 1 latest tagged from example-registry-quay-quay-enterprise.apps.quay-ocp.gcp.quaydev.org/openshift_e2e-demo/httpd-example:latest",
"oc get route httpd-example --template='{{ .spec.host }}'",
"httpd-example-e2e-demo.apps.quay-ocp.gcp.quaydev.org",
"oc delete project e2e-demo"
]
| https://docs.redhat.com/en/documentation/red_hat_quay/3.13/html/red_hat_quay_operator_features/quay-bridge-operator |
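As a hedged illustration of the certificate guidance in the Additional resources note above, the following sketch shows one common way to make OpenShift Container Platform trust a Red Hat Quay registry that is signed by a custom certificate authority. The registry hostname, port, and file path are placeholder assumptions for this example; confirm the exact steps against the OpenShift Container Platform documentation for your version.
USD oc create configmap registry-cas -n openshift-config --from-file=example-registry.example.com..8443=/tmp/quay-ca.crt
USD oc patch image.config.openshift.io/cluster --type=merge --patch '{"spec":{"additionalTrustedCA":{"name":"registry-cas"}}}'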
5.10. Configuring IP Address Masquerading | 5.10. Configuring IP Address Masquerading IP masquerading is a process where one computer acts as an IP gateway for a network. For masquerading, the gateway dynamically looks up the IP of the outgoing interface all the time and replaces the source address in the packets with this address. You use masquerading if the IP of the outgoing interface can change. A typical use case for masquerading is if a router replaces the private IP addresses, which are not routed on the internet, with the public dynamic IP address of the outgoing interface on the router. To check if IP masquerading is enabled (for example, for the external zone), enter the following command as root : The command prints yes with exit status 0 if enabled. It prints no with exit status 1 otherwise. If zone is omitted, the default zone will be used. To enable IP masquerading, enter the following command as root : To make this setting persistent, repeat the command adding the --permanent option. To disable IP masquerading, enter the following command as root : To make this setting persistent, repeat the command adding the --permanent option. For more information, see: Section 6.3.1, "The different NAT types: masquerading, source NAT, destination NAT, and redirect" Section 6.3.2, "Configuring masquerading using nftables" | [
"~]# firewall-cmd --zone=external --query-masquerade",
"~]# firewall-cmd --zone=external --add-masquerade",
"~]# firewall-cmd --zone=external --remove-masquerade"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/security_guide/sec-configuring_ip_address_masquerading |
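To make the persistence note above concrete, the following brief sketch shows the permanent variant of the masquerading command for the external zone, followed by a reload so that the runtime configuration picks up the change; it assumes the same external zone used in the examples above.
~]# firewall-cmd --permanent --zone=external --add-masquerade
~]# firewall-cmd --reload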
4.182. mod_nss | 4.182.1. RHBA-2011:1656 - mod_nss bug fix update An updated mod_nss package that fixes several bugs is now available for Red Hat Enterprise Linux 6. The mod_nss module provides strong cryptography for the Apache HTTP Server via the Secure Sockets Layer (SSL) and Transport Layer Security (TLS) protocols, using the Network Security Services (NSS) security library. Bug Fixes BZ# 691502 When the NSS library was not initialized and mod_nss tried to clear its SSL cache on start-up, mod_nss terminated unexpectedly when the NSS library was built with debugging enabled. With this update, mod_nss does not try to clear the SSL cache in the described scenario, thus preventing this bug. BZ# 714154 Previously, a static array containing the arguments for launching the nss_pcache command was overflowing the size by one. This could lead to a variety of issues including unexpected termination. This bug has been fixed, and mod_nss now uses a properly sized static array when launching nss_pcache. BZ# 702437 Prior to this update, client certificates were only retrieved during the initial SSL handshake if the NSSVerifyClient option was set to "require" or "optional". Also, the FakeBasicAuth option only retrieved the Common Name rather than the entire certificate subject. Consequently, it was possible to spoof an identity using that option. This bug has been fixed: the FakeBasicAuth option is now prefixed with "/" and is thus compatible with OpenSSL, and certificates are now retrieved on all subsequent requests beyond the first one. Users of mod_nss are advised to upgrade to this updated package, which fixes these bugs. 4.182.2. RHBA-2012:0394 - mod_nss bug fix update An updated mod_nss package that fixes two bugs is now available for Red Hat Enterprise Linux 6. The mod_nss module provides strong cryptography for the Apache HTTP Server via the Secure Sockets Layer (SSL) and Transport Layer Security (TLS) protocols, using the Network Security Services (NSS) security library. Bug Fixes BZ# 800270 , BZ# 800271 The RHBA-2011:1656 errata advisory released a patch that fixed a problem of mod_nss crashing when clearing its SSL cache on startup without the NSS library initialized. However, that patch placed the fix in the improper location in the code, which caused a file descriptor leak in the Apache httpd daemon. With this update, the necessary fix has been relocated to the appropriate location in the code so that the problem is fixed and the file descriptor leak no longer occurs. All users of mod_nss are advised to upgrade to this updated package, which fixes these bugs. | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/mod_nss
Chapter 1. Red Hat build of OpenJDK overview | Chapter 1. Red Hat build of OpenJDK overview The Red Hat build of OpenJDK is a free and open source implementation of the Java Platform, Standard Edition (Java SE). It is based on the upstream OpenJDK 8u, OpenJDK 11u, and OpenJDK 17u projects and includes the Shenandoah Garbage Collector in all versions. Multi-platform - The Red Hat build of OpenJDK is now supported on Windows and RHEL. This helps you standardize on a single Java platform across desktop, datacenter, and hybrid cloud. Frequent releases - Red Hat delivers quarterly updates of JRE and JDK for the Red Hat build of OpenJDK 8, Red Hat build of OpenJDK 11, and Red Hat build of OpenJDK 17 distributions. These are available as rpm , portables, msi , zip files and containers. Long-term support - Red Hat supports the recently released Red Hat build of OpenJDK 8, Red Hat build of OpenJDK 11, and Red Hat build of OpenJDK 17 distributions. For more information about the support lifecycle, see OpenJDK Life Cycle and Support Policy . Java Web Start - Red Hat build of OpenJDK supports Java Web Start for RHEL. | null | https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/getting_started_with_red_hat_build_of_openjdk_17/openjdk-overview |
Chapter 5. Security considerations | Chapter 5. Security considerations 5.1. FIPS-140-2 The Federal Information Processing Standard Publication 140-2 (FIPS-140-2) is a standard that defines a set of security requirements for the use of cryptographic modules. This standard is mandated by law for US government agencies and contractors and is also referenced in other international and industry-specific standards. Red Hat OpenShift Data Foundation now uses the FIPS validated cryptographic modules. Red Hat Enterprise Linux OS/CoreOS (RHCOS) delivers these modules. Currently, the Cryptographic Module Validation Program (CMVP) processes the cryptography modules. You can see the state of these modules at Modules in Process List . For more up-to-date information, see the Red Hat Knowledgebase solution RHEL core crypto components . Note Enable the FIPS mode on the OpenShift Container Platform before you install OpenShift Data Foundation. OpenShift Container Platform must run on the RHCOS nodes, as the feature does not support OpenShift Data Foundation deployment on Red Hat Enterprise Linux 7 (RHEL 7). For more information, see Installing a cluster in FIPS mode and Support for FIPS cryptography of the Installing guide in OpenShift Container Platform documentation. 5.2. Proxy environment A proxy environment is a production environment that denies direct access to the internet and provides an available HTTP or HTTPS proxy instead. Red Hat OpenShift Container Platform is configured to use a proxy by modifying the proxy object for existing clusters or by configuring the proxy settings in the install-config.yaml file for new clusters. Red Hat supports deployment of OpenShift Data Foundation in proxy environments when OpenShift Container Platform has been configured according to configuring the cluster-wide proxy . 5.3. Data encryption options Encryption lets you encode your data to make it impossible to read without the required encryption keys. This mechanism protects the confidentiality of your data in the event of a physical security breach that results in physical media escaping your custody. The per-PV encryption also provides access protection from other namespaces inside the same OpenShift Container Platform cluster. Data is encrypted when it is written to the disk, and decrypted when it is read from the disk. Working with encrypted data might incur a small penalty to performance. Encryption is only supported for new clusters deployed using Red Hat OpenShift Data Foundation 4.6 or higher. An existing encrypted cluster that is not using an external Key Management System (KMS) cannot be migrated to use an external KMS. Previously, HashiCorp Vault was the only supported KMS for Cluster-wide and Persistent Volume encryption. With OpenShift Data Foundation 4.7.0 and 4.7.1, only HashiCorp Vault Key/Value (KV) secret engine API, version 1 is supported. Starting with OpenShift Data Foundation 4.7.2, HashiCorp Vault KV secret engine API, versions 1 and 2 are supported. As of OpenShift Data Foundation 4.12, Thales CipherTrust Manager has been introduced as an additional supported KMS. Important KMS is required for StorageClass encryption, and is optional for cluster-wide encryption. Storage class encryption requires a valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions . Red Hat works with the technology partners to provide this documentation as a service to the customers.
However, Red Hat does not provide support for the Hashicorp product. For technical assistance with this product, contact Hashicorp . 5.3.1. Cluster-wide encryption Red Hat OpenShift Data Foundation supports cluster-wide encryption (encryption-at-rest) for all the disks and Multicloud Object Gateway operations in the storage cluster. OpenShift Data Foundation uses Linux Unified Key Setup (LUKS) version 2 based encryption with a key size of 512 bits and the aes-xts-plain64 cipher where each device has a different encryption key. The keys are stored using a Kubernetes secret or an external KMS. Both methods are mutually exclusive and you cannot migrate between methods. Encryption is disabled by default for block and file storage. You can enable encryption for the cluster at the time of deployment. The MultiCloud Object Gateway supports encryption by default. See the deployment guides for more information. Cluster-wide encryption is supported in OpenShift Data Foundation 4.6 without a Key Management System (KMS). Starting with OpenShift Data Foundation 4.7, it is supported with or without HashiCorp Vault KMS. Starting with OpenShift Data Foundation 4.12, it is supported with or without both HashiCorp Vault KMS and Thales CipherTrust Manager KMS. Note Requires a valid Red Hat OpenShift Data Foundation Advanced subscription. To know how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . Cluster-wide encryption with HashiCorp Vault KMS provides two authentication methods: Token : This method allows authentication using vault tokens. A Kubernetes secret containing the vault token is created in the openshift-storage namespace and is used for authentication. If this authentication method is selected then the administrator has to provide the vault token that provides access to the backend path in Vault, where the encryption keys are stored. Kubernetes : This method allows authentication with Vault using service accounts. If this authentication method is selected then the administrator has to provide the name of the role configured in Vault that provides access to the backend path, where the encryption keys are stored. The value of this role is then added to the ocs-kms-connection-details config map. This method is available from OpenShift Data Foundation 4.10. Currently, HashiCorp Vault is the only supported KMS. With OpenShift Data Foundation 4.7.0 and 4.7.1, only HashiCorp Vault KV secret engine, API version 1 is supported. Starting with OpenShift Data Foundation 4.7.2, HashiCorp Vault KV secret engine API, versions 1 and 2 are supported. Note OpenShift Data Foundation on IBM Cloud platform supports Hyper Protect Crypto Services (HPCS) Key Management Services (KMS) as the encryption solution in addition to HashiCorp Vault KMS. Important Red Hat works with the technology partners to provide this documentation as a service to the customers. However, Red Hat does not provide support for the Hashicorp product. For technical assistance with this product, contact Hashicorp . 5.3.2. Storage class encryption You can encrypt persistent volumes (block only) with storage class encryption using an external Key Management System (KMS) to store device encryption keys. Persistent volume encryption is only available for RADOS Block Device (RBD) persistent volumes. See how to create a storage class with persistent volume encryption . Storage class encryption is supported in OpenShift Data Foundation 4.7 or higher with HashiCorp Vault KMS.
Storage class encryption is supported in OpenShift Data Foundation 4.12 or higher with both HashiCorp Vault KMS and Thales CipherTrust Manager KMS. Note Requires a valid Red Hat OpenShift Data Foundation Advanced subscription. To know how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . 5.3.3. CipherTrust manager Red Hat OpenShift Data Foundation version 4.12 introduced Thales CipherTrust Manager as an additional Key Management System (KMS) provider for your deployment. Thales CipherTrust Manager provides centralized key lifecycle management. CipherTrust Manager supports Key Management Interoperability Protocol (KMIP), which enables communication between key management systems. CipherTrust Manager is enabled during deployment. 5.3.4. Data encryption in-transit via Red Hat Ceph Storage's messenger version 2 protocol Starting with OpenShift Data Foundation version 4.14, Red Hat Ceph Storage's messenger version 2 protocol can be used to encrypt data in-transit. This provides an important security requirement for your infrastructure. In-transit encryption can be enabled during deployment. 5.4. Encryption in Transit You need to enable IPsec so that all the network traffic between the nodes on the OVN-Kubernetes Container Network Interface (CNI) cluster network travels through an encrypted tunnel. By default, IPsec is disabled. You can enable it either during or after installing the cluster. If you need to enable IPsec after cluster installation, you must first resize your cluster MTU to account for the overhead of the IPsec ESP IP header. For more information on how to configure the IPsec encryption, see Configuring IPsec encryption of the Networking guide in OpenShift Container Platform documentation. | null | https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.15/html/planning_your_deployment/security-considerations_rhodf |
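To make the storage class encryption description above more concrete, the following is a minimal, illustrative sketch of an encrypted RADOS Block Device storage class. It is not taken from this guide: the storage class name, the provisioner, the pool, and the 1-vault KMS connection ID are assumptions that vary by deployment and OpenShift Data Foundation version, and required CSI secret parameters are omitted for brevity, so follow the linked procedure for the authoritative steps.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ocs-storagecluster-ceph-rbd-encrypted    # hypothetical name
provisioner: openshift-storage.rbd.csi.ceph.com  # assumed RBD CSI provisioner
parameters:
  clusterID: openshift-storage                   # assumed cluster ID
  pool: ocs-storagecluster-cephblockpool         # assumed RBD pool
  encrypted: "true"                              # enables per-PV encryption
  encryptionKMSID: 1-vault                       # assumed ID of the configured KMS connection
reclaimPolicy: Delete
allowVolumeExpansion: true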
Chapter 70. JmxTransTemplate schema reference | Chapter 70. JmxTransTemplate schema reference Used in: JmxTransSpec Property Property type Description deployment DeploymentTemplate Template for JmxTrans Deployment . pod PodTemplate Template for JmxTrans Pods . container ContainerTemplate Template for JmxTrans container. serviceAccount ResourceTemplate Template for the JmxTrans service account. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-jmxtranstemplate-reference |
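As an illustrative sketch only, the following shows where a JmxTransTemplate sits inside a Kafka custom resource; the my-cluster name and the app: jmxtrans label are assumptions, and the required jmxTrans output definitions and Kafka queries are omitted for brevity.
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster              # hypothetical cluster name
spec:
  # kafka, zookeeper, and the required jmxTrans outputDefinitions
  # and kafkaQueries are omitted for brevity
  jmxTrans:
    template:
      pod:
        metadata:
          labels:
            app: jmxtrans       # example label applied to the JmxTrans pods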
3.5. Configuring the Cluster Resources | 3.5. Configuring the Cluster Resources This section provides the procedure for configuring the cluster resources for this use case. Note It is recommended that when you create a cluster resource with the pcs resource create command, you execute the pcs status command immediately afterwards to verify that the resource is running. Note that if you have not configured a fencing device for your cluster, as described in Section 1.3, "Fencing Configuration" , by default the resources do not start. If you find that the resources you configured are not running, you can run the pcs resource debug-start resource command to test the resource configuration. This starts the service outside of the cluster's control and knowledge. When the configured resources are running again, run pcs resource cleanup resource to make the cluster aware of the updates. For information on the pcs resource debug-start command, see the High Availability Add-On Reference manual. The following procedure configures the system resources. To ensure these resources all run on the same node, they are configured as part of the resource group nfsgroup . The resources will start in the order in which you add them to the group, and they will stop in the reverse order in which they are added to the group. Run this procedure from one node of the cluster only. The following command creates the LVM resource named my_lvm . This command specifies the exclusive=true parameter to ensure that only the cluster is capable of activating the LVM logical volume. Because the resource group nfsgroup does not yet exist, this command creates the resource group. Check the status of the cluster to verify that the resource is running. Configure a Filesystem resource for the cluster. Note You can specify mount options as part of the resource configuration for a Filesystem resource with the options= options parameter. Run the pcs resource describe Filesystem command for full configuration options. The following command configures an ext4 Filesystem resource named nfsshare as part of the nfsgroup resource group. This file system uses the LVM volume group and ext4 file system you created in Section 3.2, "Configuring an LVM Volume with an ext4 File System" and will be mounted on the /nfsshare directory you created in Section 3.3, "NFS Share Setup" . Verify that the my_lvm and nfsshare resources are running. Create the nfsserver resource named nfs-daemon as part of the resource group nfsgroup . Note The nfsserver resource allows you to specify an nfs_shared_infodir parameter, which is a directory that NFS daemons will use to store NFS-related stateful information. It is recommended that this attribute be set to a subdirectory of one of the Filesystem resources you created in this collection of exports. This ensures that the NFS daemons are storing their stateful information on a device that will become available to another node if this resource group should need to relocate. In this example, /nfsshare is the shared-storage directory managed by the Filesystem resource, /nfsshare/exports/export1 and /nfsshare/exports/export2 are the export directories, and /nfsshare/nfsinfo is the shared-information directory for the nfsserver resource. Add the exportfs resources to export the /nfsshare/exports directory. These resources are part of the resource group nfsgroup . This builds a virtual directory for NFSv4 clients. NFSv3 clients can access these exports as well.
Add the floating IP address resource that NFS clients will use to access the NFS share. The floating IP address that you specify requires a reverse DNS lookup or it must be specified in the /etc/hosts on all nodes in the cluster. This resource is part of the resource group nfsgroup . For this example deployment, we are using 192.168.122.200 as the floating IP address. Add an nfsnotify resource for sending NFSv3 reboot notifications once the entire NFS deployment has initialized. This resource is part of the resource group nfsgroup . Note For the NFS notification to be processed correctly, the floating IP address must have a host name associated with it that is consistent on both the NFS servers and the NFS client. After creating the resources and the resource constraints, you can check the status of the cluster. Note that all resources are running on the same node. | [
"pcs resource create my_lvm LVM volgrpname=my_vg exclusive=true --group nfsgroup",
"root@z1 ~]# pcs status Cluster name: my_cluster Last updated: Thu Jan 8 11:13:17 2015 Last change: Thu Jan 8 11:13:08 2015 Stack: corosync Current DC: z2.example.com (2) - partition with quorum Version: 1.1.12-a14efad 2 Nodes configured 3 Resources configured Online: [ z1.example.com z2.example.com ] Full list of resources: myapc (stonith:fence_apc_snmp): Started z1.example.com Resource Group: nfsgroup my_lvm (ocf::heartbeat:LVM): Started z1.example.com PCSD Status: z1.example.com: Online z2.example.com: Online Daemon Status: corosync: active/enabled pacemaker: active/enabled pcsd: active/enabled",
"pcs resource create nfsshare Filesystem device=/dev/my_vg/my_lv directory=/nfsshare fstype=ext4 --group nfsgroup",
"pcs status Full list of resources: myapc (stonith:fence_apc_snmp): Started z1.example.com Resource Group: nfsgroup my_lvm (ocf::heartbeat:LVM): Started z1.example.com nfsshare (ocf::heartbeat:Filesystem): Started z1.example.com",
"pcs resource create nfs-daemon nfsserver nfs_shared_infodir=/nfsshare/nfsinfo nfs_no_notify=true --group nfsgroup pcs status",
"pcs resource create nfs-root exportfs clientspec=192.168.122.0/255.255.255.0 options=rw,sync,no_root_squash directory=/nfsshare/exports fsid=0 --group nfsgroup # pcs resource create nfs-export1 exportfs clientspec=192.168.122.0/255.255.255.0 options=rw,sync,no_root_squash directory=/nfsshare/exports/export1 fsid=1 --group nfsgroup # pcs resource create nfs-export2 exportfs clientspec=192.168.122.0/255.255.255.0 options=rw,sync,no_root_squash directory=/nfsshare/exports/export2 fsid=2 --group nfsgroup",
"pcs resource create nfs_ip IPaddr2 ip=192.168.122.200 cidr_netmask=24 --group nfsgroup",
"pcs resource create nfs-notify nfsnotify source_host=192.168.122.200 --group nfsgroup",
"pcs status Full list of resources: myapc (stonith:fence_apc_snmp): Started z1.example.com Resource Group: nfsgroup my_lvm (ocf::heartbeat:LVM): Started z1.example.com nfsshare (ocf::heartbeat:Filesystem): Started z1.example.com nfs-daemon (ocf::heartbeat:nfsserver): Started z1.example.com nfs-root (ocf::heartbeat:exportfs): Started z1.example.com nfs-export1 (ocf::heartbeat:exportfs): Started z1.example.com nfs-export2 (ocf::heartbeat:exportfs): Started z1.example.com nfs_ip (ocf::heartbeat:IPaddr2): Started z1.example.com nfs-notify (ocf::heartbeat:nfsnotify): Started z1.example.com"
]
| https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_administration/s1-resourcegroupcreatenfs-HAAA |
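As an informal follow-up to this procedure (not part of the commands above), you can verify the exports from a separate client system once all resources are started. This sketch assumes the 192.168.122.200 floating IP address and the export1 export used in the example; mount options and paths may need to be adapted to your environment.
# showmount -e 192.168.122.200
# mkdir /mnt/nfs-test
# mount -o "vers=4" 192.168.122.200:export1 /mnt/nfs-test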
Appendix A. Broker configuration parameters | Appendix A. Broker configuration parameters advertised.listeners Type: string Default: null Importance: high Dynamic update: per-broker Listeners to publish to ZooKeeper for clients to use, if different than the listeners config property. In IaaS environments, this may need to be different from the interface to which the broker binds. If this is not set, the value for listeners will be used. Unlike listeners , it is not valid to advertise the 0.0.0.0 meta-address. Also unlike listeners , there can be duplicated ports in this property, so that one listener can be configured to advertise another listener's address. This can be useful in some cases where external load balancers are used. auto.create.topics.enable Type: boolean Default: true Importance: high Dynamic update: read-only Enable auto creation of topic on the server. auto.leader.rebalance.enable Type: boolean Default: true Importance: high Dynamic update: read-only Enables auto leader balancing. A background thread checks the distribution of partition leaders at regular intervals, configurable by leader.imbalance.check.interval.seconds . If the leader imbalance exceeds leader.imbalance.per.broker.percentage , leader rebalance to the preferred leader for partitions is triggered. background.threads Type: int Default: 10 Valid Values: [1,... ] Importance: high Dynamic update: cluster-wide The number of threads to use for various background processing tasks. broker.id Type: int Default: -1 Importance: high Dynamic update: read-only The broker id for this server. If unset, a unique broker id will be generated.To avoid conflicts between zookeeper generated broker id's and user configured broker id's, generated broker ids start from reserved.broker.max.id + 1. compression.type Type: string Default: producer Importance: high Dynamic update: cluster-wide Specify the final compression type for a given topic. This configuration accepts the standard compression codecs ('gzip', 'snappy', 'lz4', 'zstd'). It additionally accepts 'uncompressed' which is equivalent to no compression; and 'producer' which means retain the original compression codec set by the producer. control.plane.listener.name Type: string Default: null Importance: high Dynamic update: read-only Name of listener used for communication between controller and brokers. Broker will use the control.plane.listener.name to locate the endpoint in listeners list, to listen for connections from the controller. For example, if a broker's config is : listeners = INTERNAL://192.1.1.8:9092, EXTERNAL://10.1.1.5:9093, CONTROLLER://192.1.1.8:9094 listener.security.protocol.map = INTERNAL:PLAINTEXT, EXTERNAL:SSL, CONTROLLER:SSL control.plane.listener.name = CONTROLLER On startup, the broker will start listening on "192.1.1.8:9094" with security protocol "SSL". On controller side, when it discovers a broker's published endpoints through zookeeper, it will use the control.plane.listener.name to find the endpoint, which it will use to establish connection to the broker. For example, if the broker's published endpoints on zookeeper are : "endpoints" : ["INTERNAL://broker1.example.com:9092","EXTERNAL://broker1.example.com:9093","CONTROLLER://broker1.example.com:9094"] and the controller's config is : listener.security.protocol.map = INTERNAL:PLAINTEXT, EXTERNAL:SSL, CONTROLLER:SSL control.plane.listener.name = CONTROLLER then controller will use "broker1.example.com:9094" with security protocol "SSL" to connect to the broker. 
If not explicitly configured, the default value will be null and there will be no dedicated endpoints for controller connections. controller.listener.names Type: string Default: null Importance: high Dynamic update: read-only A comma-separated list of the names of the listeners used by the controller. This is required if running in KRaft mode. The ZK-based controller will not use this configuration. controller.quorum.election.backoff.max.ms Type: int Default: 1000 (1 second) Importance: high Dynamic update: read-only Maximum time in milliseconds before starting new elections. This is used in the binary exponential backoff mechanism that helps prevent gridlocked elections. controller.quorum.election.timeout.ms Type: int Default: 1000 (1 second) Importance: high Dynamic update: read-only Maximum time in milliseconds to wait without being able to fetch from the leader before triggering a new election. controller.quorum.fetch.timeout.ms Type: int Default: 2000 (2 seconds) Importance: high Dynamic update: read-only Maximum time without a successful fetch from the current leader before becoming a candidate and triggering a election for voters; Maximum time without receiving fetch from a majority of the quorum before asking around to see if there's a new epoch for leader. controller.quorum.voters Type: list Default: "" Valid Values: non-empty list Importance: high Dynamic update: read-only Map of id/endpoint information for the set of voters in a comma-separated list of {id}@{host}:{port} entries. For example: 1@localhost:9092,2@localhost:9093,3@localhost:9094 . delete.topic.enable Type: boolean Default: true Importance: high Dynamic update: read-only Enables delete topic. Delete topic through the admin tool will have no effect if this config is turned off. leader.imbalance.check.interval.seconds Type: long Default: 300 Importance: high Dynamic update: read-only The frequency with which the partition rebalance check is triggered by the controller. leader.imbalance.per.broker.percentage Type: int Default: 10 Importance: high Dynamic update: read-only The ratio of leader imbalance allowed per broker. The controller would trigger a leader balance if it goes above this value per broker. The value is specified in percentage. listeners Type: string Default: PLAINTEXT://:9092 Importance: high Dynamic update: per-broker Listener List - Comma-separated list of URIs we will listen on and the listener names. If the listener name is not a security protocol, listener.security.protocol.map must also be set. Listener names and port numbers must be unique. Specify hostname as 0.0.0.0 to bind to all interfaces. Leave hostname empty to bind to default interface. Examples of legal listener lists: PLAINTEXT://myhost:9092,SSL://:9091 CLIENT://0.0.0.0:9092,REPLICATION://localhost:9093. log.dir Type: string Default: /tmp/kafka-logs Importance: high Dynamic update: read-only The directory in which the log data is kept (supplemental for log.dirs property). log.dirs Type: string Default: null Importance: high Dynamic update: read-only The directories in which the log data is kept. If not set, the value in log.dir is used. log.flush.interval.messages Type: long Default: 9223372036854775807 Valid Values: [1,... ] Importance: high Dynamic update: cluster-wide The number of messages accumulated on a log partition before messages are flushed to disk. 
log.flush.interval.ms Type: long Default: null Importance: high Dynamic update: cluster-wide The maximum time in ms that a message in any topic is kept in memory before flushed to disk. If not set, the value in log.flush.scheduler.interval.ms is used. log.flush.offset.checkpoint.interval.ms Type: int Default: 60000 (1 minute) Valid Values: [0,... ] Importance: high Dynamic update: read-only The frequency with which we update the persistent record of the last flush which acts as the log recovery point. log.flush.scheduler.interval.ms Type: long Default: 9223372036854775807 Importance: high Dynamic update: read-only The frequency in ms that the log flusher checks whether any log needs to be flushed to disk. log.flush.start.offset.checkpoint.interval.ms Type: int Default: 60000 (1 minute) Valid Values: [0,... ] Importance: high Dynamic update: read-only The frequency with which we update the persistent record of log start offset. log.retention.bytes Type: long Default: -1 Importance: high Dynamic update: cluster-wide The maximum size of the log before deleting it. log.retention.hours Type: int Default: 168 Importance: high Dynamic update: read-only The number of hours to keep a log file before deleting it (in hours), tertiary to log.retention.ms property. log.retention.minutes Type: int Default: null Importance: high Dynamic update: read-only The number of minutes to keep a log file before deleting it (in minutes), secondary to log.retention.ms property. If not set, the value in log.retention.hours is used. log.retention.ms Type: long Default: null Importance: high Dynamic update: cluster-wide The number of milliseconds to keep a log file before deleting it (in milliseconds), If not set, the value in log.retention.minutes is used. If set to -1, no time limit is applied. log.roll.hours Type: int Default: 168 Valid Values: [1,... ] Importance: high Dynamic update: read-only The maximum time before a new log segment is rolled out (in hours), secondary to log.roll.ms property. log.roll.jitter.hours Type: int Default: 0 Valid Values: [0,... ] Importance: high Dynamic update: read-only The maximum jitter to subtract from logRollTimeMillis (in hours), secondary to log.roll.jitter.ms property. log.roll.jitter.ms Type: long Default: null Importance: high Dynamic update: cluster-wide The maximum jitter to subtract from logRollTimeMillis (in milliseconds). If not set, the value in log.roll.jitter.hours is used. log.roll.ms Type: long Default: null Importance: high Dynamic update: cluster-wide The maximum time before a new log segment is rolled out (in milliseconds). If not set, the value in log.roll.hours is used. log.segment.bytes Type: int Default: 1073741824 (1 gibibyte) Valid Values: [14,... ] Importance: high Dynamic update: cluster-wide The maximum size of a single log file. log.segment.delete.delay.ms Type: long Default: 60000 (1 minute) Valid Values: [0,... ] Importance: high Dynamic update: cluster-wide The amount of time to wait before deleting a file from the filesystem. message.max.bytes Type: int Default: 1048588 Valid Values: [0,... ] Importance: high Dynamic update: cluster-wide The largest record batch size allowed by Kafka (after compression if compression is enabled). If this is increased and there are consumers older than 0.10.2, the consumers' fetch size must also be increased so that they can fetch record batches this large. In the latest message format version, records are always grouped into batches for efficiency. 
In previous message format versions, uncompressed records are not grouped into batches and this limit only applies to a single record in that case. This can be set per topic with the topic level max.message.bytes config. metadata.log.dir Type: string Default: null Importance: high Dynamic update: read-only This configuration determines where we put the metadata log for clusters in KRaft mode. If it is not set, the metadata log is placed in the first log directory from log.dirs. metadata.log.max.record.bytes.between.snapshots Type: long Default: 20971520 Valid Values: [1,... ] Importance: high Dynamic update: read-only This is the maximum number of bytes in the log between the latest snapshot and the high-watermark needed before generating a new snapshot. metadata.log.segment.bytes Type: int Default: 1073741824 (1 gibibyte) Valid Values: [12,... ] Importance: high Dynamic update: read-only The maximum size of a single metadata log file. metadata.log.segment.ms Type: long Default: 604800000 (7 days) Importance: high Dynamic update: read-only The maximum time before a new metadata log file is rolled out (in milliseconds). metadata.max.retention.bytes Type: long Default: -1 Importance: high Dynamic update: read-only The maximum combined size of the metadata log and snapshots before deleting old snapshots and log files. Since at least one snapshot must exist before any logs can be deleted, this is a soft limit. metadata.max.retention.ms Type: long Default: 604800000 (7 days) Importance: high Dynamic update: read-only The number of milliseconds to keep a metadata log file or snapshot before deleting it. Since at least one snapshot must exist before any logs can be deleted, this is a soft limit. min.insync.replicas Type: int Default: 1 Valid Values: [1,... ] Importance: high Dynamic update: cluster-wide When a producer sets acks to "all" (or "-1"), min.insync.replicas specifies the minimum number of replicas that must acknowledge a write for the write to be considered successful. If this minimum cannot be met, then the producer will raise an exception (either NotEnoughReplicas or NotEnoughReplicasAfterAppend). When used together, min.insync.replicas and acks allow you to enforce greater durability guarantees. A typical scenario would be to create a topic with a replication factor of 3, set min.insync.replicas to 2, and produce with acks of "all". This will ensure that the producer raises an exception if a majority of replicas do not receive a write. node.id Type: int Default: -1 Importance: high Dynamic update: read-only The node ID associated with the roles this process is playing when process.roles is non-empty. This is required configuration when running in KRaft mode. num.io.threads Type: int Default: 8 Valid Values: [1,... ] Importance: high Dynamic update: cluster-wide The number of threads that the server uses for processing requests, which may include disk I/O. num.network.threads Type: int Default: 3 Valid Values: [1,... ] Importance: high Dynamic update: cluster-wide The number of threads that the server uses for receiving requests from the network and sending responses to the network. num.recovery.threads.per.data.dir Type: int Default: 1 Valid Values: [1,... ] Importance: high Dynamic update: cluster-wide The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
num.replica.alter.log.dirs.threads Type: int Default: null Importance: high Dynamic update: read-only The number of threads that can move replicas between log directories, which may include disk I/O. num.replica.fetchers Type: int Default: 1 Importance: high Dynamic update: cluster-wide Number of fetcher threads used to replicate messages from a source broker. Increasing this value can increase the degree of I/O parallelism in the follower broker. offset.metadata.max.bytes Type: int Default: 4096 (4 kibibytes) Importance: high Dynamic update: read-only The maximum size for a metadata entry associated with an offset commit. offsets.commit.required.acks Type: short Default: -1 Importance: high Dynamic update: read-only The required acks before the commit can be accepted. In general, the default (-1) should not be overridden. offsets.commit.timeout.ms Type: int Default: 5000 (5 seconds) Valid Values: [1,... ] Importance: high Dynamic update: read-only Offset commit will be delayed until all replicas for the offsets topic receive the commit or this timeout is reached. This is similar to the producer request timeout. offsets.load.buffer.size Type: int Default: 5242880 Valid Values: [1,... ] Importance: high Dynamic update: read-only Batch size for reading from the offsets segments when loading offsets into the cache (soft-limit, overridden if records are too large). offsets.retention.check.interval.ms Type: long Default: 600000 (10 minutes) Valid Values: [1,... ] Importance: high Dynamic update: read-only Frequency at which to check for stale offsets. offsets.retention.minutes Type: int Default: 10080 Valid Values: [1,... ] Importance: high Dynamic update: read-only After a consumer group loses all its consumers (i.e. becomes empty) its offsets will be kept for this retention period before getting discarded. For standalone consumers (using manual assignment), offsets will be expired after the time of last commit plus this retention period. offsets.topic.compression.codec Type: int Default: 0 Importance: high Dynamic update: read-only Compression codec for the offsets topic - compression may be used to achieve "atomic" commits. offsets.topic.num.partitions Type: int Default: 50 Valid Values: [1,... ] Importance: high Dynamic update: read-only The number of partitions for the offset commit topic (should not change after deployment). offsets.topic.replication.factor Type: short Default: 3 Valid Values: [1,... ] Importance: high Dynamic update: read-only The replication factor for the offsets topic (set higher to ensure availability). Internal topic creation will fail until the cluster size meets this replication factor requirement. offsets.topic.segment.bytes Type: int Default: 104857600 (100 mebibytes) Valid Values: [1,... ] Importance: high Dynamic update: read-only The offsets topic segment bytes should be kept relatively small in order to facilitate faster log compaction and cache loads. process.roles Type: list Default: "" Valid Values: [broker, controller] Importance: high Dynamic update: read-only The roles that this process plays: 'broker', 'controller', or 'broker,controller' if it is both. This configuration is only applicable for clusters in KRaft (Kafka Raft) mode (instead of ZooKeeper). Leave this config undefined or empty for Zookeeper clusters. queued.max.requests Type: int Default: 500 Valid Values: [1,... ] Importance: high Dynamic update: read-only The number of queued requests allowed for data-plane, before blocking the network threads. 
replica.fetch.min.bytes Type: int Default: 1 Importance: high Dynamic update: read-only Minimum bytes expected for each fetch response. If not enough bytes, wait up to replica.fetch.wait.max.ms (broker config). replica.fetch.wait.max.ms Type: int Default: 500 Importance: high Dynamic update: read-only The maximum wait time for each fetcher request issued by follower replicas. This value should always be less than the replica.lag.time.max.ms at all times to prevent frequent shrinking of ISR for low throughput topics. replica.high.watermark.checkpoint.interval.ms Type: long Default: 5000 (5 seconds) Importance: high Dynamic update: read-only The frequency with which the high watermark is saved out to disk. replica.lag.time.max.ms Type: long Default: 30000 (30 seconds) Importance: high Dynamic update: read-only If a follower hasn't sent any fetch requests or hasn't consumed up to the leaders log end offset for at least this time, the leader will remove the follower from isr. replica.socket.receive.buffer.bytes Type: int Default: 65536 (64 kibibytes) Importance: high Dynamic update: read-only The socket receive buffer for network requests. replica.socket.timeout.ms Type: int Default: 30000 (30 seconds) Importance: high Dynamic update: read-only The socket timeout for network requests. Its value should be at least replica.fetch.wait.max.ms. request.timeout.ms Type: int Default: 30000 (30 seconds) Importance: high Dynamic update: read-only The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses the client will resend the request if necessary or fail the request if retries are exhausted. sasl.mechanism.controller.protocol Type: string Default: GSSAPI Importance: high Dynamic update: read-only SASL mechanism used for communication with controllers. Default is GSSAPI. socket.receive.buffer.bytes Type: int Default: 102400 (100 kibibytes) Importance: high Dynamic update: read-only The SO_RCVBUF buffer of the socket server sockets. If the value is -1, the OS default will be used. socket.request.max.bytes Type: int Default: 104857600 (100 mebibytes) Valid Values: [1,... ] Importance: high Dynamic update: read-only The maximum number of bytes in a socket request. socket.send.buffer.bytes Type: int Default: 102400 (100 kibibytes) Importance: high Dynamic update: read-only The SO_SNDBUF buffer of the socket server sockets. If the value is -1, the OS default will be used. transaction.max.timeout.ms Type: int Default: 900000 (15 minutes) Valid Values: [1,... ] Importance: high Dynamic update: read-only The maximum allowed timeout for transactions. If a client's requested transaction time exceed this, then the broker will return an error in InitProducerIdRequest. This prevents a client from too large of a timeout, which can stall consumers reading from topics included in the transaction. transaction.state.log.load.buffer.size Type: int Default: 5242880 Valid Values: [1,... ] Importance: high Dynamic update: read-only Batch size for reading from the transaction log segments when loading producer ids and transactions into the cache (soft-limit, overridden if records are too large). transaction.state.log.min.isr Type: int Default: 2 Valid Values: [1,... ] Importance: high Dynamic update: read-only Overridden min.insync.replicas config for the transaction topic. transaction.state.log.num.partitions Type: int Default: 50 Valid Values: [1,... 
] Importance: high Dynamic update: read-only The number of partitions for the transaction topic (should not change after deployment). transaction.state.log.replication.factor Type: short Default: 3 Valid Values: [1,... ] Importance: high Dynamic update: read-only The replication factor for the transaction topic (set higher to ensure availability). Internal topic creation will fail until the cluster size meets this replication factor requirement. transaction.state.log.segment.bytes Type: int Default: 104857600 (100 mebibytes) Valid Values: [1,... ] Importance: high Dynamic update: read-only The transaction topic segment bytes should be kept relatively small in order to facilitate faster log compaction and cache loads. transactional.id.expiration.ms Type: int Default: 604800000 (7 days) Valid Values: [1,... ] Importance: high Dynamic update: read-only The time in ms that the transaction coordinator will wait without receiving any transaction status updates for the current transaction before expiring its transactional id. This setting also influences producer id expiration - producer ids are expired once this time has elapsed after the last write with the given producer id. Note that producer ids may expire sooner if the last write from the producer id is deleted due to the topic's retention settings. unclean.leader.election.enable Type: boolean Default: false Importance: high Dynamic update: cluster-wide Indicates whether to enable replicas not in the ISR set to be elected as leader as a last resort, even though doing so may result in data loss. zookeeper.connect Type: string Default: null Importance: high Dynamic update: read-only Specifies the ZooKeeper connection string in the form hostname:port where host and port are the host and port of a ZooKeeper server. To allow connecting through other ZooKeeper nodes when that ZooKeeper machine is down you can also specify multiple hosts in the form hostname1:port1,hostname2:port2,hostname3:port3 . The server can also have a ZooKeeper chroot path as part of its ZooKeeper connection string which puts its data under some path in the global ZooKeeper namespace. For example to give a chroot path of /chroot/path you would give the connection string as hostname1:port1,hostname2:port2,hostname3:port3/chroot/path . zookeeper.connection.timeout.ms Type: int Default: null Importance: high Dynamic update: read-only The max time that the client waits to establish a connection to zookeeper. If not set, the value in zookeeper.session.timeout.ms is used. zookeeper.max.in.flight.requests Type: int Default: 10 Valid Values: [1,... ] Importance: high Dynamic update: read-only The maximum number of unacknowledged requests the client will send to Zookeeper before blocking. zookeeper.session.timeout.ms Type: int Default: 18000 (18 seconds) Importance: high Dynamic update: read-only Zookeeper session timeout. zookeeper.set.acl Type: boolean Default: false Importance: high Dynamic update: read-only Set client to use secure ACLs. broker.heartbeat.interval.ms Type: int Default: 2000 (2 seconds) Importance: medium Dynamic update: read-only The length of time in milliseconds between broker heartbeats. Used when running in KRaft mode. broker.id.generation.enable Type: boolean Default: true Importance: medium Dynamic update: read-only Enable automatic broker id generation on the server. When enabled the value configured for reserved.broker.max.id should be reviewed. broker.rack Type: string Default: null Importance: medium Dynamic update: read-only Rack of the broker. 
This will be used in rack aware replication assignment for fault tolerance. Examples: RACK1 , us-east-1d . broker.session.timeout.ms Type: int Default: 9000 (9 seconds) Importance: medium Dynamic update: read-only The length of time in milliseconds that a broker lease lasts if no heartbeats are made. Used when running in KRaft mode. connections.max.idle.ms Type: long Default: 600000 (10 minutes) Importance: medium Dynamic update: read-only Idle connections timeout: the server socket processor threads close the connections that idle more than this. connections.max.reauth.ms Type: long Default: 0 Importance: medium Dynamic update: read-only When explicitly set to a positive number (the default is 0, not a positive number), a session lifetime that will not exceed the configured value will be communicated to v2.2.0 or later clients when they authenticate. The broker will disconnect any such connection that is not re-authenticated within the session lifetime and that is then subsequently used for any purpose other than re-authentication. Configuration names can optionally be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.oauthbearer.connections.max.reauth.ms=3600000. controlled.shutdown.enable Type: boolean Default: true Importance: medium Dynamic update: read-only Enable controlled shutdown of the server. controlled.shutdown.max.retries Type: int Default: 3 Importance: medium Dynamic update: read-only Controlled shutdown can fail for multiple reasons. This determines the number of retries when such failure happens. controlled.shutdown.retry.backoff.ms Type: long Default: 5000 (5 seconds) Importance: medium Dynamic update: read-only Before each retry, the system needs time to recover from the state that caused the failure (Controller fail over, replica lag etc). This config determines the amount of time to wait before retrying. controller.quorum.append.linger.ms Type: int Default: 25 Importance: medium Dynamic update: read-only The duration in milliseconds that the leader will wait for writes to accumulate before flushing them to disk. controller.quorum.request.timeout.ms Type: int Default: 2000 (2 seconds) Importance: medium Dynamic update: read-only The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses the client will resend the request if necessary or fail the request if retries are exhausted. controller.socket.timeout.ms Type: int Default: 30000 (30 seconds) Importance: medium Dynamic update: read-only The socket timeout for controller-to-broker channels. default.replication.factor Type: int Default: 1 Importance: medium Dynamic update: read-only The default replication factors for automatically created topics. delegation.token.expiry.time.ms Type: long Default: 86400000 (1 day) Valid Values: [1,... ] Importance: medium Dynamic update: read-only The token validity time in miliseconds before the token needs to be renewed. Default value 1 day. delegation.token.master.key Type: password Default: null Importance: medium Dynamic update: read-only DEPRECATED: An alias for delegation.token.secret.key, which should be used instead of this config. delegation.token.max.lifetime.ms Type: long Default: 604800000 (7 days) Valid Values: [1,... ] Importance: medium Dynamic update: read-only The token has a maximum lifetime beyond which it cannot be renewed anymore. Default value 7 days. 
delegation.token.secret.key Type: password Default: null Importance: medium Dynamic update: read-only Secret key to generate and verify delegation tokens. The same key must be configured across all the brokers. If the key is not set or set to empty string, brokers will disable the delegation token support. delete.records.purgatory.purge.interval.requests Type: int Default: 1 Importance: medium Dynamic update: read-only The purge interval (in number of requests) of the delete records request purgatory. fetch.max.bytes Type: int Default: 57671680 (55 mebibytes) Valid Values: [1024,... ] Importance: medium Dynamic update: read-only The maximum number of bytes we will return for a fetch request. Must be at least 1024. fetch.purgatory.purge.interval.requests Type: int Default: 1000 Importance: medium Dynamic update: read-only The purge interval (in number of requests) of the fetch request purgatory. group.initial.rebalance.delay.ms Type: int Default: 3000 (3 seconds) Importance: medium Dynamic update: read-only The amount of time the group coordinator will wait for more consumers to join a new group before performing the first rebalance. A longer delay means potentially fewer rebalances, but increases the time until processing begins. group.max.session.timeout.ms Type: int Default: 1800000 (30 minutes) Importance: medium Dynamic update: read-only The maximum allowed session timeout for registered consumers. Longer timeouts give consumers more time to process messages in between heartbeats at the cost of a longer time to detect failures. group.max.size Type: int Default: 2147483647 Valid Values: [1,... ] Importance: medium Dynamic update: read-only The maximum number of consumers that a single consumer group can accommodate. group.min.session.timeout.ms Type: int Default: 6000 (6 seconds) Importance: medium Dynamic update: read-only The minimum allowed session timeout for registered consumers. Shorter timeouts result in quicker failure detection at the cost of more frequent consumer heartbeating, which can overwhelm broker resources. initial.broker.registration.timeout.ms Type: int Default: 60000 (1 minute) Importance: medium Dynamic update: read-only When initially registering with the controller quorum, the number of milliseconds to wait before declaring failure and exiting the broker process. inter.broker.listener.name Type: string Default: null Importance: medium Dynamic update: read-only Name of listener used for communication between brokers. If this is unset, the listener name is defined by security.inter.broker.protocol. It is an error to set this and security.inter.broker.protocol properties at the same time. inter.broker.protocol.version Type: string Default: 3.1-IV0 Valid Values: [0.8.0, 0.8.1, 0.8.2, 0.9.0, 0.10.0-IV0, 0.10.0-IV1, 0.10.1-IV0, 0.10.1-IV1, 0.10.1-IV2, 0.10.2-IV0, 0.11.0-IV0, 0.11.0-IV1, 0.11.0-IV2, 1.0-IV0, 1.1-IV0, 2.0-IV0, 2.0-IV1, 2.1-IV0, 2.1-IV1, 2.1-IV2, 2.2-IV0, 2.2-IV1, 2.3-IV0, 2.3-IV1, 2.4-IV0, 2.4-IV1, 2.5-IV0, 2.6-IV0, 2.7-IV0, 2.7-IV1, 2.7-IV2, 2.8-IV0, 2.8-IV1, 3.0-IV0, 3.0-IV1, 3.1-IV0] Importance: medium Dynamic update: read-only Specify which version of the inter-broker protocol will be used. This is typically bumped after all brokers were upgraded to a new version. Example of some valid values are: 0.8.0, 0.8.1, 0.8.1.1, 0.8.2, 0.8.2.0, 0.8.2.1, 0.9.0.0, 0.9.0.1 Check ApiVersion for the full list. log.cleaner.backoff.ms Type: long Default: 15000 (15 seconds) Valid Values: [0,... 
] Importance: medium Dynamic update: cluster-wide The amount of time to sleep when there are no logs to clean. log.cleaner.dedupe.buffer.size Type: long Default: 134217728 Importance: medium Dynamic update: cluster-wide The total memory used for log deduplication across all cleaner threads. log.cleaner.delete.retention.ms Type: long Default: 86400000 (1 day) Importance: medium Dynamic update: cluster-wide How long are delete records retained? log.cleaner.enable Type: boolean Default: true Importance: medium Dynamic update: read-only Enable the log cleaner process to run on the server. Should be enabled if using any topics with a cleanup.policy=compact including the internal offsets topic. If disabled those topics will not be compacted and continually grow in size. log.cleaner.io.buffer.load.factor Type: double Default: 0.9 Importance: medium Dynamic update: cluster-wide Log cleaner dedupe buffer load factor. The percentage full the dedupe buffer can become. A higher value will allow more log to be cleaned at once but will lead to more hash collisions. log.cleaner.io.buffer.size Type: int Default: 524288 Valid Values: [0,... ] Importance: medium Dynamic update: cluster-wide The total memory used for log cleaner I/O buffers across all cleaner threads. log.cleaner.io.max.bytes.per.second Type: double Default: 1.7976931348623157E308 Importance: medium Dynamic update: cluster-wide The log cleaner will be throttled so that the sum of its read and write i/o will be less than this value on average. log.cleaner.max.compaction.lag.ms Type: long Default: 9223372036854775807 Importance: medium Dynamic update: cluster-wide The maximum time a message will remain ineligible for compaction in the log. Only applicable for logs that are being compacted. log.cleaner.min.cleanable.ratio Type: double Default: 0.5 Importance: medium Dynamic update: cluster-wide The minimum ratio of dirty log to total log for a log to be eligible for cleaning. If the log.cleaner.max.compaction.lag.ms or the log.cleaner.min.compaction.lag.ms configurations are also specified, then the log compactor considers the log eligible for compaction as soon as either: (i) the dirty ratio threshold has been met and the log has had dirty (uncompacted) records for at least the log.cleaner.min.compaction.lag.ms duration, or (ii) if the log has had dirty (uncompacted) records for at most the log.cleaner.max.compaction.lag.ms period. log.cleaner.min.compaction.lag.ms Type: long Default: 0 Importance: medium Dynamic update: cluster-wide The minimum time a message will remain uncompacted in the log. Only applicable for logs that are being compacted. log.cleaner.threads Type: int Default: 1 Valid Values: [0,... ] Importance: medium Dynamic update: cluster-wide The number of background threads to use for log cleaning. log.cleanup.policy Type: list Default: delete Valid Values: [compact, delete] Importance: medium Dynamic update: cluster-wide The default cleanup policy for segments beyond the retention window. A comma separated list of valid policies. Valid policies are: "delete" and "compact". log.index.interval.bytes Type: int Default: 4096 (4 kibibytes) Valid Values: [0,... ] Importance: medium Dynamic update: cluster-wide The interval with which we add an entry to the offset index. log.index.size.max.bytes Type: int Default: 10485760 (10 mebibytes) Valid Values: [4,... ] Importance: medium Dynamic update: cluster-wide The maximum size in bytes of the offset index.
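As a sketch of how the log cleaner settings above fit together, a broker that hosts compacted topics might carry a fragment like this in its properties file (the values are illustrative only, not recommendations):
log.cleaner.enable=true
log.cleaner.threads=2
log.cleaner.min.cleanable.ratio=0.5
log.cleaner.delete.retention.ms=86400000
log.cleanup.policy=compact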
log.message.format.version Type: string Default: 3.0-IV1 Valid Values: [0.8.0, 0.8.1, 0.8.2, 0.9.0, 0.10.0-IV0, 0.10.0-IV1, 0.10.1-IV0, 0.10.1-IV1, 0.10.1-IV2, 0.10.2-IV0, 0.11.0-IV0, 0.11.0-IV1, 0.11.0-IV2, 1.0-IV0, 1.1-IV0, 2.0-IV0, 2.0-IV1, 2.1-IV0, 2.1-IV1, 2.1-IV2, 2.2-IV0, 2.2-IV1, 2.3-IV0, 2.3-IV1, 2.4-IV0, 2.4-IV1, 2.5-IV0, 2.6-IV0, 2.7-IV0, 2.7-IV1, 2.7-IV2, 2.8-IV0, 2.8-IV1, 3.0-IV0, 3.0-IV1, 3.1-IV0] Importance: medium Dynamic update: read-only Specify the message format version the broker will use to append messages to the logs. The value should be a valid ApiVersion. Some examples are: 0.8.2, 0.9.0.0, 0.10.0, check ApiVersion for more details. By setting a particular message format version, the user is certifying that all the existing messages on disk are smaller than or equal to the specified version. Setting this value incorrectly will cause consumers with older versions to break as they will receive messages with a format that they don't understand. log.message.timestamp.difference.max.ms Type: long Default: 9223372036854775807 Importance: medium Dynamic update: cluster-wide The maximum difference allowed between the timestamp when a broker receives a message and the timestamp specified in the message. If log.message.timestamp.type=CreateTime, a message will be rejected if the difference in timestamp exceeds this threshold. This configuration is ignored if log.message.timestamp.type=LogAppendTime. The maximum timestamp difference allowed should be no greater than log.retention.ms to avoid unnecessarily frequent log rolling. log.message.timestamp.type Type: string Default: CreateTime Valid Values: [CreateTime, LogAppendTime] Importance: medium Dynamic update: cluster-wide Define whether the timestamp in the message is message create time or log append time. The value should be either CreateTime or LogAppendTime . log.preallocate Type: boolean Default: false Importance: medium Dynamic update: cluster-wide Should the broker pre-allocate the file when creating a new segment? If you are using Kafka on Windows, you probably need to set it to true. log.retention.check.interval.ms Type: long Default: 300000 (5 minutes) Valid Values: [1,... ] Importance: medium Dynamic update: read-only The frequency in milliseconds that the log cleaner checks whether any log is eligible for deletion. max.connection.creation.rate Type: int Default: 2147483647 Valid Values: [0,... ] Importance: medium Dynamic update: cluster-wide The maximum connection creation rate we allow in the broker at any time. Listener-level limits may also be configured by prefixing the config name with the listener prefix, for example, listener.name.internal.max.connection.creation.rate . Broker-wide connection rate limit should be configured based on broker capacity while listener limits should be configured based on application requirements. New connections will be throttled if either the listener or the broker limit is reached, with the exception of inter-broker listener. Connections on the inter-broker listener will be throttled only when the listener-level rate limit is reached. max.connections Type: int Default: 2147483647 Valid Values: [0,... ] Importance: medium Dynamic update: cluster-wide The maximum number of connections we allow in the broker at any time. This limit is applied in addition to any per-ip limits configured using max.connections.per.ip. Listener-level limits may also be configured by prefixing the config name with the listener prefix, for example, listener.name.internal.max.connections .
Broker-wide limit should be configured based on broker capacity while listener limits should be configured based on application requirements. New connections are blocked if either the listener or broker limit is reached. Connections on the inter-broker listener are permitted even if broker-wide limit is reached. The least recently used connection on another listener will be closed in this case. max.connections.per.ip Type: int Default: 2147483647 Valid Values: [0,... ] Importance: medium Dynamic update: cluster-wide The maximum number of connections we allow from each ip address. This can be set to 0 if there are overrides configured using max.connections.per.ip.overrides property. New connections from the ip address are dropped if the limit is reached. max.connections.per.ip.overrides Type: string Default: "" Importance: medium Dynamic update: cluster-wide A comma-separated list of per-ip or hostname overrides to the default maximum number of connections. An example value is "hostName:100,127.0.0.1:200". max.incremental.fetch.session.cache.slots Type: int Default: 1000 Valid Values: [0,... ] Importance: medium Dynamic update: read-only The maximum number of incremental fetch sessions that we will maintain. num.partitions Type: int Default: 1 Valid Values: [1,... ] Importance: medium Dynamic update: read-only The default number of log partitions per topic. password.encoder.old.secret Type: password Default: null Importance: medium Dynamic update: read-only The old secret that was used for encoding dynamically configured passwords. This is required only when the secret is updated. If specified, all dynamically encoded passwords are decoded using this old secret and re-encoded using password.encoder.secret when broker starts up. password.encoder.secret Type: password Default: null Importance: medium Dynamic update: read-only The secret used for encoding dynamically configured passwords for this broker. principal.builder.class Type: class Default: org.apache.kafka.common.security.authenticator.DefaultKafkaPrincipalBuilder Importance: medium Dynamic update: per-broker The fully qualified name of a class that implements the KafkaPrincipalBuilder interface, which is used to build the KafkaPrincipal object used during authorization. If no principal builder is defined, the default behavior depends on the security protocol in use. For SSL authentication, the principal will be derived using the rules defined by ssl.principal.mapping.rules applied on the distinguished name from the client certificate if one is provided; otherwise, if client authentication is not required, the principal name will be ANONYMOUS. For SASL authentication, the principal will be derived using the rules defined by sasl.kerberos.principal.to.local.rules if GSSAPI is in use, and the SASL authentication ID for other mechanisms. For PLAINTEXT, the principal will be ANONYMOUS. producer.purgatory.purge.interval.requests Type: int Default: 1000 Importance: medium Dynamic update: read-only The purge interval (in number of requests) of the producer request purgatory. queued.max.request.bytes Type: long Default: -1 Importance: medium Dynamic update: read-only The number of queued bytes allowed before no more requests are read. replica.fetch.backoff.ms Type: int Default: 1000 (1 second) Valid Values: [0,... ] Importance: medium Dynamic update: read-only The amount of time to sleep when fetch partition error occurs. replica.fetch.max.bytes Type: int Default: 1048576 (1 mebibyte) Valid Values: [0,... 
] Importance: medium Dynamic update: read-only The number of bytes of messages to attempt to fetch for each partition. This is not an absolute maximum, if the first record batch in the first non-empty partition of the fetch is larger than this value, the record batch will still be returned to ensure that progress can be made. The maximum record batch size accepted by the broker is defined via message.max.bytes (broker config) or max.message.bytes (topic config). replica.fetch.response.max.bytes Type: int Default: 10485760 (10 mebibytes) Valid Values: [0,... ] Importance: medium Dynamic update: read-only Maximum bytes expected for the entire fetch response. Records are fetched in batches, and if the first record batch in the first non-empty partition of the fetch is larger than this value, the record batch will still be returned to ensure that progress can be made. As such, this is not an absolute maximum. The maximum record batch size accepted by the broker is defined via message.max.bytes (broker config) or max.message.bytes (topic config). replica.selector.class Type: string Default: null Importance: medium Dynamic update: read-only The fully qualified class name that implements ReplicaSelector. This is used by the broker to find the preferred read replica. By default, we use an implementation that returns the leader. reserved.broker.max.id Type: int Default: 1000 Valid Values: [0,... ] Importance: medium Dynamic update: read-only Max number that can be used for a broker.id. sasl.client.callback.handler.class Type: class Default: null Importance: medium Dynamic update: read-only The fully qualified name of a SASL client callback handler class that implements the AuthenticateCallbackHandler interface. sasl.enabled.mechanisms Type: list Default: GSSAPI Importance: medium Dynamic update: per-broker The list of SASL mechanisms enabled in the Kafka server. The list may contain any mechanism for which a security provider is available. Only GSSAPI is enabled by default. sasl.jaas.config Type: password Default: null Importance: medium Dynamic update: per-broker JAAS login context parameters for SASL connections in the format used by JAAS configuration files. JAAS configuration file format is described here . The format for the value is: loginModuleClass controlFlag (optionName=optionValue)*; . For brokers, the config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.jaas.config=com.example.ScramLoginModule required;. sasl.kerberos.kinit.cmd Type: string Default: /usr/bin/kinit Importance: medium Dynamic update: per-broker Kerberos kinit command path. sasl.kerberos.min.time.before.relogin Type: long Default: 60000 Importance: medium Dynamic update: per-broker Login thread sleep time between refresh attempts. sasl.kerberos.principal.to.local.rules Type: list Default: DEFAULT Importance: medium Dynamic update: per-broker A list of rules for mapping from principal names to short names (typically operating system usernames). The rules are evaluated in order and the first rule that matches a principal name is used to map it to a short name. Any later rules in the list are ignored. By default, principal names of the form {username}/{hostname}@{REALM} are mapped to {username}. For more details on the format please see security authorization and acls . Note that this configuration is ignored if an extension of KafkaPrincipalBuilder is provided by the principal.builder.class configuration. 
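Putting the listener-prefix convention described above into practice, a broker that enables SCRAM on a SASL_SSL listener might use something like the following; the listener name and credentials are placeholders chosen for illustration:
sasl.enabled.mechanisms=SCRAM-SHA-256
listener.name.sasl_ssl.scram-sha-256.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required username="broker-user" password="broker-secret";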
sasl.kerberos.service.name Type: string Default: null Importance: medium Dynamic update: per-broker The Kerberos principal name that Kafka runs as. This can be defined either in Kafka's JAAS config or in Kafka's config. sasl.kerberos.ticket.renew.jitter Type: double Default: 0.05 Importance: medium Dynamic update: per-broker Percentage of random jitter added to the renewal time. sasl.kerberos.ticket.renew.window.factor Type: double Default: 0.8 Importance: medium Dynamic update: per-broker Login thread will sleep until the specified window factor of time from last refresh to ticket's expiry has been reached, at which time it will try to renew the ticket. sasl.login.callback.handler.class Type: class Default: null Importance: medium Dynamic update: read-only The fully qualified name of a SASL login callback handler class that implements the AuthenticateCallbackHandler interface. For brokers, login callback handler config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.callback.handler.class=com.example.CustomScramLoginCallbackHandler. sasl.login.class Type: class Default: null Importance: medium Dynamic update: read-only The fully qualified name of a class that implements the Login interface. For brokers, login config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.class=com.example.CustomScramLogin. sasl.login.refresh.buffer.seconds Type: short Default: 300 Importance: medium Dynamic update: per-broker The amount of buffer time before credential expiration to maintain when refreshing a credential, in seconds. If a refresh would otherwise occur closer to expiration than the number of buffer seconds then the refresh will be moved up to maintain as much of the buffer time as possible. Legal values are between 0 and 3600 (1 hour); a default value of 300 (5 minutes) is used if no value is specified. This value and sasl.login.refresh.min.period.seconds are both ignored if their sum exceeds the remaining lifetime of a credential. Currently applies only to OAUTHBEARER. sasl.login.refresh.min.period.seconds Type: short Default: 60 Importance: medium Dynamic update: per-broker The desired minimum time for the login refresh thread to wait before refreshing a credential, in seconds. Legal values are between 0 and 900 (15 minutes); a default value of 60 (1 minute) is used if no value is specified. This value and sasl.login.refresh.buffer.seconds are both ignored if their sum exceeds the remaining lifetime of a credential. Currently applies only to OAUTHBEARER. sasl.login.refresh.window.factor Type: double Default: 0.8 Importance: medium Dynamic update: per-broker Login refresh thread will sleep until the specified window factor relative to the credential's lifetime has been reached, at which time it will try to refresh the credential. Legal values are between 0.5 (50%) and 1.0 (100%) inclusive; a default value of 0.8 (80%) is used if no value is specified. Currently applies only to OAUTHBEARER. sasl.login.refresh.window.jitter Type: double Default: 0.05 Importance: medium Dynamic update: per-broker The maximum amount of random jitter relative to the credential's lifetime that is added to the login refresh thread's sleep time. Legal values are between 0 and 0.25 (25%) inclusive; a default value of 0.05 (5%) is used if no value is specified. Currently applies only to OAUTHBEARER. 
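For OAUTHBEARER logins, the refresh behavior described above can be tuned with values inside the documented legal ranges, for example (illustrative values only):
sasl.login.refresh.window.factor=0.7
sasl.login.refresh.window.jitter=0.1
sasl.login.refresh.min.period.seconds=120
sasl.login.refresh.buffer.seconds=600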
sasl.mechanism.inter.broker.protocol Type: string Default: GSSAPI Importance: medium Dynamic update: per-broker SASL mechanism used for inter-broker communication. Default is GSSAPI. sasl.oauthbearer.jwks.endpoint.url Type: string Default: null Importance: medium Dynamic update: read-only The OAuth/OIDC provider URL from which the provider's JWKS (JSON Web Key Set) can be retrieved. The URL can be HTTP(S)-based or file-based. If the URL is HTTP(S)-based, the JWKS data will be retrieved from the OAuth/OIDC provider via the configured URL on broker startup. All then-current keys will be cached on the broker for incoming requests. If an authentication request is received for a JWT that includes a "kid" header claim value that isn't yet in the cache, the JWKS endpoint will be queried again on demand. However, the broker polls the URL every sasl.oauthbearer.jwks.endpoint.refresh.ms milliseconds to refresh the cache with any forthcoming keys before any JWT requests that include them are received. If the URL is file-based, the broker will load the JWKS file from a configured location on startup. In the event that the JWT includes a "kid" header value that isn't in the JWKS file, the broker will reject the JWT and authentication will fail. sasl.oauthbearer.token.endpoint.url Type: string Default: null Importance: medium Dynamic update: read-only The URL for the OAuth/OIDC identity provider. If the URL is HTTP(S)-based, it is the issuer's token endpoint URL to which requests will be made to login based on the configuration in sasl.jaas.config. If the URL is file-based, it specifies a file containing an access token (in JWT serialized form) issued by the OAuth/OIDC identity provider to use for authorization. sasl.server.callback.handler.class Type: class Default: null Importance: medium Dynamic update: read-only The fully qualified name of a SASL server callback handler class that implements the AuthenticateCallbackHandler interface. Server callback handlers must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.plain.sasl.server.callback.handler.class=com.example.CustomPlainCallbackHandler. security.inter.broker.protocol Type: string Default: PLAINTEXT Importance: medium Dynamic update: read-only Security protocol used to communicate between brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL. It is an error to set this and inter.broker.listener.name properties at the same time. socket.connection.setup.timeout.max.ms Type: long Default: 30000 (30 seconds) Importance: medium Dynamic update: read-only The maximum amount of time the client will wait for the socket connection to be established. The connection setup timeout will increase exponentially for each consecutive connection failure up to this maximum. To avoid connection storms, a randomization factor of 0.2 will be applied to the timeout resulting in a random range between 20% below and 20% above the computed value. socket.connection.setup.timeout.ms Type: long Default: 10000 (10 seconds) Importance: medium Dynamic update: read-only The amount of time the client will wait for the socket connection to be established. If the connection is not built before the timeout elapses, clients will close the socket channel. ssl.cipher.suites Type: list Default: "" Importance: medium Dynamic update: per-broker A list of cipher suites. 
This is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotiate the security settings for a network connection using TLS or SSL network protocol. By default all the available cipher suites are supported. ssl.client.auth Type: string Default: none Valid Values: [required, requested, none] Importance: medium Dynamic update: per-broker Configures the Kafka broker to request client authentication. The following settings are common: ssl.client.auth=required If set to required client authentication is required. ssl.client.auth=requested This means client authentication is optional. Unlike required, if this option is set the client can choose not to provide authentication information about itself. ssl.client.auth=none This means client authentication is not needed. ssl.enabled.protocols Type: list Default: TLSv1.2,TLSv1.3 Importance: medium Dynamic update: per-broker The list of protocols enabled for SSL connections. The default is 'TLSv1.2,TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise. With the default value for Java 11, clients and servers will prefer TLSv1.3 if both support it and fall back to TLSv1.2 otherwise (assuming both support at least TLSv1.2). This default should be fine for most cases. Also see the config documentation for ssl.protocol . ssl.key.password Type: password Default: null Importance: medium Dynamic update: per-broker The password of the private key in the key store file or the PEM key specified in 'ssl.keystore.key'. This is required for clients only if two-way authentication is configured. ssl.keymanager.algorithm Type: string Default: SunX509 Importance: medium Dynamic update: per-broker The algorithm used by key manager factory for SSL connections. Default value is the key manager factory algorithm configured for the Java Virtual Machine. ssl.keystore.certificate.chain Type: password Default: null Importance: medium Dynamic update: per-broker Certificate chain in the format specified by 'ssl.keystore.type'. Default SSL engine factory supports only PEM format with a list of X.509 certificates. ssl.keystore.key Type: password Default: null Importance: medium Dynamic update: per-broker Private key in the format specified by 'ssl.keystore.type'. Default SSL engine factory supports only PEM format with PKCS#8 keys. If the key is encrypted, key password must be specified using 'ssl.key.password'. ssl.keystore.location Type: string Default: null Importance: medium Dynamic update: per-broker The location of the key store file. This is optional for client and can be used for two-way authentication for client. ssl.keystore.password Type: password Default: null Importance: medium Dynamic update: per-broker The store password for the key store file. This is optional for client and only needed if 'ssl.keystore.location' is configured. Key store password is not supported for PEM format. ssl.keystore.type Type: string Default: JKS Importance: medium Dynamic update: per-broker The file format of the key store file. This is optional for client. ssl.protocol Type: string Default: TLSv1.3 Importance: medium Dynamic update: per-broker The SSL protocol used to generate the SSLContext. The default is 'TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise. This value should be fine for most use cases. Allowed values in recent JVMs are 'TLSv1.2' and 'TLSv1.3'. 'TLS', 'TLSv1.1', 'SSL', 'SSLv2' and 'SSLv3' may be supported in older JVMs, but their usage is discouraged due to known security vulnerabilities.
With the default value for this config and 'ssl.enabled.protocols', clients will downgrade to 'TLSv1.2' if the server does not support 'TLSv1.3'. If this config is set to 'TLSv1.2', clients will not use 'TLSv1.3' even if it is one of the values in ssl.enabled.protocols and the server only supports 'TLSv1.3'. ssl.provider Type: string Default: null Importance: medium Dynamic update: per-broker The name of the security provider used for SSL connections. Default value is the default security provider of the JVM. ssl.trustmanager.algorithm Type: string Default: PKIX Importance: medium Dynamic update: per-broker The algorithm used by trust manager factory for SSL connections. Default value is the trust manager factory algorithm configured for the Java Virtual Machine. ssl.truststore.certificates Type: password Default: null Importance: medium Dynamic update: per-broker Trusted certificates in the format specified by 'ssl.truststore.type'. Default SSL engine factory supports only PEM format with X.509 certificates. ssl.truststore.location Type: string Default: null Importance: medium Dynamic update: per-broker The location of the trust store file. ssl.truststore.password Type: password Default: null Importance: medium Dynamic update: per-broker The password for the trust store file. If a password is not set, trust store file configured will still be used, but integrity checking is disabled. Trust store password is not supported for PEM format. ssl.truststore.type Type: string Default: JKS Importance: medium Dynamic update: per-broker The file format of the trust store file. zookeeper.clientCnxnSocket Type: string Default: null Importance: medium Dynamic update: read-only Typically set to org.apache.zookeeper.ClientCnxnSocketNetty when using TLS connectivity to ZooKeeper. Overrides any explicit value set via the same-named zookeeper.clientCnxnSocket system property. zookeeper.ssl.client.enable Type: boolean Default: false Importance: medium Dynamic update: read-only Set client to use TLS when connecting to ZooKeeper. An explicit value overrides any value set via the zookeeper.client.secure system property (note the different name). Defaults to false if neither is set; when true, zookeeper.clientCnxnSocket must be set (typically to org.apache.zookeeper.ClientCnxnSocketNetty ); other values to set may include zookeeper.ssl.cipher.suites , zookeeper.ssl.crl.enable , zookeeper.ssl.enabled.protocols , zookeeper.ssl.endpoint.identification.algorithm , zookeeper.ssl.keystore.location , zookeeper.ssl.keystore.password , zookeeper.ssl.keystore.type , zookeeper.ssl.ocsp.enable , zookeeper.ssl.protocol , zookeeper.ssl.truststore.location , zookeeper.ssl.truststore.password , zookeeper.ssl.truststore.type . zookeeper.ssl.keystore.location Type: string Default: null Importance: medium Dynamic update: read-only Keystore location when using a client-side certificate with TLS connectivity to ZooKeeper. Overrides any explicit value set via the zookeeper.ssl.keyStore.location system property (note the camelCase). zookeeper.ssl.keystore.password Type: password Default: null Importance: medium Dynamic update: read-only Keystore password when using a client-side certificate with TLS connectivity to ZooKeeper. Overrides any explicit value set via the zookeeper.ssl.keyStore.password system property (note the camelCase). 
Note that ZooKeeper does not support a key password different from the keystore password, so be sure to set the key password in the keystore to be identical to the keystore password; otherwise the connection attempt to Zookeeper will fail. zookeeper.ssl.keystore.type Type: string Default: null Importance: medium Dynamic update: read-only Keystore type when using a client-side certificate with TLS connectivity to ZooKeeper. Overrides any explicit value set via the zookeeper.ssl.keyStore.type system property (note the camelCase). The default value of null means the type will be auto-detected based on the filename extension of the keystore. zookeeper.ssl.truststore.location Type: string Default: null Importance: medium Dynamic update: read-only Truststore location when using TLS connectivity to ZooKeeper. Overrides any explicit value set via the zookeeper.ssl.trustStore.location system property (note the camelCase). zookeeper.ssl.truststore.password Type: password Default: null Importance: medium Dynamic update: read-only Truststore password when using TLS connectivity to ZooKeeper. Overrides any explicit value set via the zookeeper.ssl.trustStore.password system property (note the camelCase). zookeeper.ssl.truststore.type Type: string Default: null Importance: medium Dynamic update: read-only Truststore type when using TLS connectivity to ZooKeeper. Overrides any explicit value set via the zookeeper.ssl.trustStore.type system property (note the camelCase). The default value of null means the type will be auto-detected based on the filename extension of the truststore. alter.config.policy.class.name Type: class Default: null Importance: low Dynamic update: read-only The alter configs policy class that should be used for validation. The class should implement the org.apache.kafka.server.policy.AlterConfigPolicy interface. alter.log.dirs.replication.quota.window.num Type: int Default: 11 Valid Values: [1,... ] Importance: low Dynamic update: read-only The number of samples to retain in memory for alter log dirs replication quotas. alter.log.dirs.replication.quota.window.size.seconds Type: int Default: 1 Valid Values: [1,... ] Importance: low Dynamic update: read-only The time span of each sample for alter log dirs replication quotas. authorizer.class.name Type: string Default: "" Importance: low Dynamic update: read-only The fully qualified name of a class that implements org.apache.kafka.server.authorizer.Authorizer interface, which is used by the broker for authorization. client.quota.callback.class Type: class Default: null Importance: low Dynamic update: read-only The fully qualified name of a class that implements the ClientQuotaCallback interface, which is used to determine quota limits applied to client requests. By default, <user>, <client-id>, <user> or <client-id> quotas stored in ZooKeeper are applied. For any given request, the most specific quota that matches the user principal of the session and the client-id of the request is applied. connection.failed.authentication.delay.ms Type: int Default: 100 Valid Values: [0,... ] Importance: low Dynamic update: read-only Connection close delay on failed authentication: this is the time (in milliseconds) by which connection close will be delayed on authentication failure. This must be configured to be less than connections.max.idle.ms to prevent connection timeout. 
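Taken together, the zookeeper.ssl.* client options described above might be combined as follows for a broker that connects to ZooKeeper over TLS; the paths and password are placeholders, not values taken from this reference:
zookeeper.ssl.client.enable=true
zookeeper.clientCnxnSocket=org.apache.zookeeper.ClientCnxnSocketNetty
zookeeper.ssl.truststore.location=/path/to/zookeeper.truststore.jks
zookeeper.ssl.truststore.password=<truststore-password>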
controller.quorum.retry.backoff.ms Type: int Default: 20 Importance: low Dynamic update: read-only The amount of time to wait before attempting to retry a failed request to a given topic partition. This avoids repeatedly sending requests in a tight loop under some failure scenarios. controller.quota.window.num Type: int Default: 11 Valid Values: [1,... ] Importance: low Dynamic update: read-only The number of samples to retain in memory for controller mutation quotas. controller.quota.window.size.seconds Type: int Default: 1 Valid Values: [1,... ] Importance: low Dynamic update: read-only The time span of each sample for controller mutations quotas. create.topic.policy.class.name Type: class Default: null Importance: low Dynamic update: read-only The create topic policy class that should be used for validation. The class should implement the org.apache.kafka.server.policy.CreateTopicPolicy interface. delegation.token.expiry.check.interval.ms Type: long Default: 3600000 (1 hour) Valid Values: [1,... ] Importance: low Dynamic update: read-only Scan interval to remove expired delegation tokens. kafka.metrics.polling.interval.secs Type: int Default: 10 Valid Values: [1,... ] Importance: low Dynamic update: read-only The metrics polling interval (in seconds) which can be used in kafka.metrics.reporters implementations. kafka.metrics.reporters Type: list Default: "" Importance: low Dynamic update: read-only A list of classes to use as Yammer metrics custom reporters. The reporters should implement kafka.metrics.KafkaMetricsReporter trait. If a client wants to expose JMX operations on a custom reporter, the custom reporter needs to additionally implement an MBean trait that extends kafka.metrics.KafkaMetricsReporterMBean trait so that the registered MBean is compliant with the standard MBean convention. listener.security.protocol.map Type: string Default: PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL Importance: low Dynamic update: per-broker Map between listener names and security protocols. This must be defined for the same security protocol to be usable in more than one port or IP. For example, internal and external traffic can be separated even if SSL is required for both. Concretely, the user could define listeners with names INTERNAL and EXTERNAL and this property as: INTERNAL:SSL,EXTERNAL:SSL . As shown, key and value are separated by a colon and map entries are separated by commas. Each listener name should only appear once in the map. Different security (SSL and SASL) settings can be configured for each listener by adding a normalised prefix (the listener name is lowercased) to the config name. For example, to set a different keystore for the INTERNAL listener, a config with name listener.name.internal.ssl.keystore.location would be set. If the config for the listener name is not set, the config will fallback to the generic config (i.e. ssl.keystore.location ). Note that in KRaft a default mapping from the listener names defined by controller.listener.names to PLAINTEXT is assumed if no explicit mapping is provided and no other security protocol is in use. log.message.downconversion.enable Type: boolean Default: true Importance: low Dynamic update: cluster-wide This configuration controls whether down-conversion of message formats is enabled to satisfy consume requests. When set to false , broker will not perform down-conversion for consumers expecting an older message format. 
The broker responds with UNSUPPORTED_VERSION error for consume requests from such older clients. This configuration does not apply to any message format conversion that might be required for replication to followers. metric.reporters Type: list Default: "" Importance: low Dynamic update: cluster-wide A list of classes to use as metrics reporters. Implementing the org.apache.kafka.common.metrics.MetricsReporter interface allows plugging in classes that will be notified of new metric creation. The JmxReporter is always included to register JMX statistics. metrics.num.samples Type: int Default: 2 Valid Values: [1,... ] Importance: low Dynamic update: read-only The number of samples maintained to compute metrics. metrics.recording.level Type: string Default: INFO Importance: low Dynamic update: read-only The highest recording level for metrics. metrics.sample.window.ms Type: long Default: 30000 (30 seconds) Valid Values: [1,... ] Importance: low Dynamic update: read-only The window of time a metrics sample is computed over. password.encoder.cipher.algorithm Type: string Default: AES/CBC/PKCS5Padding Importance: low Dynamic update: read-only The Cipher algorithm used for encoding dynamically configured passwords. password.encoder.iterations Type: int Default: 4096 Valid Values: [1024,... ] Importance: low Dynamic update: read-only The iteration count used for encoding dynamically configured passwords. password.encoder.key.length Type: int Default: 128 Valid Values: [8,... ] Importance: low Dynamic update: read-only The key length used for encoding dynamically configured passwords. password.encoder.keyfactory.algorithm Type: string Default: null Importance: low Dynamic update: read-only The SecretKeyFactory algorithm used for encoding dynamically configured passwords. Default is PBKDF2WithHmacSHA512 if available and PBKDF2WithHmacSHA1 otherwise. quota.window.num Type: int Default: 11 Valid Values: [1,... ] Importance: low Dynamic update: read-only The number of samples to retain in memory for client quotas. quota.window.size.seconds Type: int Default: 1 Valid Values: [1,... ] Importance: low Dynamic update: read-only The time span of each sample for client quotas. replication.quota.window.num Type: int Default: 11 Valid Values: [1,... ] Importance: low Dynamic update: read-only The number of samples to retain in memory for replication quotas. replication.quota.window.size.seconds Type: int Default: 1 Valid Values: [1,... ] Importance: low Dynamic update: read-only The time span of each sample for replication quotas. sasl.login.connect.timeout.ms Type: int Default: null Importance: low Dynamic update: read-only The (optional) value in milliseconds for the external authentication provider connection timeout. Currently applies only to OAUTHBEARER. sasl.login.read.timeout.ms Type: int Default: null Importance: low Dynamic update: read-only The (optional) value in milliseconds for the external authentication provider read timeout. Currently applies only to OAUTHBEARER. sasl.login.retry.backoff.max.ms Type: long Default: 10000 (10 seconds) Importance: low Dynamic update: read-only The (optional) value in milliseconds for the maximum wait between login attempts to the external authentication provider. Login uses an exponential backoff algorithm with an initial wait based on the sasl.login.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.login.retry.backoff.max.ms setting. Currently applies only to OAUTHBEARER.
sasl.login.retry.backoff.ms Type: long Default: 100 Importance: low Dynamic update: read-only The (optional) value in milliseconds for the initial wait between login attempts to the external authentication provider. Login uses an exponential backoff algorithm with an initial wait based on the sasl.login.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.login.retry.backoff.max.ms setting. Currently applies only to OAUTHBEARER. sasl.oauthbearer.clock.skew.seconds Type: int Default: 30 Importance: low Dynamic update: read-only The (optional) value in seconds to allow for differences between the time of the OAuth/OIDC identity provider and the broker. sasl.oauthbearer.expected.audience Type: list Default: null Importance: low Dynamic update: read-only The (optional) comma-delimited setting for the broker to use to verify that the JWT was issued for one of the expected audiences. The JWT will be inspected for the standard OAuth "aud" claim and if this value is set, the broker will match the value from JWT's "aud" claim to see if there is an exact match. If there is no match, the broker will reject the JWT and authentication will fail. sasl.oauthbearer.expected.issuer Type: string Default: null Importance: low Dynamic update: read-only The (optional) setting for the broker to use to verify that the JWT was created by the expected issuer. The JWT will be inspected for the standard OAuth "iss" claim and if this value is set, the broker will match it exactly against what is in the JWT's "iss" claim. If there is no match, the broker will reject the JWT and authentication will fail. sasl.oauthbearer.jwks.endpoint.refresh.ms Type: long Default: 3600000 (1 hour) Importance: low Dynamic update: read-only The (optional) value in milliseconds for the broker to wait between refreshing its JWKS (JSON Web Key Set) cache that contains the keys to verify the signature of the JWT. sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms Type: long Default: 10000 (10 seconds) Importance: low Dynamic update: read-only The (optional) value in milliseconds for the maximum wait between attempts to retrieve the JWKS (JSON Web Key Set) from the external authentication provider. JWKS retrieval uses an exponential backoff algorithm with an initial wait based on the sasl.oauthbearer.jwks.endpoint.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms setting. sasl.oauthbearer.jwks.endpoint.retry.backoff.ms Type: long Default: 100 Importance: low Dynamic update: read-only The (optional) value in milliseconds for the initial wait between JWKS (JSON Web Key Set) retrieval attempts from the external authentication provider. JWKS retrieval uses an exponential backoff algorithm with an initial wait based on the sasl.oauthbearer.jwks.endpoint.retry.backoff.ms setting and will double in wait length between attempts up to a maximum wait length specified by the sasl.oauthbearer.jwks.endpoint.retry.backoff.max.ms setting. sasl.oauthbearer.scope.claim.name Type: string Default: scope Importance: low Dynamic update: read-only The OAuth claim for the scope is often named "scope", but this (optional) setting can provide a different name to use for the scope included in the JWT payload's claims if the OAuth/OIDC provider uses a different name for that claim. 
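Tying the OAUTHBEARER validation settings together, a broker that validates JWTs issued by an OAuth/OIDC provider might use a fragment such as the following; the URLs and audience value are placeholders for illustration:
sasl.oauthbearer.jwks.endpoint.url=https://idp.example.com/jwks
sasl.oauthbearer.expected.issuer=https://idp.example.com
sasl.oauthbearer.expected.audience=kafka-broker
sasl.oauthbearer.clock.skew.seconds=60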
sasl.oauthbearer.sub.claim.name Type: string Default: sub Importance: low Dynamic update: read-only The OAuth claim for the subject is often named "sub", but this (optional) setting can provide a different name to use for the subject included in the JWT payload's claims if the OAuth/OIDC provider uses a different name for that claim. security.providers Type: string Default: null Importance: low Dynamic update: read-only A list of configurable creator classes each returning a provider implementing security algorithms. These classes should implement the org.apache.kafka.common.security.auth.SecurityProviderCreator interface. ssl.endpoint.identification.algorithm Type: string Default: https Importance: low Dynamic update: per-broker The endpoint identification algorithm to validate server hostname using server certificate. ssl.engine.factory.class Type: class Default: null Importance: low Dynamic update: per-broker The class of type org.apache.kafka.common.security.auth.SslEngineFactory to provide SSLEngine objects. Default value is org.apache.kafka.common.security.ssl.DefaultSslEngineFactory. ssl.principal.mapping.rules Type: string Default: DEFAULT Importance: low Dynamic update: read-only A list of rules for mapping from distinguished name from the client certificate to short name. The rules are evaluated in order and the first rule that matches a principal name is used to map it to a short name. Any later rules in the list are ignored. By default, distinguished name of the X.500 certificate will be the principal. For more details on the format please see security authorization and acls . Note that this configuration is ignored if an extension of KafkaPrincipalBuilder is provided by the principal.builder.class configuration. ssl.secure.random.implementation Type: string Default: null Importance: low Dynamic update: per-broker The SecureRandom PRNG implementation to use for SSL cryptography operations. transaction.abort.timed.out.transaction.cleanup.interval.ms Type: int Default: 10000 (10 seconds) Valid Values: [1,... ] Importance: low Dynamic update: read-only The interval at which to rollback transactions that have timed out. transaction.remove.expired.transaction.cleanup.interval.ms Type: int Default: 3600000 (1 hour) Valid Values: [1,... ] Importance: low Dynamic update: read-only The interval at which to remove transactions that have expired due to transactional.id.expiration.ms passing. zookeeper.ssl.cipher.suites Type: list Default: null Importance: low Dynamic update: read-only Specifies the enabled cipher suites to be used in ZooKeeper TLS negotiation (csv). Overrides any explicit value set via the zookeeper.ssl.ciphersuites system property (note the single word "ciphersuites"). The default value of null means the list of enabled cipher suites is determined by the Java runtime being used. zookeeper.ssl.crl.enable Type: boolean Default: false Importance: low Dynamic update: read-only Specifies whether to enable Certificate Revocation List in the ZooKeeper TLS protocols. Overrides any explicit value set via the zookeeper.ssl.crl system property (note the shorter name). zookeeper.ssl.enabled.protocols Type: list Default: null Importance: low Dynamic update: read-only Specifies the enabled protocol(s) in ZooKeeper TLS negotiation (csv). Overrides any explicit value set via the zookeeper.ssl.enabledProtocols system property (note the camelCase). The default value of null means the enabled protocol will be the value of the zookeeper.ssl.protocol configuration property. 
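As an illustration of ssl.principal.mapping.rules described earlier in this section, a rule that maps a client certificate's distinguished name to its CN could look like the following; the pattern is a sketch, not a rule taken from this reference:
ssl.principal.mapping.rules=RULE:^CN=(.*?),.*$/$1/,DEFAULT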
zookeeper.ssl.endpoint.identification.algorithm Type: string Default: HTTPS Importance: low Dynamic update: read-only Specifies whether to enable hostname verification in the ZooKeeper TLS negotiation process, with (case-insensitively) "https" meaning ZooKeeper hostname verification is enabled and an explicit blank value meaning it is disabled (disabling it is only recommended for testing purposes). An explicit value overrides any "true" or "false" value set via the zookeeper.ssl.hostnameVerification system property (note the different name and values; true implies https and false implies blank). zookeeper.ssl.ocsp.enable Type: boolean Default: false Importance: low Dynamic update: read-only Specifies whether to enable Online Certificate Status Protocol in the ZooKeeper TLS protocols. Overrides any explicit value set via the zookeeper.ssl.ocsp system property (note the shorter name). zookeeper.ssl.protocol Type: string Default: TLSv1.2 Importance: low Dynamic update: read-only Specifies the protocol to be used in ZooKeeper TLS negotiation. An explicit value overrides any value set via the same-named zookeeper.ssl.protocol system property. zookeeper.sync.time.ms Type: int Default: 2000 (2 seconds) Importance: low Dynamic update: read-only How far a ZK follower can be behind a ZK leader. | null | https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.1/html/using_amq_streams_on_rhel/broker-configuration-parameters-str |
Chapter 38. JMS - AMQP 1.0 Kamelet Source | Chapter 38. JMS - AMQP 1.0 Kamelet Source A Kamelet that can consume events from any AMQP 1.0 compliant message broker using the Apache Qpid JMS client 38.1. Configuration Options The following table summarizes the configuration options available for the jms-amqp-10-source Kamelet: Property Name Description Type Default Example destinationName * Destination Name The JMS destination name string remoteURI * Broker URL The JMS URL string "amqp://my-host:31616" destinationType Destination Type The JMS destination type (i.e.: queue or topic) string "queue" Note Fields marked with an asterisk (*) are mandatory. 38.2. Dependencies At runtime, the jms-amqp-10-source Kamelet relies upon the presence of the following dependencies: camel:jms camel:kamelet mvn:org.apache.qpid:qpid-jms-client:0.55.0 38.3. Usage This section describes how you can use the jms-amqp-10-source . 38.3.1. Knative Source You can use the jms-amqp-10-source Kamelet as a Knative source by binding it to a Knative object. jms-amqp-10-source-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: jms-amqp-10-source-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: jms-amqp-10-source properties: destinationName: "The Destination Name" remoteURI: "amqp://my-host:31616" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel 38.3.1.1. Prerequisite Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 38.3.1.2. Procedure for using the cluster CLI Save the jms-amqp-10-source-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the source by using the following command: oc apply -f jms-amqp-10-source-binding.yaml 38.3.1.3. Procedure for using the Kamel CLI Configure and run the source by using the following command: kamel bind jms-amqp-10-source -p "source.destinationName=The Destination Name" -p "source.remoteURI=amqp://my-host:31616" channel:mychannel This command creates the KameletBinding in the current namespace on the cluster. 38.3.2. Kafka Source You can use the jms-amqp-10-source Kamelet as a Kafka source by binding it to a Kafka topic. jms-amqp-10-source-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: jms-amqp-10-source-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: jms-amqp-10-source properties: destinationName: "The Destination Name" remoteURI: "amqp://my-host:31616" sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic 38.3.2.1. Prerequisites Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Make also sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 38.3.2.2. Procedure for using the cluster CLI Save the jms-amqp-10-source-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the source by using the following command: oc apply -f jms-amqp-10-source-binding.yaml 38.3.2.3. Procedure for using the Kamel CLI Configure and run the source by using the following command: kamel bind jms-amqp-10-source -p "source.destinationName=The Destination Name" -p "source.remoteURI=amqp://my-host:31616" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic This command creates the KameletBinding in the current namespace on the cluster. 38.4. 
Kamelet source file https://github.com/openshift-integration/kamelet-catalog/jms-amqp-10-source.kamelet.yaml | [
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: jms-amqp-10-source-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: jms-amqp-10-source properties: destinationName: \"The Destination Name\" remoteURI: \"amqp://my-host:31616\" sink: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel",
"apply -f jms-amqp-10-source-binding.yaml",
"kamel bind jms-amqp-10-source -p \"source.destinationName=The Destination Name\" -p \"source.remoteURI=amqp://my-host:31616\" channel:mychannel",
"apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: jms-amqp-10-source-binding spec: source: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: jms-amqp-10-source properties: destinationName: \"The Destination Name\" remoteURI: \"amqp://my-host:31616\" sink: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic",
"apply -f jms-amqp-10-source-binding.yaml",
"kamel bind jms-amqp-10-source -p \"source.destinationName=The Destination Name\" -p \"source.remoteURI=amqp://my-host:31616\" kafka.strimzi.io/v1beta1:KafkaTopic:my-topic"
]
| https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.5/html/kamelets_reference/jms-source |
Chapter 9. Technology Previews | Chapter 9. Technology Previews This part provides a list of all Technology Previews available in Red Hat Enterprise Linux 9. For information on Red Hat scope of support for Technology Preview features, see Technology Preview Features Support Scope . 9.1. Installer and image creation NVMe over Fibre Channel devices are now available in RHEL installer as a Technology Preview You can now add NVMe over Fibre Channel devices to your RHEL installation as a Technology Preview. In RHEL Installer, you can select these devices under the NVMe Fabrics Devices section while adding disks on the Installation Destination screen. Bugzilla:2107346 9.2. Shells and command-line tools GIMP available as a Technology Preview in RHEL 9 GNU Image Manipulation Program (GIMP) 2.99.8 is now available in RHEL 9 as a Technology Preview. The gimp package version 2.99.8 is a pre-release version with a set of improvements, but a limited set of features and no guarantee for stability. As soon as the official GIMP 3 is released, it will be introduced into RHEL 9 as an update of this pre-release version. In RHEL 9, you can install gimp easily as an RPM package. Bugzilla:2047161 9.3. Infrastructure services Socket API for TuneD available as a Technology Preview The socket API for controlling TuneD through Unix domain socket is now available as a Technology Preview. The socket API maps one-to-one with the D-Bus API and provides an alternative communication method for cases where D-Bus is not available. By using the socket API, you can control the TuneD daemon to optimize the performance, and change the values of various tuning parameters. The socket API is disabled by default, you can enable it in the tuned-main.conf file. Bugzilla:2113900 9.4. Security gnutls now uses KTLS as a Technology Preview The updated gnutls packages can use Kernel TLS (KTLS) for accelerating data transfer on encrypted channels as a Technology Preview. To enable KTLS, add the tls.ko kernel module using the modprobe command, and create a new configuration file /etc/crypto-policies/local.d/gnutls-ktls.txt for the system-wide cryptographic policies with the following content: Note that the current version does not support updating traffic keys through TLS KeyUpdate messages, which impacts the security of AES-GCM ciphersuites. See the RFC 7841 - TLS 1.3 document for more information. Bugzilla:2042009 9.5. Networking WireGuard VPN is available as a Technology Preview WireGuard, which Red Hat provides as an unsupported Technology Preview, is a high-performance VPN solution that runs in the Linux kernel. It uses modern cryptography and is easier to configure than other VPN solutions. Additionally, the small code-basis of WireGuard reduces the surface for attacks and, therefore, improves the security. For further details, see Setting up a WireGuard VPN . Bugzilla:1613522 KTLS available as a Technology Preview RHEL provides Kernel Transport Layer Security (KTLS) as a Technology Preview. KTLS handles TLS records using the symmetric encryption or decryption algorithms in the kernel for the AES-GCM cipher. KTLS also includes the interface for offloading TLS record encryption to Network Interface Controllers (NICs) that provides this functionality. Bugzilla:1570255 The systemd-resolved service is available as a Technology Preview The systemd-resolved service provides name resolution to local applications. 
The service implements a caching and validating DNS stub resolver, a Link-Local Multicast Name Resolution (LLMNR), and Multicast DNS resolver and responder. Note that systemd-resolved is an unsupported Technology Preview. Bugzilla:2020529 9.6. Kernel SGX available as a Technology Preview Software Guard Extensions (SGX) is an Intel(R) technology for protecting software code and data from disclosure and modification. The RHEL kernel partially provides the SGX v1 and v1.5 functionality. Version 1 enables platforms using the Flexible Launch Control mechanism to use the SGX technology. Bugzilla:1874182 The Intel data streaming accelerator driver for kernel is available as a Technology Preview The Intel data streaming accelerator driver (IDXD) for the kernel is currently available as a Technology Preview. It is an Intel CPU integrated accelerator and includes the shared work queue with process address space ID (pasid) submission and shared virtual memory (SVM). Bugzilla:2030412 The Soft-iWARP driver is available as a Technology Preview Soft-iWARP (siw) is a software, Internet Wide-area RDMA Protocol (iWARP), kernel driver for Linux. Soft-iWARP implements the iWARP protocol suite over the TCP/IP network stack. This protocol suite is fully implemented in software and does not require specific Remote Direct Memory Access (RDMA) hardware. Soft-iWARP enables a system with a standard Ethernet adapter to connect to an iWARP adapter or to another system that already has Soft-iWARP installed. Bugzilla:2023416 SGX available as a Technology Preview Software Guard Extensions (SGX) is an Intel(R) technology for protecting software code and data from disclosure and modification. The RHEL kernel partially provides the SGX v1 and v1.5 functionality. Version 1 enables platforms using the Flexible Launch Control mechanism to use the SGX technology. Version 2 adds Enclave Dynamic Memory Management (EDMM). Notable features include: Modifying EPCM permissions of regular enclave pages that belong to an initialized enclave. Dynamic addition of regular enclave pages to an initialized enclave. Expanding an initialized enclave to accommodate more threads. Removing regular and TCS pages from an initialized enclave. Bugzilla:1660337 rvu_af , rvu_nicpf , and rvu_nicvf available as Technology Preview The following kernel modules are available as Technology Preview for the Marvell OCTEON TX2 Infrastructure Processor family: rvu_nicpf - Marvell OcteonTX2 NIC Physical Function driver rvu_nicvf - Marvell OcteonTX2 NIC Virtual Function driver rvu_af - Marvell OcteonTX2 RVU Admin Function driver Bugzilla:2040643 9.7. File systems and storage DAX is now available for ext4 and XFS as a Technology Preview In RHEL 9, the DAX file system is available as a Technology Preview. DAX provides a means for an application to directly map persistent memory into its address space. To use DAX, a system must have some form of persistent memory available, usually in the form of one or more Non-Volatile Dual In-line Memory Modules (NVDIMMs), and a DAX-compatible file system must be created on the NVDIMM(s). Also, the file system must be mounted with the dax mount option. Then, an mmap of a file on the dax-mounted file system results in a direct mapping of storage into the application's address space. Bugzilla:1995338 Stratis is available as a Technology Preview Stratis is a local storage manager.
It provides managed file systems on top of pools of storage with additional features to the user: Manage snapshots and thin provisioning Automatically grow file system sizes as needed Maintain file systems To administer Stratis storage, use the stratis utility, which communicates with the stratisd background service. Stratis is provided as a Technology Preview. For more information, see the Stratis documentation: Setting up Stratis file systems . Bugzilla:2041558 NVMe-oF Discovery Service features available as a Technology Preview The NVMe-oF Discovery Service features, defined in the NVMexpress.org Technical Proposals (TP) 8013 and 8014, are available as a Technology Preview. To preview these features, use the nvme-cli 2.0 package and attach the host to an NVMe-oF target device that implements TP-8013 or TP-8014. For more information about TP-8013 and TP-8014, see the NVM Express 2.0 Ratified TPs from the https://nvmexpress.org/specifications/ website. Bugzilla:2021672 nvme-stas package available as a Technology Preview The nvme-stas package, which is a Central Discovery Controller (CDC) client for Linux, is now available as a Technology Preview. It handles Asynchronous Event Notifications (AEN), Automated NVMe subsystem connection controls, Error handling and reporting, and Automatic ( zeroconf ) and Manual configuration. This package consists of two daemons, Storage Appliance Finder ( stafd ) and Storage Appliance Connector ( stacd ). Bugzilla:1893841 NVMe TP 8006 in-band authentication available as a Technology Preview Implementing Non-Volatile Memory Express (NVMe) TP 8006, which is an in-band authentication for NVMe over Fabrics (NVMe-oF) is now available as an unsupported Technology Preview. The NVMe Technical Proposal 8006 defines the DH-HMAC-CHAP in-band authentication protocol for NVMe-oF, which is provided with this enhancement. For more information, see the dhchap-secret and dhchap-ctrl-secret option descriptions in the nvme-connect(1) man page. Bugzilla:2027304 9.8. Compilers and development tools jmc-core and owasp-java-encoder available as a Technology Preview RHEL 9 is distributed with the jmc-core and owasp-java-encoder packages as Technology Preview features for the AMD and Intel 64-bit architectures. jmc-core is a library providing core APIs for Java Development Kit (JDK) Mission Control, including libraries for parsing and writing JDK Flight Recording files, as well as libraries for Java Virtual Machine (JVM) discovery through Java Discovery Protocol (JDP). The owasp-java-encoder package provides a collection of high-performance low-overhead contextual encoders for Java. Note that since RHEL 9.2, jmc-core and owasp-java-encoder are available in the CodeReady Linux Builder (CRB) repository, which you must explicitly enable. See How to enable and make use of content within CodeReady Linux Builder for more information. Bugzilla:1980981 9.9. Identity Management DNSSEC available as Technology Preview in IdM Identity Management (IdM) servers with integrated DNS now implement DNS Security Extensions (DNSSEC), a set of extensions to DNS that enhance security of the DNS protocol. DNS zones hosted on IdM servers can be automatically signed using DNSSEC. The cryptographic keys are automatically generated and rotated. 
Users who decide to secure their DNS zones with DNSSEC are advised to read and follow these documents: DNSSEC Operational Practices, Version 2 Secure Domain Name System (DNS) Deployment Guide DNSSEC Key Rollover Timing Considerations Note that IdM servers with integrated DNS use DNSSEC to validate DNS answers obtained from other DNS servers. This might affect the availability of DNS zones that are not configured in accordance with recommended naming practices. Bugzilla:2084180 Identity Management JSON-RPC API available as Technology Preview An API is available for Identity Management (IdM). To view the API, IdM also provides an API browser as a Technology Preview. Previously, the IdM API was enhanced to enable multiple versions of API commands. These enhancements could change the behavior of a command in an incompatible way. Users are now able to continue using existing tools and scripts even if the IdM API changes. This enables: Administrators to use or later versions of IdM on the server than on the managing client. Developers can use a specific version of an IdM call, even if the IdM version changes on the server. In all cases, the communication with the server is possible, regardless if one side uses, for example, a newer version that introduces new options for a feature. For details on using the API, see Using the Identity Management API to Communicate with the IdM Server (TECHNOLOGY PREVIEW) . Bugzilla:2084166 sssd-idp sub-package available as a Technology Preview The sssd-idp sub-package for SSSD contains the oidc_child and krb5 idp plugins, which are client-side components that perform OAuth2 authentication against Identity Management (IdM) servers. This feature is available only with IdM servers on RHEL 9.1 and later. Bugzilla:2065693 SSSD internal krb5 idp plugin available as a Technology Preview The SSSD krb5 idp plugin allows you to authenticate against an external identity provider (IdP) using the OAuth2 protocol. This feature is available only with IdM servers on RHEL 9.1 and later. Bugzilla:2056482 RHEL IdM allows delegating user authentication to external identity providers as a Technology Preview In RHEL IdM, you can now associate users with external identity providers (IdP) that support the OAuth 2 device authorization flow. When these users authenticate with the SSSD version available in RHEL 9.1 or later, they receive RHEL IdM single sign-on capabilities with Kerberos tickets after performing authentication and authorization at the external IdP. Notable features include: Adding, modifying, and deleting references to external IdPs with ipa idp-* commands Enabling IdP authentication for users with the ipa user-mod --user-auth-type=idp command For additional information, see Using external identity providers to authenticate to IdM . Bugzilla:2069202 ACME supports automatically removing expired certificates as a Technology Preview The Automated Certificate Management Environment (ACME) service in Identity Management (IdM) adds an automatic mechanism to purge expired certificates from the certificate authority (CA) as a Technology Preview. As a result, ACME can now automatically remove expired certificates at specified intervals. Removing expired certificates is disabled by default. To enable it, enter: With this enhancement, ACME can now automatically remove expired certificates at specified intervals. Removing expired certificates is disabled by default. To enable it, enter: This removes expired certificates on the first day of every month at midnight. 
Note Expired certificates are removed after their retention period. By default, this is 30 days after expiry. For more details, see the ipa-acme-manage(1) man page. Bugzilla:2162677 9.10. Desktop GNOME for the 64-bit ARM architecture available as a Technology Preview The GNOME desktop environment is available for the 64-bit ARM architecture as a Technology Preview. You can now connect to the desktop session on a 64-bit ARM server using VNC. As a result, you can manage the server using graphical applications. A limited set of graphical applications is available on 64-bit ARM. For example: The Firefox web browser Red Hat Subscription Manager ( subscription-manager-cockpit ) Firewall Configuration ( firewall-config ) Disk Usage Analyzer ( baobab ) Using Firefox, you can connect to the Cockpit service on the server. Certain applications, such as LibreOffice, only provide a command-line interface, and their graphical interface is disabled. Jira:RHELPLAN-27394 GNOME for the IBM Z architecture available as a Technology Preview The GNOME desktop environment is available for the IBM Z architecture as a Technology Preview. You can now connect to the desktop session on an IBM Z server using VNC. As a result, you can manage the server using graphical applications. A limited set of graphical applications is available on IBM Z. For example: The Firefox web browser Red Hat Subscription Manager ( subscription-manager-cockpit ) Firewall Configuration ( firewall-config ) Disk Usage Analyzer ( baobab ) Using Firefox, you can connect to the Cockpit service on the server. Certain applications, such as LibreOffice, only provide a command-line interface, and their graphical interface is disabled. Jira:RHELPLAN-27737 9.11. Graphics infrastructures Intel Arc A-Series graphics available as a Technology Preview Intel Arc A-Series graphics, also known as Alchemist or DG2, are now available as a Technology Preview. To enable hardware acceleration with Intel Arc A-Series graphics, add the following option on the kernel command line: In this option, replace pci-id with either of the following: The PCI ID of your Intel GPU. The * character to enable the i915 driver with all alpha-quality hardware. Bugzilla:2041690 9.12. The web console Stratis available as a Technology Preview in the RHEL web console With this update, the Red Hat Enterprise Linux web console provides the ability to manage Stratis storage as a Technology Preview. To learn more about Stratis, see What is Stratis . Jira:RHELPLAN-122345 9.13. Virtualization Creating nested virtual machines Nested KVM virtualization is provided as a Technology Preview for KVM virtual machines (VMs) running on Intel, AMD64, and IBM Z hosts with RHEL 9. With this feature, a RHEL 7, RHEL 8, or RHEL 9 VM that runs on a physical RHEL 9 host can act as a hypervisor, and host its own VMs. Jira:RHELDOCS-17040 Intel SGX available for VMs as a Technology Preview As a Technology Preview, the Intel Software Guard Extensions (SGX) can now be configured for virtual machines (VMs) hosted on RHEL 9. SGX helps protect data integrity and confidentiality for specific processes on Intel hardware. After you set up SGX on your host, the feature is passed on to its VMs, so that the guest operating systems (OSs) can use it. Note that for a guest OS to use SGX, you must first install SGX drivers for that specific OS. In addition, SGX on your host cannot memory-encrypt VMs. 
Jira:RHELPLAN-69761 AMD SEV and SEV-ES for KVM virtual machines As a Technology Preview, RHEL 9 provides the Secure Encrypted Virtualization (SEV) feature for AMD EPYC host machines that use the KVM hypervisor. If enabled on a virtual machine (VM), SEV encrypts the VM's memory to protect the VM from access by the host. This increases the security of the VM. In addition, the enhanced Encrypted State version of SEV (SEV-ES) is also provided as Technology Preview. SEV-ES encrypts all CPU register contents when a VM stops running. This prevents the host from modifying the VM's CPU registers or reading any information from them. Note that SEV and SEV-ES work only on the 2nd generation of AMD EPYC CPUs (codenamed Rome) or later. Also note that RHEL 9 includes SEV and SEV-ES encryption, but not the SEV and SEV-ES security attestation. Jira:RHELPLAN-65217 Virtualization is now available on ARM 64 As a Technology Preview, it is now possible to create KVM virtual machines on systems using ARM 64 CPUs. Jira:RHELPLAN-103993 virtio-mem is now available on AMD64, Intel 64, and ARM 64 As a Technology Preview, RHEL 9 introduces the virtio-mem feature on AMD64, Intel 64, and ARM 64 systems. Using virtio-mem makes it possible to dynamically add or remove host memory in virtual machines (VMs). To use virtio-mem , define virtio-mem memory devices in the XML configuration of a VM and use the virsh update-memory-device command to request memory device size changes while the VM is running. To see the current memory size exposed by such memory devices to a running VM, view the XML configuration of the VM. Bugzilla:2014487 , Bugzilla:2044172 , Bugzilla:2044162 Intel TDX in RHEL guests As a Technology Preview, the Intel Trust Domain Extension (TDX) feature can now be used in RHEL 9.2 guest operating systems. If the host system supports TDX, you can deploy hardware-isolated RHEL 9 virtual machines (VMs), called trust domains (TDs). Note, however, that TDX currently does not work with kdump , and enabling TDX will cause kdump to fail on the VM. Bugzilla:1955275 A unified kernel image of RHEL is now available as a Technology Preview As a Technology Preview, you can now obtain the RHEL kernel as a unified kernel image (UKI) for virtual machines (VMs). A unified kernel image combines the kernel, initramfs, and kernel command line into a single signed binary file. UKIs can be used in virtualized and cloud environments, especially in confidential VMs where strong SecureBoot capabilities are required. The UKI is available as a kernel-uki-virt package in RHEL 9 repositories. Currently, the RHEL UKI can only be used in a UEFI boot configuration. Bugzilla:2142102 Intel vGPU available as a Technology Preview As a Technology Preview, it is possible to divide a physical Intel GPU device into multiple virtual devices referred to as mediated devices . These mediated devices can then be assigned to multiple virtual machines (VMs) as virtual GPUs. As a result, these VMs share the performance of a single physical Intel GPU. Note that this feature is deprecated and will be removed entirely in a future RHEL release. Jira:RHELDOCS-17050 9.14. RHEL in cloud environments RHEL is now available on Azure confidential VMs as a Technology Preview With the updated RHEL kernel, you can now create and run RHEL confidential virtual machines (VMs) on Microsoft Azure as a Technology Preview. The newly added unified kernel image (UKI) now enables booting encrypted confidential VM images on Azure. 
The UKI is available as a kernel-uki-virt package in RHEL 9 repositories. Currently, the RHEL UKI can only be used in a UEFI boot configuration. Jira:RHELPLAN-139800 9.15. Containers Quadlet in Podman is now available as a Technology Preview Beginning with Podman v4.4, you can use Quadlet to automatically generate a systemd service file from the container description as a Technology Preview. The container description is in the systemd unit file format. The description focuses on the relevant container details and hides the technical complexity of running containers under systemd . The Quadlets are easier to write and maintain than the systemd unit files. For more details, see the upstream documentation and Make systemd better for Podman with Quadlet . Jira:RHELPLAN-148394 Clients for sigstore signatures with Fulcio and Rekor are now available as a Technology Preview With Fulcio and Rekor servers, you can now create signatures by using short-term certificates based on an OpenID Connect (OIDC) server authentication, instead of manually managing a private key. Clients for sigstore signatures with Fulcio and Rekor are now available as a Technology Preview. This added functionality is the client side support only, and does not include either the Fulcio or Rekor servers. Add the fulcio section in the policy.json file. To sign container images, use the podman push --sign-by-sigstore=file.yml or skopeo copy --sign-by-sigstore= file.yml commands, where file.yml is the sigstore signing parameter file. To verify signatures, add the fulcio section and the rekorPublicKeyPath or rekorPublicKeyData fields in the policy.json file. For more information, see containers-policy.json man page. Jira:RHELPLAN-136611 The podman-machine command is unsupported The podman-machine command for managing virtual machines, is available only as a Technology Preview. Instead, run Podman directly from the command line. Jira:RHELDOCS-16861 | [
"[global] ktls = true",
"ipa-acme-manage pruning --enable --cron \"0 0 1 * *\"",
"i915.force_probe= pci-id"
]
| https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/9.2_release_notes/technology-previews |
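A minimal sketch (not part of the original release note) of how the gnutls KTLS steps described in the row above might be combined on a RHEL 9 host; the use of sudo, tee, and printf is an illustrative assumption, and only the two steps named in the entry are shown.

    # Load the kernel TLS module (tls.ko) as described in the gnutls KTLS entry.
    sudo modprobe tls
    # Create the documented crypto-policies fragment for system-wide policies.
    printf '[global]\nktls = true\n' | sudo tee /etc/crypto-policies/local.d/gnutls-ktls.txt
    # Depending on the setup, the system-wide crypto policy may need to be regenerated afterwards.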
Chapter 1. Operators overview | Chapter 1. Operators overview Operators are among the most important components of OpenShift Container Platform. Operators are the preferred method of packaging, deploying, and managing services on the control plane. They can also provide advantages to applications that users run. Operators integrate with Kubernetes APIs and CLI tools such as kubectl and oc commands. They provide the means of monitoring applications, performing health checks, managing over-the-air (OTA) updates, and ensuring that applications remain in your specified state. While both follow similar Operator concepts and goals, Operators in OpenShift Container Platform are managed by two different systems, depending on their purpose: Cluster Operators, which are managed by the Cluster Version Operator (CVO), are installed by default to perform cluster functions. Optional add-on Operators, which are managed by Operator Lifecycle Manager (OLM), can be made accessible for users to run in their applications. With Operators, you can create applications to monitor the running services in the cluster. Operators are designed specifically for your applications. Operators implement and automate the common Day 1 operations such as installation and configuration as well as Day 2 operations such as autoscaling up and down and creating backups. All these activities are in a piece of software running inside your cluster. 1.1. For developers As a developer, you can perform the following Operator tasks: Install Operator SDK CLI . Create Go-based Operators , Ansible-based Operators , Java-based Operators , and Helm-based Operators . Use Operator SDK to build, test, and deploy an Operator . Install and subscribe an Operator to your namespace . Create an application from an installed Operator through the web console . Additional resources Machine deletion lifecycle hook examples for Operator developers 1.2. For administrators As a cluster administrator, you can perform the following Operator tasks: Manage custom catalogs . Allow non-cluster administrators to install Operators . Install an Operator from OperatorHub . View Operator status . Manage Operator conditions . Upgrade installed Operators . Delete installed Operators . Configure proxy support . Use Operator Lifecycle Manager on restricted networks . To know all about the cluster Operators that Red Hat provides, see Cluster Operators reference . 1.3. steps To understand more about Operators, see What are Operators? | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/operators/operators-overview |
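The split described above (CVO-managed cluster Operators versus OLM-managed add-on Operators) can be seen on a running cluster with standard oc queries; this is an illustrative sketch rather than part of the chapter, and the output depends on the cluster.

    oc get clusteroperators            # cluster Operators managed by the Cluster Version Operator (CVO)
    oc get csv --all-namespaces        # add-on Operators installed through OLM (ClusterServiceVersions)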
probe::ipmib.ReasmReqds | probe::ipmib.ReasmReqds Name probe::ipmib.ReasmReqds - Count the number of packet fragment reassembly requests Synopsis ipmib.ReasmReqds Values op value to be added to the counter (default value of 1) skb pointer to the struct sk_buff being acted on Description The packet pointed to by skb is filtered by the function ipmib_filter_key . If the packet passes the filter, it is counted in the global ReasmReqds (equivalent to SNMP's MIB IPSTATS_MIB_REASMREQDS) | null | https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-ipmib-reasmreqds
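A minimal usage sketch for the probe documented above, assuming SystemTap is installed; the ten-second window and the output format are illustrative choices, not part of the reference entry.

    # Count IP fragment reassembly requests for ten seconds, then print the total.
    stap -e 'global n
             probe ipmib.ReasmReqds { n <<< op }
             probe timer.s(10) { printf("ReasmReqds: %d\n", @sum(n)); exit() }'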
probe::stap.system.return | probe::stap.system.return Name probe::stap.system.return - Finished a command from stap Synopsis stap.system.return Values ret a return code associated with running waitpid on the spawned process; a non-zero value indicates an error Description Fires just before the return of the stap_system function, after waitpid. | null | https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-stap-system-return
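A minimal sketch for the probe above, again assuming SystemTap is installed; reporting only non-zero return codes is an illustrative choice.

    # Log the waitpid return code whenever a command spawned through stap_system finishes with an error.
    stap -e 'probe stap.system.return { if (ret != 0) printf("stap_system child returned %d\n", ret) }'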
Chapter 4. Cluster Network Operator in OpenShift Container Platform | Chapter 4. Cluster Network Operator in OpenShift Container Platform The Cluster Network Operator (CNO) deploys and manages the cluster network components on an OpenShift Container Platform cluster, including the Container Network Interface (CNI) default network provider plugin selected for the cluster during installation. 4.1. Cluster Network Operator The Cluster Network Operator implements the network API from the operator.openshift.io API group. The Operator deploys the OpenShift SDN default Container Network Interface (CNI) network provider plugin, or the default network provider plugin that you selected during cluster installation, by using a daemon set. Procedure The Cluster Network Operator is deployed during installation as a Kubernetes Deployment . Run the following command to view the Deployment status: USD oc get -n openshift-network-operator deployment/network-operator Example output NAME READY UP-TO-DATE AVAILABLE AGE network-operator 1/1 1 1 56m Run the following command to view the state of the Cluster Network Operator: USD oc get clusteroperator/network Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE network 4.5.4 True False False 50m The following fields provide information about the status of the operator: AVAILABLE , PROGRESSING , and DEGRADED . The AVAILABLE field is True when the Cluster Network Operator reports an available status condition. 4.2. Viewing the cluster network configuration Every new OpenShift Container Platform installation has a network.config object named cluster . Procedure Use the oc describe command to view the cluster network configuration: USD oc describe network.config/cluster Example output Name: cluster Namespace: Labels: <none> Annotations: <none> API Version: config.openshift.io/v1 Kind: Network Metadata: Self Link: /apis/config.openshift.io/v1/networks/cluster Spec: 1 Cluster Network: Cidr: 10.128.0.0/14 Host Prefix: 23 Network Type: OpenShiftSDN Service Network: 172.30.0.0/16 Status: 2 Cluster Network: Cidr: 10.128.0.0/14 Host Prefix: 23 Cluster Network MTU: 8951 Network Type: OpenShiftSDN Service Network: 172.30.0.0/16 Events: <none> 1 The Spec field displays the configured state of the cluster network. 2 The Status field displays the current state of the cluster network configuration. 4.3. Viewing Cluster Network Operator status You can inspect the status and view the details of the Cluster Network Operator using the oc describe command. Procedure Run the following command to view the status of the Cluster Network Operator: USD oc describe clusteroperators/network 4.4. Viewing Cluster Network Operator logs You can view Cluster Network Operator logs by using the oc logs command. Procedure Run the following command to view the logs of the Cluster Network Operator: USD oc logs --namespace=openshift-network-operator deployment/network-operator 4.5. Cluster Network Operator configuration The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster . The CR specifies the fields for the Network API in the operator.openshift.io API group. The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group and these fields cannot be changed: clusterNetwork IP address pools from which pod IP addresses are allocated. serviceNetwork IP address pool for services. 
defaultNetwork.type Cluster network provider, such as OpenShift SDN or OVN-Kubernetes. Note After cluster installation, you cannot modify the fields listed in the section. You can specify the cluster network provider configuration for your cluster by setting the fields for the defaultNetwork object in the CNO object named cluster . 4.5.1. Cluster Network Operator configuration object The fields for the Cluster Network Operator (CNO) are described in the following table: Table 4.1. Cluster Network Operator configuration object Field Type Description metadata.name string The name of the CNO object. This name is always cluster . spec.clusterNetwork array A list specifying the blocks of IP addresses from which pod IP addresses are allocated and the subnet prefix length assigned to each individual node in the cluster. For example: spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23 This value is ready-only and inherited from the Network.config.openshift.io object named cluster during cluster installation. spec.serviceNetwork array A block of IP addresses for services. The OpenShift SDN and OVN-Kubernetes Container Network Interface (CNI) network providers support only a single IP address block for the service network. For example: spec: serviceNetwork: - 172.30.0.0/14 This value is ready-only and inherited from the Network.config.openshift.io object named cluster during cluster installation. spec.defaultNetwork object Configures the Container Network Interface (CNI) cluster network provider for the cluster network. spec.kubeProxyConfig object The fields for this object specify the kube-proxy configuration. If you are using the OVN-Kubernetes cluster network provider, the kube-proxy configuration has no effect. defaultNetwork object configuration The values for the defaultNetwork object are defined in the following table: Table 4.2. defaultNetwork object Field Type Description type string Either OpenShiftSDN or OVNKubernetes . The cluster network provider is selected during installation. This value cannot be changed after cluster installation. Note OpenShift Container Platform uses the OpenShift SDN Container Network Interface (CNI) cluster network provider by default. openshiftSDNConfig object This object is only valid for the OpenShift SDN cluster network provider. ovnKubernetesConfig object This object is only valid for the OVN-Kubernetes cluster network provider. Configuration for the OpenShift SDN CNI cluster network provider The following table describes the configuration fields for the OpenShift SDN Container Network Interface (CNI) cluster network provider. Table 4.3. openshiftSDNConfig object Field Type Description mode string The network isolation mode for OpenShift SDN. mtu integer The maximum transmission unit (MTU) for the VXLAN overlay network. This value is normally configured automatically. vxlanPort integer The port to use for all VXLAN packets. The default value is 4789 . Note You can only change the configuration for your cluster network provider during cluster installation. Example OpenShift SDN configuration defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789 Configuration for the OVN-Kubernetes CNI cluster network provider The following table describes the configuration fields for the OVN-Kubernetes CNI cluster network provider. Table 4.4. 
ovnKubernetesConfig object Field Type Description mtu integer The maximum transmission unit (MTU) for the Geneve (Generic Network Virtualization Encapsulation) overlay network. This value is normally configured automatically. genevePort integer The UDP port for the Geneve overlay network. ipsecConfig object If the field is present, IPsec is enabled for the cluster. policyAuditConfig object Specify a configuration object for customizing network policy audit logging. If unset, the defaults audit log settings are used. gatewayConfig object Optional: Specify a configuration object for customizing how egress traffic is sent to the node gateway. Note Table 4.5. policyAuditConfig object Field Type Description rateLimit integer The maximum number of messages to generate every second per node. The default value is 20 messages per second. maxFileSize integer The maximum size for the audit log in bytes. The default value is 50000000 or 50 MB. destination string One of the following additional audit log targets: libc The libc syslog() function of the journald process on the host. udp:<host>:<port> A syslog server. Replace <host>:<port> with the host and port of the syslog server. unix:<file> A Unix Domain Socket file specified by <file> . null Do not send the audit logs to any additional target. syslogFacility string The syslog facility, such as kern , as defined by RFC5424. The default value is local0 . Table 4.6. gatewayConfig object Field Type Description routingViaHost boolean Set this field to true to send egress traffic from pods to the host networking stack. For highly-specialized installations and applications that rely on manually configured routes in the kernel routing table, you might want to route egress traffic to the host networking stack. By default, egress traffic is processed in OVN to exit the cluster and is not affected by specialized routes in the kernel routing table. The default value is false . This field has an interaction with the Open vSwitch hardware offloading feature. If you set this field to true , you do not receive the performance benefits of the offloading because egress traffic is processed by the host networking stack. Note You can only change the configuration for your cluster network provider during cluster installation, except for the gatewayConfig field that can be changed at runtime as a post-installation activity. Example OVN-Kubernetes configuration with IPSec enabled defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {} kubeProxyConfig object configuration The values for the kubeProxyConfig object are defined in the following table: Table 4.7. kubeProxyConfig object Field Type Description iptablesSyncPeriod string The refresh period for iptables rules. The default value is 30s . Valid suffixes include s , m , and h and are described in the Go time package documentation. Note Because of performance improvements introduced in OpenShift Container Platform 4.3 and greater, adjusting the iptablesSyncPeriod parameter is no longer necessary. proxyArguments.iptables-min-sync-period array The minimum duration before refreshing iptables rules. This field ensures that the refresh does not happen too frequently. Valid suffixes include s , m , and h and are described in the Go time package . The default value is: kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s 4.5.2. 
Cluster Network Operator example configuration A complete CNO configuration is specified in the following example: Example Cluster Network Operator object apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: clusterNetwork: 1 - cidr: 10.128.0.0/14 hostPrefix: 23 serviceNetwork: 2 - 172.30.0.0/16 defaultNetwork: 3 type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789 kubeProxyConfig: iptablesSyncPeriod: 30s proxyArguments: iptables-min-sync-period: - 0s 1 2 3 Configured only during cluster installation. 4.6. Additional resources Network API in the operator.openshift.io API group | [
"oc get -n openshift-network-operator deployment/network-operator",
"NAME READY UP-TO-DATE AVAILABLE AGE network-operator 1/1 1 1 56m",
"oc get clusteroperator/network",
"NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE network 4.5.4 True False False 50m",
"oc describe network.config/cluster",
"Name: cluster Namespace: Labels: <none> Annotations: <none> API Version: config.openshift.io/v1 Kind: Network Metadata: Self Link: /apis/config.openshift.io/v1/networks/cluster Spec: 1 Cluster Network: Cidr: 10.128.0.0/14 Host Prefix: 23 Network Type: OpenShiftSDN Service Network: 172.30.0.0/16 Status: 2 Cluster Network: Cidr: 10.128.0.0/14 Host Prefix: 23 Cluster Network MTU: 8951 Network Type: OpenShiftSDN Service Network: 172.30.0.0/16 Events: <none>",
"oc describe clusteroperators/network",
"oc logs --namespace=openshift-network-operator deployment/network-operator",
"spec: clusterNetwork: - cidr: 10.128.0.0/19 hostPrefix: 23 - cidr: 10.128.32.0/19 hostPrefix: 23",
"spec: serviceNetwork: - 172.30.0.0/14",
"defaultNetwork: type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789",
"While migrating egress traffic, you can expect some disruption to workloads and service traffic until the Cluster Network Operator (CNO) successfully rolls out the changes.",
"defaultNetwork: type: OVNKubernetes ovnKubernetesConfig: mtu: 1400 genevePort: 6081 ipsecConfig: {}",
"kubeProxyConfig: proxyArguments: iptables-min-sync-period: - 0s",
"apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: clusterNetwork: 1 - cidr: 10.128.0.0/14 hostPrefix: 23 serviceNetwork: 2 - 172.30.0.0/16 defaultNetwork: 3 type: OpenShiftSDN openshiftSDNConfig: mode: NetworkPolicy mtu: 1450 vxlanPort: 4789 kubeProxyConfig: iptablesSyncPeriod: 30s proxyArguments: iptables-min-sync-period: - 0s"
]
| https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/networking/cluster-network-operator |
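The chapter above notes that the gatewayConfig field can be changed at runtime as a post-installation activity. A hedged sketch of one way to do that follows; the merge-patch path simply mirrors the CR layout shown in the chapter, enabling routingViaHost is purely illustrative, and, as the chapter warns, some traffic disruption is expected until the CNO rolls out the change.

    # Send egress traffic through the host networking stack (OVN-Kubernetes clusters only).
    oc patch network.operator.openshift.io cluster --type=merge \
      -p '{"spec":{"defaultNetwork":{"ovnKubernetesConfig":{"gatewayConfig":{"routingViaHost":true}}}}}'
    # Inspect the resulting configuration.
    oc get network.operator.openshift.io cluster -o yaml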
Config APIs | Config APIs OpenShift Container Platform 4.12 Reference guide for config APIs Red Hat OpenShift Documentation Team | null | https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html-single/config_apis/index |