4.3. Reverting to an Ext2 File System
4.3. Reverting to an Ext2 File System To revert to an ext2 file system, use the following procedure. For simplicity, the sample commands in this section use the following value for the block device: /dev/mapper/VolGroup00-LogVol02
Procedure 4.1. Revert from ext3 to ext2
1. Unmount the partition by logging in as root and typing:
umount /dev/mapper/VolGroup00-LogVol02
2. Change the file system type to ext2 by typing the following command:
tune2fs -O ^has_journal /dev/mapper/VolGroup00-LogVol02
3. Check the partition for errors by typing the following command:
e2fsck -y /dev/mapper/VolGroup00-LogVol02
4. Mount the partition again as an ext2 file system by typing:
mount -t ext2 /dev/mapper/VolGroup00-LogVol02 /mount/point
Replace /mount/point with the mount point of the partition.
Note If a .journal file exists at the root level of the partition, delete it. To permanently change the partition to ext2, remember to update the /etc/fstab file; otherwise, it will revert back after booting.
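The following is a minimal sketch of the /etc/fstab update mentioned in the note; the mount point (/data) and the mount options shown are assumptions and must be replaced with the values from your existing entry:

grep VolGroup00-LogVol02 /etc/fstab
# Assume the existing entry looks like this (example values only):
# /dev/mapper/VolGroup00-LogVol02  /data  ext3  defaults  1 2
# Change only the file system type field for this device from ext3 to ext2:
sed -i 's|^\(/dev/mapper/VolGroup00-LogVol02[[:space:]]\+[^[:space:]]\+[[:space:]]\+\)ext3|\1ext2|' /etc/fstab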
[ "umount /dev/mapper/VolGroup00-LogVol02", "tune2fs -O ^has_journal /dev/mapper/VolGroup00-LogVol02", "e2fsck -y /dev/mapper/VolGroup00-LogVol02", "mount -t ext2 /dev/mapper/VolGroup00-LogVol02 /mount/point" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/storage_administration_guide/s1-filesystem-ext2-revert
Chapter 10. Installation configuration parameters for IBM Cloud
Chapter 10. Installation configuration parameters for IBM Cloud Before you deploy an OpenShift Container Platform cluster on IBM Cloud(R), you provide parameters to customize your cluster and the platform that hosts it. When you create the install-config.yaml file, you provide values for the required parameters through the command line. You can then modify the install-config.yaml file to customize your cluster further. 10.1. Available installation configuration parameters for IBM Cloud The following tables specify the required, optional, and IBM Cloud-specific installation configuration parameters that you can set as part of the installation process. Note After installation, you cannot modify these parameters in the install-config.yaml file. 10.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 10.1. Required parameters Parameter Description Values The API version for the install-config.yaml content. The current version is v1 . The installation program may also support older API versions. String The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters, hyphens ( - ), and periods ( . ), such as dev . The configuration for the specific platform upon which to perform the installation: aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object Get a pull secret from Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 10.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. Only IPv4 addresses are supported. Table 10.2. Network parameters Parameter Description Values The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. The Red Hat OpenShift Networking network plugin to install. OVNKubernetes . OVNKubernetes is a CNI plugin for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OVNKubernetes . The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. 
The prefix length for an IPv4 block is between 0 and 32 . The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . The IP address block for services. The default value is 172.30.0.0/16 . The OVN-Kubernetes network plugin supports only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt and IBM Power(R) Virtual Server. For libvirt, the default value is 192.168.126.0/24 . For IBM Power(R) Virtual Server, the default value is 192.168.0.0/24 . If you are deploying the cluster to an existing Virtual Private Cloud (VPC), the CIDR must contain the subnets defined in platform.ibmcloud.controlPlaneSubnets and platform.ibmcloud.computeSubnets . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 10.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 10.3. Optional parameters Parameter Description Values A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String Controls the installation of optional core cluster components. You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. For more information, see the "Cluster capabilities" page in Installing . String array Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 , v4.12 and vCurrent . The default value is vCurrent . String Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . You may specify multiple capabilities in this parameter. String array Enables workload partitioning, which isolates OpenShift Container Platform services, cluster management workloads, and infrastructure pods to run on a reserved set of CPUs. Workload partitioning can only be enabled during installation and cannot be disabled after installation. While this field enables workload partitioning, it does not configure workloads to use specific CPUs. For more information, see the Workload partitioning page in the Scalability and Performance section. None or AllNodes . None is the default value. The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String Whether to enable or disable simultaneous multithreading, or hyperthreading , on compute machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. 
Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled Required if you use compute . The name of the machine pool. worker Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. aws , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . Enables the cluster for a feature set. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates". String. The name of the feature set to enable, such as TechPreviewNoUpgrade . The configuration for the machines that comprise the control plane. Array of MachinePool objects. Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String Whether to enable or disable simultaneous multithreading, or hyperthreading , on control plane machines. By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Enabled or Disabled Required if you use controlPlane . The name of the machine pool. master Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. aws , azure , gcp , ibmcloud , nutanix , openstack , powervs , vsphere , or {} The number of control plane machines to provision. Supported values are 3 , or 1 when deploying single-node OpenShift. The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. Note Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the "Managing cloud provider credentials" entry in the Authentication and authorization content. Mint , Passthrough , Manual or an empty string ( "" ). Enable or disable FIPS mode. The default is false (disabled). If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Switching RHEL to FIPS mode . When running Red Hat Enterprise Linux (RHEL) or Red Hat Enterprise Linux CoreOS (RHCOS) booted in FIPS mode, OpenShift Container Platform core components use the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64, ppc64le, and s390x architectures. 
Note If you are using Azure File storage, you cannot enable FIPS mode. false or true Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String Specify one or more repositories that may also contain the same images. Array of strings How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . To deploy a private cluster, which cannot be accessed from the internet, set publish to Internal . The default value is External . The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . 10.1.4. Additional IBM Cloud configuration parameters Additional IBM Cloud(R) configuration parameters are described in the following table: Table 10.4. Additional IBM Cloud(R) parameters Parameter Description Values An IBM(R) Key Protect for IBM Cloud(R) (Key Protect) root key that should be used to encrypt the root (boot) volume of only control plane machines. The Cloud Resource Name (CRN) of the root key. The CRN must be enclosed in quotes (""). A Key Protect root key that should be used to encrypt the root (boot) volume of only compute machines. The CRN of the root key. The CRN must be enclosed in quotes (""). A Key Protect root key that should be used to encrypt the root (boot) volume of all of the cluster's machines. When specified as part of the default machine configuration, all managed storage classes are updated with this key. As such, data volumes that are provisioned after the installation are also encrypted using this key. The CRN of the root key. The CRN must be enclosed in quotes (""). The name of an existing resource group. By default, an installer-provisioned VPC and cluster resources are placed in this resource group. When not specified, the installation program creates the resource group for the cluster. If you are deploying the cluster into an existing VPC, the installer-provisioned cluster resources are placed in this resource group. When not specified, the installation program creates the resource group for the cluster. The VPC resources that you have provisioned must exist in a resource group that you specify using the networkResourceGroupName parameter. In either case, this resource group must only be used for a single cluster installation, as the cluster components assume ownership of all of the resources in the resource group. [ 1 ] String, for example existing_resource_group . A list of service endpoint names and URIs. By default, the installation program and cluster components use public service endpoints to access the required IBM Cloud(R) services. If network restrictions limit access to public service endpoints, you can specify an alternate service endpoint to override the default behavior. You can specify only one alternate service endpoint for each of the following services: Cloud Object Storage DNS Services Global Search Global Tagging Identity Services Key Protect Resource Controller Resource Manager VPC A valid service endpoint name and fully qualified URI. 
Valid names include: COS DNSServices GlobalServices GlobalTagging IAM KeyProtect ResourceController ResourceManager VPC The name of an existing resource group. This resource contains the existing VPC and subnets to which the cluster will be deployed. This parameter is required when deploying the cluster to a VPC that you have provisioned. String, for example existing_network_resource_group . The new dedicated host to create. If you specify a value for platform.ibmcloud.dedicatedHosts.name , this parameter is not required. Valid IBM Cloud(R) dedicated host profile, such as cx2-host-152x304 . [ 2 ] An existing dedicated host. If you specify a value for platform.ibmcloud.dedicatedHosts.profile , this parameter is not required. String, for example my-dedicated-host-name . The instance type for all IBM Cloud(R) machines. Valid IBM Cloud(R) instance type, such as bx2-8x32 . [ 2 ] The name of the existing VPC that you want to deploy your cluster to. String. The name(s) of the existing subnet(s) in your VPC that you want to deploy your control plane machines to. Specify a subnet for each availability zone. String array The name(s) of the existing subnet(s) in your VPC that you want to deploy your compute machines to. Specify a subnet for each availability zone. Subnet IDs are not supported. String array Whether you define an existing resource group, or if the installer creates one, determines how the resource group is treated when the cluster is uninstalled. If you define a resource group, the installer removes all of the installer-provisioned resources, but leaves the resource group alone; if a resource group is created as part of the installation, the installer removes all of the installer-provisioned resources and the resource group. To determine which profile best meets your needs, see Instance Profiles in the IBM(R) documentation.
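To illustrate how these parameters fit together, the following is a minimal, hypothetical sketch that writes an install-config.yaml for IBM Cloud(R). Every value shown (base domain, cluster name, resource group, pull secret, and SSH key) is a placeholder rather than a documented default, and fields not covered by the tables above are omitted:

cat > install-config.yaml <<'EOF'
apiVersion: v1
baseDomain: example.com            # cluster DNS becomes <metadata.name>.<baseDomain>
metadata:
  name: dev
networking:
  networkType: OVNKubernetes
  clusterNetwork:
  - cidr: 10.128.0.0/14
    hostPrefix: 23
  serviceNetwork:
  - 172.30.0.0/16
  machineNetwork:
  - cidr: 10.0.0.0/16
controlPlane:
  name: master
  replicas: 3
compute:
- name: worker
  replicas: 3
platform:
  ibmcloud:
    resourceGroupName: existing_resource_group   # optional; created if omitted
    # When deploying to an existing VPC, also set networkResourceGroupName,
    # vpcName, controlPlaneSubnets, and computeSubnets as described above.
publish: External
pullSecret: '{"auths": ...}'       # from Red Hat OpenShift Cluster Manager
sshKey: ssh-ed25519 AAAA...        # optional
EOF

As noted above, these parameters cannot be modified in the install-config.yaml file after installation.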
[ "apiVersion:", "baseDomain:", "metadata:", "metadata: name:", "platform:", "pullSecret:", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking:", "networking: networkType:", "networking: clusterNetwork:", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: clusterNetwork: cidr:", "networking: clusterNetwork: hostPrefix:", "networking: serviceNetwork:", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork:", "networking: machineNetwork: - cidr: 10.0.0.0/16", "networking: machineNetwork: cidr:", "additionalTrustBundle:", "capabilities:", "capabilities: baselineCapabilitySet:", "capabilities: additionalEnabledCapabilities:", "cpuPartitioningMode:", "compute:", "compute: architecture:", "compute: hyperthreading:", "compute: name:", "compute: platform:", "compute: replicas:", "featureSet:", "controlPlane:", "controlPlane: architecture:", "controlPlane: hyperthreading:", "controlPlane: name:", "controlPlane: platform:", "controlPlane: replicas:", "credentialsMode:", "fips:", "imageContentSources:", "imageContentSources: source:", "imageContentSources: mirrors:", "publish:", "sshKey:", "controlPlane: platform: ibmcloud: bootVolume: encryptionKey:", "compute: platform: ibmcloud: bootVolume: encryptionKey:", "platform: ibmcloud: defaultMachinePlatform: bootvolume: encryptionKey:", "platform: ibmcloud: resourceGroupName:", "platform: ibmcloud: serviceEndpoints: - name: url:", "platform: ibmcloud: networkResourceGroupName:", "platform: ibmcloud: dedicatedHosts: profile:", "platform: ibmcloud: dedicatedHosts: name:", "platform: ibmcloud: type:", "platform: ibmcloud: vpcName:", "platform: ibmcloud: controlPlaneSubnets:", "platform: ibmcloud: computeSubnets:" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/installing_on_ibm_cloud/installation-config-parameters-ibm-cloud-vpc
Chapter 14. Using qemu-img
Chapter 14. Using qemu-img The qemu-img command-line tool is used for formatting, modifying, and verifying various file systems used by KVM. qemu-img options and usages are highlighted in the sections that follow.
Warning Never use qemu-img to modify images in use by a running virtual machine or any other process. This may destroy the image. Also, be aware that querying an image that is being modified by another process may encounter an inconsistent state.
14.1. Checking the Disk Image To perform a consistency check on a disk image with the file name imgname, use the following command:
qemu-img check [-f format] imgname
Note Only a selected group of formats support consistency checks. These include qcow2, vdi, vhdx, vmdk, and qed.
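For example, to check a qcow2 image (the image path here is hypothetical), run:

qemu-img check -f qcow2 /var/lib/libvirt/images/guest1.qcow2

The brackets in the syntax above indicate that -f is optional; if it is omitted, qemu-img attempts to detect the image format automatically.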
[ "qemu-img check [-f format ] imgname" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/virtualization_deployment_and_administration_guide/chap-Using_qemu_img
Chapter 9. Deprecated Functionality
Chapter 9. Deprecated Functionality This chapter provides an overview of functionality that has been deprecated in all minor releases of Red Hat Enterprise Linux 7 up to Red Hat Enterprise Linux 7.9. Deprecated functionality continues to be supported until the end of life of Red Hat Enterprise Linux 7. Deprecated functionality will likely not be supported in future major releases of this product and is not recommended for new deployments. For the most recent list of deprecated functionality within a particular major release, refer to the latest version of release documentation. Deprecated hardware components are not recommended for new deployments on the current or future major releases. Hardware driver updates are limited to security and critical fixes only. Red Hat recommends replacing this hardware as soon as reasonably feasible. A package can be deprecated and not recommended for further use. Under certain circumstances, a package can be removed from a product. Product documentation then identifies more recent packages that offer functionality similar, identical, or more advanced to the one deprecated, and provides further recommendations. For details regarding differences between RHEL 7 and RHEL 8, see Considerations in adopting RHEL 8 . 9.1. Deprecated Packages The following packages are now deprecated. For information regarding replaced packages or availability in an unsupported RHEL 8 repository (if applicable), see Considerations in adopting RHEL 8 . a2ps abrt-addon-upload-watch abrt-devel abrt-gui-devel abrt-retrace-client acpid-sysvinit advancecomp adwaita-icon-theme-devel adwaita-qt-common adwaita-qt4 agg aic94xx-firmware akonadi akonadi-devel akonadi-mysql alacarte alsa-tools anaconda-widgets-devel ant-antunit ant-antunit-javadoc antlr-C++-doc antlr-python antlr-tool apache-commons-collections-javadoc apache-commons-collections-testframework apache-commons-configuration apache-commons-configuration-javadoc apache-commons-daemon apache-commons-daemon-javadoc apache-commons-daemon-jsvc apache-commons-dbcp apache-commons-dbcp-javadoc apache-commons-digester apache-commons-digester-javadoc apache-commons-jexl apache-commons-jexl-javadoc apache-commons-lang-javadoc apache-commons-pool apache-commons-pool-javadoc apache-commons-validator apache-commons-validator-javadoc apache-commons-vfs apache-commons-vfs-ant apache-commons-vfs-examples apache-commons-vfs-javadoc apache-rat apache-rat-core apache-rat-javadoc apache-rat-plugin apache-rat-tasks apr-util-nss args4j args4j-javadoc ark ark-libs asciidoc-latex at-spi at-spi-devel at-spi-python at-sysvinit atlas-static attica attica-devel audiocd-kio audiocd-kio-devel audiocd-kio-libs audiofile audiofile-devel audit-libs-python audit-libs-static authconfig authconfig-gtk authd autogen-libopts-devel automoc autotrace-devel avahi-dnsconfd avahi-glib-devel avahi-gobject-devel avahi-qt3 avahi-qt3-devel avahi-qt4 avahi-qt4-devel avahi-tools avahi-ui avahi-ui-devel avahi-ui-tools avalon-framework avalon-framework-javadoc avalon-logkit avalon-logkit-javadoc bacula-console-bat bacula-devel bacula-traymonitor baekmuk-ttf-batang-fonts baekmuk-ttf-dotum-fonts baekmuk-ttf-fonts-common baekmuk-ttf-fonts-ghostscript baekmuk-ttf-gulim-fonts baekmuk-ttf-hline-fonts base64coder base64coder-javadoc batik batik-demo batik-javadoc batik-rasterizer batik-slideshow batik-squiggle batik-svgpp batik-ttf2svg bcc-devel bcel bison-devel blas-static blas64-devel blas64-static bltk bluedevil bluedevil-autostart bmc-snmp-proxy bogofilter-bogoupgrade bridge-utils 
bsdcpio bsh-demo bsh-utils btrfs-progs btrfs-progs-devel buildnumber-maven-plugin buildnumber-maven-plugin-javadoc bwidget bzr bzr-doc cairo-tools cal10n caribou caribou-antler caribou-devel caribou-gtk2-module caribou-gtk3-module cdi-api-javadoc cdparanoia-static cdrskin ceph-common check-static cheese-libs-devel cifs-utils-devel cim-schema-docs cim-schema-docs cjkuni-ukai-fonts clutter-gst2-devel clutter-tests cmpi-bindings-pywbem cobertura cobertura-javadoc cockpit-machines-ovirt codehaus-parent codemodel codemodel-javadoc cogl-tests colord-extra-profiles colord-kde compat-cheese314 compat-dapl compat-dapl-devel compat-dapl-static compat-dapl-utils compat-db compat-db-headers compat-db47 compat-exiv2-023 compat-gcc-44 compat-gcc-44-c++ compat-gcc-44-gfortran compat-glade315 compat-glew compat-glibc compat-glibc-headers compat-gnome-desktop314 compat-grilo02 compat-libcap1 compat-libcogl-pango12 compat-libcogl12 compat-libcolord1 compat-libf2c-34 compat-libgdata13 compat-libgfortran-41 compat-libgnome-bluetooth11 compat-libgnome-desktop3-7 compat-libgweather3 compat-libical1 compat-libmediaart0 compat-libmpc compat-libpackagekit-glib2-16 compat-libstdc++-33 compat-libtiff3 compat-libupower-glib1 compat-libxcb compat-locales-sap-common compat-openldap compat-openmpi16 compat-openmpi16-devel compat-opensm-libs compat-poppler022 compat-poppler022-cpp compat-poppler022-glib compat-poppler022-qt compat-sap-c++-5 compat-sap-c++-6 compat-sap-c++-7 conman console-setup coolkey coolkey-devel cpptest cpptest-devel cppunit cppunit-devel cppunit-doc cpuid cracklib-python crda-devel crit criu-devel crypto-utils cryptsetup-python ctdb-tests cvs cvs-contrib cvs-doc cvs-inetd cvsps cyrus-imapd-devel dapl dapl-devel dapl-static dapl-utils dbus-doc dbus-python-devel dbus-tests dbusmenu-qt dbusmenu-qt-devel dbusmenu-qt-devel-docs debugmode dejagnu dejavu-lgc-sans-fonts dejavu-lgc-sans-mono-fonts dejavu-lgc-serif-fonts deltaiso dhcp-devel dialog-devel dleyna-connector-dbus-devel dleyna-core-devel dlm-devel dmraid dmraid-devel dmraid-events dmraid-events-logwatch docbook-simple docbook-slides docbook-style-dsssl docbook-utils docbook-utils-pdf docbook5-schemas docbook5-style-xsl docbook5-style-xsl-extensions docker-rhel-push-plugin dom4j dom4j-demo dom4j-javadoc dom4j-manual dovecot-pigeonhole dracut-fips dracut-fips-aesni dragon drm-utils drpmsync dtdinst e2fsprogs-static ecj edac-utils-devel efax efivar-devel egl-utils ekiga ElectricFence emacs-a2ps emacs-a2ps-el emacs-auctex emacs-auctex-doc emacs-git emacs-git-el emacs-gnuplot emacs-gnuplot-el emacs-php-mode empathy enchant-aspell enchant-voikko eog-devel epydoc espeak-devel evince-devel evince-dvi evolution-data-server-doc evolution-data-server-perl evolution-data-server-tests evolution-devel evolution-devel-docs evolution-tests expat-static expect-devel expectk farstream farstream-devel farstream-python farstream02-devel fedfs-utils-admin fedfs-utils-client fedfs-utils-common fedfs-utils-devel fedfs-utils-lib fedfs-utils-nsdbparams fedfs-utils-python fedfs-utils-server felix-bundlerepository felix-bundlerepository-javadoc felix-framework felix-framework-javadoc felix-osgi-obr felix-osgi-obr-javadoc felix-shell felix-shell-javadoc fence-sanlock festival festival-devel festival-docs festival-freebsoft-utils festival-lib festival-speechtools-devel festival-speechtools-libs festival-speechtools-utils festvox-awb-arctic-hts festvox-bdl-arctic-hts festvox-clb-arctic-hts festvox-jmk-arctic-hts festvox-kal-diphone festvox-ked-diphone festvox-rms-arctic-hts 
festvox-slt-arctic-hts file-static filebench filesystem-content finch finch-devel finger finger-server flatpak-devel flex-devel fltk-fluid fltk-static flute-javadoc folks folks-devel folks-tools fontforge-devel fontpackages-tools fonttools fop fop-javadoc fprintd-devel freeradius-python freetype-demos fros fros-gnome fros-recordmydesktop fwupd-devel fwupdate-devel gamin-python gavl-devel gcab gcc-gnat gcc-go gcc-objc gcc-objc++ gcc-plugin-devel gconf-editor gd-progs gdk-pixbuf2-tests gdm-devel gdm-pam-extensions-devel gedit-devel gedit-plugin-bookmarks gedit-plugin-bracketcompletion gedit-plugin-charmap gedit-plugin-codecomment gedit-plugin-colorpicker gedit-plugin-colorschemer gedit-plugin-commander gedit-plugin-drawspaces gedit-plugin-findinfiles gedit-plugin-joinlines gedit-plugin-multiedit gedit-plugin-smartspaces gedit-plugin-synctex gedit-plugin-terminal gedit-plugin-textsize gedit-plugin-translate gedit-plugin-wordcompletion gedit-plugins gedit-plugins-data gegl-devel geoclue geoclue-devel geoclue-doc geoclue-gsmloc geoclue-gui GeoIP GeoIP-data GeoIP-devel GeoIP-update geronimo-jaspic-spec geronimo-jaspic-spec-javadoc geronimo-jaxrpc geronimo-jaxrpc-javadoc geronimo-jms geronimo-jta geronimo-jta-javadoc geronimo-osgi-support geronimo-osgi-support-javadoc geronimo-saaj geronimo-saaj-javadoc ghostscript-chinese ghostscript-chinese-zh_CN ghostscript-chinese-zh_TW ghostscript-cups ghostscript-devel ghostscript-gtk giflib-utils gimp-data-extras gimp-help gimp-help-ca gimp-help-da gimp-help-de gimp-help-el gimp-help-en_GB gimp-help-es gimp-help-fr gimp-help-it gimp-help-ja gimp-help-ko gimp-help-nl gimp-help-nn gimp-help-pt_BR gimp-help-ru gimp-help-sl gimp-help-sv gimp-help-zh_CN git-bzr git-cvs git-gnome-keyring git-hg git-p4 gjs-tests glade glade3 glade3-libgladeui glade3-libgladeui-devel glassfish-dtd-parser glassfish-dtd-parser-javadoc glassfish-jaxb-javadoc glassfish-jsp glassfish-jsp-javadoc glew glib-networking-tests gmp-static gnome-clocks gnome-common gnome-contacts gnome-desktop3-tests gnome-devel-docs gnome-dictionary gnome-doc-utils gnome-doc-utils-stylesheets gnome-documents gnome-documents-libs gnome-icon-theme gnome-icon-theme-devel gnome-icon-theme-extras gnome-icon-theme-legacy gnome-icon-theme-symbolic gnome-packagekit gnome-packagekit-common gnome-packagekit-installer gnome-packagekit-updater gnome-python2 gnome-python2-bonobo gnome-python2-canvas gnome-python2-devel gnome-python2-gconf gnome-python2-gnome gnome-python2-gnomevfs gnome-settings-daemon-devel gnome-software-devel gnome-vfs2 gnome-vfs2-devel gnome-vfs2-smb gnome-weather gnome-weather-tests gnote gnu-efi-utils gnu-getopt gnu-getopt-javadoc gnuplot-latex gnuplot-minimal gob2 gom-devel google-noto-sans-korean-fonts google-noto-sans-simplified-chinese-fonts google-noto-sans-traditional-chinese-fonts gperftools gperftools-devel gperftools-libs gpm-static grantlee grantlee-apidocs grantlee-devel graphviz-graphs graphviz-guile graphviz-java graphviz-lua graphviz-ocaml graphviz-perl graphviz-php graphviz-python graphviz-ruby graphviz-tcl groff-doc groff-perl groff-x11 groovy groovy-javadoc grub2 grub2-ppc-modules grub2-ppc64-modules gsm-tools gsound-devel gssdp-utils gstreamer gstreamer-devel gstreamer-devel-docs gstreamer-plugins-bad-free gstreamer-plugins-bad-free-devel gstreamer-plugins-bad-free-devel-docs gstreamer-plugins-base gstreamer-plugins-base-devel gstreamer-plugins-base-devel-docs gstreamer-plugins-base-tools gstreamer-plugins-good gstreamer-plugins-good-devel-docs gstreamer-python 
gstreamer-python-devel gstreamer-tools gstreamer1-devel-docs gstreamer1-plugins-base-devel-docs gstreamer1-plugins-base-tools gstreamer1-plugins-ugly-free-devel gtk-vnc gtk-vnc-devel gtk-vnc-python gtk-vnc2-devel gtk3-devel-docs gtk3-immodules gtk3-tests gtkhtml3 gtkhtml3-devel gtksourceview3-tests gucharmap gucharmap-devel gucharmap-libs gupnp-av-devel gupnp-av-docs gupnp-dlna-devel gupnp-dlna-docs gupnp-docs gupnp-igd-python gutenprint-devel gutenprint-extras gutenprint-foomatic gvfs-tests gvnc-devel gvnc-tools gvncpulse gvncpulse-devel gwenview gwenview-libs hamcrest hawkey-devel hesiod highcontrast-qt highcontrast-qt4 highcontrast-qt5 highlight-gui hispavoces-pal-diphone hispavoces-sfl-diphone hsakmt hsakmt-devel hspell-devel hsqldb hsqldb-demo hsqldb-javadoc hsqldb-manual htdig html2ps http-parser-devel httpunit httpunit-doc httpunit-javadoc i2c-tools-eepromer i2c-tools-python ibus-pygtk2 ibus-qt ibus-qt-devel ibus-qt-docs ibus-rawcode ibus-table-devel ibutils ibutils-devel ibutils-libs icc-profiles-openicc icon-naming-utils im-chooser im-chooser-common ImageMagick ImageMagick-c++ ImageMagick-c++-devel ImageMagick-devel ImageMagick-doc ImageMagick-perl imake imsettings imsettings-devel imsettings-gsettings imsettings-libs imsettings-qt imsettings-xim indent infinipath-psm infinipath-psm-devel iniparser iniparser-devel iok ipa-gothic-fonts ipa-mincho-fonts ipa-pgothic-fonts ipa-pmincho-fonts iperf3-devel iproute-doc ipset-devel ipsilon ipsilon-authform ipsilon-authgssapi ipsilon-authldap ipsilon-base ipsilon-client ipsilon-filesystem ipsilon-infosssd ipsilon-persona ipsilon-saml2 ipsilon-saml2-base ipsilon-tools-ipa iputils-sysvinit iscsi-initiator-utils-devel isdn4k-utils isdn4k-utils-devel isdn4k-utils-doc isdn4k-utils-static isdn4k-utils-vboxgetty isomd5sum-devel isorelax istack-commons-javadoc ixpdimm_sw ixpdimm_sw-devel ixpdimm-cli ixpdimm-monitor jai-imageio-core jai-imageio-core-javadoc jakarta-commons-httpclient-demo jakarta-commons-httpclient-javadoc jakarta-commons-httpclient-manual jakarta-oro jakarta-taglibs-standard jakarta-taglibs-standard-javadoc jandex jandex-javadoc jansson-devel-doc jarjar jarjar-javadoc jarjar-maven-plugin jasper jasper-utils java-1.6.0-openjdk java-1.6.0-openjdk-demo java-1.6.0-openjdk-devel java-1.6.0-openjdk-javadoc java-1.6.0-openjdk-src java-1.7.0-openjdk java-1.7.0-openjdk-accessibility java-1.7.0-openjdk-demo java-1.7.0-openjdk-devel java-1.7.0-openjdk-headless java-1.7.0-openjdk-javadoc java-1.7.0-openjdk-src java-1.8.0-openjdk-accessibility-debug java-1.8.0-openjdk-debug java-1.8.0-openjdk-demo-debug java-1.8.0-openjdk-devel-debug java-1.8.0-openjdk-headless-debug java-1.8.0-openjdk-javadoc-debug java-1.8.0-openjdk-javadoc-zip-debug java-1.8.0-openjdk-src-debug java-11-openjdk-debug java-11-openjdk-demo-debug java-11-openjdk-devel-debug java-11-openjdk-headless-debug java-11-openjdk-javadoc-debug java-11-openjdk-javadoc-zip-debug java-11-openjdk-jmods-debug java-11-openjdk-src-debug javamail jaxen jboss-ejb-3.1-api jboss-ejb-3.1-api-javadoc jboss-el-2.2-api jboss-el-2.2-api-javadoc jboss-jaxrpc-1.1-api jboss-jaxrpc-1.1-api-javadoc jboss-servlet-2.5-api jboss-servlet-2.5-api-javadoc jboss-servlet-3.0-api jboss-servlet-3.0-api-javadoc jboss-specs-parent jboss-transaction-1.1-api jboss-transaction-1.1-api-javadoc jdom jettison jettison-javadoc jetty-annotations jetty-ant jetty-artifact-remote-resources jetty-assembly-descriptors jetty-build-support jetty-build-support-javadoc jetty-client jetty-continuation jetty-deploy 
jetty-distribution-remote-resources jetty-http jetty-io jetty-jaas jetty-jaspi jetty-javadoc jetty-jmx jetty-jndi jetty-jsp jetty-jspc-maven-plugin jetty-maven-plugin jetty-monitor jetty-parent jetty-plus jetty-project jetty-proxy jetty-rewrite jetty-runner jetty-security jetty-server jetty-servlet jetty-servlets jetty-start jetty-test-policy jetty-test-policy-javadoc jetty-toolchain jetty-util jetty-util-ajax jetty-version-maven-plugin jetty-version-maven-plugin-javadoc jetty-webapp jetty-websocket-api jetty-websocket-client jetty-websocket-common jetty-websocket-parent jetty-websocket-server jetty-websocket-servlet jetty-xml jing jing-javadoc jline-demo jna jna-contrib jna-javadoc joda-convert joda-convert-javadoc js js-devel jsch-demo json-glib-tests jsr-311 jsr-311-javadoc juk junit junit-demo jvnet-parent k3b k3b-common k3b-devel k3b-libs kaccessible kaccessible-libs kactivities kactivities-devel kamera kate kate-devel kate-libs kate-part kcalc kcharselect kcm_colors kcm_touchpad kcm-gtk kcolorchooser kcoloredit kde-base-artwork kde-baseapps kde-baseapps-devel kde-baseapps-libs kde-filesystem kde-l10n kde-l10n-Arabic kde-l10n-Basque kde-l10n-Bosnian kde-l10n-British kde-l10n-Bulgarian kde-l10n-Catalan kde-l10n-Catalan-Valencian kde-l10n-Croatian kde-l10n-Czech kde-l10n-Danish kde-l10n-Dutch kde-l10n-Estonian kde-l10n-Farsi kde-l10n-Finnish kde-l10n-Galician kde-l10n-Greek kde-l10n-Hebrew kde-l10n-Hungarian kde-l10n-Icelandic kde-l10n-Interlingua kde-l10n-Irish kde-l10n-Kazakh kde-l10n-Khmer kde-l10n-Latvian kde-l10n-Lithuanian kde-l10n-LowSaxon kde-l10n-Norwegian kde-l10n-Norwegian-Nynorsk kde-l10n-Polish kde-l10n-Portuguese kde-l10n-Romanian kde-l10n-Serbian kde-l10n-Slovak kde-l10n-Slovenian kde-l10n-Swedish kde-l10n-Tajik kde-l10n-Thai kde-l10n-Turkish kde-l10n-Ukrainian kde-l10n-Uyghur kde-l10n-Vietnamese kde-l10n-Walloon kde-plasma-networkmanagement kde-plasma-networkmanagement-libreswan kde-plasma-networkmanagement-libs kde-plasma-networkmanagement-mobile kde-print-manager kde-runtime kde-runtime-devel kde-runtime-drkonqi kde-runtime-libs kde-settings kde-settings-ksplash kde-settings-minimal kde-settings-plasma kde-settings-pulseaudio kde-style-oxygen kde-style-phase kde-wallpapers kde-workspace kde-workspace-devel kde-workspace-ksplash-themes kde-workspace-libs kdeaccessibility kdeadmin kdeartwork kdeartwork-screensavers kdeartwork-sounds kdeartwork-wallpapers kdeclassic-cursor-theme kdegraphics kdegraphics-devel kdegraphics-libs kdegraphics-strigi-analyzer kdegraphics-thumbnailers kdelibs kdelibs-apidocs kdelibs-common kdelibs-devel kdelibs-ktexteditor kdemultimedia kdemultimedia-common kdemultimedia-devel kdemultimedia-libs kdenetwork kdenetwork-common kdenetwork-devel kdenetwork-fileshare-samba kdenetwork-kdnssd kdenetwork-kget kdenetwork-kget-libs kdenetwork-kopete kdenetwork-kopete-devel kdenetwork-kopete-libs kdenetwork-krdc kdenetwork-krdc-devel kdenetwork-krdc-libs kdenetwork-krfb kdenetwork-krfb-libs kdepim kdepim-devel kdepim-libs kdepim-runtime kdepim-runtime-libs kdepimlibs kdepimlibs-akonadi kdepimlibs-apidocs kdepimlibs-devel kdepimlibs-kxmlrpcclient kdeplasma-addons kdeplasma-addons-devel kdeplasma-addons-libs kdesdk kdesdk-cervisia kdesdk-common kdesdk-devel kdesdk-dolphin-plugins kdesdk-kapptemplate kdesdk-kapptemplate-template kdesdk-kcachegrind kdesdk-kioslave kdesdk-kmtrace kdesdk-kmtrace-devel kdesdk-kmtrace-libs kdesdk-kompare kdesdk-kompare-devel kdesdk-kompare-libs kdesdk-kpartloader kdesdk-kstartperf kdesdk-kuiviewer kdesdk-lokalize kdesdk-okteta 
kdesdk-okteta-devel kdesdk-okteta-libs kdesdk-poxml kdesdk-scripts kdesdk-strigi-analyzer kdesdk-thumbnailers kdesdk-umbrello kdeutils kdeutils-common kdeutils-minimal kdf kernel-rt-doc kernel-rt-trace kernel-rt-trace-devel kernel-rt-trace-kvm keytool-maven-plugin keytool-maven-plugin-javadoc kgamma kgpg kgreeter-plugins khotkeys khotkeys-libs kiconedit kinfocenter kio_sysinfo kmag kmenuedit kmix kmod-oracleasm kolourpaint kolourpaint-libs konkretcmpi konkretcmpi-devel konkretcmpi-python konsole konsole-part kross-interpreters kross-python kross-ruby kruler ksaneplugin kscreen ksnapshot ksshaskpass ksysguard ksysguard-libs ksysguardd ktimer kwallet kwin kwin-gles kwin-gles-libs kwin-libs kwrite kxml kxml-javadoc lapack64-devel lapack64-static langtable-data lasso-devel latrace lcms2-utils ldns-doc ldns-python libabw-devel libabw-doc libabw-tools libappindicator libappindicator-devel libappindicator-docs libappstream-glib-builder libappstream-glib-builder-devel libart_lgpl libart_lgpl-devel libasan-static libavc1394-devel libbase-javadoc libblockdev-btrfs libblockdev-btrfs-devel libblockdev-crypto-devel libblockdev-devel libblockdev-dm-devel libblockdev-fs-devel libblockdev-kbd-devel libblockdev-loop-devel libblockdev-lvm-devel libblockdev-mdraid-devel libblockdev-mpath-devel libblockdev-nvdimm-devel libblockdev-part-devel libblockdev-swap-devel libblockdev-utils-devel libblockdev-vdo-devel libbluedevil libbluedevil-devel libbluray-devel libbonobo libbonobo-devel libbonoboui libbonoboui-devel libbytesize-devel libcacard-tools libcap-ng-python libcdr-devel libcdr-doc libcdr-tools libcgroup-devel libchamplain-demos libchewing libchewing-devel libchewing-python libcmis-devel libcmis-tools libcryptui libcryptui-devel libdb-devel-static libdb-java libdb-java-devel libdb-tcl libdb-tcl-devel libdbi libdbi-dbd-mysql libdbi-dbd-pgsql libdbi-dbd-sqlite libdbi-devel libdbi-drivers libdbusmenu-doc libdbusmenu-gtk2 libdbusmenu-gtk2-devel libdbusmenu-gtk3-devel libdhash-devel libdmapsharing-devel libdmmp-devel libdmx-devel libdnet-progs libdnet-python libdnf-devel libdv-tools libdvdnav-devel libeasyfc-devel libeasyfc-gobject-devel libee libee-devel libee-utils libesmtp libesmtp-devel libestr-devel libetonyek-doc libetonyek-tools libevdev-utils libexif-doc libexttextcat-devel libexttextcat-tools libfastjson-devel libfdt libfonts-javadoc libformula-javadoc libfprint-devel libfreehand-devel libfreehand-doc libfreehand-tools libgcab1-devel libgccjit libgdither-devel libgee06 libgee06-devel libgepub libgepub-devel libgfortran-static libgfortran4 libgfortran5 libgit2-devel libglade2 libglade2-devel libGLEWmx libgnat libgnat-devel libgnat-static libgnome libgnome-devel libgnome-keyring-devel libgnomecanvas libgnomecanvas-devel libgnomeui libgnomeui-devel libgo libgo-devel libgo-static libgovirt-devel libgudev-devel libgxim libgxim-devel libgxps-tools libhangul-devel libhbaapi-devel libhif-devel libical-glib libical-glib-devel libical-glib-doc libid3tag libid3tag-devel libiec61883-utils libieee1284-python libimobiledevice-python libimobiledevice-utils libindicator libindicator-devel libindicator-gtk3-devel libindicator-tools libinvm-cim libinvm-cim-devel libinvm-cli libinvm-cli-devel libinvm-i18n libinvm-i18n-devel libiodbc libiodbc-devel libipa_hbac-devel libiptcdata-devel libiptcdata-python libitm-static libixpdimm-cim libixpdimm-core libjpeg-turbo-static libkcddb libkcddb-devel libkcompactdisc libkcompactdisc-devel libkdcraw libkdcraw-devel libkexiv2 libkexiv2-devel libkipi libkipi-devel libkkc-devel 
libkkc-tools libksane libksane-devel libkscreen libkscreen-devel libkworkspace liblayout-javadoc libloader-javadoc liblognorm-devel liblouis-devel liblouis-doc liblouis-utils libmatchbox-devel libmbim-devel libmediaart-devel libmediaart-tests libmnl-static libmodman-devel libmodulemd-devel libmpc-devel libmsn libmsn-devel libmspub-devel libmspub-doc libmspub-tools libmtp-examples libmudflap libmudflap-devel libmudflap-static libmwaw-devel libmwaw-doc libmwaw-tools libmx libmx-devel libmx-docs libndp-devel libnetfilter_cthelper-devel libnetfilter_cttimeout-devel libnftnl-devel libnl libnl-devel libnm-gtk libnm-gtk-devel libntlm libntlm-devel libobjc libodfgen-doc libofa libofa-devel liboil liboil-devel libopenraw-pixbuf-loader liborcus-devel liborcus-doc liborcus-tools libosinfo-devel libosinfo-vala libotf-devel libpagemaker-devel libpagemaker-doc libpagemaker-tools libpinyin-devel libpinyin-tools libpipeline-devel libplist-python libpng-static libpng12-devel libproxy-kde libpst libpst-devel libpst-devel-doc libpst-doc libpst-python libpurple-perl libpurple-tcl libqmi-devel libquadmath-static LibRaw-static librelp-devel libreoffice libreoffice-bsh libreoffice-gdb-debug-support libreoffice-glade libreoffice-gtk2 libreoffice-librelogo libreoffice-nlpsolver libreoffice-officebean libreoffice-officebean-common libreoffice-postgresql libreoffice-rhino libreofficekit-devel librepo-devel libreport-compat libreport-devel libreport-gtk-devel libreport-web-devel librepository-javadoc librevenge-doc librsvg2-tools libseccomp-devel libselinux-static libsemanage-devel libsemanage-static libserializer-javadoc libsexy libsexy-devel libsmbios-devel libsmi-devel libsndfile-utils libsolv-demo libsolv-devel libsolv-tools libspiro-devel libss-devel libssh2 libsss_certmap-devel libsss_idmap-devel libsss_nss_idmap-devel libsss_simpleifp-devel libstaroffice-devel libstaroffice-doc libstaroffice-tools libstdc++-static libstoragemgmt-devel libstoragemgmt-targetd-plugin libtar-devel libteam-devel libtheora-devel-docs libtiff-static libtimezonemap-devel libtnc libtnc-devel libtranslit libtranslit-devel libtranslit-icu libtranslit-m17n libtsan-static libudisks2-devel libuninameslist-devel libunwind libunwind-devel libusal-devel libusb-static libusbmuxd-utils libuser-devel libvdpau-docs libverto-glib libverto-glib-devel libverto-libevent-devel libverto-tevent libverto-tevent-devel libvirt-cim libvirt-daemon-driver-lxc libvirt-daemon-lxc libvirt-gconfig-devel libvirt-glib-devel libvirt-gobject-devel libvirt-java libvirt-java-devel libvirt-java-javadoc libvirt-login-shell libvirt-snmp libvisio-doc libvisio-tools libvma-devel libvma-utils libvoikko-devel libvpx-utils libwebp-java libwebp-tools libwpd-tools libwpg-tools libwps-tools libwsman-devel libwvstreams libwvstreams-devel libwvstreams-static libxcb-doc libXevie libXevie-devel libXfont libXfont-devel libxml2-static libxslt-python libXvMC-devel libzapojit libzapojit-devel libzmf-devel libzmf-doc libzmf-tools lldpad-devel log4cxx log4cxx-devel log4j-manual lpsolve-devel lua-devel lua-static lvm2-cluster lvm2-python-libs lvm2-sysvinit lz4-static m17n-contrib m17n-contrib-extras m17n-db-devel m17n-db-extras m17n-lib-devel m17n-lib-tools m2crypto malaga-devel man-pages-cs man-pages-es man-pages-es-extra man-pages-fr man-pages-it man-pages-ja man-pages-ko man-pages-pl man-pages-ru man-pages-zh-CN mariadb-bench marisa-devel marisa-perl marisa-python marisa-ruby marisa-tools maven-changes-plugin maven-changes-plugin-javadoc maven-deploy-plugin maven-deploy-plugin-javadoc 
maven-doxia-module-fo maven-ear-plugin maven-ear-plugin-javadoc maven-ejb-plugin maven-ejb-plugin-javadoc maven-error-diagnostics maven-gpg-plugin maven-gpg-plugin-javadoc maven-istack-commons-plugin maven-jarsigner-plugin maven-jarsigner-plugin-javadoc maven-javadoc-plugin maven-javadoc-plugin-javadoc maven-jxr maven-jxr-javadoc maven-osgi maven-osgi-javadoc maven-plugin-jxr maven-project-info-reports-plugin maven-project-info-reports-plugin-javadoc maven-release maven-release-javadoc maven-release-manager maven-release-plugin maven-reporting-exec maven-repository-builder maven-repository-builder-javadoc maven-scm maven-scm-javadoc maven-scm-test maven-shared-jar maven-shared-jar-javadoc maven-site-plugin maven-site-plugin-javadoc maven-verifier-plugin maven-verifier-plugin-javadoc maven-wagon-provider-test maven-wagon-scm maven-war-plugin maven-war-plugin-javadoc mdds-devel meanwhile-devel meanwhile-doc memcached-devel memstomp mesa-demos mesa-libxatracker-devel mesa-private-llvm mesa-private-llvm-devel metacity-devel mgetty mgetty-sendfax mgetty-viewfax mgetty-voice migrationtools minizip minizip-devel mkbootdisk mobile-broadband-provider-info-devel mod_auth_kerb mod_auth_mellon-diagnostics mod_nss mod_revocator ModemManager-vala mono-icon-theme mozjs17 mozjs17-devel mozjs24 mozjs24-devel mpich-3.0-autoload mpich-3.0-doc mpich-3.2-autoload mpich-3.2-doc mpitests-compat-openmpi16 msv-demo msv-msv msv-rngconv msv-xmlgen mvapich2-2.0-devel mvapich2-2.0-doc mvapich2-2.0-psm-devel mvapich2-2.2-devel mvapich2-2.2-doc mvapich2-2.2-psm-devel mvapich2-2.2-psm2-devel mvapich23-devel mvapich23-doc mvapich23-psm-devel mvapich23-psm2-devel nagios-plugins-bacula nasm nasm-doc nasm-rdoff ncurses-static nekohtml nekohtml-demo nekohtml-javadoc nepomuk-core nepomuk-core-devel nepomuk-core-libs nepomuk-widgets nepomuk-widgets-devel net-snmp-gui net-snmp-perl net-snmp-python net-snmp-sysvinit netsniff-ng NetworkManager-glib NetworkManager-glib-devel newt-static nfsometer nfstest nhn-nanum-brush-fonts nhn-nanum-fonts-common nhn-nanum-myeongjo-fonts nhn-nanum-pen-fonts nmap-frontend nss_compat_ossl nss_compat_ossl-devel nss-pem nss-pkcs11-devel ntp-doc ntp-perl nuvola-icon-theme nuxwdog nuxwdog-client-java nuxwdog-client-perl nuxwdog-devel objectweb-anttask objectweb-anttask-javadoc objectweb-asm ocaml-brlapi ocaml-calendar ocaml-calendar-devel ocaml-csv ocaml-csv-devel ocaml-curses ocaml-curses-devel ocaml-docs ocaml-emacs ocaml-fileutils ocaml-fileutils-devel ocaml-gettext ocaml-gettext-devel ocaml-libvirt ocaml-libvirt-devel ocaml-ocamlbuild-doc ocaml-source ocaml-x11 ocaml-xml-light ocaml-xml-light-devel oci-register-machine okular okular-devel okular-libs okular-part opa-libopamgt-devel opal opal-devel open-vm-tools-devel open-vm-tools-test opencc-tools openchange-client openchange-devel openchange-devel-docs opencv-devel-docs opencv-python OpenEXR openhpi-devel openjade openjpeg-devel openjpeg-libs openldap-servers openldap-servers-sql openlmi openlmi-account openlmi-account-doc openlmi-fan openlmi-fan-doc openlmi-hardware openlmi-hardware-doc openlmi-indicationmanager-libs openlmi-indicationmanager-libs-devel openlmi-journald openlmi-journald-doc openlmi-logicalfile openlmi-logicalfile-doc openlmi-networking openlmi-networking-doc openlmi-pcp openlmi-powermanagement openlmi-powermanagement-doc openlmi-providers openlmi-providers-devel openlmi-python-base openlmi-python-providers openlmi-python-test openlmi-realmd openlmi-realmd-doc openlmi-service openlmi-service-doc openlmi-software 
openlmi-software-doc openlmi-storage openlmi-storage-doc openlmi-tools openlmi-tools-doc openobex openobex-apps openobex-devel openscap-containers openscap-engine-sce-devel openslp-devel openslp-server opensm-static opensp openssh-server-sysvinit openssl-static openssl098e openwsman-perl openwsman-ruby oprofile-devel oprofile-gui oprofile-jit optipng ORBit2 ORBit2-devel orc-doc ortp ortp-devel oscilloscope oxygen-cursor-themes oxygen-gtk oxygen-gtk2 oxygen-gtk3 oxygen-icon-theme PackageKit-yum-plugin pakchois-devel pam_krb5 pam_pkcs11 pam_snapper pango-tests paps-devel passivetex pax pciutils-devel-static pcp-collector pcp-monitor pcre-tools pcre2-static pcre2-tools pentaho-libxml-javadoc pentaho-reporting-flow-engine-javadoc perl-AppConfig perl-Archive-Extract perl-B-Keywords perl-Browser-Open perl-Business-ISBN perl-Business-ISBN-Data perl-CGI-Session perl-Class-Load perl-Class-Load-XS perl-Class-Singleton perl-Config-Simple perl-Config-Tiny perl-Convert-ASN1 perl-CPAN-Changes perl-CPANPLUS perl-CPANPLUS-Dist-Build perl-Crypt-CBC perl-Crypt-DES perl-Crypt-OpenSSL-Bignum perl-Crypt-OpenSSL-Random perl-Crypt-OpenSSL-RSA perl-Crypt-PasswdMD5 perl-Crypt-SSLeay perl-CSS-Tiny perl-Data-Peek perl-DateTime perl-DateTime-Format-DateParse perl-DateTime-Locale perl-DateTime-TimeZone perl-DBD-Pg-tests perl-DBIx-Simple perl-Devel-Cover perl-Devel-Cycle perl-Devel-EnforceEncapsulation perl-Devel-Leak perl-Devel-Symdump perl-Digest-SHA1 perl-Email-Address perl-FCGI perl-File-Find-Rule-Perl perl-File-Inplace perl-Font-AFM perl-Font-TTF perl-FreezeThaw perl-GD perl-GD-Barcode perl-Hook-LexWrap perl-HTML-Format perl-HTML-FormatText-WithLinks perl-HTML-FormatText-WithLinks-AndTables perl-HTML-Tree perl-HTTP-Daemon perl-Image-Base perl-Image-Info perl-Image-Xbm perl-Image-Xpm perl-Inline perl-Inline-Files perl-IO-CaptureOutput perl-IO-stringy perl-JSON-tests perl-LDAP perl-libxml-perl perl-List-MoreUtils perl-Locale-Maketext-Gettext perl-Locale-PO perl-Log-Message perl-Log-Message-Simple perl-Mail-DKIM perl-Mixin-Linewise perl-Module-Implementation perl-Module-Manifest perl-Module-Signature perl-Net-Daemon perl-Net-DNS-Nameserver perl-Net-DNS-Resolver-Programmable perl-Net-LibIDN perl-Net-Telnet perl-Newt perl-Object-Accessor perl-Object-Deadly perl-Package-Constants perl-Package-DeprecationManager perl-Package-Stash perl-Package-Stash-XS perl-PAR-Dist perl-Parallel-Iterator perl-Params-Validate perl-Parse-CPAN-Meta perl-Parse-RecDescent perl-Perl-Critic perl-Perl-Critic-More perl-Perl-MinimumVersion perl-Perl4-CoreLibs perl-PlRPC perl-Pod-Coverage perl-Pod-Coverage-TrustPod perl-Pod-Eventual perl-Pod-POM perl-Pod-Spell perl-PPI perl-PPI-HTML perl-PPIx-Regexp perl-PPIx-Utilities perl-Probe-Perl perl-Readonly-XS perl-SGMLSpm perl-Sort-Versions perl-String-Format perl-String-Similarity perl-Syntax-Highlight-Engine-Kate perl-Task-Weaken perl-Template-Toolkit perl-Term-UI perl-Test-ClassAPI perl-Test-CPAN-Meta perl-Test-DistManifest perl-Test-EOL perl-Test-HasVersion perl-Test-Inter perl-Test-Manifest perl-Test-Memory-Cycle perl-Test-MinimumVersion perl-Test-MockObject perl-Test-NoTabs perl-Test-Object perl-Test-Output perl-Test-Perl-Critic perl-Test-Perl-Critic-Policy perl-Test-Pod perl-Test-Pod-Coverage perl-Test-Portability-Files perl-Test-Script perl-Test-Spelling perl-Test-SubCalls perl-Test-Synopsis perl-Test-Tester perl-Test-Vars perl-Test-Without-Module perl-Text-CSV_XS perl-Text-Iconv perl-Tree-DAG_Node perl-Unicode-Map8 perl-Unicode-String perl-UNIVERSAL-can perl-UNIVERSAL-isa 
perl-Version-Requirements perl-WWW-Curl perl-XML-Dumper perl-XML-Filter-BufferText perl-XML-Grove perl-XML-Handler-YAWriter perl-XML-LibXSLT perl-XML-SAX-Writer perl-XML-TreeBuilder perl-XML-Twig perl-XML-Writer perl-XML-XPathEngine perl-YAML-Tiny perltidy phonon phonon-backend-gstreamer phonon-devel php-pecl-memcache php-pspell pidgin-perl pinentry-qt pinentry-qt4 pki-javadoc plasma-scriptengine-python plasma-scriptengine-ruby plexus-digest plexus-digest-javadoc plexus-mail-sender plexus-mail-sender-javadoc plexus-tools-pom plymouth-devel pm-utils pm-utils-devel pngcrush pngnq polkit-kde polkit-qt polkit-qt-devel polkit-qt-doc poppler-demos poppler-qt poppler-qt-devel popt-static postfix-sysvinit pothana2000-fonts powerpc-utils-python pprof pps-tools pptp-setup procps-ng-devel protobuf-emacs protobuf-emacs-el protobuf-java protobuf-javadoc protobuf-lite-devel protobuf-lite-static protobuf-python protobuf-static protobuf-vim psutils psutils-perl pth-devel ptlib ptlib-devel publican publican-common-db5-web publican-common-web publican-doc publican-redhat pulseaudio-esound-compat pulseaudio-module-gconf pulseaudio-module-zeroconf pulseaudio-qpaeq pygpgme pygtk2-libglade pykde4 pykde4-akonadi pykde4-devel pyldb-devel pyliblzma PyOpenGL PyOpenGL-Tk pyOpenSSL-doc pyorbit pyorbit-devel PyPAM pyparsing-doc PyQt4 PyQt4-devel pytalloc-devel python-appindicator python-beaker python-cffi-doc python-cherrypy python-criu python-debug python-deltarpm python-dtopt python-fpconst python-gpod python-gudev python-inotify-examples python-ipaddr python-IPy python-isodate python-isomd5sum python-kerberos python-kitchen python-kitchen-doc python-krbV python-libteam python-lxml-docs python-matplotlib python-matplotlib-doc python-matplotlib-qt4 python-matplotlib-tk python-memcached python-mutagen python-paramiko python-paramiko-doc python-paste python-pillow-devel python-pillow-doc python-pillow-qt python-pillow-sane python-pillow-tk python-rados python-rbd python-reportlab-docs python-requests-kerberos python-rtslib-doc python-setproctitle python-slip-gtk python-smbc python-smbc-doc python-smbios python-sphinx-doc python-tempita python-tornado python-tornado-doc python-twisted-core python-twisted-core-doc python-twisted-web python-twisted-words python-urlgrabber python-volume_key python-webob python-webtest python-which python-zope-interface python2-caribou python2-futures python2-gexiv2 python2-smartcols python2-solv python2-subprocess32 qca-ossl qca2 qca2-devel qdox qimageblitz qimageblitz-devel qimageblitz-examples qjson qjson-devel qpdf-devel qt qt-assistant qt-config qt-demos qt-devel qt-devel-private qt-doc qt-examples qt-mysql qt-odbc qt-postgresql qt-qdbusviewer qt-qvfb qt-settings qt-x11 qt3 qt3-config qt3-designer qt3-devel qt3-devel-docs qt3-MySQL qt3-ODBC qt3-PostgreSQL qt5-qt3d-doc qt5-qtbase-doc qt5-qtcanvas3d-doc qt5-qtconnectivity-doc qt5-qtdeclarative-doc qt5-qtenginio qt5-qtenginio-devel qt5-qtenginio-doc qt5-qtenginio-examples qt5-qtgraphicaleffects-doc qt5-qtimageformats-doc qt5-qtlocation-doc qt5-qtmultimedia-doc qt5-qtquickcontrols-doc qt5-qtquickcontrols2-doc qt5-qtscript-doc qt5-qtsensors-doc qt5-qtserialbus-devel qt5-qtserialbus-doc qt5-qtserialport-doc qt5-qtsvg-doc qt5-qttools-doc qt5-qtwayland-doc qt5-qtwebchannel-doc qt5-qtwebsockets-doc qt5-qtx11extras-doc qt5-qtxmlpatterns-doc quagga quagga-contrib quota-devel qv4l2 rarian-devel rcs rdate rdist readline-static realmd-devel-docs Red_Hat_Enterprise_Linux-Release_Notes-7-as-IN Red_Hat_Enterprise_Linux-Release_Notes-7-bn-IN 
Red_Hat_Enterprise_Linux-Release_Notes-7-de-DE Red_Hat_Enterprise_Linux-Release_Notes-7-en-US Red_Hat_Enterprise_Linux-Release_Notes-7-es-ES Red_Hat_Enterprise_Linux-Release_Notes-7-fr-FR Red_Hat_Enterprise_Linux-Release_Notes-7-gu-IN Red_Hat_Enterprise_Linux-Release_Notes-7-hi-IN Red_Hat_Enterprise_Linux-Release_Notes-7-it-IT Red_Hat_Enterprise_Linux-Release_Notes-7-ja-JP Red_Hat_Enterprise_Linux-Release_Notes-7-kn-IN Red_Hat_Enterprise_Linux-Release_Notes-7-ko-KR Red_Hat_Enterprise_Linux-Release_Notes-7-ml-IN Red_Hat_Enterprise_Linux-Release_Notes-7-mr-IN Red_Hat_Enterprise_Linux-Release_Notes-7-or-IN Red_Hat_Enterprise_Linux-Release_Notes-7-pa-IN Red_Hat_Enterprise_Linux-Release_Notes-7-pt-BR Red_Hat_Enterprise_Linux-Release_Notes-7-ru-RU Red_Hat_Enterprise_Linux-Release_Notes-7-ta-IN Red_Hat_Enterprise_Linux-Release_Notes-7-te-IN Red_Hat_Enterprise_Linux-Release_Notes-7-zh-CN Red_Hat_Enterprise_Linux-Release_Notes-7-zh-TW redhat-access-plugin-ipa redhat-bookmarks redhat-lsb-supplemental redhat-lsb-trialuse redhat-upgrade-dracut redhat-upgrade-dracut-plymouth redhat-upgrade-tool redland-mysql redland-pgsql redland-virtuoso regexp relaxngcc rest-devel resteasy-base-jettison-provider resteasy-base-tjws rhdb-utils rhino rhino-demo rhino-javadoc rhino-manual rhythmbox-devel rngom rngom-javadoc rp-pppoe rrdtool-php rrdtool-python rsh rsh-server rsyslog-libdbi rsyslog-udpspoof rtcheck rtctl rteval-common ruby-tcltk rubygem-net-http-persistent rubygem-net-http-persistent-doc rubygem-thor rubygem-thor-doc rusers rusers-server rwho sac-javadoc samba-dc samba-devel satyr-devel satyr-python saxon saxon-demo saxon-javadoc saxon-manual saxon-scripts sbc-devel sblim-cim-client2 sblim-cim-client2-javadoc sblim-cim-client2-manual sblim-cmpi-base sblim-cmpi-base-devel sblim-cmpi-base-test sblim-cmpi-fsvol sblim-cmpi-fsvol-devel sblim-cmpi-fsvol-test sblim-cmpi-network sblim-cmpi-network-devel sblim-cmpi-network-test sblim-cmpi-nfsv3 sblim-cmpi-nfsv3-test sblim-cmpi-nfsv4 sblim-cmpi-nfsv4-test sblim-cmpi-params sblim-cmpi-params-test sblim-cmpi-sysfs sblim-cmpi-sysfs-test sblim-cmpi-syslog sblim-cmpi-syslog-test sblim-gather sblim-gather-devel sblim-gather-provider sblim-gather-test sblim-indication_helper sblim-indication_helper-devel sblim-smis-hba sblim-testsuite sblim-wbemcli scannotation scannotation-javadoc scpio screen SDL-static seahorse-nautilus seahorse-sharing sendmail-sysvinit setools-devel setools-gui setools-libs-tcl setuptool shared-desktop-ontologies shared-desktop-ontologies-devel shim-unsigned-ia32 shim-unsigned-x64 sisu sisu-parent slang-slsh slang-static smbios-utils smbios-utils-bin smbios-utils-python snakeyaml snakeyaml-javadoc snapper snapper-devel snapper-libs sntp SOAPpy soprano soprano-apidocs soprano-devel source-highlight-devel sox sox-devel speex-tools spice-xpi sqlite-tcl squid-migration-script squid-sysvinit sssd-libwbclient-devel sssd-polkit-rules stax2-api stax2-api-javadoc strigi strigi-devel strigi-libs strongimcv subversion-kde subversion-python subversion-ruby sudo-devel suitesparse-doc suitesparse-static supermin-helper svgpart svrcore svrcore-devel sweeper syslinux-devel syslinux-perl system-config-date system-config-date-docs system-config-firewall system-config-firewall-base system-config-firewall-tui system-config-keyboard system-config-keyboard-base system-config-language system-config-printer system-config-users-docs system-switch-java systemd-sysv t1lib t1lib-apps t1lib-devel t1lib-static t1utils taglib-doc talk talk-server tang-nagios targetd tcl-pgtcl tclx 
tclx-devel tcp_wrappers tcp_wrappers-devel tcp_wrappers-libs teamd-devel teckit-devel telepathy-farstream telepathy-farstream-devel telepathy-filesystem telepathy-gabble telepathy-glib telepathy-glib-devel telepathy-glib-vala telepathy-haze telepathy-logger telepathy-logger-devel telepathy-mission-control telepathy-mission-control-devel telepathy-salut tex-preview texinfo texlive-collection-documentation-base texlive-mh texlive-mh-doc texlive-misc texlive-thailatex texlive-thailatex-doc tix-doc tncfhh tncfhh-devel tncfhh-examples tncfhh-libs tncfhh-utils tog-pegasus-test tokyocabinet-devel-doc tomcat tomcat-admin-webapps tomcat-docs-webapp tomcat-el-2.2-api tomcat-javadoc tomcat-jsp-2.2-api tomcat-jsvc tomcat-lib tomcat-servlet-3.0-api tomcat-webapps totem-devel totem-pl-parser-devel tracker-devel tracker-docs tracker-needle tracker-preferences trang trousers-static txw2 txw2-javadoc unique3 unique3-devel unique3-docs uriparser uriparser-devel usbguard-devel usbredir-server ustr ustr-debug ustr-debug-static ustr-devel ustr-static uuid-c++ uuid-c++-devel uuid-dce uuid-dce-devel uuid-perl uuid-php v4l-utils v4l-utils-devel-tools vala-doc valadoc valadoc-devel valgrind-openmpi velocity-demo velocity-javadoc velocity-manual vemana2000-fonts vigra vigra-devel virtuoso-opensource virtuoso-opensource-utils vlgothic-p-fonts vsftpd-sysvinit vte3 vte3-devel wayland-doc webkit2gtk3-plugin-process-gtk2 webkitgtk3 webkitgtk3-devel webkitgtk3-doc webkitgtk4-doc webrtc-audio-processing-devel weld-parent whois woodstox-core woodstox-core-javadoc wordnet wordnet-browser wordnet-devel wordnet-doc ws-commons-util ws-commons-util-javadoc ws-jaxme ws-jaxme-javadoc ws-jaxme-manual wsdl4j wsdl4j-javadoc wvdial x86info xchat-tcl xdg-desktop-portal-devel xerces-c xerces-c-devel xerces-c-doc xerces-j2-demo xerces-j2-javadoc xferstats xguest xhtml2fo-style-xsl xhtml2ps xisdnload xml-commons-apis-javadoc xml-commons-apis-manual xml-commons-apis12 xml-commons-apis12-javadoc xml-commons-apis12-manual xml-commons-resolver-javadoc xmlgraphics-commons xmlgraphics-commons-javadoc xmlrpc-c-apps xmlrpc-client xmlrpc-common xmlrpc-javadoc xmlrpc-server xmlsec1-gcrypt-devel xmlsec1-nss-devel xmlto-tex xmlto-xhtml xmltoman xorg-x11-apps xorg-x11-drv-intel-devel xorg-x11-drv-keyboard xorg-x11-drv-mouse xorg-x11-drv-mouse-devel xorg-x11-drv-openchrome xorg-x11-drv-openchrome-devel xorg-x11-drv-synaptics xorg-x11-drv-synaptics-devel xorg-x11-drv-vmmouse xorg-x11-drv-void xorg-x11-server-source xorg-x11-xkb-extras xpp3 xpp3-javadoc xpp3-minimal xsettings-kde xstream xstream-javadoc xulrunner xulrunner-devel xz-compat-libs yelp-xsl-devel yum-langpacks yum-NetworkManager-dispatcher yum-plugin-filter-data yum-plugin-fs-snapshot yum-plugin-keys yum-plugin-list-data yum-plugin-local yum-plugin-merge-conf yum-plugin-ovl yum-plugin-post-transaction-actions yum-plugin-pre-transaction-actions yum-plugin-protectbase yum-plugin-ps yum-plugin-rpm-warm-cache yum-plugin-show-leaves yum-plugin-upgrade-helper yum-plugin-verify yum-updateonboot 9.2. Deprecated Device Drivers The following device drivers continue to be supported until the end of life of Red Hat Enterprise Linux 7 but will likely not be supported in future major releases of this product and are not recommended for new deployments. 
3w-9xxx 3w-sas aic79xx aoe arcmsr ata drivers: acard-ahci sata_mv sata_nv sata_promise sata_qstor sata_sil sata_sil24 sata_sis sata_svw sata_sx4 sata_uli sata_via sata_vsc bfa cxgb3 cxgb3i e1000 floppy hptiop initio isci iw_cxgb3 mptbase mptctl mptsas mptscsih mptspi mthca mtip32xx mvsas mvumi OSD drivers: osd libosd osst pata drivers: pata_acpi pata_ali pata_amd pata_arasan_cf pata_artop pata_atiixp pata_atp867x pata_cmd64x pata_cs5536 pata_hpt366 pata_hpt37x pata_hpt3x2n pata_hpt3x3 pata_it8213 pata_it821x pata_jmicron pata_marvell pata_netcell pata_ninja32 pata_oldpiix pata_pdc2027x pata_pdc202xx_old pata_piccolo pata_rdc pata_sch pata_serverworks pata_sil680 pata_sis pata_via pdc_adma pm80xx(pm8001) pmcraid qla3xxx qlcnic qlge stex sx8 tulip ufshcd wireless drivers: carl9170 iwl4965 iwl3945 mwl8k rt73usb rt61pci rtl8187 wil6210 9.3. Deprecated Adapters The following adapters continue to be supported until the end of life of Red Hat Enterprise Linux 7 but will likely not be supported in future major releases of this product and are not recommended for new deployments. Other adapters from the mentioned drivers that are not listed here remain unchanged. PCI IDs are in the format of vendor:device:subvendor:subdevice . If the subdevice or subvendor:subdevice entry is not listed, devices with any values of such missing entries have been deprecated. To check the PCI IDs of the hardware on your system, run the lspci -nn command. The following adapters from the aacraid driver have been deprecated: PERC 2/Si (Iguana/PERC2Si), PCI ID 0x1028:0x0001:0x1028:0x0001 PERC 3/Di (Opal/PERC3Di), PCI ID 0x1028:0x0002:0x1028:0x0002 PERC 3/Si (SlimFast/PERC3Si), PCI ID 0x1028:0x0003:0x1028:0x0003 PERC 3/Di (Iguana FlipChip/PERC3DiF), PCI ID 0x1028:0x0004:0x1028:0x00d0 PERC 3/Di (Viper/PERC3DiV), PCI ID 0x1028:0x0002:0x1028:0x00d1 PERC 3/Di (Lexus/PERC3DiL), PCI ID 0x1028:0x0002:0x1028:0x00d9 PERC 3/Di (Jaguar/PERC3DiJ), PCI ID 0x1028:0x000a:0x1028:0x0106 PERC 3/Di (Dagger/PERC3DiD), PCI ID 0x1028:0x000a:0x1028:0x011b PERC 3/Di (Boxster/PERC3DiB), PCI ID 0x1028:0x000a:0x1028:0x0121 catapult, PCI ID 0x9005:0x0283:0x9005:0x0283 tomcat, PCI ID 0x9005:0x0284:0x9005:0x0284 Adaptec 2120S (Crusader), PCI ID 0x9005:0x0285:0x9005:0x0286 Adaptec 2200S (Vulcan), PCI ID 0x9005:0x0285:0x9005:0x0285 Adaptec 2200S (Vulcan-2m), PCI ID 0x9005:0x0285:0x9005:0x0287 Legend S220 (Legend Crusader), PCI ID 0x9005:0x0285:0x17aa:0x0286 Legend S230 (Legend Vulcan), PCI ID 0x9005:0x0285:0x17aa:0x0287 Adaptec 3230S (Harrier), PCI ID 0x9005:0x0285:0x9005:0x0288 Adaptec 3240S (Tornado), PCI ID 0x9005:0x0285:0x9005:0x0289 ASR-2020ZCR SCSI PCI-X ZCR (Skyhawk), PCI ID 0x9005:0x0285:0x9005:0x028a ASR-2025ZCR SCSI SO-DIMM PCI-X ZCR (Terminator), PCI ID 0x9005:0x0285:0x9005:0x028b ASR-2230S + ASR-2230SLP PCI-X (Lancer), PCI ID 0x9005:0x0286:0x9005:0x028c ASR-2130S (Lancer), PCI ID 0x9005:0x0286:0x9005:0x028d AAR-2820SA (Intruder), PCI ID 0x9005:0x0286:0x9005:0x029b AAR-2620SA (Intruder), PCI ID 0x9005:0x0286:0x9005:0x029c AAR-2420SA (Intruder), PCI ID 0x9005:0x0286:0x9005:0x029d ICP9024RO (Lancer), PCI ID 0x9005:0x0286:0x9005:0x029e ICP9014RO (Lancer), PCI ID 0x9005:0x0286:0x9005:0x029f ICP9047MA (Lancer), PCI ID 0x9005:0x0286:0x9005:0x02a0 ICP9087MA (Lancer), PCI ID 0x9005:0x0286:0x9005:0x02a1 ICP5445AU (Hurricane44), PCI ID 0x9005:0x0286:0x9005:0x02a3 ICP9085LI (Marauder-X), PCI ID 0x9005:0x0285:0x9005:0x02a4 ICP5085BR (Marauder-E), PCI ID 0x9005:0x0285:0x9005:0x02a5 ICP9067MA (Intruder-6), PCI ID 0x9005:0x0286:0x9005:0x02a6 Themisto Jupiter 
Platform, PCI ID 0x9005:0x0287:0x9005:0x0800 Themisto Jupiter Platform, PCI ID 0x9005:0x0200:0x9005:0x0200 Callisto Jupiter Platform, PCI ID 0x9005:0x0286:0x9005:0x0800 ASR-2020SA SATA PCI-X ZCR (Skyhawk), PCI ID 0x9005:0x0285:0x9005:0x028e ASR-2025SA SATA SO-DIMM PCI-X ZCR (Terminator), PCI ID 0x9005:0x0285:0x9005:0x028f AAR-2410SA PCI SATA 4ch (Jaguar II), PCI ID 0x9005:0x0285:0x9005:0x0290 CERC SATA RAID 2 PCI SATA 6ch (DellCorsair), PCI ID 0x9005:0x0285:0x9005:0x0291 AAR-2810SA PCI SATA 8ch (Corsair-8), PCI ID 0x9005:0x0285:0x9005:0x0292 AAR-21610SA PCI SATA 16ch (Corsair-16), PCI ID 0x9005:0x0285:0x9005:0x0293 ESD SO-DIMM PCI-X SATA ZCR (Prowler), PCI ID 0x9005:0x0285:0x9005:0x0294 AAR-2610SA PCI SATA 6ch, PCI ID 0x9005:0x0285:0x103C:0x3227 ASR-2240S (SabreExpress), PCI ID 0x9005:0x0285:0x9005:0x0296 ASR-4005, PCI ID 0x9005:0x0285:0x9005:0x0297 IBM 8i (AvonPark), PCI ID 0x9005:0x0285:0x1014:0x02F2 IBM 8i (AvonPark Lite), PCI ID 0x9005:0x0285:0x1014:0x0312 IBM 8k/8k-l8 (Aurora), PCI ID 0x9005:0x0286:0x1014:0x9580 IBM 8k/8k-l4 (Aurora Lite), PCI ID 0x9005:0x0286:0x1014:0x9540 ASR-4000 (BlackBird), PCI ID 0x9005:0x0285:0x9005:0x0298 ASR-4800SAS (Marauder-X), PCI ID 0x9005:0x0285:0x9005:0x0299 ASR-4805SAS (Marauder-E), PCI ID 0x9005:0x0285:0x9005:0x029a ASR-3800 (Hurricane44), PCI ID 0x9005:0x0286:0x9005:0x02a2 Perc 320/DC, PCI ID 0x9005:0x0285:0x1028:0x0287 Adaptec 5400S (Mustang), PCI ID 0x1011:0x0046:0x9005:0x0365 Adaptec 5400S (Mustang), PCI ID 0x1011:0x0046:0x9005:0x0364 Dell PERC2/QC, PCI ID 0x1011:0x0046:0x9005:0x1364 HP NetRAID-4M, PCI ID 0x1011:0x0046:0x103c:0x10c2 Dell Catchall, PCI ID 0x9005:0x0285:0x1028 Legend Catchall, PCI ID 0x9005:0x0285:0x17aa Adaptec Catch All, PCI ID 0x9005:0x0285 Adaptec Rocket Catch All, PCI ID 0x9005:0x0286 Adaptec NEMER/ARK Catch All, PCI ID 0x9005:0x0288 The following Mellanox Gen2 and ConnectX-2 adapters from the mlx4_core driver have been deprecated: PCI ID 0x15B3:0x1002 PCI ID 0x15B3:0x676E PCI ID 0x15B3:0x6746 PCI ID 0x15B3:0x6764 PCI ID 0x15B3:0x675A PCI ID 0x15B3:0x6372 PCI ID 0x15B3:0x6750 PCI ID 0x15B3:0x6368 PCI ID 0x15B3:0x673C PCI ID 0x15B3:0x6732 PCI ID 0x15B3:0x6354 PCI ID 0x15B3:0x634A PCI ID 0x15B3:0x6340 The following adapters from the mpt2sas driver have been deprecated: SAS2004, PCI ID 0x1000:0x0070 SAS2008, PCI ID 0x1000:0x0072 SAS2108_1, PCI ID 0x1000:0x0074 SAS2108_2, PCI ID 0x1000:0x0076 SAS2108_3, PCI ID 0x1000:0x0077 SAS2116_1, PCI ID 0x1000:0x0064 SAS2116_2, PCI ID 0x1000:0x0065 SSS6200, PCI ID 0x1000:0x007E The following adapters from the megaraid_sas driver have been deprecated: Dell PERC5, PCI ID 0x1028:0x0015 SAS1078R, PCI ID 0x1000:0x0060 SAS1078DE, PCI ID 0x1000:0x007C SAS1064R, PCI ID 0x1000:0x0411 VERDE_ZCR, PCI ID 0x1000:0x0413 SAS1078GEN2, PCI ID 0x1000:0x0078 SAS0079GEN2, PCI ID 0x1000:0x0079 SAS0073SKINNY, PCI ID 0x1000:0x0073 SAS0071SKINNY, PCI ID 0x1000:0x0071 The following adapters from the qla2xxx driver have been deprecated: ISP24xx, PCI ID 0x1077:0x2422 ISP24xx, PCI ID 0x1077:0x2432 ISP2422, PCI ID 0x1077:0x5422 QLE220, PCI ID 0x1077:0x5432 QLE81xx, PCI ID 0x1077:0x8001 QLE10000, PCI ID 0x1077:0xF000 QLE84xx, PCI ID 0x1077:0x8044 QLE8000, PCI ID 0x1077:0x8432 QLE82xx, PCI ID 0x1077:0x8021 The following adapters from the qla4xxx driver have been deprecated: QLOGIC_ISP8022, PCI ID 0x1077:0x8022 QLOGIC_ISP8324, PCI ID 0x1077:0x8032 QLOGIC_ISP8042, PCI ID 0x1077:0x8042 The following adapters from the be2iscsi driver have been deprecated: BladeEngine 2 (BE2) Devices BladeEngine2 10Gb iSCSI Initiator (generic), 
PCI ID 0x19a2:0x212 OneConnect OCe10101, OCm10101, OCe10102, OCm10102 BE2 adapter family, PCI ID 0x19a2:0x702 OCe10100 BE2 adapter family, PCI ID 0x19a2:0x703 BladeEngine 3 (BE3) Devices OneConnect TOMCAT iSCSI, PCI ID 0x19a2:0x0712 BladeEngine3 iSCSI, PCI ID 0x19a2:0x0222 The following Ethernet adapters controlled by the be2net driver have been deprecated: BladeEngine 2 (BE2) Devices OneConnect TIGERSHARK NIC, PCI ID 0x19a2:0x0700 BladeEngine2 Network Adapter, PCI ID 0x19a2:0x0211 BladeEngine 3 (BE3) Devices OneConnect TOMCAT NIC, PCI ID 0x19a2:0x0710 BladeEngine3 Network Adapter, PCI ID 0x19a2:0x0221 The following adapters from the lpfc driver have been deprecated: BladeEngine 2 (BE2) Devices OneConnect TIGERSHARK FCoE, PCI ID 0x19a2:0x0704 BladeEngine 3 (BE3) Devices OneConnect TOMCAT FCoE, PCI ID 0x19a2:0x0714 Fibre Channel (FC) Devices FIREFLY, PCI ID 0x10df:0x1ae5 PROTEUS_VF, PCI ID 0x10df:0xe100 BALIUS, PCI ID 0x10df:0xe131 PROTEUS_PF, PCI ID 0x10df:0xe180 RFLY, PCI ID 0x10df:0xf095 PFLY, PCI ID 0x10df:0xf098 LP101, PCI ID 0x10df:0xf0a1 TFLY, PCI ID 0x10df:0xf0a5 BSMB, PCI ID 0x10df:0xf0d1 BMID, PCI ID 0x10df:0xf0d5 ZSMB, PCI ID 0x10df:0xf0e1 ZMID, PCI ID 0x10df:0xf0e5 NEPTUNE, PCI ID 0x10df:0xf0f5 NEPTUNE_SCSP, PCI ID 0x10df:0xf0f6 NEPTUNE_DCSP, PCI ID 0x10df:0xf0f7 FALCON, PCI ID 0x10df:0xf180 SUPERFLY, PCI ID 0x10df:0xf700 DRAGONFLY, PCI ID 0x10df:0xf800 CENTAUR, PCI ID 0x10df:0xf900 PEGASUS, PCI ID 0x10df:0xf980 THOR, PCI ID 0x10df:0xfa00 VIPER, PCI ID 0x10df:0xfb00 LP10000S, PCI ID 0x10df:0xfc00 LP11000S, PCI ID 0x10df:0xfc10 LPE11000S, PCI ID 0x10df:0xfc20 PROTEUS_S, PCI ID 0x10df:0xfc50 HELIOS, PCI ID 0x10df:0xfd00 HELIOS_SCSP, PCI ID 0x10df:0xfd11 HELIOS_DCSP, PCI ID 0x10df:0xfd12 ZEPHYR, PCI ID 0x10df:0xfe00 HORNET, PCI ID 0x10df:0xfe05 ZEPHYR_SCSP, PCI ID 0x10df:0xfe11 ZEPHYR_DCSP, PCI ID 0x10df:0xfe12 Lancer FCoE CNA Devices OCe15104-FM, PCI ID 0x10df:0xe260 OCe15102-FM, PCI ID 0x10df:0xe260 OCm15108-F-P, PCI ID 0x10df:0xe260 9.4. Other Deprecated Functionality Python 2 has been deprecated In the major release, RHEL 8, Python 3.6 is the default Python implementation, and only limited support for Python 2.7 is provided. See the Conservative Python 3 Porting Guide for information on how to migrate large code bases to Python 3 . LVM libraries and LVM Python bindings have been deprecated The lvm2app library and LVM Python bindings, which are provided by the lvm2-python-libs package, have been deprecated. Red Hat recommends the following solutions instead: The LVM D-Bus API in combination with the lvm2-dbusd service. This requires using Python version 3. The LVM command-line utilities with JSON formatting. This formatting has been available since the lvm2 package version 2.02.158. The libblockdev library for C and C++. LVM mirror is deprecated The LVM mirror segment type is now deprecated. Support for mirror will be removed in a future major release of RHEL. Red Hat recommends that you use LVM RAID 1 devices with a segment type of raid1 instead of mirror . The raid1 segment type is the default RAID configuration type and replaces mirror as the recommended solution. To convert mirror devices to raid1 , see Converting a Mirrored LVM Device to a RAID1 Device . Mirrored mirror log has been deprecated in LVM The mirrored mirror log feature of mirrored LVM volumes has been deprecated. A future major release of Red Hat Enterprise Linux will no longer support creating or activating LVM volumes with a mirrored mirror log. The recommended replacements are: RAID1 LVM volumes. 
The main advantage of RAID1 volumes is their ability to work even in degraded mode and to recover after a transient failure. For information on converting mirrored volumes to RAID1, see the Converting a Mirrored LVM Device to a RAID1 Device section in the LVM Administration guide. Disk mirror log. To convert a mirrored mirror log to disk mirror log, use the following command: lvconvert --mirrorlog disk my_vg/my_lv . The clvmd daemon has been deprecated The clvmd daemon for managing shared storage devices has been deprecated. A future major release of Red Hat Enterprise linux will instead use the lvmlockd daemon. The lvmetad daemon has been deprecated The lvmetad daemon for caching metadata has been deprecated. In a future major release of Red Hat Enterprise Linux, LVM will always read metadata from disk. Previously, autoactivation of logical volumes was indirectly tied to the use_lvmetad setting in the lvm.conf configuration file. The correct way to disable autoactivation continues to be setting auto_activation_volume_list=[] (an empty list) in the lvm.conf file. The sap-hana-vmware Tuned profile has been deprecated The sap-hana-vmware Tuned profile has been deprecated. For backward compatibility, this profile is still provided in the tuned-profiles-sap-hana package, but the profile will be removed in future major release of Red Hat Enterprise Linux. The recommended replacement is the sap-hana Tuned profile. Deprecated packages related to Identity Management and security The following packages have been deprecated and will not be included in a future major release of Red Hat Enterprise Linux: Deprecated packages Proposed replacement package or product authconfig authselect pam_pkcs11 sssd [a] pam_krb5 sssd openldap-servers Depending on the use case, migrate to Identity Management included in Red Hat Enterprise Linux; or to Red Hat Directory Server. [b] mod_auth_kerb mod_auth_gssapi python-kerberos python-krbV python-gssapi python-requests-kerberos python-requests-gssapi hesiod No replacement available. mod_nss mod_ssl mod_revocator No replacement available. [a] System Security Services Daemon (SSSD) contains enhanced smart card functionality. [b] Red Hat Directory Server requires a valid Directory Server subscription. For details, see also What is the support status of the LDAP-server shipped with Red Hat Enterprise Linux? in Red Hat Knowledgebase. The Clevis HTTP pin has been deprecated The Clevis HTTP pin has been deprecated and this feature will not be included in the major version of Red Hat Enterprise Linux and will remain out of the distribution until a further notice. crypto-utils has been deprecated The crypto-utils packages have been deprecated, and they will not be available in a future major version of Red Hat Enterprise Linux. You can use tools provided by the openssl , gnutls-utils , and nss-tools packages instead. NSS SEED ciphers have been deprecated The Mozilla Network Security Services ( NSS ) library will not support Transport Layer Security (TLS) cipher suites that use a SEED cipher in a future release. For deployments that rely on SEED ciphers, Red Hat recommends enabling support for other cipher suites. This way, you ensure smooth transitions when NSS will remove support for them. Note that the SEED ciphers are already disabled by default in RHEL. 
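As a concrete illustration of the suggested crypto-utils migration, a self-signed key and certificate that would previously have been generated with the genkey utility can be produced with openssl alone; this is only a sketch, and the file names and validity period below are placeholders rather than values mandated by Red Hat: openssl req -x509 -newkey rsa:2048 -keyout server.key -out server.crt -days 365 -nodes . The -nodes option leaves the private key unencrypted; omit it if an encrypted key is required.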
All-numeric user and group names in shadow-utils have been deprecated Creating user and group names consisting purely of numeric characters using the useradd and groupadd commands has been deprecated and will be removed from the system with the major release. Such names can potentially confuse many tools that work with user and group names and user and group ids (which are numbers). 3DES is removed from the Python SSL default cipher list The Triple Data Encryption Standard ( 3DES ) algorithm has been removed from the Python SSL default cipher list. This enables Python applications using SSL to be PCI DSS-compliant. sssd-secrets has been deprecated The sssd-secrets component of the System Security Services Daemon (SSSD) has been deprecated in Red Hat Enterprise Linux 7.6. This is because Custodia, a secrets service provider, available as a Technology Preview, is no longer actively developed. Use other Identity Management tools to store secrets, for example the Vaults. Support for earlier IdM servers and for IdM replicas at domain level 0 will be limited Red Hat does not plan to support using Identity Management (IdM) servers running Red Hat Enterprise Linux (RHEL) 7.3 and earlier with IdM clients of the major release of RHEL. If you plan to introduce client systems running on the major version of RHEL into a deployment that is currently managed by IdM servers running on RHEL 7.3 or earlier, be aware that you will need to upgrade the servers, moving them to RHEL 7.4 or later. In the major release of RHEL, only domain level 1 replicas will be supported. Before introducing IdM replicas running on the major version of RHEL into an existing deployment, be aware that you will need to upgrade all IdM servers to RHEL 7.4 or later, and change the domain level to 1. Consider planning the upgrade in advance if your deployment will be affected. Bug-fix only support for the nss-pam-ldapd and NIS packages in the major release of Red Hat Enterprise Linux The nss-pam-ldapd packages and packages related to the NIS server will be released in the future major release of Red Hat Enterprise Linux but will receive a limited scope of support. Red Hat will accept bug reports but no new requests for enhancements. Customers are advised to migrate to the following replacement solutions: Affected packages Proposed replacement package or product nss-pam-ldapd sssd ypserv ypbind portmap yp-tools Identity Management in Red Hat Enterprise Linux Use the Go Toolset instead of golang The golang package, previously available in the Optional repository, will no longer receive updates in Red Hat Enterprise Linux 7. Developers are encouraged to use the Go Toolset instead, which is available through the Red Hat Developer program . mesa-private-llvm will be replaced with llvm-private The mesa-private-llvm package, which contains the LLVM-based runtime support for Mesa , will be replaced in a future minor release of Red Hat Enterprise Linux 7 with the llvm-private package. libdbi and libdbi-drivers have been deprecated The libdbi and libdbi-drivers packages will not be included in the Red Hat Enterprise Linux (RHEL) major release. Ansible deprecated in the Extras repository Ansible and its dependencies will no longer be updated through the Extras repository. Instead, the Red Hat Ansible Engine product has been made available to Red Hat Enterprise Linux subscriptions and will provide access to the official Ansible Engine channel. 
Customers who have previously installed Ansible and its dependencies from the Extras repository are advised to enable and update from the Ansible Engine channel, or uninstall the packages as future errata will not be provided from the Extras repository. Ansible was previously provided in Extras (for AMD64 and Intel 64 architectures, and IBM POWER, little endian) as a runtime dependency of, and limited in support to, the Red Hat Enterprise Linux (RHEL) System Roles. Ansible Engine is available today for AMD64 and Intel 64 architectures, with IBM POWER, little endian availability coming soon. Note that Ansible in the Extras repository was not a part of the Red Hat Enterprise Linux FIPS validation process. The following packages have been deprecated from the Extras repository: ansible(-doc) libtomcrypt libtommath(-devel) python2-crypto python2-jmespath python-httplib2 python-paramiko(-doc) python-passlib sshpass For more information and guidance, see the Knowledgebase article at https://access.redhat.com/articles/3359651 . Note that Red Hat Enterprise Linux System Roles continue to be distributed though the Extras repository. Although Red Hat Enterprise Linux System Roles no longer depend on the ansible package, installing ansible from the Ansible Engine repository is still needed to run playbooks which use Red Hat Enterprise Linux System Roles. signtool has been deprecated and moved to unsupported-tools The signtool tool from the nss packages, which uses insecure signature algorithms, has been deprecated. The signtool executable has been moved to the /usr/lib64/nss/unsupported-tools/ or /usr/lib/nss/unsupported-tools/ directory, depending on the platform. SSL 3.0 and RC4 are disabled by default in NSS Support for the RC4 ciphers in the TLS protocols and the SSL 3.0 protocol is disabled by default in the NSS library. Applications that require RC4 ciphers or SSL 3.0 protocol for interoperability do not work in default system configuration. It is possible to re-enable those algorithms by editing the /etc/pki/nss-legacy/nss-rhel7.config file. To re-enable RC4, remove the :RC4 string from the disallow= list. To re-enable SSL 3.0 change the TLS-VERSION-MIN=tls1.0 option to ssl3.0 . TLS compression support has been removed from nss To prevent security risks, such as the CRIME attack, support for TLS compression in the NSS library has been removed for all TLS versions. This change preserves the API compatibility. Public web CAs are no longer trusted for code signing by default The Mozilla CA certificate trust list distributed with Red Hat Enterprise Linux 7.5 no longer trusts any public web CAs for code signing. As a consequence, any software that uses the related flags, such as NSS or OpenSSL , no longer trusts these CAs for code signing by default. The software continues to fully support code signing trust. Additionally, it is still possible to configure CA certificates as trusted for code signing using system configuration. Sendmail has been deprecated Sendmail has been deprecated in Red Hat Enterprise Linux 7. Customers are advised to use Postfix , which is configured as the default Mail Transfer Agent (MTA). dmraid has been deprecated Since Red Hat Enterprise Linux 7.5, the dmraid packages have been deprecated. It will stay available in Red Hat Enterprise Linux 7 releases but a future major release will no longer support legacy hybrid combined hardware and software RAID host bus adapter (HBA). 
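Before planning a move away from dmraid, it can help to see whether any firmware RAID sets are currently managed by it. The following commands are only a sketch and assume the dmraid and mdadm packages are installed: dmraid -r lists the RAID sets discovered on the system, and mdadm --examine --scan reports any RAID metadata (for example Intel IMSM firmware RAID) that mdadm, the likely long-term tool for such configurations, recognizes on the same disks.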
Automatic loading of DCCP modules through socket layer is now disabled by default For security reasons, automatic loading of the Datagram Congestion Control Protocol (DCCP) kernel modules through socket layer is now disabled by default. This ensures that userspace applications can not maliciously load any modules. All DCCP related modules can still be loaded manually through the modprobe program. The /etc/modprobe.d/dccp-blacklist.conf configuration file for blacklisting the DCCP modules is included in the kernel package. Entries included there can be cleared by editing or removing this file to restore the behavior. Note that any re-installation of the same kernel package or of a different version does not override manual changes. If the file is manually edited or removed, these changes persist across package installations. rsyslog-libdbi has been deprecated The rsyslog-libdbi sub-package, which contains one of the less used rsyslog module, has been deprecated and will not be included in a future major release of Red Hat Enterprise Linux. Removing unused or rarely used modules helps users to conveniently find a database output to use. The inputname option of the rsyslog imudp module has been deprecated The inputname option of the imudp module for the rsyslog service has been deprecated. Use the name option instead. SMBv1 is no longer installed with Microsoft Windows 10 and 2016 (updates 1709 and later) Microsoft announced that the Server Message Block version 1 (SMBv1) protocol will no longer be installed with the latest versions of Microsoft Windows and Microsoft Windows Server. Microsoft also recommends users to disable SMBv1 on earlier versions of these products. This update impacts Red Hat customers who operate their systems in a mixed Linux and Windows environment. Red Hat Enterprise Linux 7.1 and earlier support only the SMBv1 version of the protocol. Support for SMBv2 was introduced in Red Hat Enterprise Linux 7.2. For details on how this change affects Red Hat customers, see SMBv1 no longer installed with latest Microsoft Windows 10 and 2016 update (version 1709) in Red Hat Knowledgebase. The -ok option of the tc command has been deprecated The -ok option of the tc command has been deprecated and this feature will not be included in the major version of Red Hat Enterprise Linux. FedFS has been deprecated Federated File System (FedFS) has been deprecated because the upstream FedFS project is no longer being actively maintained. Red Hat recommends migrating FedFS installations to use autofs , which provides more flexible functionality. Btrfs has been deprecated The Btrfs file system has been in Technology Preview state since the initial release of Red Hat Enterprise Linux 6. Red Hat will not be moving Btrfs to a fully supported feature and it will be removed in a future major release of Red Hat Enterprise Linux. The Btrfs file system did receive numerous updates from the upstream in Red Hat Enterprise Linux 7.4 and will remain available in the Red Hat Enterprise Linux 7 series. However, this is the last planned update to this feature. tcp_wrappers deprecated The tcp_wrappers package has been deprecated. tcp_wrappers provides a library and a small daemon program that can monitor and filter incoming requests for audit , cyrus-imap , dovecot , nfs-utils , openssh , openldap , proftpd , sendmail , stunnel , syslog-ng , vsftpd , and various other network services. 
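Since tcp_wrappers is deprecated, host-based filtering that used to live in /etc/hosts.allow can often be expressed with firewalld instead. The rule below is only an illustrative sketch that admits SSH from one placeholder subnet: firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="192.0.2.0/24" service name="ssh" accept' followed by firewall-cmd --reload . Services that performed their own libwrap checks may also offer native access controls (for example, Match blocks in sshd_config), which is worth reviewing per service.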
nautilus-open-terminal replaced with gnome-terminal-nautilus Since Red Hat Enterprise Linux 7.3, the nautilus-open-terminal package has been deprecated and replaced with the gnome-terminal-nautilus package. This package provides a Nautilus extension that adds the Open in Terminal option to the right-click context menu in Nautilus. nautilus-open-terminal is replaced by gnome-terminal-nautilus during the system upgrade. sslwrap() removed from Python The sslwrap() function has been removed from Python 2.7 . After the 466 Python Enhancement Proposal was implemented, using this function resulted in a segmentation fault. The removal is consistent with upstream. Red Hat recommends using the ssl.SSLContext class and the ssl.SSLContext.wrap_socket() function instead. Most applications can simply use the ssl.create_default_context() function, which creates a context with secure default settings. The default context uses the system's default trust store, too. Symbols from libraries linked as dependencies no longer resolved by ld Previously, the ld linker resolved any symbols present in any linked library, even if some libraries were linked only implicitly as dependencies of other libraries. This allowed developers to use symbols from the implicitly linked libraries in application code and omit explicitly specifying these libraries for linking. For security reasons, ld has been changed to not resolve references to symbols in libraries linked implicitly as dependencies. As a result, linking with ld fails when application code attempts to use symbols from libraries not declared for linking and linked only implicitly as dependencies. To use symbols from libraries linked as dependencies, developers must explicitly link against these libraries as well. To restore the behavior of ld , use the -copy-dt-needed-entries command-line option. (BZ# 1292230 ) Windows guest virtual machine support limited As of Red Hat Enterprise Linux 7, Windows guest virtual machines are supported only under specific subscription programs, such as Advanced Mission Critical (AMC). libnetlink is deprecated The libnetlink library contained in the iproute-devel package has been deprecated. The user should use the libnl and libmnl libraries instead. S3 and S4 power management states for KVM have been deprecated Native KVM support for the S3 (suspend to RAM) and S4 (suspend to disk) power management states has been discontinued. This feature was previously available as a Technology Preview. The Certificate Server plug-in udnPwdDirAuth is discontinued The udnPwdDirAuth authentication plug-in for the Red Hat Certificate Server was removed in Red Hat Enterprise Linux 7.3. Profiles using the plug-in are no longer supported. Certificates created with a profile using the udnPwdDirAuth plug-in are still valid if they have been approved. Red Hat Access plug-in for IdM is discontinued The Red Hat Access plug-in for Identity Management (IdM) was removed in Red Hat Enterprise Linux 7.3. During the update, the redhat-access-plugin-ipa package is automatically uninstalled. Features previously provided by the plug-in, such as Knowledgebase access and support case engagement, are still available through the Red Hat Customer Portal. Red Hat recommends to explore alternatives, such as the redhat-support-tool tool. The Ipsilon identity provider service for federated single sign-on The ipsilon packages were introduced as Technology Preview in Red Hat Enterprise Linux 7.2. 
Ipsilon links authentication providers and applications or utilities to allow for single sign-on (SSO). Red Hat does not plan to upgrade Ipsilon from Technology Preview to a fully supported feature. The ipsilon packages will be removed from Red Hat Enterprise Linux in a future minor release. Red Hat has released Red Hat Single Sign-On as a web SSO solution based on the Keycloak community project. Red Hat Single Sign-On provides greater capabilities than Ipsilon and is designated as the standard web SSO solution across the Red Hat product portfolio. Several rsyslog options deprecated The rsyslog utility version in Red Hat Enterprise Linux 7.4 has deprecated a large number of options. These options no longer have any effect and cause a warning to be displayed. The functionality previously provided by the options -c , -u , -q , -x , -A , -Q , -4 , and -6 can be achieved using the rsyslog configuration. There is no replacement for the functionality previously provided by the options -l and -s Deprecated symbols from the memkind library The following symbols from the memkind library have been deprecated: memkind_finalize() memkind_get_num_kind() memkind_get_kind_by_partition() memkind_get_kind_by_name() memkind_partition_mmap() memkind_get_size() MEMKIND_ERROR_MEMALIGN MEMKIND_ERROR_MALLCTL MEMKIND_ERROR_GETCPU MEMKIND_ERROR_PMTT MEMKIND_ERROR_TIEDISTANCE MEMKIND_ERROR_ALIGNMENT MEMKIND_ERROR_MALLOCX MEMKIND_ERROR_REPNAME MEMKIND_ERROR_PTHREAD MEMKIND_ERROR_BADPOLICY MEMKIND_ERROR_REPPOLICY Options of Sockets API Extensions for SCTP (RFC 6458) deprecated The options SCTP_SNDRCV , SCTP_EXTRCV and SCTP_DEFAULT_SEND_PARAM of Sockets API Extensions for the Stream Control Transmission Protocol have been deprecated per the RFC 6458 specification. New options SCTP_SNDINFO , SCTP_NXTINFO , SCTP_NXTINFO and SCTP_DEFAULT_SNDINFO have been implemented as a replacement for the deprecated options. Managing NetApp ONTAP using SSLv2 and SSLv3 is no longer supported by libstorageMgmt The SSLv2 and SSLv3 connections to the NetApp ONTAP storage array are no longer supported by the libstorageMgmt library. Users can contact NetApp support to enable the Transport Layer Security (TLS) protocol. dconf-dbus-1 has been deprecated and dconf-editor is now delivered separately With this update, the dconf-dbus-1 API has been removed. However, the dconf-dbus-1 library has been backported to preserve binary compatibility. Red Hat recommends using the GDBus library instead of dconf-dbus-1 . The dconf-error.h file has been renamed to dconf-enums.h . In addition, the dconf Editor is now delivered in the separate dconf-editor package. FreeRADIUS no longer accepts Auth-Type := System The FreeRADIUS server no longer accepts the Auth-Type := System option for the rlm_unix authentication module. This option has been replaced by the use of the unix module in the authorize section of the configuration file. The libcxgb3 library and the cxgb3 firmware package have been deprecated The libcxgb3 library provided by the libibverbs package and the cxgb3 firmware package have been deprecated. They continue to be supported in Red Hat Enterprise Linux 7 but will likely not be supported in the major releases of this product. This change corresponds with the deprecation of the cxgb3 , cxgb3i , and iw_cxgb3 drivers listed above. SFN4XXX adapters have been deprecated Starting with Red Hat Enterprise Linux 7.4, SFN4XXX Solarflare network adapters have been deprecated. Previously, Solarflare had a single driver sfc for all adapters. 
Recently, support of SFN4XXX was split from sfc and moved into a new SFN4XXX-only driver, called sfc-falcon . Both drivers continue to be supported at this time, but sfc-falcon and SFN4XXX support is scheduled for removal in a future major release. Software-initiated-only FCoE storage technologies have been deprecated The software-initiated-only type of the Fibre Channel over Ethernet (FCoE) storage technology has been deprecated due to limited customer adoption. The software-initiated-only storage technology will remain supported for the life of Red Hat Enterprise Linux 7. The deprecation notice indicates the intention to remove software-initiated-based FCoE support in a future major release of Red Hat Enterprise Linux. It is important to note that the hardware support and the associated user-space tools (such as drivers, libfc , or libfcoe ) are unaffected by this deprecation notice. For details regarding changes to FCoE support in RHEL 8, see Considerations in adopting RHEL 8 . Target mode in Software FCoE and Fibre Channel has been deprecated Software FCoE: The NIC Software FCoE target functionality has been deprecated and will remain supported for the life of Red Hat Enterprise Linux 7. The deprecation notice indicates the intention to remove the NIC Software FCoE target functionality support in a future major release of Red Hat Enterprise Linux. For more information regarding changes to FCoE support in RHEL 8, see Considerations in adopting RHEL 8 . Fibre Channel: Target mode in Fibre Channel has been deprecated and will remain supported for the life of Red Hat Enterprise Linux 7. Target mode will be disabled for the tcm_fc and qla2xxx drivers in a future major release of Red Hat Enterprise Linux. Containers using the libvirt-lxc tooling have been deprecated The following libvirt-lxc packages are deprecated since Red Hat Enterprise Linux 7.1: libvirt-daemon-driver-lxc libvirt-daemon-lxc libvirt-login-shell Future development on the Linux containers framework is now based on the docker command-line interface. libvirt-lxc tooling may be removed in a future release of Red Hat Enterprise Linux (including Red Hat Enterprise Linux 7) and should not be relied upon for developing custom container management applications. For more information, see the Red Hat KnowledgeBase article . The Perl and shell scripts for Directory Server have been deprecated The Perl and shell scripts, which are provided by the 389-ds-base package, have been deprecated. The scripts will be replaced by new utilities in the major release of Red Hat Enterprise Linux. libguestfs can no longer inspect ISO installer files The libguestfs library does no longer support inspecting ISO installer files, for example using the guestfish or virt-inspector utilities. Use the osinfo-detect command for inspecting ISO files instead. This command can be obtained from the libosinfo package. Creating internal snapshots of virtual machines has been deprecated Due to their lack of optimization and stability, internal virtual machine snapshots are now deprecated. In their stead, external snapshots are recommended for use. For more information, including instructions for creating external snapshots, see the Virtualization Deployment and Admnistration Guide . IVSHMEM has been deprecated The inter-VM shared memory device (IVSHMEM) feature has been deprecated. 
Therefore, in a future major release of RHEL, if a virtual machine (VM) is configured to share memory between multiple virtual machines in the form of a PCI device that exposes memory to guests, the VM will fail to boot. The gnome-shell-browser-plugin subpackage has been deprecated Since the Firefox Extended Support Release (ESR 60), Firefox no longer supports the Netscape Plugin Application Programming Interface (NPAPI) that was used by the gnome-shell-browser-plugin subpackage. The subpackage, which provided the functionality to install GNOME Shell Extensions, has thus been deprecated. The installation of GNOME Shell Extensions is now handled directly in the gnome-software package. The VDO read cache has been deprecated The read cache functionality in Virtual Data Optimizer (VDO) has been deprecated. The read cache is disabled by default on new VDO volumes. In the major Red Hat Enterprise Linux release, the read cache functionality will be removed, and you will no longer be able to enable it using the --readCache option of the vdo utility. cpuid has been deprecated The cpuid command has been deprecated. A future major release of Red Hat Enterprise Linux will no longer support using cpuid to dump the information about CPUID instruction for each CPU. To obtain similar information, use the lscpu command instead. KDE has been deprecated KDE Plasma Workspaces (KDE), which has been provided as an alternative to the default GNOME desktop environment has been deprecated. A future major release of Red Hat Enterprise Linux will no longer support using KDE instead of the default GNOME desktop environment. Using virt-install with NFS locations is deprecated With a future major version of Red Hat Enterprise Linux, the virt-install utility will not be able to mount NFS locations. As a consequence, attempting to install a virtual machine using virt-install with a NFS address as a value of the --location option will fail. To work around this change, mount your NFS share prior to using virt-install , or use a HTTP location. The lwresd daemon has been deprecated The lwresd daemon, which is a part of the bind package, has been deprecated. A future major release of Red Hat Enterprise Linux will no longer support providing name lookup services to clients that use the BIND 9 lightweight resolver library with lwresd . The recommended replacements are: The systemd-resolved daemon and nss-resolve API, provided by the systemd package The unbound library API and daemon, provided by the unbound and unbound-libs packages The getaddrinfo and related glibc library calls The /etc/sysconfig/nfs file and legacy NFS service names have been deprecated A future major Red Hat Enterprise Linux release will move the NFS configuration from the /etc/sysconfig/nfs file to /etc/nfs.conf . Red Hat Enterprise Linux 7 currently supports both of these files. Red Hat recommends that you use the new /etc/nfs.conf file to make NFS configuration in all versions of Red Hat Enterprise Linux compatible with automated configuration systems. 
Additionally, the following NFS service aliases will be removed and replaced by their upstream names: nfs.service , replaced by nfs-server.service nfs-secure.service , replaced by rpc-gssd.service rpcgssd.service , replaced by rpc-gssd.service nfs-idmap.service , replaced by nfs-idmapd.service rpcidmapd.service , replaced by nfs-idmapd.service nfs-lock.service , replaced by rpc-statd.service nfslock.service , replaced by rpc-statd.service The JSON export functionality has been removed from the nft utility Previously, the nft utility provided an export feature, but the exported content could contain internal ruleset representation details, which was likely to change without further notice. For this reason, the deprecated export functionality has been removed from nft starting with RHEL 7.7. Future versions of nft , such as provided by RHEL 8, contain a high-level JSON API. However, this API not available in RHEL 7.7. The openvswitch-2.0.0-7 package in the RHEL 7 Optional repository has been deprecated RHEL 7.5 introduced the openvswitch-2.0.0-7.el7 package in the RHEL 7 Optional repository as a dependency of the NetworkManager-ovs package. This dependency no longer exists and, as a result, openvswitch-2.0.0-7.el7 is now deprecated. Note that Red Hat does not support packages in the RHEL 7 Optional repository and that openvswitch-2.0.0-7.el7 will not be updated in the future. For this reason, do not use this package in production environments. Deprecated PHP extensions The following PHP extensions have been deprecated: aspell mysql memcache Deprecated Apache HTTP Server modules The following modules of the Apache HTTP Server have been deprecated: mod_file_cache mod_nss mod_perl Apache Tomcat has been deprecated The Apache Tomcat server, a servlet container for the Java Servlet and JavaServer Pages (JSP) technologies, has been deprecated. Red Hat recommends that users requiring a servlet container use the JBoss Web Server. The DES algorithm is deprecated in IdM Due to security reasons, the Data Encryption Standard (DES) algorithm is deprecated in Identity Management (IdM). The MIT Kerberos libraries provided by the krb5-libs package do not support using the Data Encryption Standard (DES) in new deployments. Use DES only for compatibility reasons if your environment does not support any newer algorithm. Red Hat also recommends to avoid using RC4 ciphers over Kerberos. While DES is deprecated, the Server Message Block (SMB) protocol still uses RC4. However, the SMB protocol can also use the secure AES algorithms. For further details, see: MIT Kerberos Documentation - Retiring DES RFC6649: Deprecate DES, RC4-HMAC-EXP, and Other Weak Cryptographic Algorithms in Kerberos real(kind=16) type support has been removed from libquadmath library real(kind=16) type support has been removed from the libquadmath library in the compat-libgfortran-41 package in order to preserve ABI compatibility. Deprecated glibc features The following features of the GNU C library provided by the glibc packages have been deprecated: the librtkaio library Sun RPC and NIS interfaces Deprecated features of the GDB debugger The following features and capabilities of the GDB debugger have been deprecated: debugging Java programs built with the gcj compiler HP-UX XDB compatibility mode and the -xdb option Sun version of the stabs format Development headers and static libraries from valgrind-devel have been deprecated The valgrind-devel sub-package includes development files for developing custom Valgrind tools. 
These files do not have a guaranteed API, have to be linked statically, are unsupported, and thus have been deprecated. Red Hat recommends to use the other development files and header files for valgrind-aware programs from the valgrind-devel package such as valgrind.h , callgrind.h , drd.h , helgrind.h , and memcheck.h , which are stable and well supported. The nosegneg libraries for 32-bit Xen have been deprecated The glibc i686 packages contain an alternative glibc build, which avoids the use of the thread descriptor segment register with negative offsets ( nosegneg ). This alternative build is only used in the 32-bit version of the Xen Project hypervisor without hardware virtualization support, as an optimization to reduce the cost of full paravirtualization. This alternative build is deprecated. Ada, Go, and Objective C/C++ build capability in GCC has been deprecated Capability for building code in the Ada (GNAT), GCC Go, and Objective C/C++ languages using the GCC compiler has been deprecated. To build Go code, use the Go Toolset instead. Deprecated Kickstart commands and options The following Kickstart commands and options have been deprecated: upgrade btrfs part btrfs and partition btrfs part --fstype btrfs and partition --fstype btrfs logvol --fstype btrfs raid --fstype btrfs unsupported_hardware Where only specific options and values are listed, the base command and its other options are not deprecated. The env option in virt-who has become deprecated With this update, the virt-who utility no longer uses the env option for hypervisor detection. As a consequence, Red Hat discourages the use of env in your virt-who configurations, as the option will not have the intended effect. AGP graphics card have been deprecatd Graphics cards using the Accelerated Graphics Port (AGP) bus have been deprecated and are not supported in RHEL 8. AGP graphics cards are rarely used in 64-bit machines and the bus has been replaced by PCI-Express. The copy_file_range() call has been disabled on local file systems and in NFS The copy_file_range() system call on local file systems contains multiple issues that are difficult to fix. To avoid file corruptions, copy_file_range() support on local file systems has been disabled in RHEL 7.8. If an application uses the call in this case, copy_file_range() now returns an ENOSYS error. For the same reason, the server-side-copy feature has been disabled in the NFS server. However, the NFS client still supports copy_file_range() when accessing a server that supports server-side-copy. The ipv6 , netmask , gateway , and hostname kernel parameters have been deprecated The ipv6 , netmask , gateway , and hostname parameters to set the network configuration in the kernel command line have been deprecated. RHEL 8 supports only the consolidated ip parameter that accepts different formats, such as the following: For further details about the individual fields and other formats this parameter accepts, see the description of the ip parameter in the dracut.cmdline(7) man page. Note that you can already use the consolidated ip parameter in RHEL 7. The hidepid=n mount option is not recommended in RHEL 7 The mount option hidepid=n , which controls who can access information in /proc/[pid] directories, is not compatible with systemd provided in RHEL 7 and newer. In addition, using this option might cause certain services started by systemd to produce SELinux AVC denial messages and prevent other operations from completing. 
For more information, see the related Is mounting /proc with "hidepid=2" recommended with RHEL7 and RHEL8? . The -s split option is no longer supported with the -f option When providing files to Red Hat Support by uploading them to Red Hat Secure FTP , you can run the redhat-support-tool addattachment -f command. Due to infrastructure changes introduced in the RHBA-2022:0623 advisory, you can no longer use the -s option with this command for splitting big files into parts and uploading them to Red Hat Secure FTP . The redhat-support-tool diagnose <file_or_directory> command has been deprecated With the release of the RHBA-2022:0623 advisory, the Red Hat Support Tool no longer supports the redhat-support-tool diagnose <file_or_directory> command previously used for advanced diagnostic services for files or directories. The redhat-support-tool diagnose command continues to support the plain text analysis.
[ "ip=__IP_address__:__peer__:__gateway_IP_address__:__net_mask__:__host_name__:__interface_name__:__configuration_method__" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.9_release_notes/deprecated_functionality
Chapter 11. Configuring a System for Accessibility
Chapter 11. Configuring a System for Accessibility Accessibility in Red Hat Enterprise Linux 7 is ensured by the Orca screen reader, which is included in the default installation of the operating system. This chapter explains how a system administrator can configure a system to support users with a visual impairment. Orca reads information from the screen and communicates it to the user using: a speech synthesizer, which provides a speech output a braille display, which provides a tactile output For more information on Orca settings, see its help page . In order that Orca 's communication outputs function properly, the system administrator needs to: configure the brltty service, as described in Section 11.1, "Configuring the brltty Service" switch on the Always Show Universal Access Menu , as described in Section 11.2, "Switch On Always Show Universal Access Menu " enable the Festival speech synthesizer, as described in Section 11.3, "Enabling the Festival Speech Synthesis System " 11.1. Configuring the brltty Service The Braille display uses the brltty service to provide tactile output for visually impaired users. Enable the brltty Service The braille display cannot work unless brltty is running. By default, brltty is disabled. Enable brltty to be started on boot: Authorize Users to Use the Braille Display To set the users who are authorized to use the braille display, choose one of the following procedures, which have an equal effect. The procedure using the /etc/brltty.conf file is suitable even for the file systems where users or groups cannot be assigned to a file. The procedure using the /etc/brlapi.key file is suitable only for the file systems where users or groups can be assigned to a file. Setting Access to Braille Display by Using /etc/brltty.conf Open the /etc/brltty.conf file, and find the section called Application Programming Interface Parameters . Specify the users. To specify one or more individual users, list the users on the following line: To specify a user group, enter its name on the following line: Setting Access to Braille Display by Using /etc/brlapi.key Create the /etc/brlapi.key file. Change ownership of the /etc/brlapi.key to particular user or group. To specify an individual user: To specify a group: Adjust the content of /etc/brltty.conf to include this: Set the Braille Driver The braille-driver directive in /etc/brltty.conf specifies a two-letter driver identification code of the driver for the braille display. Setting the Braille Driver Decide whether you want to use the autodetection for finding the appropriate braille driver. If you want to use autodetection, leave braille driver specified to auto , which is the default option. Warning Autodetection tries all drivers. Therefore, it might take a long time or even fail. For this reason, setting up a particular braille driver is recommended. If you do not want to use the autodetection, specify the identification code of the required braille driver in the braille-driver directive. Choose the identification code of required braille driver from the list provided in /etc/brltty.conf , for example: You can also set multiple drivers, separated by commas, and autodetection is then performed among them. Set the Braille Device The braille-device directive in /etc/brltty.conf specifies the device to which the braille display is connected. The following device types are supported (see Table 11.1, "Braille Device Types and the Corresponding Syntax" ): Table 11.1. 
Braille Device Types and the Corresponding Syntax Braille Device Type Syntax of the Type serial device serial:path [a] USB device [serial-number] [b] Bluetooth device bluetooth:address [a] Relative paths are at /dev . [b] The brackets here indicate optionality. Examples of settings for particular devices: You can also set multiple devices, separated by commas, and each of them will be probed in turn. Warning If the device is connected by a serial-to-USB adapter, setting braille-device to usb: does not work. In this case, identify the virtual serial device that the kernel has created for the adapter. The virtual serial device can look like this: Set Specific Parameters for Particular Braille Displays If you need to set specific parameters for particular braille displays, use the braille-parameters directive in /etc/brltty.conf . The braille-parameters directive passes non-generic parameters through to the braille driver. Choose the required parameters from the list in /etc/brltty.conf . Set the Text Table The text-table directive in /etc/brltty.conf specifies which text table is used to encode the symbols. Relative paths to text tables are in the /etc/brltty/Text/ directory. Setting the Text Table Decide whether you want to use the autoselection for finding the appropriate text table. If you want to use the autoselection, leave text-table specified to auto , which is the default option. This ensures that local-based autoselection with fallback to en-nabcc is performed. If you do not want to use the autoselection, choose the required text-table from the list in /etc/brltty.conf . For example, to use the text table for American English: Set the Contraction Table The contraction-table directive in /etc/brltty.conf specifies which table is used to encode the abbreviations. Relative paths to particular contraction tables are in the /etc/brltty/Contraction/ directory. Choose the required contraction-table from the list in /etc/brltty.conf . For example, to use the contraction table for American English, grade 2: Warning If not specified, no contraction table is used. 11.2. Switch On Always Show Universal Access Menu To switch on the Orca screen reader, press the Super + Alt + S key combination. As a result, the Universal Access Menu icon is displayed on the top bar. Warning The icon disappears in case that the user switches off all of the provided options from the Universal Access Menu. Missing icon can cause difficulties to users with a visual impairment. System administrators can prevent the inaccessibility of the icon by switching on the Always Show Universal Access Menu . When the Always Show Universal Access Menu is switched on, the icon is displayed on the top bar even in the situation when all options from this menu are switched off. Switching On Always Show Universal Access Menu Open the Gnome settings menu, and click Universal Access . Switch on Always Show Universal Access Menu . Optional: Verify that the Universal Access Menu icon is displayed on the top bar even if all options from this menu are switched off. 11.3. Enabling the Festival Speech Synthesis System By default, Orca uses the eSpeak speech synthesizer, but it also supports the Festival Speech Synthesis System . Both eSpeak and Festival Speech Synthesis System (Festival) synthesize voice differently. Some users might prefer Festival to the default eSpeak synthesizer. 
To enable Festival, follow these steps: Installing Festival and Making it Running on Boot Install Festival: Make Festival running on boot: Create a new systemd unit file: Create a file in the /etc/systemd/system/ directory and make it executable. Ensure that the script in the /usr/bin/festival_server file is used to run Festival. Add the following content to the /etc/systemd/system/festival.service file: Notify systemd that a new festival.service file exists: Enable festival.service : Choose a Voice for Festival Festival provides multiples voices. To make a voice available, install the relevant package from the following list: festvox-awb-arctic-hts festvox-bdl-arctic-hts festvox-clb-arctic-hts festvox-kal-diphone festvox-ked-diphone festvox-rms-arctic-hts festvox-slt-arctic-hts hispavoces-pal-diphone hispavoces-sfl-diphone To see detailed information about a particular voice: To make the required voice available, install the package with this voice and then reboot:
[ "~]# systemctl enable brltty.service", "api-parameters Auth=user: user_1, user_2, ... # Allow some local user", "api-parameters Auth=group: group # Allow some local group", "~]# mcookie > /etc/brlapi.key", "~]# chown user_1 /etc/brlapi.key", "~]# chown group_1 /etc/brlapi.key", "api-parameters Auth=keyfile: /etc/brlapi.key", "braille-driver auto # autodetect", "braille-driver xw # XWindow", "braille-device serial:ttyS0 # First serial device braille-device usb: # First USB device matching braille driver braille-device usb:nnnnn # Specific USB device by serial number braille-device bluetooth:xx:xx:xx:xx:xx:xx # Specific Bluetooth device by address", "serial:ttyUSB0", "You can find the actual device name in the kernel messages on the device plug with the following command:", "~]# dmesg | fgrep ttyUSB0", "text-table auto # locale-based autoselection", "text-table en_US # English (United States)", "contraction-table en-us-g2 # English (US, grade 2)", "~]# yum install festival festival-freebsoft-utils", "~]# touch /etc/systemd/system/festival.service ~]# chmod 664 /etc/systemd/system/festival.service", "[Unit] Description=Festival speech synthesis server [Service] ExecStart=/usr/bin/festival_server Type=simple", "~]# systemctl daemon-reload ~]# systemctl start festival.service", "~]# systemctl enable festival.service", "~]# yum info package_name", "~]# yum install package_name ~]# reboot" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/system_administrators_guide/ch-Accessbility
21.4. Additional Resources
21.4. Additional Resources This chapter discusses the basics of using NFS. For more detailed information, refer to the following resources. 21.4.1. Installed Documentation The man pages for nfsd , mountd , exports , auto.master , and autofs (in manual sections 5 and 8) - These man pages show the correct syntax for the NFS and autofs configuration files.
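For example, the syntax of the /etc/exports file and of the autofs master map can be reviewed directly from the installed man pages:
man 5 exports
man 5 auto.master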
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/Network_File_System_NFS-Additional_Resources
Chapter 4. Operator topologies
Chapter 4. Operator topologies The Ansible Automation Platform Operator uses Red Hat OpenShift Operators to deploy Ansible Automation Platform within Red Hat OpenShift. Customers manage the product and infrastructure lifecycle. Important You can only install a single instance of the Ansible Automation Platform Operator into a single namespace. Installing multiple instances in the same namespace can lead to improper operation for both Operator instances. 4.1. Operator growth topology The growth topology is intended for organizations that are getting started with Ansible Automation Platform and do not require redundancy or higher compute for large volumes of automation. This topology allows for smaller footprint deployments. 4.1.1. Infrastructure topology The following diagram outlines the infrastructure topology that Red Hat has tested with this deployment model that customers can use when self-managing Ansible Automation Platform: Figure 4.1. Infrastructure topology diagram A Single Node OpenShift (SNO) cluster has been tested with the following requirements: 32 GB RAM, 16 CPUs, 128 GB local disk, and 3000 IOPS. Table 4.1. Infrastructure topology Count Component 1 Automation controller web pod 1 Automation controller task pod 1 Automation hub API pod 2 Automation hub content pod 2 Automation hub worker pod 1 Automation hub Redis pod 1 Event-Driven Ansible API pod 1 Event-Driven Ansible activation worker pod 1 Event-Driven Ansible default worker pod 1 Event-Driven Ansible event stream pod 1 Event-Driven Ansible scheduler pod 1 Platform gateway pod 1 Database pod 1 Redis pod Note You can deploy multiple isolated instances of Ansible Automation Platform into the same Red Hat OpenShift Container Platform cluster by using a namespace-scoped deployment model. This approach allows you to use the same cluster for several deployments. 4.1.2. Tested system configurations Red Hat has tested the following configurations to install and run Red Hat Ansible Automation Platform: Table 4.2. Tested system configurations Type Description Subscription Valid Red Hat Ansible Automation Platform subscription Operating system Red Hat Enterprise Linux 9.2 or later minor versions of Red Hat Enterprise Linux 9 CPU architecture x86_64, AArch64, s390x (IBM Z), ppc64le (IBM Power) Red Hat OpenShift Version: 4.14 num_of_control_nodes: 1 num_of_worker_nodes: 1 Ansible-core Ansible-core version 2.16 or later Browser A currently supported version of Mozilla Firefox or Google Chrome. Database PostgreSQL 15 4.1.3. Example custom resource file Use the following example custom resource (CR) to add your Ansible Automation Platform instance to your project: apiVersion: aap.ansible.com/v1alpha1 kind: AnsibleAutomationPlatform metadata: name: <aap instance name> spec: eda: automation_server_ssl_verify: 'no' hub: storage_type: 's3' object_storage_s3_secret: '<name of the Secret resource holding s3 configuration>' 4.1.4. Nonfunctional requirements Ansible Automation Platform's performance characteristics and capacity are impacted by its resource allocation and configuration. With OpenShift, each Ansible Automation Platform component is deployed as a pod. You can specify resource requests and limits for each pod. Use the Ansible Automation Platform Custom Resource (CR) to configure resource allocation for OpenShift installations. Each configurable item has default settings. These settings are the minimum requirements for an installation, but might not meet your production workload needs. 
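Where the default resource settings are not enough for a production workload, the custom resource can carry explicit requests and limits for individual components. The following sketch is illustrative only: the task_resource_requirements field name is an assumption borrowed from the AutomationController component specification, so confirm the exact field path against the CRDs installed in your cluster (for example, with oc explain ansibleautomationplatform.spec ) before applying anything like it. The default behavior that such an override changes is described next.
apiVersion: aap.ansible.com/v1alpha1
kind: AnsibleAutomationPlatform
metadata:
  name: <aap instance name>
spec:
  controller:
    task_resource_requirements:   # assumed field name - verify with 'oc explain'
      requests:
        cpu: "1"
        memory: 2Gi
      limits:
        cpu: "2"
        memory: 4Gi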
By default, each component's deployments are set for minimum resource requests but no resource limits. OpenShift only schedules the pods with available resource requests, but the pods are allowed to consume unlimited RAM or CPU provided that the OpenShift worker node itself is not under node pressure. In the Operator growth topology, Ansible Automation Platform is deployed on a Single Node OpenShift (SNO) with 32 GB RAM, 16 CPUs, 128 GB Local disk, and 3000 IOPS. This is not a shared environment, so Ansible Automation Platform pods have full access to all of the compute resources of the OpenShift SNO. In this scenario, the capacity calculation for the automation controller task pods is derived from the underlying OpenShift Container Platform node that runs the pod. It does not have access to the entire node. This capacity calculation influences how many concurrent jobs automation controller can run. OpenShift manages storage distinctly from VMs. This impacts how automation hub stores its artifacts. In the Operator growth topology, we use S3 storage because automation hub requires a ReadWriteMany type storage, which is not a default storage type in OpenShift. 4.1.5. Network ports Red Hat Ansible Automation Platform uses several ports to communicate with its services. These ports must be open and available for incoming connections to the Red Hat Ansible Automation Platform server for it to work. Ensure that these ports are available and are not blocked by the server firewall. Table 4.3. Network ports and protocols Port number Protocol Service Source Destination 80/443 HTTP/HTTPS Receptor Execution node OpenShift Container Platform ingress 80/443 HTTP/HTTPS Receptor Hop node OpenShift Container Platform ingress 80/443 HTTP/HTTPS Platform Customer clients OpenShift Container Platform ingress 27199 TCP Receptor OpenShift Container Platform cluster Execution node 27199 TCP Receptor OpenShift Container Platform cluster Hop node 4.2. Operator enterprise topology The enterprise topology is intended for organizations that require Ansible Automation Platform to be deployed with redundancy or higher compute for large volumes of automation. 4.2.1. Infrastructure topology The following diagram outlines the infrastructure topology that Red Hat has tested with this deployment model that customers can use when self-managing Ansible Automation Platform: Figure 4.2. Infrastructure topology diagram The following infrastructure topology describes an OpenShift Cluster with 3 primary nodes and 2 worker nodes. Each OpenShift Worker node has been tested with the following component requirements: 16 GB RAM, 4 CPUs, 128 GB local disk, and 3000 IOPS. Table 4.4. Infrastructure topology Count Component 1 Automation controller web pod 1 Automation controller task pod 1 Automation hub API pod 2 Automation hub content pod 2 Automation hub worker pod 1 Automation hub Redis pod 1 Event-Driven Ansible API pod 2 Event-Driven Ansible activation worker pod 2 Event-Driven Ansible default worker pod 2 Event-Driven Ansible event stream pod 1 Event-Driven Ansible scheduler pod 1 Platform gateway pod 2 Mesh ingress pod N/A Externally managed database service N/A Externally managed Redis N/A Externally managed object storage service (for automation hub) 4.2.2. Tested system configurations Red Hat has tested the following configurations to install and run Red Hat Ansible Automation Platform: Table 4.5. 
Tested system configurations Type Description Subscription Valid Red Hat Ansible Automation Platform subscription Operating system Red Hat Enterprise Linux 9.2 or later minor versions of Red Hat Enterprise Linux 9 CPU architecture x86_64, AArch64, s390x (IBM Z), ppc64le (IBM Power) Red Hat OpenShift Red Hat OpenShift on AWS Hosted Control Planes 4.15.16 2 worker nodes in different availability zones (AZs) at t3.xlarge Ansible-core Ansible-core version 2.16 or later Browser A currently supported version of Mozilla Firefox or Google Chrome. AWS RDS PostgreSQL service engine: "postgres" engine_version: "15" parameter_group_name: "default.postgres15" allocated_storage: 20 max_allocated_storage: 1000 storage_type: "gp2" storage_encrypted: true instance_class: "db.t4g.small" multi_az: true backup_retention_period: 5 database: must have ICU support AWS Memcached Service engine: "redis" engine_version: "6.2" auto_minor_version_upgrade: "false" node_type: "cache.t3.micro" parameter_group_name: "default.redis6.x.cluster.on" transit_encryption_enabled: "true" num_node_groups: 2 replicas_per_node_group: 1 automatic_failover_enabled: true S3 storage HTTPS only accessible through AWS Role assigned to automation hub SA at runtime by using AWS Pod Identity 4.2.3. Nonfunctional requirements Ansible Automation Platform's performance characteristics and capacity are impacted by its resource allocation and configuration. With OpenShift, each Ansible Automation Platform component is deployed as a pod. You can specify resource requests and limits for each pod. Use the Ansible Automation Platform custom resource to configure resource allocation for OpenShift installations. Each configurable item has default settings. These settings are the exact configuration used within the context of this reference deployment architecture and presume that the environment is being deployed and managed by an Enterprise IT organization for production purposes. By default, each component's deployments are set for minimum resource requests but no resource limits. OpenShift only schedules the pods with available resource requests, but the pods are allowed to consume unlimited RAM or CPU provided that the OpenShift worker node itself is not under node pressure. In the Operator enterprise topology, Ansible Automation Platform is deployed on a Red Hat OpenShift on AWS (ROSA) Hosted Control Plane (HCP) cluster with 2 t3.xlarge worker nodes spread across 2 AZs within a single AWS Region. This is not a shared environment, so Ansible Automation Platform pods have full access to all of the compute resources of the ROSA HCP cluster. In this scenario, the capacity calculation for the automation controller task pods is derived from the underlying HCP worker node that runs the pod. It does not have access to the CPU or memory resources of the entire node. This capacity calculation influences how many concurrent jobs automation controller can run. OpenShift manages storage distinctly from VMs. This impacts how automation hub stores its artifacts. In the Operator enterprise topology, we use S3 storage because automation hub requires a ReadWriteMany type storage, which is not a default storage type in OpenShift. Externally provided Redis, PostgreSQL, and object storage for automation hub are specified. This provides the Ansible Automation Platform deployment with additional scalability and reliability features, including specialized backup, restore, and replication services and scalable storage. 4.2.4. 
Network ports Red Hat Ansible Automation Platform uses several ports to communicate with its services. These ports must be open and available for incoming connections to the Red Hat Ansible Automation Platform server for it to work. Ensure that these ports are available and are not blocked by the server firewall. Table 4.6. Network ports and protocols Port number Protocol Service Source Destination 80/443 HTTP/HTTPS Object storage OpenShift Container Platform cluster External object storage service 80/443 HTTP/HTTPS Receptor Execution node OpenShift Container Platform ingress 80/443 HTTP/HTTPS Receptor Hop node OpenShift Container Platform ingress 5432 TCP PostgreSQL OpenShift Container Platform cluster External database service 6379 TCP Redis OpenShift Container Platform cluster External Redis service 27199 TCP Receptor OpenShift Container Platform cluster Execution node 27199 TCP Receptor OpenShift Container Platform cluster Hop node
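As a basic operational check that these paths are open, simple connectivity tests can be run from the relevant source machines. The host names below are placeholders for your own ingress and node addresses:
# From an execution or hop node, confirm that the platform ingress answers over HTTPS:
curl -kIs https://aap.apps.example.com/ | head -n 1
# From the cluster side, confirm that the receptor port on an execution node is reachable:
nc -zv exec-node1.example.com 27199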
[ "apiVersion: aap.ansible.com/v1alpha1 kind: AnsibleAutomationPlatform metadata: name: <aap instance name> spec: eda: automation_server_ssl_verify: 'no' hub: storage_type: 's3' object_storage_s3_secret: '<name of the Secret resource holding s3 configuration>'" ]
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/tested_deployment_models/ocp-topologies
Appendix B. Changes in version 4 of the API
Appendix B. Changes in version 4 of the API This section enumerates the backwards compatibility breaking changes that have been introduced in version 4 of the API. B.1. Removed YAML support The support for YAML has been completely removed. B.2. Renamed complex types The following XML schema complex types have been renamed: Version 3 Version 4 API Api CPU Cpu CPUs Cpus CdRom Cdrom CdRoms Cdroms DNS Dns GuestNicConfiguration NicConfiguration GuestNicsConfiguration NicConfigurations HostNICStates HostNicStates HostNIC HostNic HostStorage HostStorages IO Io IP Ip IPs Ips KSM Ksm MAC Mac NIC Nic PreviewVMs PreviewVms QoS Qos QoSs Qoss RSDL Rsdl SELinux SeLinux SPM Spm SSHPublicKey SshPublicKey SSHPublicKeys SshPublicKeys SSH Ssh SkipIfSDActive SkipIfSdActive Slaves HostNics Storage HostStorage SupportedVersions Versions VCpuPin VcpuPin VLAN Vlan VM Vm VMs Vms VirtIO_SCSI VirtioScsi WatchDog Watchdog WatchDogs Watchdogs B.3. Replaced the Status type with enum types Currently the status of different objects is reported using the Status type, which contains a state string describing the status and another detail string for additional details. For example, the status of a virtual machine that is paused due to an IO error is currently reported as follows: <vm> ... <status> <state>paused</state> <detail>eio</detail> </status> ... </vm> In version 4 of the API this Status type has been removed and replaced by enum types. When the additional detail string is needed it has been replaced with an additional status_detail attribute. So, for example, the status of the same virtual machine will now be reported as follows: <vm> ... <status>paused</status> <status_detail>eio</status_detail> ... </vm> B.4. Remove the NIC network and port_mirroring properties The NIC network and port_mirroring elements have been replaced by the vnic_profile element, so when creating or updating a NIC instead of specifying the network and port mirroring configuration, these are previously specified by creating a vNIC profile: <vnic_profile> <name>myprofile</name> <network id="..."/> <port_mirroring>true</port_mirroring> </vnic_profile> And then the NIC is created or updated referencing the existing vNIC profile: <nic> <vnic_profile id="/vnicprofiles/..."> </nic> The old elements and their meaning were preserved for backwards compatibility, but they have now been completely removed. Note that the network element hasn't been removed from the XML schema because it is still used by the initialization element, but it will be completely ignored if provided when creating or updating a NIC. B.5. Remove the NIC active property The NIC active property was replaced by plugged some time ago. It has been completely removed now. B.6. Remove the disk type property The type property of disks was deprecated some time ago, but kept in the XML schema and ignored. It has been completely removed now. B.7. Remove the disk size property The disk size property was replaced by provisioned_size long ago. It has been completely removed now. B.8. Removed support for pinning a VM to a single host Before version 3.6 the API had the possibility to pin a VM to a single host, using the placement_policy element of the VM entity: <vm> <placement_policy> <host id="456"/> </placement_policy> </vm> In version 3.6 this capability was enhanced to support multiple hosts, and to do so a new hosts element was added: <vm> <placement_policy> <hosts> <host id="456"/> <host id="789"/> ... </hosts> </placement_policy> </vm> To preserve backwards compatibility the single host element was preserved. 
In 4.0 this has been removed, so applications will need to use the hosts element even when pinning to a single host. B.9. Removed the capabilities.permits element The list of permits is potentially different for each cluster level, and it has been added to the version element long ago, but it has been kept in the capabilities element as well, just for backwards compatibility. In 4.0 the capabilities service has been completely removed, and replaced by the new clusterlevels service. To find the permits supported by cluster level 4.0 a request like this should be used: The result will be a document containing the information specific to that cluster level, in particular the set of supported permits: <cluster_level id="4.0" href="/clusterlevels/4.0"> ... <permits> <permit id="1"> <name>create_vm</name> <administrative>false</administrative> </permit> ... </permits> </cluster_level> B.10. Removed the storage_manager element The storage_manager element was replaced by the spm element some time ago. The old one was kept for backwards compatibility, but it has been completely removed now. B.11. Removed the data center storage_type element Data centers used to be associated with a specific storage type (NFS, Fibre Channel, iSCSI, etc.) but they have been changed, and now there are only two types: those with local storage and those with shared storage. A new local element was introduced to indicate this, and the old storage_type element was preserved for backwards compatibility. This old element has now been completely removed. B.12. Remove the timezone element The VM resource used to contain a timezone element to represent the time zone. This element only allowed a string: <vm> <timezone>Europe/Madrid</timezone> </vm> This doesn't allow extension, and as it was necessary to add the UTC offset, it was replaced with a new structured time_zone element: <vm> <time_zone> <name>Europe/Madrid</name> <utc_offset>GMT+1</utc_offset> </time_zone> </vm> The old timezone element was preserved, but it has been completely removed now. B.13. Removed the guest_info element The guest_info element was used to hold information gathered by the guest agent, like the IP addresses and the fully qualified host name. This information is also available in other places. For example, the IP addresses are available within the VM resource: <vm> <guest_info> <ips> <ip address="192.168.122.30"/> </ips> <fqdn>myvm.example.com</fqdn> </guest_info> </vm> And also within the NIC resource, using the newer reported_devices element: <nic> <reported_devices> <reported_device> <name>eth0</name> <mac address="00:1a:4a:b5:4c:94"/> <ips> <ip address="192.168.1.115" version="v4"/> <ip address="fe80::21a:4aff:feb5:4c94" version="v6"/> <ip address="::1:21a:4aff:feb5:4c94" version="v6"/> </ips> </reported_device> </reported_devices> </nic> In addition this newer reported_devices element provides more complete information, like multiple IP addresses, MAC addresses, etc. To remove this duplication the guest_info element has been removed. To support the fully qualified domain name a new fqdn element has been added to the VM resource: This will contain the same information that guest_info.fqdn used to contain. B.14. Replaced CPU id attribute with type element The cpu element used to have an id attribute that indicates the type of CPU: <cpu id="Intel Conroe Family"> <architecture>X86_64</architecture> ... </cpu> This is in contradiction with the rest of the elements of the API model, where the id attribute is used for opaque identifiers. 
This id attribute has been replaced with a new type element: <cpu> <type>Intel Conroe Family</type> <architecture>X86_64</architecture> </cpu> B.15. Use elements instead of attributes in CPU topology In the past the CPU topology element used attributes for its properties: <cpu> <topology sockets="1" cores="1" threads="1"/> ... </cpu> This is contrary to the common practice in the API. They have been replaced by inner elements: <cpu> <topology> <sockets>1</sockets> <cores>1</cores> <threads>1</threads> </topology> ... </cpu> B.16. Use elements instead of attributes in VCPU pin In the past the VCPU pin element used attributes for its properties: <cpu_tune> <vcpu_pin vcpu="0" cpu_set="0"/> </cpu_tune> This is contrary to the common practice in the API. They have been replaced by inner elements: <cpu_tune> <vcpu_pin> <vcpu>0</vcpu> <cpu_set>0</cpu_set> </vcpu_pin> </cpu_tune> B.17. Use elements instead of attributes in version In the past the version element used attributes for its properties: <version major="3" minor="5" .../> This is contrary to the common practice in the API. They have been replaced by inner elements: <version> <major>3</major> <minor>5</minor> ... </version> B.18. Use elements instead of attributes in memory overcommit In the past the overcommit element used attributes for its properties: <memory_policy> <overcommit percent="100"/> ... </memory_policy> This is contrary to the common practice in the API. They have been replaced by inner elements: <memory_policy> <overcommit> <percent>100</percent> </overcommit> ... </memory_policy> B.19. Use elements instead of attributes in console In the past the console element used attributes for its properties: <console enabled="true"/> This is contrary to the common practice in the API. They have been replaced by inner elements: <console> <enabled>true</enabled> </console> B.20. Use elements instead of attributes in VIRTIO SCSI In the past the VIRTIO SCSI element used attributes for its properties: <virtio_scsi enabled="true"/> This is contrary to the common practice in the API. They have been replaced by inner elements: <virtio_scsi> <enabled>true</enabled> </virtio_scsi> B.21. Use element instead of attribute for power management agent type The power management type property was represented as an attribute: <agent type="apc"> <username>myuser</username> ... </agent> This is contrary to the common practice in the API. It has been replaced with an inner element: <agent> <type>apc</type> <username>myuser</username> ... </agent> B.22. Use elements instead of attributes in power management agent options In the past the power management agent options element used attributes for its properties: <options> <option name="port" value="22"/> <option name="slot" value="5"/> ... </options> This is contrary to the common practice in the API. They have been replaced with inner elements: <options> <option> <name>port</name> <value>22</value> </option> <option> <name>slot</name> <value>5</value> </option> ... </options> B.23. Use elements instead of attributes in IP address In the past the IP address element used attributes for its properties: <ip address="192.168.122.1" netmask="255.255.255.0"/> This is contrary to the common practice in the API. They have been replaced with inner elements: <ip> <address>192.168.122.1</address> <netmask>255.255.255.0</netmask> </ip> B.24. 
Use elements instead of attributes in MAC address In the past the MAC address element used attributes for its properties: <mac address="66:f2:c5:5f:bb:8d"/> This is contrary to the common practice in the API. They have been replaced by inner elements: <mac> <address>66:f2:c5:5f:bb:8d</address> </mac> B.25. Use elements instead of attributes in boot device In the past the boot device element used attributes for its properties: <boot dev="cdrom"/> This is contrary to the common practice in the API. They have been replaced by inner elements: <boot> <dev>cdrom</dev> </boot> B.26. Use element instead of attribute for operating system type The operating system type property was represented as an attribute: <os type="other"> ... </os> This is contrary to the common practice in the API. It has been replaced with an inner element: <os> <type>other</type> ... </os> B.27. Removed the force parameter from the request to retrieve a host The request to retrieve a host used to support a force matrix parameter to indicate that the data of the host should be refreshed (calling VDSM to reload host capabilities and devices) before retrieving it from the database: This force parameter has been superseded by the host refresh action, but kept for backwards compatibility. It has been completely removed now. Applications that require this functionality should perform two requests, the first one to refresh the host: <action/> And then one to retrieve it, without the force parameter: B.28. Removed deprecated host power management configuration The host power management configuration used to be part of the host resource, using embedded configuration elements: <power_management type="apc"> <enabled>true</enabled> <address>myaddress</address> <username>myaddress</username> <options> <option name="port" value="22"/> <option name="slot" value="5"/> </options> ... </power_management> This was changed some time ago, in order to support multiple power management agents, introducing a new /hosts/123/fenceagents collection. The old type attribute, the old address , username and password elements, and the inner agents element directly inside power_management were preserved for backwards compatibility. All these elements have been completely removed, so the only way to query or modify the power management agents is now the /hosts/123/fenceagents sub-collection. B.29. Use multiple boot.devices.device instead of multiple boot In the past the way to specify the boot sequence when starting a virtual machine was to use multiple boot elements, each containing a dev element. For example, to specify that the virtual machine should first try to boot from CDROM and then from hard disk the following request was used: <action> <vm> ... <boot> <dev>cdrom</dev> </boot> <boot> <dev>hd</dev> </boot> </vm> </action> The common practice in other parts of the API is to represent arrays with a wrapper element. In that case that wrapper element could be named boots , but that doesn't make much sense, as what can have multiple values here is the boot device, not the boot sequence. To fix this inconsistency this has been replaced with a single boot element that can contain multiple devices: <action> <vm> ... <boot> <devices> <device>cdrom</device> <device>hd</device> </devices> </boot> </vm> </action> B.30. Removed the disks.clone and disks.detach_only elements These elements aren't really part of the representation of disks, but parameters of the operations to add and remove virtual machines. 
The disks.clone element was used to indicate that the disks of a new virtual machine have to be cloned: <vm> ... <disks> <clone>true</clone> </disks> </vm> This has now been removed, and replaced by a new clone query parameter: <vm> ... </vm> The disks.detach_only element was used to indicate that when removing a virtual machine the disks don't have to be removed, but just detached from the virtual machine: <action> <vm> <disks> <detach_only>true</detach_only> </disks> </vm> </action> This has now been removed, and replaced by a new detach_only query parameter: B.31. Rename element vmpool to vm_pool The names of the elements that represent pools of virtual machines used to be vmpool and vmpools . They have been renamed to vm_pool and vm_pools in order to have a consistent correspondence between names of complex types ( VmPool and VmPools in this case) and elements. B.32. Use logical_units instead of multiple logical_unit The logical units that are part of a volume group used to be reported as an unbounded number of logical_unit elements. For example, when reporting the details of a storage domain: <storage_domain> ... <storage> ... <volume_group> <logical_unit> <!-- First LU --> </logical_unit> <logical_unit> <!-- Second LU --> </logical_unit> ... </volume_group> </storage> </storage_domain> This is contrary to the usual practice in the API, as lists of elements are always wrapped with an element. This has been fixed now, so the list of logical units will be wrapped with the logical_units element: <storage_domain> ... <storage> ... <volume_group> <logical_units> <logical_unit> <!-- First LU --> </logical_unit> <logical_unit> <!-- Second LU --> </logical_unit> ... </logical_units> </volume_group> </storage> </storage_domain> B.33. Removed the snapshots.collapse_snapshots element This element isn't really part of the representation of snapshots, but a parameter of the operation that imports a virtual machine from an export storage domain: <action> <vm> <snapshots> <collapse_snapshots>true</collapse_snapshots> </snapshots> </vm> </action> This has now been removed, and replaced by a new collapse_snapshots query parameter: <action/> B.34. Renamed storage and host_storage elements The host storage collection used the storage and host_storage elements and the Storage and HostStorage complex types to report the storage associated with a host: <host_storage> <storage> ... </storage> <storage> ... </storage> ... </host_storage> This doesn't follow the pattern used in the rest of the API, where the outer element is a plural name and the inner element is the same name but in singular. This has now been changed to use host_storages as the outer element and host_storage as the inner element: <host_storages> <host_storage> ... </host_storage> <host_storage> ... </host_storage> ... </host_storages> B.35. Removed the permissions.clone element This element isn't really part of the representation of permissions, but a parameter of the operations to create virtual machines or templates: <vm> <template id="..."> <permissions> <clone>true</clone> </permissions> </template> </vm> <template> <vm id="..."> <permissions> <clone>true</clone> </permissions> </vm> </template> This has now been removed, and replaced by a new clone_permissions query parameter: <vm> <template id="..."/> </vm> <template> <vm id="..."/> </template> B.36. 
Renamed the random number generator source elements The random number generator sources used to be reported using a collection of source elements wrapped by an element with a name reflecting its use. For example, the required random number generator sources of a cluster used to be reported as follows: <cluster> ... <required_rng_sources> <source>random</source> </required_rng_sources> ... </cluster> And the random number generator sources supported by a host used to be reported as follows: <host> ... <hardware_information> <supported_rng_sources> <source>random</source> </supported_rng_sources> </hardware_information> ... </host> This isn't consistent with the rest of the API, where collections are wrapped by a name in plural and elements by the same name in singular. This has now been fixed. The required random number generator sources will now be reported as follows: <cluster> <required_rng_sources> <required_rng_source>random</required_rng_source> </required_rng_sources> ... </cluster> And the random number generator sources supported by a host will be reported as follows: <host> ... <hardware_information> <supported_rng_sources> <supported_rng_source>random</supported_rng_source> </supported_rng_sources> </hardware_information> ... </host> Note the use of required_rng_source and supported_rng_source instead of just source . B.37. Removed the intermediate tag.parent element The relationship between a tag and its parent tag used to be represented using an intermediate parent tag, that in turn contains another tag element: <tag> <name>mytag</name> <parent> <tag id="..." href="..."/> </parent> </tag> This structure has been simplified so that only one parent element is used now: <tag> <name>mytag</name> <parent id="..." href="..."/> </tag> B.38. Remove scheduling built-in names and thresholds In the past the specification of scheduling policies for clusters was based on built-in names and thresholds. For example, a cluster that used the evenly distributed scheduling policy was represented as follows: <cluster> <name>mycluster</name> <scheduling_policy> <policy>evenly_distributed</policy> <thresholds high="80" duration="120"/> </scheduling_policy> ... </cluster> This mechanism was replaced with a top level /schedulingpolicies collection where scheduling policies can be defined with arbitrary names and properties. For example, the same scheduling policy is represented as follows in that top level collection: <scheduling_policy> <name>evenly_distributed</name> <properties> <property> <name>CpuOverCommitDurationMinutes</name> <value>2</value> </property> <property> <name>HighUtilization</name> <value>80</value> </property> </properties> </scheduling_policy> The representation of the cluster references the scheduling policy with its identifier: <cluster> <name>mycluster</name> <scheduling_policy id="..."/> ... </cluster> To preserve backwards compatibility the old policy and thresholds elements were preserved. The scheduling policy representation embedded within the cluster was also preserved. All these things have been completely removed now, so the only way to reference a scheduling policy when retrieving, creating or updating a cluster is to reference an existing one using its identifier. For example, when retrieving a cluster only the id (and href ) will be populated: <cluster> ... <scheduling_policy id="..." href="..."/> ... </cluster> When creating or updating a cluster only the id will be accepted. B.39. 
Removed the bricks.replica_count and bricks.stripe_count elements These elements aren't really part of the representation of a collection of bricks, but parameters of the operations to add and remove bricks. They have now been removed, and replaced by new replica_count and stripe_count parameters: B.40. Renamed the statistics type property to kind The statistics used to be represented using a type element that indicates the kind of statistic (gauge, counter, etc.) and also a type attribute that indicates the type of the values (integer, string, etc.): <statistic> <type>GAUGE</type> <values type="INTEGER"> <value>...</value> <value>...</value> ... </values> </statistic> To avoid the use of the type concept for both things the first has been replaced by kind , and both kind and type are now elements: <statistic> <kind>gauge</kind> <type>integer</type> <values> <value>...</value> <value>...</value> ... </values> </statistic> B.41. Use multiple vcpu_pins.vcpu_pin instead of multiple vcpu_pin In the past the way to specify the virtual to physical CPU pinning of a virtual machine was to use multiple vcpu_pin elements: <vm> <cpu> <cpu_tune> <vcpu_pin>...</vcpu_pin> <vcpu_pin>...</vcpu_pin> ... </cpu_tune> </cpu> </vm> In order to conform to the common practice in other parts of the API this has been changed to use a wrapper element, in this case vcpu_pins : <vm> <cpu> <cpu_tune> <vcpu_pins> <vcpu_pin>...</vcpu_pin> <vcpu_pin>...</vcpu_pin> ... </vcpu_pins> </cpu_tune> </cpu> </vm> B.42. Use force parameter to force remove a data center The operation that removes a data center supports a force parameter. In order to use it the DELETE operation used to support an optional action parameter: <action> <force>true</force> </action> This optional action parameter has been replaced with an optional query parameter: B.43. Use force parameter to force remove a host The operation that removes a host supports a force parameter. In order to use it the DELETE operation used to support an optional action parameter: <action> <force>true</force> </action> This optional action parameter has been replaced with an optional query parameter: B.44. Use parameters for force remove storage domain The operation that removes a storage domain supports the force , destroy and host parameters. These parameters were passed to the DELETE method using the representation of the storage domain as the body: <storage_domain> <force>...</force> <destroy>...</destroy> <host id="..."> <name>...</name> </host> </storage_domain> This was problematic, as the HTTP DELETE method shouldn't have a body, and the representation of the storage domain shouldn't include things that aren't attributes of the storage domain, but rather parameters of the operation. The force , destroy and host attributes have been replaced by equivalent parameters, and the operation no longer accepts a body. For example, now the correct way to delete a storage domain with the force parameter is the following: To delete with the destroy parameter: B.45. Use host parameter to remove storage server connection The operation that removes a storage server connection supports a host parameter. In order to use it the DELETE method used to support an optional action parameter: <action> <host id="..."> <name>...</name> </host> </action> This optional action parameter has been replaced with an optional query parameter: B.46. Use force and storage_domain parameters to remove template disks The operation that removes a template disk supports the force and storage_domain parameters. 
In order to use them the DELETE method used to support an optional action parameter: <action> <force>...</force> <storage_domain id="..."/> </action> In version 4 of the API this operation has been moved to the new diskattachments collection, and the request body has been replaced with the query parameters force and storage_domain : B.47. Don't remove disks via the VM disk API Removing an entity by deleting /vms/123/disks/456 means removing the relationship between the VM and the disk - i.e., this operation should just detach the disk from the VM. This operation is no longer able to remove disks completely from the system, which was prone to user errors and had irreversible consequences. To remove a disk, instead use the /disks/456 API: B.48. Use force query parameter to force remove a virtual machine The operation that removes a virtual machine supports a force parameter. In order to use it the DELETE method used to support an optional action parameter: <action> <force>true</force> </action> This optional action parameter has been replaced with an optional query parameter: B.49. Use POST instead of DELETE to remove multiple bricks The operation that removes multiple Gluster bricks was implemented using the DELETE method and passing the list of bricks as the body of the request: <bricks> <bricks id="..."/> <bricks id="..."/> ... </bricks> This is problematic because the DELETE method shouldn't have a body, so it has been replaced with a new remove action that uses the POST method: <bricks> <bricks id="..."/> <bricks id="..."/> ... </bricks> B.50. Removed the scheduling_policy.policy element The element was kept for backward compatibility. Use scheduling_policy.name instead. <scheduling_policy> ... <name>policy_name</name> ... </scheduling_policy> <scheduling_policy> ... <name>policy_name</name> ... </scheduling_policy> B.51. Added snapshot.snapshot_type Enums are being gradually introduced to the API. Some fields, which were strings until now, are replaced with an appropriate enum. One such field is vm.type. But this field is inherited by snapshot, and snapshot type is different from vm type. So a new field has been added to the snapshot entity: snapshot.snapshot_type . <snapshot> ... <snapshot_type>regular|active|stateless|preview</snapshot_type> ... </snapshot> B.52. Removed move action from VM The deprecated move action of the VM entity has been removed. Instead, you can move individual disks. B.53. Moved reported_configurations.in_sync to network_attachment In version 3 of the API the XML schema type ReportedConfigurations had an in_sync property: <network_attachment> <reported_configurations> <in_sync>true</in_sync> <reported_configuration> ... </reported_configuration> ... </reported_configurations> </network_attachment> In the specification mechanism used by version 4 of the API this can't be expressed, because list types (the list of reported configurations) can't have attributes. To be able to represent it the attribute has been moved to the enclosing network_attachment : <network_attachment> <in_sync>true</in_sync> <reported_configurations> <reported_configuration> ... </reported_configuration> ... </reported_configurations> </network_attachment> B.54. Replaced capabilities with clusterlevels The top level capabilities collection has been replaced by the new clusterlevels collection. 
This new collection will contain the information that isn't available in the model, like the list of CPU types available for each cluster level: This will return a list of ClusterLevel objects containing the details for all the cluster levels supported by the system: <cluster_levels> <cluster_level id="3.6" href="/clusterlevels/3.6"> <cpu_types> <cpu_type> <name>Intel Conroe Family</name> <level>2</level> <architecture>x86_64</architecture> </cpu_type> ... </cpu_types> ... </cluster_level> </cluster_levels> Each specific cluster level has its own subresource, identified by the version itself: This will return the details of that version: <cluster_level id="3.6" href="/clusterlevels/3.6"> <cpu_types> <cpu_type> <name>Intel Conroe Family</name> <level>2</level> <architecture>x86_64</architecture> </cpu_type> ... </cpu_types> ... </cluster_level> B.55. Replaced disks with diskattachments In version 3 of the API virtual machines and templates had a disks collection containing all the information of the disks attached to them. In version 4 of the API these disks collections have been removed and replaced with a new diskattachments collection that will contain only the references to the disk and the attributes that are specific to the relationship between disks and the virtual machine or template that they are attached to: interface and bootable . To find what disks are attached to a virtual machine, for example, send a request like this: That will return a response like this: <disk_attachments> <disk_attachment href="/vms/123/diskattachments/456" id="456"> <bootable>false</bootable> <interface>virtio</interface> <disk href="/disks/456" id="456"/> <vm href="/vms/123" id="123"/> </disk_attachment> ... </disk_attachments> To find the rest of the details of the disk, follow the link provided. Adding disks to a virtual machine or template uses the new disk_attachment element as well. Send a request like this, with the following body if the disk doesn't exist and you want to create it: <disk_attachment> <bootable>false</bootable> <interface>virtio</interface> <disk> <description>My disk</description> <format>cow</format> <name>mydisk</name> <provisioned_size>1048576</provisioned_size> <storage_domains> <storage_domain> <name>mydata</name> </storage_domain> </storage_domains> </disk> </disk_attachment> Or with the following body if the disk already exists, and you just want to attach it to the virtual machine: <disk_attachment> <bootable>false</bootable> <interface>virtio</interface> <disk id="456"/> </disk_attachment> Take into account that the vm.disks and template.disks attributes have been replaced by disk_attachments in all usages. For example, when creating a template the vm.disks element was used to indicate in which storage domain to create the disks of the template. This usage has also been replaced by vm.disk_attachments , so the request to create a template with disks in specific storage domains will now look like this: <template> <name>mytemplate</name> <vm id="123"> <disk_attachments> <disk_attachment> <disk id="456"> <storage_domains> <storage_domain id="789"/> </storage_domains> </disk> </disk_attachment> ... </disk_attachments> </vm> </template> B.56. 
Use iscsi_targets element to discover unregistered storage In version 3 of the API the operation to discover unregistered storage domains used to receive a list of iSCSI targets, using multiple iscsi_target elements: <action> <iscsi> <address>myiscsiserver</address> </iscsi> <iscsi_target>iqn.2016-07.com.example:mytarget1</iscsi_target> <iscsi_target>iqn.2016-07.com.example:mytarget2</iscsi_target> </action> In version 4 of the API all repeating elements, like iscsi_target in this case, are wrapped with another element, iscsi_targets in this case. So the same request should now look like this: <action> <iscsi> <address>myiscsiserver</address> </iscsi> <iscsi_targets> <iscsi_target>iqn.2016-07.com.example:mytarget1</iscsi_target> <iscsi_target>iqn.2016-07.com.example:mytarget2</iscsi_target> </iscsi_targets> </action>
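As an illustration of how such a version 4 request can be sent in practice, the following curl sketch posts the wrapped iscsi_targets body to the discovery action; the engine host name and the credentials are assumptions and must be replaced with real values:
curl -k -u 'admin@internal:password' \
  -H 'Content-Type: application/xml' \
  -H 'Accept: application/xml' \
  -X POST 'https://engine.example.com/ovirt-engine/api/hosts/123/unregisteredstoragedomaindiscover' \
  -d '<action>
        <iscsi>
          <address>myiscsiserver</address>
        </iscsi>
        <iscsi_targets>
          <iscsi_target>iqn.2016-07.com.example:mytarget1</iscsi_target>
        </iscsi_targets>
      </action>'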
[ "<vm> <status> <state>paused</state> <detail>eio</detail> </status> </vm>", "<vm> <status>paused</status> <status_detail>eio</status_detail> </vm>", "POST /ovirt-engine/api/vnicprofiles", "<vnic_profile> <name>myprofile</name> <network id=\"...\"/> <port_mirroring>true</port_mirroring> </vnic_profile>", "PUT /ovirt-engine/api/vms/123/nics/456", "<nic> <vnic_profile id=\"/vnicprofiles/...\"> </nic>", "PUT /ovirt-engine/api/vms/123", "<vm> <placement_policy> <host id=\"456\"/> </placement_policy> <vm>", "PUT /ovirt-engine/api/vms/123", "<vm> <placement_policy> <hosts> <host id=\"456\"/> <host id=\"789\"/> </hosts> </placement_policy> <vm>", "GET /ovirt-engine/api/clusterlevels/4.0", "<cluster_level id=\"4.0\" href=\"/clusterlevels/4.0\"> <permits> <permit id=\"1\"> <name>create_vm</name> <administrative>false</administrative> </permit> </permits> </cluster_level>", "<vm> <timezone>Europe/Madrid</timezone> </vm>", "<vm> <time_zone> <name>Europe/Madrid</name> <utc_offset>GMT+1</utc_offset> </time_zone> </vm>", "GET /ovirt-engine/api/vms/123", "<vm> <guest_info> <ips> <ip address=\"192.168.122.30\"/> </ips> <fqdn>myvm.example.com</fqdn> </guest_info> </vm>", "GET /ovirt-engine/api/vms/{vm:id}/nics/{nic:id}", "<nic> <reported_devices> <reported_device> <name>eth0</name> <mac address=\"00:1a:4a:b5:4c:94\"/> <ips> <ip address=\"192.168.1.115\" version=\"v4\"/> <ip address=\"fe80::21a:4aff:feb5:4c94\" version=\"v6\"/> <ip address=\"::1:21a:4aff:feb5:4c94\" version=\"v6\"/> </ips> </reported_device> </reported_devices> </nic>", "GET /ovirt-engine/api/vms/123", "<vm> <fqdn>myvm.example.com</fqdn> </vms>", "<cpu id=\"Intel Conroe Family\"> <architecture>X86_64</architecture> </cpu>", "<cpu> <type>Intel Conroe Family</type> <architecture>X86_64</architecture> </cpu>", "<cpu> <topology sockets=\"1\" cores=\"1\" threads=\"1\"/> </cpu>", "<cpu> <topology> <sockets>1<sockets> <cores>1<cores> <threads>1<threads> </topology> </cpu>", "<cpu_tune> <vcpu_pin vcpu=\"0\" cpu_set=\"0\"/> </cpu_tune>", "<cpu_tune> <vcpu_pin> <vcpu>0</vcpu> <cpu_set>0</cpu_set> </vcpu_pin> </cpu_tune>", "<version major=\"3\" minor=\"5\" ../>", "<version> <major>3</minor> <minor>5</minor> </version>", "<memory_policy> <overcommit percent=\"100\"/> </memory_policy>", "<memory_policy> <overcommit> <percent>100</percent> </overcommit> </memory_policy>", "<console enabled=\"true\"/>", "<console> <enabled>true</enabled> </console>", "<virtio_scsi enabled=\"true\"/>", "<virtio_scsi> <enabled>true</enabled> </virtio_scsi>", "<agent type=\"apc\"> <username>myuser</username> </agent>", "<agent> <type>apc</type> <username>myuser</username> </agent>", "<options> <option name=\"port\" value=\"22\"/> <option name=\"slot\" value=\"5\"/> </options>", "<options> <option> <name>port</name> <value>22</value> </option> <option> <name>slot</name> <value>5</value> </option> </options>", "<ip address=\"192.168.122.1\" netmask=\"255.255.255.0\"/>", "<ip> <address>192.168.122.1</address> <netmask>255.255.255.0</netmask> </ip>", "<mac address=\"66:f2:c5:5f:bb:8d\"/>", "<mac> <address>66:f2:c5:5f:bb:8d</address> </mac>", "<boot dev=\"cdrom\"/>", "<boot> <dev>cdrom</dev> </boot>", "<os type=\"other\"> </os>", "<os> <type>other</type> </os>", "GET /ovirt-engine/api/hosts/123;force", "POST /ovirt-engine/api/hosts/123/refresh", "<action/>", "GET /ovirt-engine/api/hosts/123", "<power_management type=\"apc\"> <enabled>true</enabled> <address>myaddress</address> <username>myaddress</username> <options> <option name=\"port\" value=\"22/> </option name=\"slot\" 
value=\"5/> </options> </power_management>", "POST /ovirt-engine/api/vms/123/start", "<action> <vm> <boot> <dev>cdrom</dev> </boot> <boot> <dev>hd</dev> </boot> </vm> </action>", "POST /ovirt-engine/api/vms/123/start", "<action> <vm> <boot> <devices> <device>cdrom</device> <device>hd</device> </devices> </boot> </vm> </action>", "POST /ovirt-engine/api/vms", "<vm> <disks> <clone>true</clone> </disks> <vm>", "POST /ovirt-engine/api/vms?clone=true", "<vm> </vm>", "DELETE /ovirt-engine/api/vms/123", "<action> <vm> <disks> <detach_only>true</detach_only> </disks> </vm> </action>", "DELETE /ovirt-engine/api/vms/123?detach_only=true", "GET /ovirt-engine/api/storagedomains/123", "<storage_domain> <storage> <volume_group> <logical_unit> <!-- First LU --> </logical_unit> <logical_unit> <!-- Second LU --> </logical_unit> </volume_group> </storage> </storage_domain>", "GET /ovirt-engine/api/storagedomains/123", "<storage_domain> <storage> <volume_group> <logical_units> <logical_unit> <!-- First LU --> </logical_unit> <logical_unit> <!-- Second LU --> </logical_unit> </logical_units> </volume_group> </storage> </storage_domain>", "POST /ovirt-engine/api/storagedomains/123/vms/456/import", "<action> <vm> <snapshots> <collapse_snapshots>true</collapse_snapshots> </snapshots> </vm> </action>", "POST /ovirt-engine/api/storagedomains/123/vms/456/import?collapse_snapshots=true", "<action/>", "GET /ovirt-engine/api/hosts/123/storage", "<host_storage> <storage> </storage> <storage> </storage> </host_storage>", "GET /ovirt-engine/api/hosts/123/storage", "<host_storages> <host_storage> </host_storage> <host_storage> </host_storage> </host_storage>", "POST /ovirt-engine/api/vms", "<vm> <template id=\"...\"> <permissions> <clone>true</clone> </permissions> </template> </action>", "POST /ovirt-engine/api/templates", "<template> <vm id=\"...\"> <permissions> <clone>true</clone> </permissions> </vm> </template>", "POST /ovirt-engine/api/vms?clone_permissions=true", "<vm> <template id=\"...\"/> </vm>", "POST /ovirt-engine/api/templates?clone_permissions=true", "<template> <vm id=\"...\"/> </template>", "GET /ovirt-engine/api/clusters/123", "<cluster> <required_rng_sources> <source>random</source> </required_rng_sources> </cluster>", "GET /ovirt-engine/api/hosts/123", "<host> <hardware_information> <supported_rng_sources> <source>random</source> </supported_rng_sources> </hardware_information> </host>", "GET /ovirt-engine/api/clusters/123", "<cluster> <required_rng_sources> <required_rng_sources>random</required_rng_source> </required_rng_sources> </cluster>", "GET /ovirt-engine/api/hosts/123", "<host> <hardware_information> <supported_rng_sources> <supported_rng_source>random</supported_rng_source> </supported_rng_sources> </hardware_information> </host>", "<tag> <name>mytag</name> <parent> <tag id=\"...\" href=\"...\"/> </parent> </tag>", "<tag> <name>mytag</name> <parent id=\"...\" href=\"...\"/> </tag>", "<cluster> <name>mycluster</name> <scheduling_policy> <policy>evenly_distributed</policy> <thresholds high=\"80\" duration=\"120\"/> </scheduling_policy> </cluster>", "<scheduling_policy> <name>evenly_distributed</name> <properties> <property> <name>CpuOverCommitDurationMinutes</name> <value>2</value> </property> <property> <name>HighUtilization</name> <value>80</value> </property> </properties> </scheduling_policy>", "<cluster> <name>mycluster</name> <scheduling_policy id=\"...\"/> </cluster>", "GET /ovirt-engine/api/clusters/123", "<cluster> <scheduling_policy id=\"...\" href=\"...\"/> </cluster>", "POST 
.../bricks?replica_count=3&stripe_count=2", "DELETE .../bricks?replica_count=3", "<statistic> <type>GAUGE</type> <values type=\"INTEGER\"> <value>...</value> <value>...</value> </values> </statistic>", "<statistic> <kind>gauge</kind> <type>integer</type> <values> <value>...</value> <value>...</value> </values> </statistic>", "<vm> <cpu> <cpu_tune> <vcpu_pin>...</vcpu_pin> <vcpu_pin>...</vcpu_pin> </cpu_tune> </cpu> </vm>", "<vm> <cpu> <cpu_tune> <vcpu_pins> <vcpu_pin>...</vcpu_pin> <vcpu_pin>...</vcpu_pin> </vcpu_pins> </cpu_tune> </cpu> </vm>", "DELETE /ovirt-engine/api/datacenters/123", "<action> <force>true</force> </action>", "DELETE /ovirt-engine/api/datacenters/123?force=true", "DELETE /ovirt-engine/api/host/123", "<action> <force>true</force> </action>", "DELETE /ovirt-engine/api/host/123?force=true", "DELETE /ovirt-engine/api/storagedomains/123", "<storage_domain> <force>...</force> <destroy>...</destroy> <host id=\"...\"> <name>...</name> </host> </storage_domain>", "DELETE /ovirt-engine/api/storagedomain/123?host=myhost&force=true", "DELETE /ovirt-engine/api/storagedomain/123?host=myhost&destroy=true", "DELETE /ovirt-engine/api/storageconnections/123", "<action> <host id=\"...\"> <name>...</name> </host> </action>", "DELETE /ovirt-engine/api/storageconnections/123?host=myhost", "DELETE /ovirt-engine/api/templates/123/disks/456", "<action> <force>...</force> <storage_domain id=\"...\"/> </action>", "DELETE /ovirt-engine/api/templates/123/disksattachments/456?force=true", "DELETE /ovirt-engine/api/templates/123/disksattachments/456?storage_domain=123", "DELETE /ovirt-engine/api/disks/456", "DELETE /ovirt-engine/api/vms/123", "<action> <force>true</force> </action>", "DELETE /ovirt-engine/api/vms/123?force=true", "DELETE /ovirt-engine/api/clusters/123/glustervolumes/456/bricks", "<bricks> <bricks id=\"...\"/> <bricks id=\"...\"/> </bricks>", "POST /ovirt-engine/api/clusters/123/glustervolumes/456/bricks/remove", "<bricks> <bricks id=\"...\"/> <bricks id=\"...\"/> </bricks>", "POST /ovirt-engine/api/schedulingpolicies", "<scheduling_policy> <name>policy_name</name> </scheduling_policy>", "PUT /ovirt-engine/api/schedulingpolicies/123", "<scheduling_policy> <name>policy_name</name> </scheduling_policy>", "<snapshot> <snapshot_type>regular|active|stateless|preview</snapshot_type> </snapshot>", "<network_attachment> <reported_configurations> <in_sync>true</in_sync> <reported_configuration> </reported_configuration> </reported_configurations> </network_attachment>", "<network_attachment> <in_sync>true</in_sync> <reported_configurations> <reported_configuration> </reported_configuration> </reported_configurations> </network_attachment>", "GET /ovirt-engine/api/clusterlevels", "<cluster_levels> <cluster_level id=\"3.6\" href=\"/clusterlevels/3.6\"> <cpu_types> <cpu_type> <name>Intel Conroe Family</name> <level>2</level> <architecture>x86_64</architecture> </cpu_type> </cpu_types> </cluster_level> </cluster_levels>", "GET /ovirt-engine/api/clusterlevels/3.6", "<cluster_level id=\"3.6\" href=\"/clusterlevels/3.6\"> <cpu_types> <cpu_type> <name>Intel Conroe Family</name> <level>2</level> <architecture>x86_64</architecture> </cpu_type> </cpu_types> </cluster_level>", "GET /ovirt-engine/api/vms/123/diskattachments", "<disk_attachments> <disk_attachment href=\"/vms/123/diskattachments/456\" id=\"456\"> <bootable>false</bootable> <interface>virtio</interface> <disk href=\"/disks/456\" id=\"456\"/> <vm href=\"/vms/123\" id=\"123\"/> </disk_attachment> <disk_attachments>", "POST 
/ovirt-engine/api/vms/123/diskattachments", "<disk_attachment> <bootable>false</bootable> <interface>virtio</interface> <disk> <description>My disk</description> <format>cow</format> <name>mydisk</name> <provisioned_size>1048576</provisioned_size> <storage_domains> <storage_domain> <name>mydata</name> </storage_domain> </storage_domains> </disk> </disk_attachment>", "<disk_attachment> <bootable>false</bootable> <interface>virtio</interface> <disk id=\"456\"/> </disk_attachment>", "<template> <name>mytemplate</name> <vm id=\"123\"> <disk_attachments> <disk_attachment> <disk id=\"456\"> <storage_domains> <storage_domain id=\"789\"/> </storage_domains> </disk> </disk_attachment> </disk_attachments> </vm> </template>", "POST /ovirt-engine/api/hosts/123/unregisteredstoragedomaindiscover", "<action> <iscsi> <address>myiscsiserver</address> </iscsi> <iscsi_target>iqn.2016-07.com.example:mytarget1</iscsi_target> <iscsi_target>iqn.2016-07.com.example:mytarget2</iscsi_target> </action>", "POST /ovirt-engine/api/hosts/123/unregisteredstoragedomaindiscover", "<action> <iscsi> <address>myiscsiserver</address> </iscsi> <iscsi_targets> <iscsi_target>iqn.2016-07.com.example:mytarget1</iscsi_target> <iscsi_target>iqn.2016-07.com.example:mytarget2</iscsi_target> </iscsi_targets> </action>" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/rest_api_guide/documents-a02_changes_in_v4
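The query-parameter forms of the DELETE requests shown above can be exercised directly with curl. The following is a minimal sketch only: the engine FQDN, the user, the password, and the use of --insecure are placeholder assumptions for illustration, and VM ID 123 is taken from the examples above.

# Placeholder engine URL and credentials; substitute your own values.
ENGINE="https://engine.example.com/ovirt-engine/api"
CREDENTIALS="admin@internal:password"

# Force-remove VM 123 using the query parameter instead of an
# <action><force>true</force></action> request body.
curl --request DELETE \
     --user "${CREDENTIALS}" \
     --header "Accept: application/xml" \
     --insecure \
     "${ENGINE}/vms/123?force=true"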
30.8. Disabling and Enabling sudo Rules
30.8. Disabling and Enabling sudo Rules Disabling a sudo rule temporarily deactivates it. A disabled rule is not removed from IdM and can be enabled again. Disabling and Enabling sudo Rules from the Web UI Under the Policy tab, click Sudo > Sudo Rules . Select the rule, and click Disable or Enable . Figure 30.15. Disabling or Enabling a sudo Rule Disabling and Enabling sudo Rules from the Command Line To disable a rule, use the ipa sudorule-disable command. To re-enable a rule, use the ipa sudorule-enable command.
[ "ipa sudorule-disable sudo_rule_name ----------------------------------- Disabled Sudo Rule \"sudo_rule_name\" -----------------------------------", "ipa sudorule-enable sudo_rule_name ----------------------------------- Enabled Sudo Rule \"sudo_rule_name\" -----------------------------------" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/suspending-sudo
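To confirm the result of the ipa sudorule-disable and ipa sudorule-enable commands shown above, you can inspect the rule's current state from the command line. This is a minimal sketch; sudo_rule_name is a placeholder for your own rule name.

# Show the rule; the Enabled attribute reflects the disable/enable state.
ipa sudorule-show sudo_rule_name

# List all sudo rules with their attributes, including the enabled flag.
ipa sudorule-find --all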
Streams for Apache Kafka API Reference
Streams for Apache Kafka API Reference Red Hat Streams for Apache Kafka 2.7 Configure a deployment of Streams for Apache Kafka 2.7 on OpenShift Container Platform
[ "config: ssl.cipher.suites: TLS_AES_256_GCM_SHA384, TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 1 ssl.enabled.protocols: TLSv1.3, TLSv1.2 2 ssl.protocol: TLSv1.3 3 ssl.endpoint.identification.algorithm: HTTPS 4", "create secret generic MY-SECRET --from-file= MY-TLS-CERTIFICATE-FILE.crt", "tls: trustedCertificates: - secretName: my-cluster-cluster-cert certificate: ca.crt - secretName: my-cluster-cluster-cert certificate: ca2.crt", "tls: trustedCertificates: []", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # resources: requests: memory: 64Gi cpu: \"8\" limits: memory: 64Gi cpu: \"12\" entityOperator: # topicOperator: # resources: requests: memory: 512Mi cpu: \"1\" limits: memory: 512Mi cpu: \"1\"", "resources: requests: memory: 512Mi limits: memory: 2Gi", "resources: requests: cpu: 500m limits: cpu: 2.5", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # image: my-org/my-image:latest # zookeeper: #", "readinessProbe: initialDelaySeconds: 15 timeoutSeconds: 5 livenessProbe: initialDelaySeconds: 15 timeoutSeconds: 5", "kind: ConfigMap apiVersion: v1 metadata: name: my-configmap data: my-key: | lowercaseOutputName: true rules: # Special cases and very specific rules - pattern: kafka.server<type=(.+), name=(.+), clientId=(.+), topic=(.+), partition=(.*)><>Value name: kafka_server_USD1_USD2 type: GAUGE labels: clientId: \"USD3\" topic: \"USD4\" partition: \"USD5\" # further configuration", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # metricsConfig: type: jmxPrometheusExporter valueFrom: configMapKeyRef: name: my-config-map key: my-key # zookeeper: #", "jvmOptions: \"-Xmx\": \"2g\" \"-Xms\": \"2g\"", "jvmOptions: \"-XX\": \"UseG1GC\": \"true\" \"MaxGCPauseMillis\": \"20\" \"InitiatingHeapOccupancyPercent\": \"35\" \"ExplicitGCInvokesConcurrent\": \"true\"", "-XX:+UseG1GC -XX:MaxGCPauseMillis=20 -XX:InitiatingHeapOccupancyPercent=35 -XX:+ExplicitGCInvokesConcurrent -XX:-UseParNewGC", "jvmOptions: javaSystemProperties: - name: javax.net.debug value: ssl", "jvmOptions: gcLoggingEnabled: true", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # listeners: - name: plain port: 9092 type: internal tls: false # zookeeper: #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # config: num.partitions: 1 num.recovery.threads.per.data.dir: 1 default.replication.factor: 3 offsets.topic.replication.factor: 3 transaction.state.log.replication.factor: 3 transaction.state.log.min.isr: 1 log.retention.hours: 168 log.segment.bytes: 1073741824 log.retention.check.interval.ms: 300000 num.network.threads: 3 num.io.threads: 8 socket.send.buffer.bytes: 102400 socket.receive.buffer.bytes: 102400 socket.request.max.bytes: 104857600 group.initial.rebalance.delay.ms: 0 zookeeper.connection.timeout.ms: 6000 #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # rack: topologyKey: topology.kubernetes.io/zone brokerRackInitImage: my-org/my-image:latest #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: # kafka: # logging: type: inline loggers: kafka.root.logger.level: INFO log4j.logger.kafka.coordinator.transaction: TRACE log4j.logger.kafka.log.LogCleanerManager: DEBUG log4j.logger.kafka.request.logger: DEBUG log4j.logger.io.strimzi.kafka.oauth: DEBUG log4j.logger.org.openpolicyagents.kafka.OpaAuthorizer: DEBUG #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: # logging: type: external 
valueFrom: configMapKeyRef: name: customConfigMap key: kafka-log4j.properties #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # listeners: - name: plain port: 9092 type: internal tls: false - name: tls port: 9093 type: internal tls: true authentication: type: tls - name: external1 port: 9094 type: route tls: true - name: external2 port: 9095 type: ingress tls: true authentication: type: tls configuration: bootstrap: host: bootstrap.myingress.com brokers: - broker: 0 host: broker-0.myingress.com - broker: 1 host: broker-1.myingress.com - broker: 2 host: broker-2.myingress.com #", "listeners: - name: plain port: 9092 type: internal tls: false", "get kafka <kafka_cluster_name> -o=jsonpath='{.status.listeners[?(@.name==\" <listener_name> \")].bootstrapServers}{\"\\n\"}'", "# spec: kafka: # listeners: # - name: plain port: 9092 type: internal tls: false - name: tls port: 9093 type: internal tls: true authentication: type: tls #", "# spec: kafka: # listeners: # - name: external1 port: 9094 type: route tls: true #", "# spec: kafka: # listeners: # - name: external2 port: 9095 type: ingress tls: true authentication: type: tls configuration: bootstrap: host: bootstrap.myingress.com brokers: - broker: 0 host: broker-0.myingress.com - broker: 1 host: broker-1.myingress.com - broker: 2 host: broker-2.myingress.com #", "# spec: kafka: # listeners: - name: external3 port: 9094 type: loadbalancer tls: true configuration: loadBalancerSourceRanges: - 10.0.0.0/8 - 88.208.76.87/32 #", "# spec: kafka: # listeners: # - name: external4 port: 9095 type: nodeport tls: false configuration: preferredNodePortAddressType: InternalDNS #", "# spec: kafka: # listeners: - name: clusterip type: cluster-ip tls: false port: 9096 #", "listeners: # - name: plain port: 9092 type: internal tls: true authentication: type: scram-sha-512 networkPolicyPeers: - podSelector: matchLabels: app: kafka-sasl-consumer - podSelector: matchLabels: app: kafka-sasl-producer - name: tls port: 9093 type: internal tls: true authentication: type: tls networkPolicyPeers: - namespaceSelector: matchLabels: project: myproject - namespaceSelector: matchLabels: project: myproject2", "spec: kafka: config: principal.builder.class: SimplePrincipal.class listeners: - name: oauth-bespoke port: 9093 type: internal tls: true authentication: type: custom sasl: true listenerConfig: oauthbearer.sasl.client.callback.handler.class: client.class oauthbearer.sasl.server.callback.handler.class: server.class oauthbearer.sasl.login.callback.handler.class: login.class oauthbearer.connections.max.reauth.ms: 999999999 sasl.enabled.mechanisms: oauthbearer oauthbearer.sasl.jaas.config: | org.apache.kafka.common.security.oauthbearer.OAuthBearerLoginModule required ; secrets: - name: example", "public final class CustomKafkaPrincipalBuilder implements KafkaPrincipalBuilder { public KafkaPrincipalBuilder() {} @Override public KafkaPrincipal build(AuthenticationContext context) { if (context instanceof SslAuthenticationContext) { SSLSession sslSession = ((SslAuthenticationContext) context).session(); try { return new KafkaPrincipal( KafkaPrincipal.USER_TYPE, sslSession.getPeerPrincipal().getName()); } catch (SSLPeerUnverifiedException e) { throw new IllegalArgumentException(\"Cannot use an unverified peer for authentication\", e); } } // Create your own KafkaPrincipal here } }", "listeners: # - name: external3 port: 9094 type: loadbalancer tls: true authentication: type: tls configuration: brokerCertChainAndKey: secretName: my-secret 
certificate: my-listener-certificate.crt key: my-listener-key.key", "listeners: # - name: external3 port: 9094 type: loadbalancer tls: false configuration: externalTrafficPolicy: Local loadBalancerSourceRanges: - 10.0.0.0/8 - 88.208.76.87/32 #", "listeners: # - name: external2 port: 9094 type: ingress tls: true configuration: class: nginx-internal #", "listeners: # - name: external4 port: 9094 type: nodeport tls: false configuration: preferredNodePortAddressType: InternalDNS #", "listeners: # - name: plain port: 9092 type: internal tls: false configuration: useServiceDnsDomain: true #", "listeners: # - name: external1 port: 9094 type: route tls: true authentication: type: tls configuration: bootstrap: alternativeNames: - example.hostname1 - example.hostname2", "listeners: # - name: external2 port: 9094 type: ingress tls: true authentication: type: tls configuration: bootstrap: host: bootstrap.myingress.com brokers: - broker: 0 host: broker-0.myingress.com - broker: 1 host: broker-1.myingress.com - broker: 2 host: broker-2.myingress.com", "listeners: # - name: external1 port: 9094 type: route tls: true authentication: type: tls configuration: bootstrap: host: bootstrap.myrouter.com brokers: - broker: 0 host: broker-0.myrouter.com - broker: 1 host: broker-1.myrouter.com - broker: 2 host: broker-2.myrouter.com", "listeners: # - name: external4 port: 9094 type: nodeport tls: true authentication: type: tls configuration: bootstrap: nodePort: 32100 brokers: - broker: 0 nodePort: 32000 - broker: 1 nodePort: 32001 - broker: 2 nodePort: 32002", "listeners: # - name: external3 port: 9094 type: loadbalancer tls: true authentication: type: tls configuration: bootstrap: loadBalancerIP: 172.29.3.10 brokers: - broker: 0 loadBalancerIP: 172.29.3.1 - broker: 1 loadBalancerIP: 172.29.3.2 - broker: 2 loadBalancerIP: 172.29.3.3", "listeners: # - name: external3 port: 9094 type: loadbalancer tls: true authentication: type: tls configuration: bootstrap: annotations: external-dns.alpha.kubernetes.io/hostname: kafka-bootstrap.mydomain.com. external-dns.alpha.kubernetes.io/ttl: \"60\" brokers: - broker: 0 annotations: external-dns.alpha.kubernetes.io/hostname: kafka-broker-0.mydomain.com. external-dns.alpha.kubernetes.io/ttl: \"60\" - broker: 1 annotations: external-dns.alpha.kubernetes.io/hostname: kafka-broker-1.mydomain.com. external-dns.alpha.kubernetes.io/ttl: \"60\" - broker: 2 annotations: external-dns.alpha.kubernetes.io/hostname: kafka-broker-2.mydomain.com. 
external-dns.alpha.kubernetes.io/ttl: \"60\"", "listeners: # - name: external1 port: 9094 type: route tls: true authentication: type: tls configuration: brokers: - broker: 0 advertisedHost: example.hostname.0 advertisedPort: 12340 - broker: 1 advertisedHost: example.hostname.1 advertisedPort: 12341 - broker: 2 advertisedHost: example.hostname.2 advertisedPort: 12342", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster namespace: myproject spec: kafka: # authorization: type: simple superUsers: - CN=client_1 - user_2 - CN=client_3 #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster namespace: myproject spec: kafka: # authorization: type: opa url: http://opa:8181/v1/data/kafka/allow allowOnError: false initialCacheCapacity: 1000 maximumCacheSize: 10000 expireAfterMs: 60000 superUsers: - CN=fred - sam - CN=edward #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster namespace: myproject spec: kafka: # authorization: type: custom authorizerClass: io.mycompany.CustomAuthorizer superUsers: - CN=client_1 - user_2 - CN=client_3 # config: authorization.custom.property1=value1 authorization.custom.property2=value2 #", "FROM registry.redhat.io/amq-streams/kafka-37-rhel9:2.7.0 USER root:root COPY ./ my-authorizer / /opt/kafka/libs/ USER 1001", "public List<AuthorizationResult> authorize(AuthorizableRequestContext requestContext, List<Action> actions) { KafkaPrincipal principal = requestContext.principal(); if (principal instanceof OAuthKafkaPrincipal) { OAuthKafkaPrincipal p = (OAuthKafkaPrincipal) principal; for (String group: p.getGroups()) { System.out.println(\"Group: \" + group); } } }", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # rack: topologyKey: topology.kubernetes.io/zone #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # rack: topologyKey: topology.kubernetes.io/zone config: # replica.selector.class: org.apache.kafka.common.replica.RackAwareReplicaSelector #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect spec: # rack: topologyKey: topology.kubernetes.io/zone #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 spec: # rack: topologyKey: topology.kubernetes.io/zone #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge spec: # rack: topologyKey: topology.kubernetes.io/zone #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # jmxOptions: authentication: type: \"password\" # zookeeper: # jmxOptions: authentication: type: \"password\" #", "\" CLUSTER-NAME -kafka-0. 
CLUSTER-NAME -kafka-brokers\"", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # jmxOptions: {} # zookeeper: # jmxOptions: {} #", "template: pod: metadata: labels: label1: value1 label2: value2 annotations: annotation1: value1 annotation2: value2", "template: pod: metadata: labels: label1: value1 annotations: anno1: value1 imagePullSecrets: - name: my-docker-credentials securityContext: runAsUser: 1000001 fsGroup: 0 terminationGracePeriodSeconds: 120", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect # spec: # template: pod: hostAliases: - ip: \"192.168.1.86\" hostnames: - \"my-host-1\" - \"my-host-2\" #", "template: podDisruptionBudget: metadata: labels: key1: label1 key2: label2 annotations: key1: label1 key2: label2 maxUnavailable: 1", "template: kafkaContainer: env: - name: EXAMPLE_ENV_1 value: example.env.one - name: EXAMPLE_ENV_2 value: example.env.two securityContext: runAsUser: 2000", "kafka: tieredStorage: type: custom remoteStorageManager: className: com.example.kafka.tiered.storage.s3.S3RemoteStorageManager classPath: /opt/kafka/plugins/tiered-storage-s3/* config: # A map with String keys and String values. # Key properties are automatically prefixed with `rsm.config.` # and appended to Kafka broker config. storage.bucket.name: my-bucket config: # Additional RLMM configuration can be added through the Kafka config # under `spec.kafka.config` using the `rlmm.config.` prefix. rlmm.config.remote.log.metadata.topic.replication.factor: 1", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: kafka: # zookeeper: # config: autopurge.snapRetainCount: 3 autopurge.purgeInterval: 2 #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: # zookeeper: # logging: type: inline loggers: zookeeper.root.logger: INFO log4j.logger.org.apache.zookeeper.server.FinalRequestProcessor: TRACE log4j.logger.org.apache.zookeeper.server.ZooKeeperServer: DEBUG #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: # zookeeper: # logging: type: external valueFrom: configMapKeyRef: name: customConfigMap key: zookeeper-log4j.properties #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # zookeeper: # entityOperator: # topicOperator: watchedNamespace: my-topic-namespace reconciliationIntervalSeconds: 60 logging: type: inline loggers: rootLogger.level: INFO logger.top.name: io.strimzi.operator.topic 1 logger.top.level: DEBUG 2 logger.toc.name: io.strimzi.operator.topic.TopicOperator 3 logger.toc.level: TRACE 4 logger.clients.level: DEBUG 5 #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # zookeeper: # entityOperator: # topicOperator: watchedNamespace: my-topic-namespace reconciliationIntervalSeconds: 60 logging: type: external valueFrom: configMapKeyRef: name: customConfigMap key: topic-operator-log4j2.properties #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # zookeeper: # entityOperator: # userOperator: watchedNamespace: my-topic-namespace reconciliationIntervalSeconds: 60 logging: type: inline loggers: rootLogger.level: INFO logger.uop.name: io.strimzi.operator.user 1 logger.uop.level: DEBUG 2 logger.abstractcache.name: io.strimzi.operator.user.operator.cache.AbstractCache 3 logger.abstractcache.level: TRACE 4 logger.jetty.level: DEBUG 5 #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: kafka: # zookeeper: # entityOperator: # userOperator: watchedNamespace: my-topic-namespace 
reconciliationIntervalSeconds: 60 logging: type: external valueFrom: configMapKeyRef: name: customConfigMap key: user-operator-log4j2.properties #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: # entityOperator: # tlsSidecar: resources: requests: cpu: 200m memory: 64Mi limits: cpu: 500m memory: 128Mi #", "template: deployment: deploymentStrategy: Recreate", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: # cruiseControl: # config: # Note that `default.goals` (superset) must also include all `hard.goals` (subset) default.goals: > com.linkedin.kafka.cruisecontrol.analyzer.goals.RackAwareGoal, com.linkedin.kafka.cruisecontrol.analyzer.goals.ReplicaCapacityGoal hard.goals: > com.linkedin.kafka.cruisecontrol.analyzer.goals.RackAwareGoal cpu.balance.threshold: 1.1 metadata.max.age.ms: 300000 send.buffer.bytes: 131072 webserver.http.cors.enabled: true webserver.http.cors.origin: \"*\" webserver.http.cors.exposeheaders: \"User-Task-ID,Content-Type\" #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: # cruiseControl: # config: webserver.http.cors.enabled: true 1 webserver.http.cors.origin: \"*\" 2 webserver.http.cors.exposeheaders: \"User-Task-ID,Content-Type\" 3 #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: # cruiseControl: config: webserver.security.enable: false webserver.ssl.enable: false", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: # cruiseControl: # brokerCapacity: cpu: \"2\" inboundNetwork: 10000KiB/s outboundNetwork: 10000KiB/s #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka metadata: name: my-cluster spec: # cruiseControl: # brokerCapacity: cpu: \"1\" inboundNetwork: 10000KiB/s outboundNetwork: 10000KiB/s overrides: - brokers: [0] cpu: \"2.755\" inboundNetwork: 20000KiB/s outboundNetwork: 20000KiB/s - brokers: [1, 2] cpu: 3000m inboundNetwork: 30000KiB/s outboundNetwork: 30000KiB/s", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: cruiseControl: # logging: type: inline loggers: rootLogger.level: INFO logger.exec.name: com.linkedin.kafka.cruisecontrol.executor.Executor 1 logger.exec.level: TRACE 2 logger.go.name: com.linkedin.kafka.cruisecontrol.analyzer.GoalOptimizer 3 logger.go.level: DEBUG 4 #", "apiVersion: kafka.strimzi.io/v1beta2 kind: Kafka spec: cruiseControl: # logging: type: external valueFrom: configMapKeyRef: name: customConfigMap key: cruise-control-log4j.properties #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect spec: # config: group.id: my-connect-cluster offset.storage.topic: my-connect-cluster-offsets config.storage.topic: my-connect-cluster-configs status.storage.topic: my-connect-cluster-status key.converter: org.apache.kafka.connect.json.JsonConverter value.converter: org.apache.kafka.connect.json.JsonConverter key.converter.schemas.enable: true value.converter.schemas.enable: true config.storage.replication.factor: 3 offset.storage.replication.factor: 3 status.storage.replication.factor: 3 #", "curl -s http://<connect-cluster-name>-connect-api:8083/admin/loggers/", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect spec: # logging: type: inline loggers: connect.root.logger.level: INFO log4j.logger.org.apache.kafka.connect.runtime.WorkerSourceTask: TRACE log4j.logger.org.apache.kafka.connect.runtime.WorkerSinkTask: DEBUG #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect spec: # logging: type: external valueFrom: 
configMapKeyRef: name: customConfigMap key: connect-logging.log4j #", "create secret generic MY-SECRET --from-file= MY-PUBLIC-TLS-CERTIFICATE-FILE.crt --from-file= MY-PRIVATE.key", "authentication: type: tls certificateAndKey: secretName: my-secret certificate: my-public-tls-certificate-file.crt key: private.key", "echo -n PASSWORD > MY-PASSWORD .txt", "create secret generic MY-CONNECT-SECRET-NAME --from-file= MY-PASSWORD-FIELD-NAME =./ MY-PASSWORD .txt", "apiVersion: v1 kind: Secret metadata: name: my-connect-secret-name type: Opaque data: my-connect-password-field: LFTIyFRFlMmU2N2Tm", "authentication: type: scram-sha-256 username: my-connect-username passwordSecret: secretName: my-connect-secret-name password: my-connect-password-field", "echo -n PASSWORD > MY-PASSWORD .txt", "create secret generic MY-CONNECT-SECRET-NAME --from-file= MY-PASSWORD-FIELD-NAME =./ MY-PASSWORD .txt", "apiVersion: v1 kind: Secret metadata: name: my-connect-secret-name type: Opaque data: my-connect-password-field: LFTIyFRFlMmU2N2Tm", "authentication: type: scram-sha-512 username: my-connect-username passwordSecret: secretName: my-connect-secret-name password: my-connect-password-field", "echo -n PASSWORD > MY-PASSWORD .txt", "create secret generic MY-CONNECT-SECRET-NAME --from-file= MY-PASSWORD-FIELD-NAME =./ MY-PASSWORD .txt", "apiVersion: v1 kind: Secret metadata: name: my-connect-secret-name type: Opaque data: my-password-field-name: LFTIyFRFlMmU2N2Tm", "authentication: type: plain username: my-connect-username passwordSecret: secretName: my-connect-secret-name password: my-password-field-name", "authentication: type: oauth tokenEndpointUri: https://sso.myproject.svc:8443/auth/realms/internal/protocol/openid-connect/token clientId: my-client-id clientSecret: secretName: my-client-oauth-secret key: client-secret", "authentication: type: oauth tokenEndpointUri: https://sso.myproject.svc:8443/auth/realms/internal/protocol/openid-connect/token clientId: my-client-id refreshToken: secretName: my-refresh-token-secret key: refresh-token", "authentication: type: oauth accessToken: secretName: my-access-token-secret key: access-token", "authentication: type: oauth tokenEndpointUri: https://sso.myproject.svc:8443/auth/realms/internal/protocol/openid-connect/token username: my-username passwordSecret: secretName: my-password-secret-name password: my-password-field-name clientId: my-public-client-id", "authentication: type: oauth tokenEndpointUri: https://sso.myproject.svc:8443/auth/realms/internal/protocol/openid-connect/token username: my-username passwordSecret: secretName: my-password-secret-name password: my-password-field-name clientId: my-confidential-client-id clientSecret: secretName: my-confidential-client-oauth-secret key: client-secret", "authentication: type: oauth tokenEndpointUri: https://sso.myproject.svc:8443/auth/realms/internal/protocol/openid-connect/token clientId: my-client-id refreshToken: secretName: my-refresh-token-secret key: refresh-token tlsTrustedCertificates: - secretName: oauth-server-ca certificate: tls.crt", "authentication: type: oauth tokenEndpointUri: https://sso.myproject.svc:8443/auth/realms/internal/protocol/openid-connect/token clientId: my-client-id refreshToken: secretName: my-refresh-token-secret key: refresh-token disableTlsHostnameVerification: true", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: # build: output: type: docker 1 image: my-registry.io/my-org/my-connect-cluster:latest 2 pushSecret: my-registry-credentials 3 
#", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: # build: output: type: imagestream 1 image: my-connect-build:latest 2 #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: # build: output: # plugins: 1 - name: connector-1 artifacts: - type: tgz url: <url_to_download_connector_1_artifact> sha512sum: <SHA-512_checksum_of_connector_1_artifact> - name: connector-2 artifacts: - type: jar url: <url_to_download_connector_2_artifact> sha512sum: <SHA-512_checksum_of_connector_2_artifact> #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: # build: output: # plugins: - name: my-plugin artifacts: - type: jar 1 url: https://my-domain.tld/my-jar.jar 2 sha512sum: 589...ab4 3 - type: jar url: https://my-domain.tld/my-jar2.jar #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: # build: output: # plugins: - name: my-plugin artifacts: - type: tgz 1 url: https://my-domain.tld/my-connector-archive.tgz 2 sha512sum: 158...jg10 3 #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: # build: output: # plugins: - name: my-plugin artifacts: - type: maven 1 repository: https://mvnrepository.com 2 group: <maven_group> 3 artifact: <maven_artifact> 4 version: <maven_version_number> 5 #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnect metadata: name: my-connect-cluster spec: # build: output: # plugins: - name: my-plugin artifacts: - type: other 1 url: https://my-domain.tld/my-other-file.ext 2 sha512sum: 589...ab4 3 fileName: name-the-file.ext 4 #", "spec: quotas: producerByteRate: 1048576 consumerByteRate: 2097152 requestPercentage: 55 controllerMutationRate: 10", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaUser metadata: name: my-user labels: strimzi.io/cluster: my-cluster spec: authentication: type: tls template: secret: metadata: labels: label1: value1 annotations: anno1: value1 #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker spec: # logging: type: inline loggers: mirrormaker.root.logger: INFO log4j.logger.org.apache.kafka.clients.NetworkClient: TRACE log4j.logger.org.apache.kafka.common.network.Selector: DEBUG #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker spec: # logging: type: external valueFrom: configMapKeyRef: name: customConfigMap key: mirror-maker-log4j.properties #", "logger.send.name = http.openapi.operation.send logger.send.level = DEBUG", "logger.healthy.name = http.openapi.operation.healthy logger.healthy.level = WARN logger.ready.name = http.openapi.operation.ready logger.ready.level = WARN", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge spec: # logging: type: inline loggers: rootLogger.level: INFO # enabling DEBUG just for send operation logger.send.name: \"http.openapi.operation.send\" logger.send.level: DEBUG #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge spec: # logging: type: external valueFrom: configMapKeyRef: name: customConfigMap key: bridge-logj42.properties #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge metadata: name: my-bridge spec: # http: port: 8080 cors: allowedOrigins: \"https://strimzi.io\" allowedMethods: \"GET,POST,PUT,DELETE,OPTIONS,PATCH\" #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaBridge metadata: name: my-bridge spec: # consumer: config: auto.offset.reset: earliest enable.auto.commit: true #", "apiVersion: kafka.strimzi.io/v1beta2 kind: 
KafkaBridge metadata: name: my-bridge spec: # producer: config: acks: 1 delivery.timeout.ms: 300000 #", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: name: my-source-connector spec: autoRestart: enabled: true", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaConnector metadata: name: my-source-connector spec: autoRestart: enabled: true maxRestarts: 10", "apiVersion: kafka.strimzi.io/v1beta2 kind: KafkaMirrorMaker2 metadata: name: my-mm2-cluster spec: mirrors: - sourceConnector: autoRestart: enabled: true # heartbeatConnector: autoRestart: enabled: true # checkpointConnector: autoRestart: enabled: true #", "dnf install <package_name>", "dnf install <path_to_download_package>" ]
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.7/html-single/streams_for_apache_kafka_api_reference/index
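The Streams for Apache Kafka fragments above each show one spec section in isolation. As a minimal, self-contained sketch of how such sections fit together, the following creates a small cluster with an inline logging configuration. It assumes the Streams for Apache Kafka operator is already installed and watching the target namespace; the namespace kafka, the cluster name my-cluster, and the use of ephemeral storage are illustrative assumptions, not recommendations.

oc apply -n kafka -f - <<'EOF'
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    replicas: 3
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
    config:
      offsets.topic.replication.factor: 3
      transaction.state.log.replication.factor: 3
      transaction.state.log.min.isr: 2
    logging:
      type: inline
      loggers:
        kafka.root.logger.level: INFO
    storage:
      type: ephemeral
  zookeeper:
    replicas: 3
    storage:
      type: ephemeral
  entityOperator:
    topicOperator: {}
    userOperator: {}
EOF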
Chapter 2. Configuring an Azure Stack Hub account
Chapter 2. Configuring an Azure Stack Hub account Before you can install OpenShift Container Platform, you must configure a Microsoft Azure account. Important All Azure resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation. 2.1. Azure Stack Hub account limits The OpenShift Container Platform cluster uses a number of Microsoft Azure Stack Hub components, and the default Quota types in Azure Stack Hub affect your ability to install OpenShift Container Platform clusters. The following table summarizes the Azure Stack Hub components whose limits can impact your ability to install and run OpenShift Container Platform clusters. Component Number of components required by default Description vCPU 56 A default cluster requires 56 vCPUs, so you must increase the account limit. By default, each cluster creates the following instances: One bootstrap machine, which is removed after installation Three control plane machines Three compute machines Because the bootstrap, control plane, and worker machines use Standard_DS4_v2 virtual machines, which use 8 vCPUs, a default cluster requires 56 vCPUs. The bootstrap node VM is used only during installation. To deploy more worker nodes, enable autoscaling, deploy large workloads, or use a different instance type, you must further increase the vCPU limit for your account to ensure that your cluster can deploy the machines that you require. VNet 1 Each default cluster requires one Virtual Network (VNet), which contains two subnets. Network interfaces 7 Each default cluster requires seven network interfaces. If you create more machines or your deployed workloads create load balancers, your cluster uses more network interfaces. Network security groups 2 Each cluster creates network security groups for each subnet in the VNet. The default cluster creates network security groups for the control plane and for the compute node subnets: controlplane Allows the control plane machines to be reached on port 6443 from anywhere node Allows worker nodes to be reached from the internet on ports 80 and 443 Network load balancers 3 Each cluster creates the following load balancers : default Public IP address that load balances requests to ports 80 and 443 across worker machines internal Private IP address that load balances requests to ports 6443 and 22623 across control plane machines external Public IP address that load balances requests to port 6443 across control plane machines If your applications create more Kubernetes LoadBalancer service objects, your cluster uses more load balancers. Public IP addresses 2 The public load balancer uses a public IP address. The bootstrap machine also uses a public IP address so that you can SSH into the machine to troubleshoot issues during installation. The IP address for the bootstrap node is used only during installation. Private IP addresses 7 The internal load balancer, each of the three control plane machines, and each of the three worker machines each use a private IP address. Additional resources Optimizing storage 2.2. Configuring a DNS zone in Azure Stack Hub To successfully install OpenShift Container Platform on Azure Stack Hub, you must create DNS records in an Azure Stack Hub DNS zone. The DNS zone must be authoritative for the domain. 
To delegate a registrar's DNS zone to Azure Stack Hub, see Microsoft's documentation for Azure Stack Hub datacenter DNS integration . 2.3. Required Azure Stack Hub roles Your Microsoft Azure Stack Hub account must have the following roles for the subscription that you use: Owner To set roles on the Azure portal, see the Manage access to resources in Azure Stack Hub with role-based access control in the Microsoft documentation. 2.4. Creating a service principal Because OpenShift Container Platform and its installation program create Microsoft Azure resources by using the Azure Resource Manager, you must create a service principal to represent it. Prerequisites Install or update the Azure CLI . Your Azure account has the required roles for the subscription that you use. Procedure Register your environment: $ az cloud register -n AzureStackCloud --endpoint-resource-manager <endpoint> 1 1 Specify the Azure Resource Manager endpoint, `https://management.<region>.<fqdn>/`. See the Microsoft documentation for details. Set the active environment: $ az cloud set -n AzureStackCloud Update your environment configuration to use the specific API version for Azure Stack Hub: $ az cloud update --profile 2019-03-01-hybrid Log in to the Azure CLI: $ az login If you are in a multitenant environment, you must also supply the tenant ID. If your Azure account uses subscriptions, ensure that you are using the right subscription: View the list of available accounts and record the tenantId value for the subscription you want to use for your cluster: $ az account list --refresh Example output [ { "cloudName": "AzureStackCloud", "id": "9bab1460-96d5-40b3-a78e-17b15e978a80", "isDefault": true, "name": "Subscription Name", "state": "Enabled", "tenantId": "6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee", "user": { "name": "[email protected]", "type": "user" } } ] View your active account details and confirm that the tenantId value matches the subscription you want to use: $ az account show Example output { "environmentName": "AzureStackCloud", "id": "9bab1460-96d5-40b3-a78e-17b15e978a80", "isDefault": true, "name": "Subscription Name", "state": "Enabled", "tenantId": "6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee", 1 "user": { "name": "[email protected]", "type": "user" } } 1 Ensure that the value of the tenantId parameter is the correct subscription ID. If you are not using the right subscription, change the active subscription: $ az account set -s <subscription_id> 1 1 Specify the subscription ID. Verify the subscription ID update: $ az account show Example output { "environmentName": "AzureStackCloud", "id": "33212d16-bdf6-45cb-b038-f6565b61edda", "isDefault": true, "name": "Subscription Name", "state": "Enabled", "tenantId": "8049c7e9-c3de-762d-a54e-dc3f6be6a7ee", "user": { "name": "[email protected]", "type": "user" } } Record the tenantId and id parameter values from the output. You need these values during the OpenShift Container Platform installation. Create the service principal for your account: $ az ad sp create-for-rbac --role Contributor --name <service_principal> \ 1 --scopes /subscriptions/<subscription_id> 2 --years <years> 3 1 Specify the service principal name. 2 Specify the subscription ID. 3 Specify the number of years. By default, a service principal expires in one year. By using the --years option you can extend the validity of your service principal.
Example output Creating 'Contributor' role assignment under scope '/subscriptions/<subscription_id>' The output includes credentials that you must protect. Be sure that you do not include these credentials in your code or check the credentials into your source control. For more information, see https://aka.ms/azadsp-cli { "appId": "ac461d78-bf4b-4387-ad16-7e32e328aec6", "displayName": "<service_principal>", "password": "00000000-0000-0000-0000-000000000000", "tenantId": "8049c7e9-c3de-762d-a54e-dc3f6be6a7ee" } Record the values of the appId and password parameters from the output. You need these values during OpenShift Container Platform installation. Additional resources About the Cloud Credential Operator 2.5. Next steps Install an OpenShift Container Platform cluster: Installing a cluster on Azure Stack Hub with customizations Install an OpenShift Container Platform cluster on Azure Stack Hub with user-provisioned infrastructure by following Installing a cluster on Azure Stack Hub using ARM templates .
[ "az cloud register -n AzureStackCloud --endpoint-resource-manager <endpoint> 1", "az cloud set -n AzureStackCloud", "az cloud update --profile 2019-03-01-hybrid", "az login", "az account list --refresh", "[ { \"cloudName\": AzureStackCloud\", \"id\": \"9bab1460-96d5-40b3-a78e-17b15e978a80\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } } ]", "az account show", "{ \"environmentName\": AzureStackCloud\", \"id\": \"9bab1460-96d5-40b3-a78e-17b15e978a80\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee\", 1 \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } }", "az account set -s <subscription_id> 1", "az account show", "{ \"environmentName\": AzureStackCloud\", \"id\": \"33212d16-bdf6-45cb-b038-f6565b61edda\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"8049c7e9-c3de-762d-a54e-dc3f6be6a7ee\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } }", "az ad sp create-for-rbac --role Contributor --name <service_principal> \\ 1 --scopes /subscriptions/<subscription_id> 2 --years <years> 3", "Creating 'Contributor' role assignment under scope '/subscriptions/<subscription_id>' The output includes credentials that you must protect. Be sure that you do not include these credentials in your code or check the credentials into your source control. For more information, see https://aka.ms/azadsp-cli { \"appId\": \"ac461d78-bf4b-4387-ad16-7e32e328aec6\", \"displayName\": <service_principal>\", \"password\": \"00000000-0000-0000-0000-000000000000\", \"tenantId\": \"8049c7e9-c3de-762d-a54e-dc3f6be6a7ee\" }" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/installing_on_azure_stack_hub/installing-azure-stack-hub-account
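The individual az commands above can be chained into a single preparation script. This is a sketch only: every angle-bracket value is a placeholder for your own environment, and the two-year service principal lifetime is an arbitrary assumption.

#!/usr/bin/env bash
set -euo pipefail

# Register and select the Azure Stack Hub environment.
az cloud register -n AzureStackCloud \
    --endpoint-resource-manager "https://management.<region>.<fqdn>/"
az cloud set -n AzureStackCloud
az cloud update --profile 2019-03-01-hybrid
az login

# Pin the subscription and record its id and tenantId for the installer.
az account set -s "<subscription_id>"
az account show --query '{id: id, tenantId: tenantId}' --output json

# Create the service principal; protect the appId and password it prints.
az ad sp create-for-rbac --role Contributor --name "<service_principal>" \
    --scopes "/subscriptions/<subscription_id>" \
    --years 2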
Chapter 11. Provisioning virtual machines in VMware vSphere
Chapter 11. Provisioning virtual machines in VMware vSphere VMware vSphere is an enterprise-level virtualization platform from VMware. Red Hat Satellite can interact with the vSphere platform, including creating new virtual machines and controlling their power management states. 11.1. Prerequisites for VMware provisioning The requirements for VMware vSphere provisioning include: A supported version of VMware vCenter Server. The following versions have been fully tested with Satellite: vCenter Server 7.0 vCenter Server 6.7 (EOL) vCenter Server 6.5 (EOL) A Capsule Server managing a network on the vSphere environment. Ensure no other DHCP services run on this network to avoid conflicts with Capsule Server. For more information, see Chapter 3, Configuring networking . An existing VMware template if you want to use image-based provisioning. You can use synchronized content repositories for Red Hat Enterprise Linux. For more information, see Syncing Repositories in Managing content . Provide an activation key for host registration. For more information, see Creating An Activation Key in Managing content . 11.2. Creating a VMware user The VMware vSphere server requires an administration-like user for Satellite Server communication. For security reasons, do not use the administrator user for such communication. Instead, create a user with the following permissions: For VMware vCenter Server version 6.7, set the following permissions: All Privileges Datastore Allocate Space, Browse datastore, Update Virtual Machine files, Low level file operations All Privileges Network Assign Network All Privileges Resource Assign virtual machine to resource pool All Privileges Virtual Machine Change Config (All) All Privileges Virtual Machine Interaction (All) All Privileges Virtual Machine Edit Inventory (All) All Privileges Virtual Machine Provisioning (All) All Privileges Virtual Machine Guest Operations (All) Note that the same steps also apply to VMware vCenter Server version 7.0. For VMware vCenter Server version 6.5, set the following permissions: All Privileges Datastore Allocate Space, Browse datastore, Update Virtual Machine files, Low level file operations All Privileges Network Assign Network All Privileges Resource Assign virtual machine to resource pool All Privileges Virtual Machine Configuration (All) All Privileges Virtual Machine Interaction (All) All Privileges Virtual Machine Inventory (All) All Privileges Virtual Machine Provisioning (All) All Privileges Virtual Machine Guest Operations (All) 11.3. Adding a VMware connection to Satellite Server Use this procedure to add a VMware vSphere connection in Satellite Server's compute resources. To use the CLI instead of the Satellite web UI, see the CLI procedure . Prerequisites Ensure that the host and network-based firewalls are configured to allow communication from Satellite Server to vCenter on TCP port 443. Verify that Satellite Server and vCenter can resolve each other's host names. Procedure In the Satellite web UI, navigate to Infrastructure > Compute Resources , and in the Compute Resources window, click Create Compute Resource . In the Name field, enter a name for the resource. From the Provider list, select VMware . In the Description field, enter a description for the resource. In the VCenter/Server field, enter the IP address or host name of the vCenter server. In the User field, enter the user name with permission to access the vCenter's resources. In the Password field, enter the password for the user. 
Click Load Datacenters to populate the list of data centers from your VMware vSphere environment. From the Datacenter list, select the data center to manage. In the Fingerprint field, ensure that this field is populated with the fingerprint from the data center. From the Display Type list, select a console type, for example, VNC or VMRC . Note that VNC consoles are unsupported on VMware ESXi 6.5 and later. Optional: In the VNC Console Passwords field, select the Set a randomly generated password on the display connection checkbox to secure console access for new hosts with a randomly generated password. You can retrieve the password for the VNC console, which is used to access the guest virtual machine console, from the output of the following command on the libvirtd host: A new password is randomly generated every time the console for the virtual machine opens, for example, with virt-manager. From the Enable Caching list, you can select whether to enable caching of compute resources. For more information, see Section 11.10, "Caching of compute resources" . Click the Locations and Organizations tabs and verify that the values are automatically set to your current context. You can also add additional contexts. Click Submit to save the connection. CLI procedure Create the connection with the hammer compute-resource create command. Select Vmware as the --provider and set the instance UUID of the data center as the --uuid : 11.4. Adding VMware images to Satellite Server VMware vSphere uses templates as images for creating new virtual machines. If using image-based provisioning to create new hosts, you need to add VMware template details to your Satellite Server. This includes access details and the template name. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Infrastructure > Compute Resources . Select your VMware compute resource. Click Create Image . In the Name field, enter a name for the image. From the Operating System list, select the base operating system of the image. From the Architecture list, select the operating system architecture. In the Username field, enter the SSH user name for image access. By default, this is set to root . If your image supports user data input such as cloud-init data, click the User data checkbox. Optional: In the Password field, enter the SSH password to access the image. From the Image list, select an image from VMware. Click Submit to save the image details. CLI procedure Create the image with the hammer compute-resource image create command. Use the --uuid field to store the relative template path on the vSphere environment: 11.5. Adding VMware details to a compute profile You can predefine certain hardware settings for virtual machines on VMware vSphere. You achieve this by adding these hardware settings to a compute profile. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Infrastructure > Compute Profiles . Select a compute profile. Select a VMware compute resource. In the CPUs field, enter the number of CPUs to allocate to the host. In the Cores per socket field, enter the number of cores to allocate to each CPU. In the Memory field, enter the amount of memory in MiB to allocate to the host. In the Firmware checkbox, select either BIOS or UEFI as firmware for the host. By default, this is set to automatic . In the Cluster list, select the name of the target host cluster on the VMware environment.
From the Resource pool list, select an available resource allocation for the host. In the Folder list, select the folder to organize the host. From the Guest OS list, select the operating system you want to use in VMware vSphere. From the Virtual H/W version list, select the underlying VMware hardware abstraction to use for virtual machines. If you want to add more memory while the virtual machine is powered on, select the Memory hot add checkbox. If you want to add more CPUs while the virtual machine is powered on, select the CPU hot add checkbox. If you want to add a CD-ROM drive, select the CD-ROM drive checkbox. From the Boot order list, define the order in which the virtual machine tries to boot. Optional: In the Annotation Notes field, enter an arbitrary description. If you use image-based provisioning, select the image from the Image list. From the SCSI controller list, select the disk access method for the host. If you want to use eager zero thick provisioning, select the Eager zero checkbox. By default, the disk uses lazy zero thick provisioning. From the Network Interfaces list, select the network parameters for the host's network interface. At least one interface must point to a Capsule-managed network. Optional: Click Add Interface to create another network interface. Click Submit to save the compute profile. CLI procedure Create a compute profile: Set VMware details to a compute profile: 11.6. Creating hosts on VMware The VMware vSphere provisioning process provides the option to create hosts over a network connection or using an existing image. For network-based provisioning, you must create a host to access either Satellite Server's integrated Capsule or an external Capsule Server on a VMware vSphere virtual network, so that the host has access to PXE provisioning services. The new host entry triggers the VMware vSphere server to create the virtual machine. If the virtual machine detects the defined Capsule Server through the virtual network, the virtual machine boots to PXE and begins to install the chosen operating system. DHCP conflicts If you use a virtual network on the VMware vSphere server for provisioning, ensure that you select a virtual network that does not provide DHCP assignments. A virtual network that provides its own DHCP assignments causes DHCP conflicts with Satellite Server when booting new hosts. For image-based provisioning, use the pre-existing image as a basis for the new volume. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Hosts > Create Host . In the Name field, enter a name for the host. Optional: Click the Organization tab and change the organization context to match your requirement. Optional: Click the Location tab and change the location context to match your requirement. From the Host Group list, select a host group that you want to assign your host to. That host group will populate the form. From the Deploy on list, select the VMware vSphere connection. From the Compute Profile list, select a profile to use to automatically populate virtual machine-based settings. Click the Interfaces tab, and on the interface of the host, click Edit . Verify that the fields are populated with values. Note in particular: Satellite automatically assigns an IP address for the new host. Ensure that the MAC address field is blank. VMware assigns a MAC address to the host during provisioning. The Name from the Host tab becomes the DNS name .
Ensure that Satellite automatically selects the Managed , Primary , and Provision options for the first interface on the host. If not, select them. In the interface window, review the VMware-specific fields that are populated with settings from our compute profile. Modify these settings to suit your needs. Click OK to save. To add another interface, click Add Interface . You can select only one interface for Provision and Primary . Click the Operating System tab, and confirm that all fields automatically contain values. Select the Provisioning Method that you want: For network-based provisioning, click Network Based . For image-based provisioning, click Image Based . For boot-disk provisioning, click Boot disk based . Click Resolve in Provisioning templates to check the new host can identify the right provisioning templates to use. Click the Virtual Machine tab and confirm that these settings are populated with details from the host group and compute profile. Modify these settings to suit your requirements. Click the Parameters tab and ensure that a parameter exists that provides an activation key. If a parameter does not exist, click + Add Parameter . In the field Name , enter kt_activation_keys . In the field Value , enter the name of the activation key used to register the Content Hosts. Click Submit to provision your host on VMware. CLI procedure Create the host from a network with the hammer host create command and include --provision-method build to use network-based provisioning: Create the host from an image with the hammer host create command and include --provision-method image to use image-based provisioning: For more information about additional host creation parameters for this compute resource, enter the hammer host create --help command. 11.7. Using VMware cloud-init and userdata templates for provisioning You can use VMware with the Cloud-init and Userdata templates to insert user data into the new virtual machine, to make further VMware customization, and to enable the VMware-hosted virtual machine to call back to Satellite. You can use the same procedures to set up a VMware compute resource within Satellite, with a few modifications to the workflow. Figure 11.1. VMware cloud-init provisioning overview When you set up the compute resource and images for VMware provisioning in Satellite, the following sequence of provisioning events occurs: The user provisions one or more virtual machines using the Satellite web UI, API, or hammer Satellite calls the VMware vCenter to clone the virtual machine template Satellite userdata provisioning template adds customized identity information When provisioning completes, the Cloud-init provisioning template instructs the virtual machine to call back to Capsule when cloud-init runs VMware vCenter clones the template to the virtual machine VMware vCenter applies customization for the virtual machine's identity, including the host name, IP, and DNS The virtual machine builds, cloud-init is invoked and calls back Satellite on port 80 , which then redirects to 443 Prerequisites Configure port and firewall settings to open any necessary connections. Because of the cloud-init service, the virtual machine always calls back to Satellite even if you register the virtual machine to Capsule. For more information, see Port and firewall requirements in Installing Satellite Server in a connected network environment and Port and firewall requirements in Installing Capsule Server . 
If you want to use Capsule Servers instead of your Satellite Server, ensure that you have configured your Capsule Servers accordingly. For more information, see Configuring Capsule for Host Registration and Provisioning in Installing Capsule Server . Back up the following configuration files: /etc/cloud/cloud.cfg.d/01_network.cfg /etc/cloud/cloud.cfg.d/10_datasource.cfg /etc/cloud/cloud.cfg Associating the Userdata and Cloud-init templates with the operating system In the Satellite web UI, navigate to Hosts > Templates > Provisioning Templates . Search for the CloudInit default template and click its name. Click the Association tab. Select all operating systems to which the template applies and click Submit . Repeat the steps above for the UserData open-vm-tools template. Navigate to Hosts > Provisioning Setup > Operating Systems . Select the operating system that you want to use for provisioning. Click the Templates tab. From the Cloud-init template list, select CloudInit default . From the User data template list, select UserData open-vm-tools . Click Submit to save the changes. Preparing an image to use the cloud-init template To prepare an image, you must first configure the settings that you require on a virtual machine that you can then save as an image to use in Satellite. To use the cloud-init template for provisioning, you must configure a virtual machine so that cloud-init is installed, enabled, and configured to call back to Satellite Server. For security purposes, you must install a CA certificate to use HTTPS for all communication. This procedure includes steps to clean the virtual machine so that no unwanted information transfers to the image you use for provisioning. If you have an image with cloud-init , you must still follow this procedure to enable cloud-init to communicate with Satellite because cloud-init is disabled by default. Procedure On the virtual machine that you use to create the image, install the required packages: Disable network configuration by cloud-init : Configure cloud-init to fetch data from Satellite: If you intend to provision through Capsule Server, use the URL of your Capsule Server in the seedfrom option, such as https:// capsule.example.com :9090/userdata/ . Configure modules to use in cloud-init : Enable the CA certificates for the image: Download the katello-server-ca.crt file from Satellite Server: If you intend to provision through Capsule Server, download the file from your Capsule Server, such as https:// capsule.example.com /pub/katello-server-ca.crt . Update the record of certificates: Stop the rsyslog and auditd services: Clean packages on the image: On Red Hat Enterprise Linux 8 and later: On Red Hat Enterprise Linux 7 and earlier: Reduce logspace, remove old logs, and truncate logs: Remove udev hardware rules: Remove the ifcfg scripts related to existing network configurations: Remove the SSH host keys: Remove root user's SSH history: Remove root user's shell history: Create an image from this virtual machine. Add your image to Satellite . 11.8. Deleting a VM on VMware You can delete VMs running on VMware from within Satellite. Procedure In the Satellite web UI, navigate to Infrastructure > Compute Resources . Select your VMware provider. On the Virtual Machines tab, click Delete from the Actions menu. This deletes the virtual machine from the VMware compute resource while retaining any associated hosts within Satellite. If you want to delete the orphaned host, navigate to Hosts > All Hosts and delete the host manually. 11.9. 
Importing a virtual machine from VMware into Satellite You can import existing virtual machines running on VMware into Satellite. Procedure In the Satellite web UI, navigate to Infrastructure > Compute Resources . Select your VMware compute resource. On the Virtual Machines tab, click Import as managed Host or Import as unmanaged Host from the Actions menu. The following page looks identical to creating a host with the compute resource being already selected. For more information, see Creating a host in Satellite in Managing hosts . Click Submit to import the virtual machine into Satellite. 11.10. Caching of compute resources Caching of compute resources speeds up rendering of VMware information. 11.10.1. Enabling caching of compute resources To enable or disable caching of compute resources: Procedure In the Satellite web UI, navigate to Infrastructure > Compute Resources . Click the Edit button to the right of the VMware server you want to update. Select the Enable caching checkbox. 11.10.2. Refreshing the compute resources cache Refresh the cache of compute resources to update compute resources information. Procedure In the Satellite web UI, navigate to Infrastructure > Compute Resources . Select a VMware server you want to refresh the compute resources cache for and click Refresh Cache . CLI procedure Use this API call to refresh the compute resources cache: Use hammer compute-resource list to determine the ID of the VMware server you want to refresh the compute resources cache for.
[ "virsh edit your_VM_name <graphics type='vnc' port='-1' autoport='yes' listen='0.0.0.0' passwd=' your_randomly_generated_password '>", "hammer compute-resource create --datacenter \" My_Datacenter \" --description \"vSphere server at vsphere.example.com \" --locations \" My_Location \" --name \"My_vSphere\" --organizations \" My_Organization \" --password \" My_Password \" --provider \"Vmware\" --server \" vsphere.example.com \" --user \" My_User \"", "hammer compute-resource image create --architecture \" My_Architecture \" --compute-resource \" My_VMware \" --name \" My_Image \" --operatingsystem \" My_Operating_System \" --username root --uuid \" My_UUID \"", "hammer compute-profile create --name \" My_Compute_Profile \"", "hammer compute-profile values create --compute-attributes \"cpus=1,corespersocket=2,memory_mb=1024,cluster=MyCluster,path=MyVMs,start=true\" --compute-profile \" My_Compute_Profile \" --compute-resource \" My_VMware \" --interface \"compute_type=VirtualE1000,compute_network=mynetwork --volume \"size_gb=20G,datastore=Data,name=myharddisk,thin=true\"", "hammer host create --build true --compute-attributes=\"cpus=1,corespersocket=2,memory_mb=1024,cluster=MyCluster,path=MyVMs,start=true\" --compute-resource \" My_VMware \" --enabled true --hostgroup \" My_Host_Group \" --interface \"managed=true,primary=true,provision=true,compute_type=VirtualE1000,compute_network=mynetwork\" --location \" My_Location \" --managed true --name \" My_Host \" --organization \" My_Organization \" --provision-method build --volume=\"size_gb=20G,datastore=Data,name=myharddisk,thin=true\"", "hammer host create --compute-attributes=\"cpus=1,corespersocket=2,memory_mb=1024,cluster=MyCluster,path=MyVMs,start=true\" --compute-resource \" My_VMware \" --enabled true --hostgroup \" My_Host_Group \" --image \" My_VMware_Image \" --interface \"managed=true,primary=true,provision=true,compute_type=VirtualE1000,compute_network=mynetwork\" --location \" My_Location \" --managed true --name \" My_Host \" --organization \" My_Organization \" --provision-method image --volume=\"size_gb=20G,datastore=Data,name=myharddisk,thin=true\"", "dnf install cloud-init open-vm-tools perl-interpreter perl-File-Temp", "cat << EOM > /etc/cloud/cloud.cfg.d/01_network.cfg network: config: disabled EOM", "cat << EOM > /etc/cloud/cloud.cfg.d/10_datasource.cfg datasource_list: [NoCloud] datasource: NoCloud: seedfrom: https://satellite.example.com/userdata/ EOM", "cat << EOM > /etc/cloud/cloud.cfg cloud_init_modules: - bootcmd - ssh cloud_config_modules: - runcmd cloud_final_modules: - scripts-per-once - scripts-per-boot - scripts-per-instance - scripts-user - phone-home system_info: distro: rhel paths: cloud_dir: /var/lib/cloud templates_dir: /etc/cloud/templates ssh_svcname: sshd EOM", "update-ca-trust enable", "wget -O /etc/pki/ca-trust/source/anchors/cloud-init-ca.crt https:// satellite.example.com /pub/katello-server-ca.crt", "update-ca-trust extract", "systemctl stop rsyslog systemctl stop auditd", "dnf remove --oldinstallonly", "package-cleanup --oldkernels --count=1 dnf clean all", "logrotate -f /etc/logrotate.conf rm -f /var/log/*-???????? 
/var/log/*.gz rm -f /var/log/dmesg.old rm -rf /var/log/anaconda cat /dev/null > /var/log/audit/audit.log cat /dev/null > /var/log/wtmp cat /dev/null > /var/log/lastlog cat /dev/null > /var/log/grubby", "rm -f /etc/udev/rules.d/70*", "rm -f /etc/sysconfig/network-scripts/ifcfg-ens* rm -f /etc/sysconfig/network-scripts/ifcfg-eth*", "rm -f /etc/ssh/ssh_host_*", "rm -rf ~root/.ssh/known_hosts", "rm -f ~root/.bash_history unset HISTFILE", "curl -H \"Accept:application/json\" -H \"Content-Type:application/json\" -X PUT -u username : password -k https:// satellite.example.com /api/compute_resources/ compute_resource_id /refresh_cache" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/provisioning_hosts/Provisioning_Virtual_Machines_in_VMware_vmware-provisioning
8.9. Configure a Network Team Using the Text User Interface, nmtui
8.9. Configure a Network Team Using the Text User Interface, nmtui The text user interface tool nmtui can be used to configure teaming in a terminal window. Issue the following command to start the tool: The text user interface appears. Any invalid command prints a usage message. To navigate, use the arrow keys or press Tab to step forwards and press Shift + Tab to step back through the options. Press Enter to select an option. The Space bar toggles the status of a check box. From the starting menu, select Edit a connection . Select Add , the New Connection screen opens. Figure 8.1. The NetworkManager Text User Interface Add a Team Connection menu Select Team , the Edit connection screen opens. Figure 8.2. The NetworkManager Text User Interface Configuring a Team Connection menu To add port interfaces to the team select Add , the New Connection screen opens. Once the type of Connection has been chosen select the Create button to cause the team's Edit Connection display to appear. Figure 8.3. The NetworkManager Text User Interface Configuring a new Team Port Interface Connection menu Enter the required port's device name or MAC address in the Device section. If required, enter a clone MAC address to be used as the team's MAC address by selecting Show to the right of the Ethernet label. Select the OK button. Note If the device is specified without a MAC address the Device section will be automatically populated once the Edit Connection window is reloaded, but only if it successfully finds the device. Figure 8.4. The NetworkManager Text User Interface Configuring a Team's Port Interface Connection menu The name of the teamed port appears in the Slaves section. Repeat the above steps to add further port connections. If custom port settings are to be applied select the Edit button under the JSON configuration section. This will launch a vim console where changes may be applied. Once finished write the changes from vim and then confirm that the displayed JSON string under JSON configuration matches what is intended. Review and confirm the settings before selecting the OK button. Figure 8.5. The NetworkManager Text User Interface Configuring a Team Connection menu See Section 8.13, "Configure teamd Runners" for examples of JSON strings. Note that only the relevant sections from the example strings should be used for a team or port configuration using nmtui . Do not specify the " Device " as part of the JSON string. For example, only the JSON string after " device " but before " port " should be used in the Team JSON configuration field. All JSON strings relevant to a port must only be added in the port configuration field. See Section 3.2, "Configuring IP Networking with nmtui" for information on installing nmtui .
[ "~]USD nmtui" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/networking_guide/sec-Configure_a_Network_Team_Using_the_Text_User_Interface_nmtui
Preface
Preface The Red Hat build of Cryostat is a container-native implementation of JDK Flight Recorder (JFR) that you can use to securely monitor the Java Virtual Machine (JVM) performance in workloads that run on an OpenShift Container Platform cluster. You can use Cryostat 2.4 to start, stop, retrieve, archive, import, and export JFR data for JVMs inside your containerized applications by using a web console or an HTTP API. Depending on your use case, you can store and analyze your recordings directly on your Red Hat OpenShift cluster by using the built-in tools that Cryostat provides or you can export recordings to an external monitoring application to perform a more in-depth analysis of your recorded data. Important Red Hat build of Cryostat is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope .
null
https://docs.redhat.com/en/documentation/red_hat_build_of_cryostat/2/html/using_automated_rules_on_cryostat/preface-cryostat
Chapter 2. Installing Red Hat Developer Hub on OpenShift Container Platform with the Helm chart
Chapter 2. Installing Red Hat Developer Hub on OpenShift Container Platform with the Helm chart You can install Red Hat Developer Hub on OpenShift Container Platform by using the Helm chart with one of the following methods: The OpenShift Container Platform console The Helm CLI 2.1. Deploying Developer Hub from the OpenShift Container Platform web console with the Helm Chart You can use a Helm chart to install Developer Hub on the Red Hat OpenShift Container Platform web console. Helm is a package manager on OpenShift Container Platform that provides the following features: Applies regular application updates using custom hooks Manages the installation of complex applications Provides charts that you can host on public and private servers Supports rolling back to application versions The Red Hat Developer Hub Helm chart is available in the Helm catalog on OpenShift Dedicated and OpenShift Container Platform. Prerequisites You are logged in to your OpenShift Container Platform account. A user with the OpenShift Container Platform admin role has configured the appropriate roles and permissions within your project to create an application. For more information about OpenShift Container Platform roles, see Using RBAC to define and apply permissions . You have created a project in OpenShift Container Platform. For more information about creating a project in OpenShift Container Platform, see Red Hat OpenShift Container Platform documentation . Procedure From the Developer perspective on the Developer Hub web console, click +Add . From the Developer Catalog panel, click Helm Chart . In the Filter by keyword box, enter Developer Hub and click the Red Hat Developer Hub card. From the Red Hat Developer Hub page, click Create . From your cluster, copy the OpenShift Container Platform router host (for example: apps.<clusterName>.com ). Select the radio button to configure the Developer Hub instance with either the form view or YAML view. The Form view is selected by default. Using Form view To configure the instance with the Form view, go to Root Schema global Enable service authentication within Backstage instance and paste your OpenShift Container Platform router host into the field on the form. Using YAML view To configure the instance with the YAML view, paste your OpenShift Container Platform router hostname in the global.clusterRouterBase parameter value as shown in the following example: global: auth: backend: enabled: true clusterRouterBase: apps.<clusterName>.com # other Red Hat Developer Hub Helm Chart configurations Edit the other values if needed. Note The information about the host is copied and can be accessed by the Developer Hub backend. When an OpenShift Container Platform route is generated automatically, the host value for the route is inferred and the same host information is sent to the Developer Hub. Also, if the Developer Hub is present on a custom domain by setting the host manually using values, the custom host takes precedence. Click Create and wait for the database and Developer Hub to start. Click the Open URL icon to start using the Developer Hub platform. Note Your developer-hub pod might be in a CrashLoopBackOff state if the Developer Hub container cannot access the configuration files. This error is indicated by the following log: Loaded config from app-config-from-configmap.yaml, env ... 
2023-07-24T19:44:46.223Z auth info Configuring "database" as KeyStore provider type=plugin Backend failed to start up Error: Missing required config value at 'backend.database.client' To resolve the error, verify the configuration files. 2.2. Deploying Developer Hub on OpenShift Container Platform with the Helm CLI You can use the Helm CLI to install Red Hat Developer Hub on Red Hat OpenShift Container Platform. Prerequisites You have installed the OpenShift CLI ( oc ) on your workstation. You are logged in to your OpenShift Container Platform account. A user with the OpenShift Container Platform admin role has configured the appropriate roles and permissions within your project to create an application. For more information about OpenShift Container Platform roles, see Using RBAC to define and apply permissions . You have created a project in OpenShift Container Platform. For more information about creating a project in OpenShift Container Platform, see Red Hat OpenShift Container Platform documentation . You have installed the Helm CLI tool. Procedure Create and activate the <my-rhdh-project> OpenShift Container Platform project: Install the Red Hat Developer Hub Helm chart: Configure your Developer Hub Helm chart instance with the Developer Hub database password and router base URL values from your OpenShift Container Platform cluster: Display the running Developer Hub instance URL: Verification Open the running Developer Hub instance URL in your browser to use Developer Hub. Additional resources Installing Helm
[ "global: auth: backend: enabled: true clusterRouterBase: apps.<clusterName>.com # other Red Hat Developer Hub Helm Chart configurations", "Loaded config from app-config-from-configmap.yaml, env 2023-07-24T19:44:46.223Z auth info Configuring \"database\" as KeyStore provider type=plugin Backend failed to start up Error: Missing required config value at 'backend.database.client'", "NAMESPACE=<emphasis><rhdh></emphasis> new-project USD{NAMESPACE} || oc project USD{NAMESPACE}", "helm upgrade redhat-developer-hub -i https://github.com/openshift-helm-charts/charts/releases/download/redhat-redhat-developer-hub-1.4.2/redhat-developer-hub-1.4.2.tgz", "PASSWORD=USD(oc get secret redhat-developer-hub-postgresql -o jsonpath=\"{.data.password}\" | base64 -d) CLUSTER_ROUTER_BASE=USD(oc get route console -n openshift-console -o=jsonpath='{.spec.host}' | sed 's/^[^.]*\\.//') helm upgrade redhat-developer-hub -i \"https://github.com/openshift-helm-charts/charts/releases/download/redhat-redhat-developer-hub-1.4.2/redhat-developer-hub-1.4.2.tgz\" --set global.clusterRouterBase=\"USDCLUSTER_ROUTER_BASE\" --set global.postgresql.auth.password=\"USDPASSWORD\"", "echo \"https://redhat-developer-hub-USDNAMESPACE.USDCLUSTER_ROUTER_BASE\"" ]
https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.4/html/installing_red_hat_developer_hub_on_openshift_container_platform/assembly-install-rhdh-ocp-helm
Part V. Functional certification for OpenShift badges: Virtualization, Best practices, CNF, CNI, CSI
Part V. Functional certification for OpenShift badges: Virtualization, Best practices, CNF, CNI, CSI Red Hat OpenShift certification badges extend the Red Hat OpenShift Operator certification. The certification badges are built on the foundation of container and operator certifications. By receiving a Red Hat OpenShift Certification Badge, partners can confirm that their solution is Kubernetes enabled, meets Kubernetes best practices and utilizes specific Kubernetes APIs for addressing the respective use cases. The current OpenShift certification badges that are available are as follows: OpenShift Virtualization - for providing support to run and manage virtual machine workloads on Red Hat OpenShift. Meets Best practices - for meeting the Red Hat best practices checkpoints for cloud native software products that are deployed on Red Hat OpenShift. Cloud-Native Network Functions (CNF) - for the implementation of telecommunication functions deployed as containers. Container Networking Interface (CNI) - for the delivery of networking services through a pluggable framework. Container Storage Interface (CSI) - for providing and supporting a block or file persistent storage backend for Red Hat OpenShift.
null
https://docs.redhat.com/en/documentation/red_hat_software_certification/2025/html/red_hat_software_certification_workflow_guide/functional-certification-for-openshift-badges-cnf-cni-csi_openshift-sw-cert-workflow-submitting-your-helm-chart-for-certification
Chapter 103. ZipArtifact schema reference
Chapter 103. ZipArtifact schema reference Used in: Plugin Property Property type Description type string Must be zip . url string URL of the artifact which will be downloaded. Streams for Apache Kafka does not do any security scanning of the downloaded artifacts. For security reasons, you should first verify the artifacts manually and configure the checksum verification to make sure the same artifact is used in the automated build. Required for jar , zip , tgz and other artifacts. Not applicable to the maven artifact type. sha512sum string SHA512 checksum of the artifact. Optional. If specified, the checksum will be verified while building the new container. If not specified, the downloaded artifact will not be verified. Not applicable to the maven artifact type. insecure boolean By default, connections using TLS are verified to check they are secure. The server certificate used must be valid, trusted, and contain the server name. By setting this option to true , all TLS verification is disabled and the artifact will be downloaded, even when the server is considered insecure.
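For illustration only, a zip artifact is typically declared inside a connector plugin in the build section of a KafkaConnect resource; the plugin name, URL, and checksum below are placeholders rather than values taken from this reference:

plugins:
  - name: my-connector              # hypothetical plugin name
    artifacts:
      - type: zip
        url: https://example.com/releases/my-connector.zip
        sha512sum: 1a2b3c...        # optional; without it the downloaded artifact is not verified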
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-ZipArtifact-reference
Virtualization
Virtualization OpenShift Container Platform 4.13 OpenShift Virtualization installation, usage, and release notes Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/virtualization/index
6.2. Web UI: Using the Topology Graph to Manage Replication Topology
6.2. Web UI: Using the Topology Graph to Manage Replication Topology Accessing the Topology Graph The topology graph in the web UI shows the relationships between the servers in the domain: Select IPA Server Topology Topology Graph . If you make any changes to the topology that are not immediately reflected in the graph, click Refresh . Customizing the Topology View You can move individual topology nodes by dragging the mouse: Figure 6.3. Moving Topology Graph Nodes You can zoom in and zoom out the topology graph using the mouse wheel: Figure 6.4. Zooming the Topology Graph You can move the canvas of the topology graph by holding the left mouse button: Figure 6.5. Moving the Topology Graph Canvas Interpreting the Topology Graph Servers joined in a domain replication agreement are connected by an orange arrow. Servers joined in a CA replication agreement are connected by a blue arrow. Topology graph example: recommended topology Figure 6.6, "Recommended Topology Example" shows one of the possible recommended topologies for four servers: each server is connected to at least two other servers, and more than one server is a CA master. Figure 6.6. Recommended Topology Example Topology graph example: discouraged topology In Figure 6.7, "Discouraged Topology Example: Single Point of Failure" , server1 is a single point of failure. All the other servers have replication agreements with this server, but not with any of the other servers. Therefore, if server1 fails, all the other servers will become isolated. Avoid creating topologies like this. Figure 6.7. Discouraged Topology Example: Single Point of Failure For details on topology recommendations, see Section 4.2, "Deployment Considerations for Replicas" . 6.2.1. Setting up Replication Between Two Servers In the topology graph, hover your mouse over one of the server nodes. Figure 6.8. Domain or CA Options Click on the domain or the ca part of the circle depending on what type of topology segment you want to create. A new arrow representing the new replication agreement appears under your mouse pointer. Move your mouse to the other server node, and click on it. Figure 6.9. Creating a New Segment In the Add Topology Segment window, click Add to confirm the properties of the new segment. IdM creates a new topology segment between the two servers, which joins them in a replication agreement. The topology graph now shows the updated replication topology: Figure 6.10. New Segment Created 6.2.2. Stopping Replication Between Two Servers Click on an arrow representing the replication agreement you want to remove. This highlights the arrow. Figure 6.11. Topology Segment Highlighted Click Delete . In the Confirmation window, click OK . IdM removes the topology segment between the two servers, which deletes their replication agreement. The topology graph now shows the updated replication topology: Figure 6.12. Topology Segment Deleted
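Although this section covers the web UI, the same segments can usually be managed from the command line with the ipa topology commands. The following is a sketch only: the server names and segment name are placeholders, and the exact options can vary between IdM versions.

# Create a domain-suffix segment, joining the two servers in a replication agreement
ipa topologysegment-add domain server1-to-server2 --leftnode=server1.example.com --rightnode=server2.example.com
# Remove the segment, which deletes the replication agreement
ipa topologysegment-del domain server1-to-server2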
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/linux_domain_identity_authentication_and_policy_guide/managing-topology-graph-ui
Chapter 16. Tenant networking with IPv6
Chapter 16. Tenant networking with IPv6 16.1. Overview of project networking This chapter contains information about implementing IPv6 subnets in a Red Hat OpenStack Platform (RHOSP) project network. In addition to project networking, with RHOSP director, you can configure IPv6-native deployments for the overcloud nodes. RHOSP supports IPv6 in project networks. IPv6 subnets are created within existing project networks, and support a number of address assignment modes: Stateless Address Autoconfiguration (SLAAC) , Stateful DHCPv6 , and Stateless DHCPv6 . 16.2. IPv6 subnet options Create IPv6 subnets using the openstack subnet create command. You can also specify the address mode and the Router Advertisement mode. Use the following list to understand the possible combinations of options that you can include with the openstack subnet create command. RA Mode Address Mode Result ipv6_ra_mode=not set ipv6-address-mode=slaac The instance receives an IPv6 address from the external router (not managed by OpenStack Networking) using SLAAC . ipv6_ra_mode=not set ipv6-address-mode=dhcpv6-stateful The instance receives an IPv6 address and optional information from OpenStack Networking (dnsmasq) using DHCPv6 stateful . ipv6_ra_mode=not set ipv6-address-mode=dhcpv6-stateless The instance receives an IPv6 address from the external router using SLAAC, and optional information from OpenStack Networking (dnsmasq) using DHCPv6 stateless . ipv6_ra_mode=slaac ipv6-address-mode=not-set The instance uses SLAAC to receive an IPv6 address from OpenStack Networking ( radvd ). ipv6_ra_mode=dhcpv6-stateful ipv6-address-mode=not-set The instance receives an IPv6 address and optional information from an external DHCPv6 server using DHCPv6 stateful . ipv6_ra_mode=dhcpv6-stateless ipv6-address-mode=not-set The instance receives an IPv6 address from OpenStack Networking ( radvd ) using SLAAC, and optional information from an external DHCPv6 server using DHCPv6 stateless . ipv6_ra_mode=slaac ipv6-address-mode=slaac The instance receives an IPv6 address from OpenStack Networking ( radvd ) using SLAAC . ipv6_ra_mode=dhcpv6-stateful ipv6-address-mode=dhcpv6-stateful The instance receives an IPv6 address from OpenStack Networking ( dnsmasq ) using DHCPv6 stateful , and optional information from OpenStack Networking ( dnsmasq ) using DHCPv6 stateful . ipv6_ra_mode=dhcpv6-stateless ipv6-address-mode=dhcpv6-stateless The instance receives an IPv6 address from OpenStack Networking ( radvd ) using SLAAC , and optional information from OpenStack Networking ( dnsmasq ) using DHCPv6 stateless . 16.3. Create an IPv6 subnet using Stateful DHCPv6 Complete the steps in this procedure to create an IPv6 subnet in a project network using some of the options in section 17.1. First, gather the necessary project and network information, then include this information in the openstack subnet create command. Note OpenStack Networking supports only EUI-64 IPv6 address assignment for SLAAC. This allows for simplified IPv6 networking, as hosts self-assign addresses based on the base 64-bits plus the MAC address. You cannot create subnets with a different netmask and address_assign_type of SLAAC. Retrieve the project ID of the Project where you want to create the IPv6 subnet. These values are unique between OpenStack deployments, so your values differ from the values in this example. 
Retrieve a list of all networks present in OpenStack Networking (neutron), and note the name of the network that you want to host the IPv6 subnet: Include the project ID and network name in the openstack subnet create command: Validate this configuration by reviewing the network list. Note that the entry for database-servers now reflects the newly created IPv6 subnet: As a result of this configuration, instances that the QA project creates can receive a DHCP IPv6 address when added to the database-servers subnet:
[ "openstack project list +----------------------------------+----------+ | ID | Name | +----------------------------------+----------+ | 25837c567ed5458fbb441d39862e1399 | QA | | f59f631a77264a8eb0defc898cb836af | admin | | 4e2e1951e70643b5af7ed52f3ff36539 | demo | | 8561dff8310e4cd8be4b6fd03dc8acf5 | services | +----------------------------------+----------+", "openstack network list +--------------------------------------+------------------+-------------------------------------------------------------+ | id | name | subnets | +--------------------------------------+------------------+-------------------------------------------------------------+ | 8357062a-0dc2-4146-8a7f-d2575165e363 | private | c17f74c4-db41-4538-af40-48670069af70 10.0.0.0/24 | | 31d61f7d-287e-4ada-ac29-ed7017a54542 | public | 303ced03-6019-4e79-a21c-1942a460b920 172.24.4.224/28 | | 6aff6826-4278-4a35-b74d-b0ca0cbba340 | database-servers | | +--------------------------------------+------------------+-------------------------------------------------------------+", "openstack subnet create --ip-version 6 --ipv6-address-mode dhcpv6-stateful --project 25837c567ed5458fbb441d39862e1399 --network database-servers --subnet-range fdf8:f53b:82e4::53/125 subnet_name Created a new subnet: +-------------------+--------------------------------------------------------------+ | Field | Value | +-------------------+--------------------------------------------------------------+ | allocation_pools | {\"start\": \"fdf8:f53b:82e4::52\", \"end\": \"fdf8:f53b:82e4::56\"} | | cidr | fdf8:f53b:82e4::53/125 | | dns_nameservers | | | enable_dhcp | True | | gateway_ip | fdf8:f53b:82e4::51 | | host_routes | | | id | cdfc3398-997b-46eb-9db1-ebbd88f7de05 | | ip_version | 6 | | ipv6_address_mode | dhcpv6-stateful | | ipv6_ra_mode | | | name | | | network_id | 6aff6826-4278-4a35-b74d-b0ca0cbba340 | | tenant_id | 25837c567ed5458fbb441d39862e1399 | +-------------------+--------------------------------------------------------------+", "openstack network list +--------------------------------------+------------------+-------------------------------------------------------------+ | id | name | subnets | +--------------------------------------+------------------+-------------------------------------------------------------+ | 6aff6826-4278-4a35-b74d-b0ca0cbba340 | database-servers | cdfc3398-997b-46eb-9db1-ebbd88f7de05 fdf8:f53b:82e4::50/125 | | 8357062a-0dc2-4146-8a7f-d2575165e363 | private | c17f74c4-db41-4538-af40-48670069af70 10.0.0.0/24 | | 31d61f7d-287e-4ada-ac29-ed7017a54542 | public | 303ced03-6019-4e79-a21c-1942a460b920 172.24.4.224/28 | +--------------------------------------+------------------+-------------------------------------------------------------+", "openstack server list +--------------------------------------+------------+--------+------------+-------------+-------------------------------------+ | ID | Name | Status | Task State | Power State | Networks | +--------------------------------------+------------+--------+------------+-------------+-------------------------------------+ | fad04b7a-75b5-4f96-aed9-b40654b56e03 | corp-vm-01 | ACTIVE | - | Running | database-servers=fdf8:f53b:82e4::52 | +--------------------------------------+------------+--------+------------+-------------+-------------------------------------+" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/networking_guide/sec-ipv6
6.5. Resource Groups
6.5. Resource Groups One of the most common elements of a cluster is a set of resources that need to be located together, start sequentially, and stop in the reverse order. To simplify this configuration, Pacemaker supports the concept of groups. You create a resource group with the following command, specifying the resources to include in the group. If the group does not exist, this command creates the group. If the group exists, this command adds additional resources to the group. The resources will start in the order you specify them with this command, and will stop in the reverse order of their starting order. You can use the --before and --after options of this command to specify the position of the added resources relative to a resource that already exists in the group. You can also add a new resource to an existing group when you create the resource, using the following command. The resource you create is added to the group named group_name . You remove a resource from a group with the following command. If there are no resources in the group, this command removes the group itself. The following command lists all currently configured resource groups. The following example creates a resource group named shortcut that contains the existing resources IPaddr and Email . There is no limit to the number of resources a group can contain. The fundamental properties of a group are as follows. Resources are started in the order in which you specify them (in this example, IPaddr first, then Email ). Resources are stopped in the reverse order in which you specify them. ( Email first, then IPaddr ). If a resource in the group cannot run anywhere, then no resource specified after that resource is allowed to run. If IPaddr cannot run anywhere, neither can Email . If Email cannot run anywhere, however, this does not affect IPaddr in any way. Obviously as the group grows bigger, the reduced configuration effort of creating resource groups can become significant. 6.5.1. Group Options A resource group inherits the following options from the resources that it contains: priority , target-role , is-managed . For information on resource options, see Table 6.3, "Resource Meta Options" . 6.5.2. Group Stickiness Stickiness, the measure of how much a resource wants to stay where it is, is additive in groups. Every active resource of the group will contribute its stickiness value to the group's total. So if the default resource-stickiness is 100, and a group has seven members, five of which are active, then the group as a whole will prefer its current location with a score of 500.
[ "pcs resource group add group_name resource_id [ resource_id ] ... [ resource_id ] [--before resource_id | --after resource_id ]", "pcs resource create resource_id standard:provider:type|type [resource_options] [op operation_action operation_options ] --group group_name", "pcs resource group remove group_name resource_id", "pcs resource group list", "pcs resource group add shortcut IPaddr Email" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_reference/s1-resourcegroups-haar
Chapter 15. Servers and Services
Chapter 15. Servers and Services Leftover dbus processes Red Hat Enterprise Linux 7.5 adds a feature that enables users to launch dbus -using applications remotely, for example over SSH or over IBM Platform LSF. However, when processes using dbus are launched remotely, dbus processes keep running even after the main process is closed, blocking the remote session and preventing it from terminating properly. To work around this problem, follow the instructions at https://access.redhat.com/solutions/3257651 . (BZ#1460262) dbus rebased to version 1.10 The dbus packages have been upgraded to upstream version 1.10, which provides a number of bug fixes and enhancements over the version. Notable changes include: dbus-run-session is a new utility to run a dbus session bus for the runtime of a login session, making ssh sessions which start dbus-using applications more predictable and reliable. See man 1 dbus-run-session for more details. Several memory and file descriptor leaks have been fixed. This improves the dbus-daemon memory usage and reliability. The well-known system and session bus configuration files have been moved from /etc/dbus-1/ to the /usr/share/dbus-1/ directory. While the old location can still be used, it is deprecated (specifically, session.conf and system.conf are deprecated, but system administrator configuration snippets under session.d and system.d are permitted). (BZ# 1480264 ) tuned rebased to version 2.9.0 The tuned packages have been upgraded to upstream version 2.9.0, which provides a number of bug fixes and enhancements over the version. Notable changes include the following: The net plug-in has been extended with the ring and pause parameters. The concept of manually or automatically set profile has been introduced. A directory for profile recommendation files is now supported. (BZ# 1467576 ) chrony rebased to version 3.2 The chrony packages have been upgraded to upstream version 3.2, which provides a number of bug fixes and enhancements over the version. Notable enhancements include: Support for hardware timestamping with bonding, bridging, and other logical interfaces that aggregate ethernet interfaces Support for transmit-only hardware timestamping with network cards that can timestamp only received Precision Time Protocol (PTP) packets but not Network Time Protocol (NTP) packets Improved stability of synchronization with hardware timestamping and interleaved modes An improved leapsectz option to automatically set the offset of the system clock between International Atomic Time (TAI) and Coordinated Universal Time (UTC) (BZ# 1482565 ) SNMP page counting can be now disabled in CUPS The simple network management protocol (SNMP) page counting currently shows incorrect information for certain printers. With this update, the CUPS printing system supports turning off the SNMP page counting, which prevents the problem. To do so, add *cupsSNMPPages: False into the printer's postscript printer description (PPD) file. The procedure for adding options into printer's PPD file is described in solution article at https://access.redhat.com/solutions/1427573 . (BZ# 1434153 ) CUPS can be set to use only ciphers from TLS version 1.2 or later The CUPS printing system can now be set to use only ciphers from TLS version 1.2 or later. You can use the functionality by adding the option SSLOptions MinTLS1.2 into the /etc/cups/client.conf file for the CUPS client or into the /etc/cups/cupsd.conf file for the CUPS daemon. 
(BZ#1466497) The squid packages now provide the kerberos_ldap_group helper This update adds the kerberos_ldap_group external Access Control Lists (ACL) helper to the squid packages. The kerberos_ldap_group helper is a reference implementation that supports Simple Authentication and Security Layer (SASL) and Generic Security Services API (GSSAPI) authentication to an LDAP server, intended primarily to connect to Active Directory or OpenLDAP-based LDAP servers. (BZ# 1452200 ) OpenIPMI rebased to version 2.0.23 The OpenIPMI packages have been upgraded to version 2.0.23, which provides a number of bug fixes and enhancements. Among others: It adds a command to set a duty cycle of the fans directly. It adds a way to specify the state directory from the command line after the compilation time. It changes the message map size to 32 bits so that it can handle a full 16-message window. It adds support for the IPMI LAN Simulator commands. See the ipmi_sim_cmd(5) man page. It adds support for the IPMI LAN Interface configuration file. See the ipmi_lan(5) man page. (BZ#1457805) Overview of changes from freeIPMI 1.2.9 to freeIPMI 1.5.7 These are the most important changes: - The ipmi-fru tool now supports the output of the DDR3 and DDR4 SDRAM modules and new FRU multirecords. - The new ipmi-config tool is a consolidated configuration tool implementing all the functionalities that were previously in the bmc-config , ipmi-pef-config , ipmi-sensors-config , and ipmi-chassis-config tools. - The ipmi-sel tool reads and manages the IPMI System Event Log records, which makes the tool useful for debugging the system. A complete list of changes is available after the installation in the /usr/share/doc/freeipmi/NEWS file. (BZ# 1435848 ) A new clear_env option available in PHP FPM pool configuration This update introduces a new clear_env option in PHP's FastCGI Process Manager (FPM) pool configuration. If the clear_env option is disabled, environment variables set when running the FPM daemon are preserved and available to scripts. By default, clear_env is enabled, preserving current behavior. (BZ# 1410010 )
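As a sketch of the clear_env option described above (the pool file path is the usual default and the environment variable is only an example), disabling it in a pool configuration preserves the daemon's environment for scripts:

; /etc/php-fpm.d/www.conf (excerpt)
clear_env = no
; variables can also be passed explicitly per pool
env[APP_ENV] = production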
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.5_release_notes/new_features_servers_and_services
Chapter 1. Red Hat build of Keycloak Operator installation
Chapter 1. Red Hat build of Keycloak Operator installation Use this procedure to install the Red Hat build of Keycloak Operator in an OpenShift cluster. Open the OpenShift Container Platform web console. In the left column, click Home , Operators , OperatorHub . Search for "Keycloak" on the search input box. Select the Operator from the list of results. Follow the instructions on the screen. For general instructions on installing Operators by using either the CLI or web console, see Installing Operators in your namespace . In the default Catalog, the Operator is named rhbk-operator . Make sure to use the channel corresponding with your desired Red Hat build of Keycloak version. 1.1. Installing Multiple Operators It is not fully supported for the operator to watch multiple or all namespaces. To watch multiple namespaces, you install multiple operators. In this situation, consider the following: All Operators share the Custom Resource Definitions (CRDs) as they are installed cluster wide. CRD revisions from newer Operator versions will not introduce breaking changes except for the eventual removal of fields that have been deprecated for some time. Thus newer CRDs are generally backward compatible. The last installed CRDs become the ones that are used. This rule also applies to OLM installations; the last installed Operator version also installs and overrides the CRDs if they already exist in the cluster. Older CRDs may not be forward compatible with new fields used by newer operators. When using OLM it will check if your custom resources are compatible with the CRDs being installed, so the usage of new fields can prevent the simultaneous installation of older operator versions. Fields introduced by newer CRDs are not supported by older Operators. Older Operators fail to handle CRs that use such new fields with a deserialization error for an unrecognized field. Therefore, in a multiple Operator installation scenario, the recommended approach is to keep versions aligned as closely as possible to minimize the potential problems with different versions.
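When installing from the CLI instead of the web console, the Operator is typically installed through an OLM Subscription resource similar to the following sketch; the namespace and channel shown are assumptions that must be adjusted to your cluster and to the Red Hat build of Keycloak version you want:

apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: rhbk-operator
  namespace: keycloak              # hypothetical namespace that already has an OperatorGroup
spec:
  channel: stable-v26              # assumed channel name; use the channel for your version
  name: rhbk-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace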
null
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/26.0/html/operator_guide/installation-
18. Databases
18. Databases 18.1. PostgreSQL PostgreSQL is an advanced Object-Relational database management system (DBMS). The postgresql packages include the client programs and libraries needed to access a PostgreSQL DBMS server. Red Hat Enterprise Linux 6 features version 8.4 of PostgreSQL. 18.2. MySQL MySQL is a multi-user, multi-threaded SQL database server. It consists of the MySQL server daemon (mysqld) and many client programs and libraries. This release features version 5.1 of MySQL. For a list of all enhancements that this version provides, refer to the MySQL Release Notes.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_release_notes/databases
5.6.3.2. Logical Volume Resizing
5.6.3.2. Logical Volume Resizing The feature that most system administrators appreciate about LVM is its ability to easily direct storage where it is needed. In a non-LVM system configuration, running out of space means -- at best -- moving files from the full device to one with available space. Often it can mean actual reconfiguration of your system's mass storage devices; a task that would have to take place after normal business hours. However, LVM makes it possible to easily increase the size of a logical volume. Assume for a moment that our 200GB storage pool was used to create a 150GB logical volume, with the remaining 50GB held in reserve. If the 150GB logical volume became full, LVM makes it possible to increase its size (say, by 10GB) without any physical reconfiguration. Depending on the operating system environment, it may be possible to do this dynamically or it might require a short amount of downtime to actually perform the resizing.
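As a sketch of the scenario described above (the volume group and logical volume names are placeholders, and an ext file system that supports online growth is assumed), extending the logical volume by 10GB typically takes two commands:

# Grow the logical volume by 10GB from the free space in the volume group
lvextend -L +10G /dev/VolGroup00/LogVol01
# Grow the file system so it can use the new space
resize2fs /dev/VolGroup00/LogVol01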
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/introduction_to_system_administration/s3-storage-adv-lvm-resizing
Architecture
Architecture OpenShift Container Platform 4.12 An overview of the architecture for OpenShift Container Platform Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/architecture/index
14.21. Configuring Memory Tuning
14.21. Configuring Memory Tuning The virsh memtune virtual_machine --parameter size command is covered in the Virtualization Tuning and Optimization Guide.
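As an illustrative sketch only (the guest name is a placeholder and the value is in kibibytes), a typical invocation caps the memory a running guest can consume:

# Set a hard memory limit of roughly 2 GiB for the guest
virsh memtune rhel6-guest --hard-limit 2097152
# Display the current memory tuning parameters for the guest
virsh memtune rhel6-guest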
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sect-managing_guest_virtual_machines_with_virsh-configuring_memory_tuning
Chapter 8. Setting up the environment for using STS
Chapter 8. Setting up the environment for using STS After you meet the AWS prerequisites, set up your environment and install Red Hat OpenShift Service on AWS (ROSA). Tip AWS Security Token Service (STS) is the recommended credential mode for installing and interacting with clusters on Red Hat OpenShift Service on AWS because it provides enhanced security. 8.1. Setting up the environment for STS Before you create a Red Hat OpenShift Service on AWS (ROSA) cluster that uses the AWS Security Token Service (STS), complete the following steps to set up your environment. Prerequisites Review and complete the deployment prerequisites and policies. Create a Red Hat account , if you do not already have one. Then, check your email for a verification link. You will need these credentials to install ROSA. Procedure Log in to the Amazon Web Services (AWS) account that you want to use. It is recommended to use a dedicated AWS account to run production clusters. If you are using AWS Organizations, you can use an AWS account within your organization or create a new one . If you are using AWS Organizations and you need to have a service control policy (SCP) applied to the AWS account you plan to use, these policies must not be more restrictive than the roles and policies required by the cluster. Enable the ROSA service in the AWS Management Console. Sign in to your AWS account . To enable ROSA, go to the ROSA service and select Enable OpenShift . Install and configure the AWS CLI. Follow the AWS command-line interface documentation to install and configure the AWS CLI for your operating system. Specify the correct aws_access_key_id and aws_secret_access_key in the .aws/credentials file. See AWS Configuration basics in the AWS documentation. Set a default AWS region. Note You can use the environment variable to set the default AWS region. The ROSA service evaluates regions in the following priority order: The region specified when running the rosa command with the --region flag. The region set in the AWS_DEFAULT_REGION environment variable. See Environment variables to configure the AWS CLI in the AWS documentation. The default region set in your AWS configuration file. See Quick configuration with aws configure in the AWS documentation. Optional: Configure your AWS CLI settings and credentials by using an AWS named profile. rosa evaluates AWS named profiles in the following priority order: The profile specified when running the rosa command with the --profile flag. The profile set in the AWS_PROFILE environment variable. See Named profiles in the AWS documentation. Verify the AWS CLI is installed and configured correctly by running the following command to query the AWS API: USD aws sts get-caller-identity Install the latest version of the ROSA CLI ( rosa ). Download the latest release of the ROSA CLI for your operating system. Optional: Rename the file you downloaded to rosa and make the file executable. This documentation uses rosa to refer to the executable file. USD chmod +x rosa Optional: Add rosa to your path. USD mv rosa /usr/local/bin/rosa Enter the following command to verify your installation: USD rosa Example output Command line tool for Red Hat OpenShift Service on AWS. 
For further documentation visit https://access.redhat.com/documentation/en-us/red_hat_openshift_service_on_aws Usage: rosa [command] Available Commands: completion Generates completion scripts create Create a resource from stdin delete Delete a specific resource describe Show details of a specific resource download Download necessary tools for using your cluster edit Edit a specific resource grant Grant role to a specific resource help Help about any command init Applies templates to support Red Hat OpenShift Service on AWS install Installs a resource into a cluster link Link a ocm/user role from stdin list List all resources of a specific type login Log in to your Red Hat account logout Log out logs Show installation or uninstallation logs for a cluster revoke Revoke role from a specific resource uninstall Uninstalls a resource from a cluster unlink UnLink a ocm/user role from stdin upgrade Upgrade a resource verify Verify resources are configured correctly for cluster install version Prints the version of the tool whoami Displays user account information Flags: --color string Surround certain characters with escape sequences to display them in color on the terminal. Allowed options are [auto never always] (default "auto") --debug Enable debug mode. -h, --help help for rosa Use "rosa [command] --help" for more information about a command. Generate the command completion scripts for the ROSA CLI. The following example generates the Bash completion scripts for a Linux machine: USD rosa completion bash | sudo tee /etc/bash_completion.d/rosa Source the scripts to enable rosa command completion from your existing terminal. The following example sources the Bash completion scripts for rosa on a Linux machine: USD source /etc/bash_completion.d/rosa Log in to your Red Hat account with the ROSA CLI. Enter the following command. USD rosa login Replace <my_offline_access_token> with your token. Example output To login to your Red Hat account, get an offline access token at https://console.redhat.com/openshift/token/rosa ? Copy the token and paste it here: <my-offline-access-token> Example output continued I: Logged in as '<rh-rosa-user>' on 'https://api.openshift.com' Verify that your AWS account has the necessary quota to deploy a ROSA cluster. USD rosa verify quota [--region=<aws_region>] Example output I: Validating AWS quota... I: AWS quota ok Note Sometimes your AWS quota varies by region. If you receive any errors, try a different region. If you need to increase your quota, go to the AWS Management Console and request a quota increase for the service that failed. After the quota check succeeds, proceed to the step. Prepare your AWS account for cluster deployment: Run the following command to verify your Red Hat and AWS credentials are setup correctly. Check that your AWS Account ID, Default Region and ARN match what you expect. You can safely ignore the rows beginning with OpenShift Cluster Manager for now. USD rosa whoami Example output AWS Account ID: 000000000000 AWS Default Region: us-east-1 AWS ARN: arn:aws:iam::000000000000:user/hello OCM API: https://api.openshift.com OCM Account ID: 1DzGIdIhqEWyt8UUXQhSoWaaaaa OCM Account Name: Your Name OCM Account Username: [email protected] OCM Account Email: [email protected] OCM Organization ID: 1HopHfA2hcmhup5gCr2uH5aaaaa OCM Organization Name: Red Hat OCM Organization External ID: 0000000 Install the OpenShift CLI ( oc ), version 4.7.9 or greater, from the ROSA ( rosa ) CLI. 
Enter this command to download the latest version of the oc CLI: $ rosa download openshift-client After downloading the oc CLI, unzip it and add it to your path. Enter this command to verify that the oc CLI is installed correctly: $ rosa verify openshift-client Create roles After completing these steps, you are ready to set up IAM and OIDC access-based roles. 8.2. Next steps Create a ROSA cluster with STS quickly or create a cluster using customizations . 8.3. Additional resources AWS Prerequisites Required AWS service quotas and increase requests
[ "aws sts get-caller-identity", "chmod +x rosa", "mv rosa /usr/local/bin/rosa", "rosa", "Command line tool for Red Hat OpenShift Service on AWS. For further documentation visit https://access.redhat.com/documentation/en-us/red_hat_openshift_service_on_aws Usage: rosa [command] Available Commands: completion Generates completion scripts create Create a resource from stdin delete Delete a specific resource describe Show details of a specific resource download Download necessary tools for using your cluster edit Edit a specific resource grant Grant role to a specific resource help Help about any command init Applies templates to support Red Hat OpenShift Service on AWS install Installs a resource into a cluster link Link a ocm/user role from stdin list List all resources of a specific type login Log in to your Red Hat account logout Log out logs Show installation or uninstallation logs for a cluster revoke Revoke role from a specific resource uninstall Uninstalls a resource from a cluster unlink UnLink a ocm/user role from stdin upgrade Upgrade a resource verify Verify resources are configured correctly for cluster install version Prints the version of the tool whoami Displays user account information Flags: --color string Surround certain characters with escape sequences to display them in color on the terminal. Allowed options are [auto never always] (default \"auto\") --debug Enable debug mode. -h, --help help for rosa Use \"rosa [command] --help\" for more information about a command.", "rosa completion bash | sudo tee /etc/bash_completion.d/rosa", "source /etc/bash_completion.d/rosa", "rosa login", "To login to your Red Hat account, get an offline access token at https://console.redhat.com/openshift/token/rosa ? Copy the token and paste it here: <my-offline-access-token>", "I: Logged in as '<rh-rosa-user>' on 'https://api.openshift.com'", "rosa verify quota [--region=<aws_region>]", "I: Validating AWS quota I: AWS quota ok", "rosa whoami", "AWS Account ID: 000000000000 AWS Default Region: us-east-1 AWS ARN: arn:aws:iam::000000000000:user/hello OCM API: https://api.openshift.com OCM Account ID: 1DzGIdIhqEWyt8UUXQhSoWaaaaa OCM Account Name: Your Name OCM Account Username: [email protected] OCM Account Email: [email protected] OCM Organization ID: 1HopHfA2hcmhup5gCr2uH5aaaaa OCM Organization Name: Red Hat OCM Organization External ID: 0000000", "rosa download openshift-client", "rosa verify openshift-client" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/prepare_your_environment/rosa-sts-setting-up-environment
Chapter 2. Disaster recovery subscription requirement
Chapter 2. Disaster recovery subscription requirement Disaster Recovery features supported by Red Hat OpenShift Data Foundation require all of the following prerequisites to successfully implement a disaster recovery solution: A valid Red Hat OpenShift Data Foundation Advanced entitlement A valid Red Hat Advanced Cluster Management for Kubernetes subscription Any Red Hat OpenShift Data Foundation Cluster containing PVs participating in active replication either as a source or destination requires OpenShift Data Foundation Advanced entitlement. This subscription should be active on both source and destination clusters. To know how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . Important OpenShift Data Foundation deployed with Multus networking is not supported for Regional Disaster Recovery (Regional-DR) setups.
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/configuring_openshift_data_foundation_disaster_recovery_for_openshift_workloads/disaster-recovery-subscriptions_common
Nodes
Nodes OpenShift Container Platform 4.16 Configuring and managing nodes in OpenShift Container Platform Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/nodes/index
Chapter 8. Configuring authentication
Chapter 8. Configuring authentication Application users need credentials to access Data Grid clusters. You can use default, generated credentials or add your own. 8.1. Default credentials Data Grid Operator generates base64-encoded credentials for the following users: User Secret name Description developer infinispan-generated-secret Credentials for the default application user. operator infinispan-generated-operator-secret Credentials that Data Grid Operator uses to interact with Data Grid resources. 8.2. Retrieving credentials Get credentials from authentication secrets to access Data Grid clusters. Procedure Retrieve credentials from authentication secrets. Base64-decode credentials. 8.3. Adding custom user credentials Configure access to Data Grid cluster endpoints with custom credentials. Note Modifying spec.security.endpointSecretName triggers a cluster restart. Procedure Create an identities.yaml file with the credentials that you want to add. credentials: - username: myfirstusername password: changeme-one - username: mysecondusername password: changeme-two Create an authentication secret from identities.yaml . Specify the authentication secret with spec.security.endpointSecretName in your Infinispan CR and then apply the changes. 8.4. Changing the operator password You can change the password for the operator user if you do not want to use the automatically generated password. Procedure Update the password key in the infinispan-generated-operator-secret secret as follows: Note You should update only the password key in the generated-operator-secret secret. When you update the password, Data Grid Operator automatically refreshes other keys in that secret. 8.5. Disabling user authentication Allow users to access Data Grid clusters and manipulate data without providing credentials. Important Do not disable authentication if endpoints are accessible from outside the OpenShift cluster via spec.expose.type . You should disable authentication for development environments only. Procedure Set false as the value for the spec.security.endpointAuthentication field in your Infinispan CR. Apply the changes.
[ "get secret infinispan-generated-secret", "get secret infinispan-generated-secret -o jsonpath=\"{.data.identities\\.yaml}\" | base64 --decode", "credentials: - username: myfirstusername password: changeme-one - username: mysecondusername password: changeme-two", "create secret generic --from-file=identities.yaml connect-secret", "spec: security: endpointSecretName: connect-secret", "patch secret infinispan-generated-operator-secret -p='{\"stringData\":{\"password\": \"supersecretoperatorpassword\"}}'", "spec: security: endpointAuthentication: false" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/data_grid_operator_guide/configuring-authentication
function::sprint_loadavg
function::sprint_loadavg Name function::sprint_loadavg - Report a pretty-printed load average Synopsis Arguments None Description Returns a string with three decimal numbers in the usual format for 1-, 5- and 15-minute load averages.
[ "sprint_loadavg:string()" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-sprint-loadavg
Chapter 21. Troubleshooting: Correcting problems with filtering
Chapter 21. Troubleshooting: Correcting problems with filtering The subscriptions service includes several filters that you can use to sort data by different characteristics. These characteristics include subscription attributes, also known as system purpose or subscription settings, depending on the product. Types of subscription attributes include service level agreement (SLA), usage, and others. Subscription attributes values must be set on systems to enable filtering by those values on the product-level pages in the the subscriptions service interface. There are different methods to set these values, such as directly in the product or in one of the subscription management tools. Subscription attributes values should be set by only one method to avoid the potential for mismatched values. In the older entitlement-based subscription model, the system purpose values are used by the subscription management tools such as Red Hat Satellite or Red Hat Subscription Management to help match subscriptions with systems. If a system is correctly matched with a subscription, the system status value ( System Status Details or System Purpose Status in the various tools) shows as Matched . However, if you are using simple content access with the subscriptions service, that usage of system purpose is obsolete, because subscriptions are not attached to systems. After you enable simple content access, the system status shows as Disabled . Note The Disabled state for the system status means that per-system subscription attachment is not being enforced. It does not mean that system purpose values themselves are unimportant. The subscriptions service filters related to system purpose values will not show reliable data if these values are not set for all systems. Procedure If the filters that relate to subscription attributes (system purpose values) are showing unexpected results, you might be able to improve the accuracy of that data by ensuring that the subscription attributes are set correctly: Review system information in your preferred subscription management tool to detect whether there are systems where the subscription attributes are missing. If there are missing values for subscription attributes, set those values. You might be able to use options to set these values in bulk, depending on the type and version of subscription management tool that you are using. Additional resources For more information about how to set system purpose values in bulk in Red Hat Satellite, see the section about editing system purpose for multiple hosts in the Managing Hosts guide. For more information about how to use Ansible and the subscription-manager command to set system purpose values in bulk for Red Hat Subscription Management, see the redhat-subscription module information.
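As a sketch of setting these values directly on a system with subscription-manager (the attribute values are examples, and the available subcommands depend on the subscription-manager version):

# Set the system purpose attributes that the subscriptions service filters rely on
subscription-manager syspurpose role --set "Red Hat Enterprise Linux Server"
subscription-manager syspurpose service-level --set "Premium"
subscription-manager syspurpose usage --set "Production"
# Review the values that are currently set
subscription-manager syspurpose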
null
https://docs.redhat.com/en/documentation/subscription_central/1-latest/html/getting_started_with_the_subscriptions_service/proc-trbl-correcting-problems-filtering_assembly-troubleshooting-common-questions-ctxt
9.5. Additional Resources
9.5. Additional Resources For additional information on AIDE, see the following documentation: aide(1) man page aide.conf(5) man page Guide to the Secure Configuration of Red Hat Enterprise Linux 7 (OpenSCAP Security Guide): Verify Integrity with AIDE The AIDE manual
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/sec-aide_additional_resources
Storage Strategies Guide
Storage Strategies Guide Red Hat Ceph Storage 8 Creating storage strategies for Red Hat Ceph Storage clusters Red Hat Ceph Storage Documentation Team
[ "ceph osd crush tree --show-shadow ID CLASS WEIGHT TYPE NAME -24 ssd 4.54849 root default~ssd -19 ssd 0.90970 host ceph01~ssd 8 ssd 0.90970 osd.8 -20 ssd 0.90970 host ceph02~ssd 7 ssd 0.90970 osd.7 -21 ssd 0.90970 host ceph03~ssd 3 ssd 0.90970 osd.3 -22 ssd 0.90970 host ceph04~ssd 5 ssd 0.90970 osd.5 -23 ssd 0.90970 host ceph05~ssd 6 ssd 0.90970 osd.6 -2 hdd 50.94173 root default~hdd -4 hdd 7.27739 host ceph01~hdd 10 hdd 7.27739 osd.10 -12 hdd 14.55478 host ceph02~hdd 0 hdd 7.27739 osd.0 12 hdd 7.27739 osd.12 -6 hdd 14.55478 host ceph03~hdd 4 hdd 7.27739 osd.4 11 hdd 7.27739 osd.11 -10 hdd 7.27739 host ceph04~hdd 1 hdd 7.27739 osd.1 -8 hdd 7.27739 host ceph05~hdd 2 hdd 7.27739 osd.2 -1 55.49022 root default -3 8.18709 host ceph01 10 hdd 7.27739 osd.10 8 ssd 0.90970 osd.8 -11 15.46448 host ceph02 0 hdd 7.27739 osd.0 12 hdd 7.27739 osd.12 7 ssd 0.90970 osd.7 -5 15.46448 host ceph03 4 hdd 7.27739 osd.4 11 hdd 7.27739 osd.11 3 ssd 0.90970 osd.3 -9 8.18709 host ceph04 1 hdd 7.27739 osd.1 5 ssd 0.90970 osd.5 -7 8.18709 host ceph05 2 hdd 7.27739 osd.2 6 ssd 0.90970 osd.6", "[bucket-type] [bucket-name] { id [a unique negative numeric ID] weight [the relative capacity/capability of the item(s)] alg [the bucket type: uniform | list | tree | straw2 ] hash [the hash type: 0 by default] item [item-name] weight [weight] }", "host node1 { id -1 alg straw2 hash 0 item osd.0 weight 1.00 item osd.1 weight 1.00 } host node2 { id -2 alg straw2 hash 0 item osd.2 weight 1.00 item osd.3 weight 1.00 } rack rack1 { id -3 alg straw2 hash 0 item node1 weight 2.00 item node2 weight 2.00 }", "root=default row=a rack=a2 chassis=a2a host=a2a1", "ceph osd crush add-bucket {name} {type}", "ceph osd crush add-bucket ssd-root root ceph osd crush add-bucket hdd-journal-root root ceph osd crush add-bucket hdd-root root", "added bucket ssd-root type root to crush map added bucket hdd-journal-root type root to crush map added bucket hdd-root type root to crush map", "ceph osd crush add-bucket ssd-row1 row ceph osd crush add-bucket ssd-row1-rack1 rack ceph osd crush add-bucket ssd-row1-rack1-host1 host ceph osd crush add-bucket ssd-row1-rack1-host2 host ceph osd crush add-bucket hdd-row1 row ceph osd crush add-bucket hdd-row1-rack2 rack ceph osd crush add-bucket hdd-row1-rack1-host1 host ceph osd crush add-bucket hdd-row1-rack1-host2 host ceph osd crush add-bucket hdd-row1-rack1-host3 host ceph osd crush add-bucket hdd-row1-rack1-host4 host", "ceph osd tree", "ceph osd crush move ssd-row1 root=ssd-root ceph osd crush move ssd-row1-rack1 row=ssd-row1 ceph osd crush move ssd-row1-rack1-host1 rack=ssd-row1-rack1 ceph osd crush move ssd-row1-rack1-host2 rack=ssd-row1-rack1", "ceph osd tree", "ceph osd crush remove {bucket-name}", "ceph osd crush rm {bucket-name}", "ceph osd crush tree -f json-pretty", "[ { \"id\": -2, \"name\": \"ssd\", \"type\": \"root\", \"type_id\": 10, \"items\": [ { \"id\": -6, \"name\": \"dell-per630-11-ssd\", \"type\": \"host\", \"type_id\": 1, \"items\": [ { \"id\": 6, \"name\": \"osd.6\", \"type\": \"osd\", \"type_id\": 0, \"crush_weight\": 0.099991, \"depth\": 2 } ] }, { \"id\": -7, \"name\": \"dell-per630-12-ssd\", \"type\": \"host\", \"type_id\": 1, \"items\": [ { \"id\": 7, \"name\": \"osd.7\", \"type\": \"osd\", \"type_id\": 0, \"crush_weight\": 0.099991, \"depth\": 2 } ] }, { \"id\": -8, \"name\": \"dell-per630-13-ssd\", \"type\": \"host\", \"type_id\": 1, \"items\": [ { \"id\": 8, \"name\": \"osd.8\", \"type\": \"osd\", \"type_id\": 0, \"crush_weight\": 0.099991, \"depth\": 2 } ] } ] }, { \"id\": 
-1, \"name\": \"default\", \"type\": \"root\", \"type_id\": 10, \"items\": [ { \"id\": -3, \"name\": \"dell-per630-11\", \"type\": \"host\", \"type_id\": 1, \"items\": [ { \"id\": 0, \"name\": \"osd.0\", \"type\": \"osd\", \"type_id\": 0, \"crush_weight\": 0.449997, \"depth\": 2 }, { \"id\": 3, \"name\": \"osd.3\", \"type\": \"osd\", \"type_id\": 0, \"crush_weight\": 0.289993, \"depth\": 2 } ] }, { \"id\": -4, \"name\": \"dell-per630-12\", \"type\": \"host\", \"type_id\": 1, \"items\": [ { \"id\": 1, \"name\": \"osd.1\", \"type\": \"osd\", \"type_id\": 0, \"crush_weight\": 0.449997, \"depth\": 2 }, { \"id\": 4, \"name\": \"osd.4\", \"type\": \"osd\", \"type_id\": 0, \"crush_weight\": 0.289993, \"depth\": 2 } ] }, { \"id\": -5, \"name\": \"dell-per630-13\", \"type\": \"host\", \"type_id\": 1, \"items\": [ { \"id\": 2, \"name\": \"osd.2\", \"type\": \"osd\", \"type_id\": 0, \"crush_weight\": 0.449997, \"depth\": 2 }, { \"id\": 5, \"name\": \"osd.5\", \"type\": \"osd\", \"type_id\": 0, \"crush_weight\": 0.289993, \"depth\": 2 } ] } ] } ]", "ceph orch daemon add osd HOST :_DEVICE_,[ DEVICE ]", "ceph osd crush add ID_OR_NAME WEIGHT [ BUCKET_TYPE = BUCKET_NAME ...]", "ceph osd crush add osd.0 1.0 root=default datacenter=dc1 room=room1 row=foo rack=bar host=foo-bar-1", "ceph osd crush set ID_OR_NAME WEIGHT root= POOL_NAME [ BUCKET_TYPE = BUCKET_NAME ...]", "ceph osd crush remove NAME", "ceph osd crush set-device-class CLASS OSD_ID [ OSD_ID ..]", "ceph osd crush set-device-class hdd osd.0 osd.1 ceph osd crush set-device-class ssd osd.2 osd.3 ceph osd crush set-device-class bucket-index osd.4", "ceph osd crush rm-device-class CLASS OSD_ID [ OSD_ID ..]", "ceph osd crush rm-device-class hdd osd.0 osd.1 ceph osd crush rm-device-class ssd osd.2 osd.3 ceph osd crush rm-device-class bucket-index osd.4", "ceph osd crush class rename OLD_NAME NEW_NAME", "ceph osd crush class rename hdd sas15k", "ceph osd crush class ls", "[ \"hdd\", \"ssd\", \"bucket-index\" ]", "ceph osd crush class ls-osd CLASS", "ceph osd crush class ls-osd hdd", "0 1 2 3 4 5 6", "ceph osd crush rule ls-by-class CLASS", "ceph osd crush rule ls-by-class hdd", "ceph osd crush reweight _NAME_ _WEIGHT_", "osd crush reweight-subtree NAME", "ceph osd reweight ID WEIGHT", "ceph osd reweight-by-utilization [THRESHOLD_] [ WEIGHT_CHANGE_AMOUNT ] [ NUMBER_OF_OSDS ] [--no-increasing]", "ceph osd test-reweight-by-utilization 110 .5 4 --no-increasing", "osd reweight-by-pg POOL_NAME", "osd crush reweight-all", "ceph osd primary-affinity OSD_ID WEIGHT", "rule <rulename> { id <unique number> type [replicated | erasure] min_size <min-size> max_size <max-size> step take <bucket-type> [class <class-name>] step [choose|chooseleaf] [firstn|indep] <N> <bucket-type> step emit }", "ceph osd crush rule list ceph osd crush rule ls", "ceph osd crush rule dump NAME", "ceph osd crush rule create-simple RUENAME ROOT BUCKET_NAME FIRSTN_OR_INDEP", "ceph osd crush rule create-simple deleteme default host firstn", "{ \"id\": 1, \"rule_name\": \"deleteme\", \"type\": 1, \"min_size\": 1, \"max_size\": 10, \"steps\": [ { \"op\": \"take\", \"item\": -1, \"item_name\": \"default\"}, { \"op\": \"chooseleaf_firstn\", \"num\": 0, \"type\": \"host\"}, { \"op\": \"emit\"}]}", "ceph osd crush rule create-replicated NAME ROOT FAILURE_DOMAIN CLASS", "ceph osd crush rule create-replicated fast default host ssd", "ceph osd crush rule create-erasure RULE_NAME PROFILE_NAME", "ceph osd crush rule create-erasure default default", "ceph osd crush rule rm NAME", "ceph osd crush tunables 
PROFILE", "ceph osd crush tunables optimal", "ceph osd crush tunables PROFILE", "ceph osd crush tunables legacy", "ceph osd getcrushmap -o /tmp/crush", "crushtool -i /tmp/crush --set-choose-local-tries 0 --set-choose-local-fallback-tries 0 --set-choose-total-tries 50 -o /tmp/crush.new", "ceph osd setcrushmap -i /tmp/crush.new", "crushtool -i /tmp/crush --set-choose-local-tries 2 --set-choose-local-fallback-tries 5 --set-choose-total-tries 19 --set-chooseleaf-descend-once 0 --set-chooseleaf-vary-r 0 -o /tmp/crush.legacy", "ceph osd getcrushmap -o COMPILED_CRUSHMAP_FILENAME", "crushtool -d COMPILED_CRUSHMAP_FILENAME -o DECOMPILED_CRUSHMAP_FILENAME", "ceph osd setcrushmap -i COMPILED_CRUSHMAP_FILENAME", "crushtool -c DECOMPILED_CRUSHMAP_FILENAME -o COMPILED_CRUSHMAP_FILENAME", "ceph osd crush set-device-class CLASS OSD_ID [ OSD_ID ]", "ceph osd crush set-device-class hdd osd.0 osd.1 osd.4 osd.5 ceph osd crush set-device-class ssd osd.2 osd.3 osd.6 osd.7", "ceph osd crush rule create-replicated RULENAME ROOT FAILURE_DOMAIN_TYPE DEVICE_CLASS", "ceph osd crush rule create-replicated cold default host hdd ceph osd crush rule create-replicated hot default host ssd", "ceph osd pool set POOL_NAME crush_rule RULENAME", "ceph osd pool set cold crush_rule hdd ceph osd pool set hot crush_rule ssd", "device 0 osd.0 class hdd device 1 osd.1 class hdd device 2 osd.2 class ssd device 3 osd.3 class ssd device 4 osd.4 class hdd device 5 osd.5 class hdd device 6 osd.6 class ssd device 7 osd.7 class ssd host ceph-osd-server-1 { id -1 alg straw2 hash 0 item osd.0 weight 1.00 item osd.1 weight 1.00 item osd.2 weight 1.00 item osd.3 weight 1.00 } host ceph-osd-server-2 { id -2 alg straw2 hash 0 item osd.4 weight 1.00 item osd.5 weight 1.00 item osd.6 weight 1.00 item osd.7 weight 1.00 } root default { id -3 alg straw2 hash 0 item ceph-osd-server-1 weight 4.00 item ceph-osd-server-2 weight 4.00 } rule cold { ruleset 0 type replicated min_size 2 max_size 11 step take default class hdd step chooseleaf firstn 0 type host step emit } rule hot { ruleset 1 type replicated min_size 2 max_size 11 step take default class ssd step chooseleaf firstn 0 type host step emit }", "osd pool default pg num = 100 osd pool default pgp num = 100", "(OSDs * 100) Total PGs = ------------ pool size", "(200 * 100) ----------- = 6667. 
Nearest power of 2: 8192 3", "mon pg warn max per osd", "ceph osd pool set POOL_NAME pg_autoscale_mode on", "ceph osd pool set testpool pg_autoscale_mode on", "ceph config set global osd_pool_default_pg_autoscale_mode MODE", "ceph config set global osd_pool_default_pg_autoscale_mode on", "ceph osd pool create POOL_NAME --bulk", "ceph osd pool create testpool --bulk", "ceph osd pool set ec_pool_overwrite bulk True Error EINVAL: expecting value 'true', 'false', '0', or '1'", "ceph osd pool set POOL_NAME bulk true / false / 1 / 0", "ceph osd pool set testpool bulk true", "ceph osd pool get POOL_NAME bulk", "ceph osd pool get testpool bulk bulk: true", "ceph osd pool autoscale-status", "POOL SIZE TARGET SIZE RATE RAW CAPACITY RATIO TARGET RATIO EFFECTIVE RATIO BIAS PG_NUM NEW PG_NUM AUTOSCALE BULK device_health_metrics 0 3.0 374.9G 0.0000 1.0 1 on False cephfs.cephfs.meta 24632 3.0 374.9G 0.0000 4.0 32 on False cephfs.cephfs.data 0 3.0 374.9G 0.0000 1.0 32 on False .rgw.root 1323 3.0 374.9G 0.0000 1.0 32 on False default.rgw.log 3702 3.0 374.9G 0.0000 1.0 32 on False default.rgw.control 0 3.0 374.9G 0.0000 1.0 32 on False default.rgw.meta 382 3.0 374.9G 0.0000 4.0 8 on False", "ceph config set global mon_target_pg_per_osd number", "ceph config set global mon_target_pg_per_osd 150", "ceph osd pool get noautoscale", "ceph osd pool set noautoscale", "ceph osd pool unset noautoscale", "ceph osd pool set pool-name target_size_bytes value", "ceph osd pool set mypool target_size_bytes 100T", "ceph osd pool set pool-name target_size_ratio ratio", "ceph osd pool set mypool target_size_ratio 1.0", "ceph osd pool set POOL_NAME pg_num PG_NUM", "ceph osd pool set POOL_NAME pgp_num PGP_NUM", "ceph osd pool get POOL_NAME pg_num", "ceph pg dump [--format FORMAT ]", "ceph pg dump_stuck {inactive|unclean|stale|undersized|degraded [inactive|unclean|stale|undersized|degraded...]} INTERVAL", "ceph pg map PG_ID", "ceph pg map 1.6c", "osdmap e13 pg 1.6c (1.6c) -> up [1,0] acting [1,0]", "ceph pg scrub PG_ID", "ceph pg PG_ID mark_unfound_lost revert|delete", "ceph osd lspools", "ceph config set global osd_pool_default_pg_num 250 ceph config set global osd_pool_default_pgp_num 250", "ceph osd pool create POOL_NAME PG_NUM PGP_NUM [replicated] [ CRUSH_RULE_NAME ] [ EXPECTED_NUMBER_OBJECTS ]", "ceph osd pool create POOL_NAME PG_NUM PGP_NUM erasure [ ERASURE_CODE_PROFILE ] [ CRUSH_RULE_NAME ] [ EXPECTED_NUMBER_OBJECTS ]", "ceph osd pool create POOL_NAME [--bulk]", "ceph osd pool set-quota POOL_NAME [max_objects OBJECT_COUNT ] [max_bytes BYTES ]", "ceph osd pool set-quota data max_objects 10000", "ceph osd pool delete POOL_NAME [ POOL_NAME --yes-i-really-really-mean-it]", "ceph osd pool rename CURRENT_POOL_NAME NEW_POOL_NAME", "ceph osd pool create NEW_POOL PG_NUM [ <other new pool parameters> ] rados cppool SOURCE_POOL NEW_POOL ceph osd pool rename SOURCE_POOL NEW_SOURCE_POOL_NAME ceph osd pool rename NEW_POOL SOURCE_POOL", "ceph osd pool create pool1 250 rados cppool pool2 pool1 ceph osd pool rename pool2 pool3 ceph osd pool rename pool1 pool2", "ceph osd pool create NEW_POOL PG_NUM [ <other new pool parameters> ] rados export --create SOURCE_POOL FILE_PATH rados import FILE_PATH NEW_POOL", "ceph osd pool create pool1 250 rados export --create pool2 <path of export file> rados import <path of export file> pool1", "rados export --workers 5 SOURCE_POOL FILE_PATH rados import --workers 5 FILE_PATH NEW_POOL", "rados export --workers 5 pool2 <path of export file> rados import --workers 5 <path of export file> pool1", "[ceph: 
root@host01 /] rados df", "ceph osd pool set POOL_NAME KEY VALUE", "ceph osd pool get POOL_NAME KEY", "ceph osd pool application enable POOL_NAME APP {--yes-i-really-mean-it}", "{ \"checks\": { \"POOL_APP_NOT_ENABLED\": { \"severity\": \"HEALTH_WARN\", \"summary\": { \"message\": \"application not enabled on 1 pool(s)\" }, \"detail\": [ { \"message\": \"application not enabled on pool '_POOL_NAME_'\" }, { \"message\": \"use 'ceph osd pool application enable _POOL_NAME_ _APP_', where _APP_ is 'cephfs', 'rbd', 'rgw', or freeform for custom applications.\" } ] } }, \"status\": \"HEALTH_WARN\", \"overall_status\": \"HEALTH_WARN\", \"detail\": [ \"'ceph health' JSON format has changed in luminous. If you see this your monitoring system is scraping the wrong fields. Disable this with 'mon health preluminous compat warning = false'\" ] }", "ceph osd pool application disable POOL_NAME APP {--yes-i-really-mean-it}", "ceph osd pool application set POOL_NAME APP KEY", "ceph osd pool application rm POOL_NAME APP KEY", "ceph osd pool set POOL_NAME size NUMBER_OF_REPLICAS", "ceph osd pool set data size 3", "ceph osd pool set data min_size 2", "ceph osd dump | grep 'replicated size'", "set_req_state_err err_no=95 resorting to 500", "ceph config set mon mon_osd_down_out_subtree_limit host ceph config set osd osd_async_recovery_min_cost 1099511627776", "ceph osd erasure-code-profile set ec22 k=2 m=2 crush-failure-domain=host", "ceph osd erasure-code-profile set ec22 k=2 m=2 crush-failure-domain=host Pool : ceph osd pool create test-ec-22 erasure ec22", "ceph osd pool create ecpool 32 32 erasure pool 'ecpool' created echo ABCDEFGHI | rados --pool ecpool put NYAN - rados --pool ecpool get NYAN - ABCDEFGHI", "ceph osd erasure-code-profile get default k=2 m=2 plugin=jerasure technique=reed_sol_van", "ceph osd erasure-code-profile set myprofile k=4 m=2 crush-failure-domain=rack ceph osd pool create ecpool 12 12 erasure *myprofile* echo ABCDEFGHIJKL | rados --pool ecpool put NYAN - rados --pool ecpool get NYAN - ABCDEFGHIJKL", "ceph osd erasure-code-profile set NAME [<directory= DIRECTORY >] [<plugin= PLUGIN >] [<stripe_unit= STRIPE_UNIT >] [<_CRUSH_DEVICE_CLASS_>] [<_CRUSH_FAILURE_DOMAIN_>] [<key=value> ...] [--force]", "ceph osd erasure-code-profile rm RULE_NAME", "ceph osd erasure-code-profile get NAME", "ceph osd erasure-code-profile ls", "ceph osd pool set ERASURE_CODED_POOL_NAME allow_ec_overwrites true", "ceph osd pool set ec_pool allow_ec_overwrites true", "rbd create --size IMAGE_SIZE_M|G|T --data-pool _ERASURE_CODED_POOL_NAME REPLICATED_POOL_NAME / IMAGE_NAME", "rbd create --size 1G --data-pool ec_pool rep_pool/image01", "ceph osd erasure-code-profile set NAME plugin=jerasure k= DATA_CHUNKS m= DATA_CHUNKS technique= TECHNIQUE [crush-root= ROOT ] [crush-failure-domain= BUCKET_TYPE ] [directory= DIRECTORY ] [--force]", "chunk nr 01234567 step 1 _cDD_cDD step 2 cDDD____ step 3 ____cDDD", "crush-steps='[ [ \"choose\", \"rack\", 2 ], [ \"chooseleaf\", \"host\", 4 ] ]'" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html-single/storage_strategies_guide/index
5.12. Troubleshooting with the multipathd Interactive Console
5.12. Troubleshooting with the multipathd Interactive Console The multipathd -k command is an interactive interface to the multipathd daemon. Entering this command brings up an interactive multipath console. After executing this command, you can enter help to get a list of available commands, you can enter an interactive command, or you can enter CTRL-D to quit. The multipathd interactive console can be used to troubleshoot problems you may be having with your system. For example, the following command sequence displays the multipath configuration, including the defaults, before exiting the console. The following command sequence ensures that multipath has picked up any changes to the multipath.conf file. Use the following command sequence to ensure that the path checker is working properly.
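The same queries can also be issued non-interactively by passing the command string directly to multipathd, which is convenient in scripts; this sketch assumes the daemon is running and accepts the -k"command" form:
multipathd -k"show config"
multipathd -k"show paths"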
[ "multipathd -k > > show config > > CTRL-D", "multipathd -k > > reconfigure > > CTRL-D", "multipathd -k > > show paths > > CTRL-D" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/dm_multipath/multipath_config_confirm
B.32.3. RHSA-2010:0987 - Critical: java-1.6.0-ibm security and bug fix update
B.32.3. RHSA-2010:0987 - Critical: java-1.6.0-ibm security and bug fix update Updated java-1.6.0-ibm packages that fix several security issues and two bugs are now available for Red Hat Enterprise Linux 4 Extras, and Red Hat Enterprise Linux 5 and 6 Supplementary. The Red Hat Security Response Team has rated this update as having critical security impact. Common Vulnerability Scoring System (CVSS) base scores, which give a detailed severity rating, are available for each vulnerability from the CVE link(s) associated with each description below. The IBM 1.6.0 Java release includes the IBM Java 2 Runtime Environment and the IBM Java 2 Software Development Kit. CVE-2009-3555 , CVE-2010-1321 , CVE-2010-3541 , CVE-2010-3548 , CVE-2010-3549 , CVE-2010-3550 , CVE-2010-3551 , CVE-2010-3553 , CVE-2010-3555 , CVE-2010-3556 , CVE-2010-3557 , CVE-2010-3558 , CVE-2010-3560 , CVE-2010-3562 , CVE-2010-3563 , CVE-2010-3565 , CVE-2010-3566 , CVE-2010-3568 , CVE-2010-3569 , CVE-2010-3571 , CVE-2010-3572 , CVE-2010-3573 , CVE-2010-3574 This update fixes several vulnerabilities in the IBM Java 2 Runtime Environment. Detailed vulnerability descriptions are linked from the IBM "Security alerts" page. Bug Fixes BZ# 659716 An error in the java-1.6.0-ibm RPM spec file caused an incorrect path to be included in HtmlConverter, preventing it from running. BZ# 633341 On AMD64 and Intel 64 systems, if only the 64-bit java-1.6.0-ibm packages were installed, IBM Java 6 Web Start was not available as an application that could open JNLP (Java Network Launching Protocol) files. This affected file management and web browser tools. Users had to manually open them with the "/usr/lib/jvm/jre-1.6.0-ibm.x86_64/bin/javaws" command. This update resolves this issue. All users of java-1.6.0-ibm are advised to upgrade to these updated packages, containing the IBM 1.6.0 SR9 Java release. All running instances of IBM Java must be restarted for the update to take effect.
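On Red Hat Enterprise Linux 5 or 6 with the Supplementary channel enabled, the updated packages would typically be applied with yum and any running IBM Java instances then restarted; the wildcard is illustrative and yum prompts before applying the transaction:
yum update "java-1.6.0-ibm*"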
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_technical_notes/rhsa-2010-0987
3.2. Feature Compatibility Support
3.2. Feature Compatibility Support Red Hat Gluster Storage supports a number of features. Most features are supported with other features, but there are some exceptions. This section clearly identifies which features are supported and compatible with other features to help you in planning your Red Hat Gluster Storage deployment. Note Internet Protocol Version 6 (IPv6) support is available only for Red Hat Hyperconverged Infrastructure for Virtualization environments and not for Red Hat Gluster Storage standalone environments. Features in the following table are supported from the specified version and later. Table 3.3. Features supported by Red Hat Gluster Storage version Feature Version Arbiter bricks 3.2 Bitrot detection 3.1 Erasure coding 3.1 Google Compute Engine 3.1.3 Metadata caching 3.2 Microsoft Azure 3.1.3 NFS version 4 3.1 SELinux 3.1 Sharding 3.2.0 Snapshots 3.0 Snapshots, cloning 3.1.3 Snapshots, user-serviceable 3.0.3 Tiering (Deprecated) 3.1.2 Volume Shadow Copy (VSS) 3.1.3 Table 3.4. Features supported by volume type Volume Type Sharding Tiering (Deprecated) Quota Snapshots Geo-Rep Bitrot Arbitrated-Replicated Yes No Yes Yes Yes Yes Distributed No Yes Yes Yes Yes Yes Distributed-Dispersed No Yes Yes Yes Yes Yes Distributed-Replicated Yes Yes Yes Yes Yes Yes Replicated Yes Yes Yes Yes Yes Yes Sharded N/A No No No Yes No Tiered (Deprecated) No N/A Limited [a] Limited [a] Limited [a] Limited [a] [a] See Tiering Limitations in the Red Hat Gluster Storage 3.5 Administration Guide for details. Table 3.5. Features supported by client protocol Feature FUSE Gluster-NFS NFS-Ganesha SMB Arbiter Yes Yes Yes Yes Bitrot detection Yes Yes No Yes dm-cache Yes Yes Yes Yes Encryption (TLS-SSL) Yes Yes Yes Yes Erasure coding Yes Yes Yes Yes Export subdirectory Yes Yes Yes N/A Geo-replication Yes Yes Yes Yes Quota (Deprecated) Warning Using QUOTA feature is considered to be deprecated in Red Hat Gluster Storage 3.5.3. Red Hat no longer recommends to use this feature and does not support it on new deployments and existing deployments that upgrade to Red Hat Gluster Storage 3.5.3. See Chapter 9, Managing Directory Quotas for more details. Yes Yes Yes Yes RDMA (Deprecated) Warning Using RDMA as a transport protocol is considered deprecated in Red Hat Gluster Storage 3.5. Red Hat no longer recommends its use, and does not support it on new deployments and existing deployments that upgrade to Red Hat Gluster Storage 3.5.3. Yes No No No Snapshots Yes Yes Yes Yes Snapshot cloning Yes Yes Yes Yes Tiering (Deprecated) Warning Tiering is considered deprecated as of Red Hat Gluster Storage 3.5. Red Hat no longer recommends its use, and does not support tiering in new deployments and existing deployments that upgrade to Red Hat Gluster Storage 3.5.3. Yes Yes N/A N/A
null
https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/administration_guide/feature-compatibility
Chapter 11. Troubleshooting
Chapter 11. Troubleshooting When troubleshooting OpenShift sandboxed containers, you can open a support case and provide debugging information using the must-gather tool. If you are a cluster administrator, you can also review logs on your own, enabling a more detailed level of logs. 11.1. Collecting data for Red Hat Support When opening a support case, it is helpful to provide debugging information about your cluster to Red Hat Support. The must-gather tool enables you to collect diagnostic information about your OpenShift Container Platform cluster, including virtual machines and other data related to OpenShift sandboxed containers. For prompt support, supply diagnostic information for both OpenShift Container Platform and OpenShift sandboxed containers. Using the must-gather tool The oc adm must-gather CLI command collects the information from your cluster that is most likely needed for debugging issues, including: Resource definitions Service logs By default, the oc adm must-gather command uses the default plugin image and writes into ./must-gather.local . Alternatively, you can collect specific information by running the command with the appropriate arguments as described in the following sections: To collect data related to one or more specific features, use the --image argument with an image, as listed in a following section. For example: USD oc adm must-gather --image=registry.redhat.io/openshift-sandboxed-containers/osc-must-gather-rhel9:1.8.1 To collect the audit logs, use the -- /usr/bin/gather_audit_logs argument, as described in a following section. For example: USD oc adm must-gather -- /usr/bin/gather_audit_logs Note Audit logs are not collected as part of the default set of information to reduce the size of the files. When you run oc adm must-gather , a new pod with a random name is created in a new project on the cluster. The data is collected on that pod and saved in a new directory that starts with must-gather.local . This directory is created in the current working directory. For example: NAMESPACE NAME READY STATUS RESTARTS AGE ... openshift-must-gather-5drcj must-gather-bklx4 2/2 Running 0 72s openshift-must-gather-5drcj must-gather-s8sdh 2/2 Running 0 72s ... Optionally, you can run the oc adm must-gather command in a specific namespace by using the --run-namespace option. For example: USD oc adm must-gather --run-namespace <namespace> --image=registry.redhat.io/openshift-sandboxed-containers/osc-must-gather-rhel9:1.8.1 11.2. Collecting log data The following features and objects are associated with OpenShift sandboxed containers: All namespaces and their child objects that belong to OpenShift sandboxed containers resources All OpenShift sandboxed containers custom resource definitions (CRDs) You can collect the following component logs for each pod running with the kata runtime: Kata agent logs Kata runtime logs QEMU logs Audit logs CRI-O logs 11.2.1. Enabling debug logs for CRI-O runtime You can enable debug logs by updating the logLevel field in the KataConfig CR. This changes the log level in the CRI-O runtime for the worker nodes running OpenShift sandboxed containers. Prerequisites You have installed the OpenShift CLI ( oc ). You have access to the cluster as a user with the cluster-admin role. 
Procedure Change the logLevel field in your existing KataConfig CR to debug : USD oc patch kataconfig <kataconfig> --type merge --patch '{"spec":{"logLevel":"debug"}}' Monitor the kata-oc machine config pool until the value of UPDATED is True , indicating that all worker nodes are updated: USD oc get mcp kata-oc Example output NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE kata-oc rendered-kata-oc-169 False True False 3 1 1 0 9h Verification Start a debug session with a node in the machine config pool: USD oc debug node/<node_name> Change the root directory to /host : # chroot /host Verify the changes in the crio.conf file: # crio config | egrep 'log_level Example output log_level = "debug" 11.2.2. Viewing debug logs for components Cluster administrators can use the debug logs to troubleshoot issues. The logs for each node are printed to the node journal. You can review the logs for the following OpenShift sandboxed containers components: Kata agent Kata runtime ( containerd-shim-kata-v2 ) virtiofsd QEMU only generates warning and error logs. These warnings and errors print to the node journal in both the Kata runtime logs and the CRI-O logs with an extra qemuPid field. Example of QEMU logs Mar 11 11:57:28 openshift-worker-0 kata[2241647]: time="2023-03-11T11:57:28.587116986Z" level=info msg="Start logging QEMU (qemuPid=2241693)" name=containerd-shim-v2 pid=2241647 sandbox=d1d4d68efc35e5ccb4331af73da459c13f46269b512774aa6bde7da34db48987 source=virtcontainers/hypervisor subsystem=qemu Mar 11 11:57:28 openshift-worker-0 kata[2241647]: time="2023-03-11T11:57:28.607339014Z" level=error msg="qemu-kvm: -machine q35,accel=kvm,kernel_irqchip=split,foo: Expected '=' after parameter 'foo'" name=containerd-shim-v2 pid=2241647 qemuPid=2241693 sandbox=d1d4d68efc35e5ccb4331af73da459c13f46269b512774aa6bde7da34db48987 source=virtcontainers/hypervisor subsystem=qemu Mar 11 11:57:28 openshift-worker-0 kata[2241647]: time="2023-03-11T11:57:28.60890737Z" level=info msg="Stop logging QEMU (qemuPid=2241693)" name=containerd-shim-v2 pid=2241647 sandbox=d1d4d68efc35e5ccb4331af73da459c13f46269b512774aa6bde7da34db48987 source=virtcontainers/hypervisor subsystem=qemu The Kata runtime prints Start logging QEMU when QEMU starts, and Stop Logging QEMU when QEMU stops. The error appears in between these two log messages with the qemuPid field. The actual error message from QEMU appears in red. The console of the QEMU guest is printed to the node journal as well. You can view the guest console logs together with the Kata agent logs. Prerequisites You have installed the OpenShift CLI ( oc ). You have access to the cluster as a user with the cluster-admin role. Procedure To review the Kata agent logs and guest console logs, run the following command: USD oc debug node/<nodename> -- journalctl -D /host/var/log/journal -t kata -g "reading guest console" To review the Kata runtime logs, run the following command: USD oc debug node/<nodename> -- journalctl -D /host/var/log/journal -t kata To review the virtiofsd logs, run the following command: USD oc debug node/<nodename> -- journalctl -D /host/var/log/journal -t virtiofsd To review the QEMU logs, run the following command: USD oc debug node/<nodename> -- journalctl -D /host/var/log/journal -t kata -g "qemuPid=\d+" Additional resources Gathering data about your cluster in the OpenShift Container Platform documentation
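Once the must-gather collection described earlier in this chapter has finished, the output directory is usually compressed before being attached to a support case; the directory name below stands in for the randomly suffixed directory that the tool creates in your working directory:
tar cvaf must-gather.tar.gz must-gather.local.<random_suffix>/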
[ "oc adm must-gather --image=registry.redhat.io/openshift-sandboxed-containers/osc-must-gather-rhel9:1.8.1", "oc adm must-gather -- /usr/bin/gather_audit_logs", "NAMESPACE NAME READY STATUS RESTARTS AGE openshift-must-gather-5drcj must-gather-bklx4 2/2 Running 0 72s openshift-must-gather-5drcj must-gather-s8sdh 2/2 Running 0 72s", "oc adm must-gather --run-namespace <namespace> --image=registry.redhat.io/openshift-sandboxed-containers/osc-must-gather-rhel9:1.8.1", "oc patch kataconfig <kataconfig> --type merge --patch '{\"spec\":{\"logLevel\":\"debug\"}}'", "oc get mcp kata-oc", "NAME CONFIG UPDATED UPDATING DEGRADED MACHINECOUNT READYMACHINECOUNT UPDATEDMACHINECOUNT DEGRADEDMACHINECOUNT AGE kata-oc rendered-kata-oc-169 False True False 3 1 1 0 9h", "oc debug node/<node_name>", "chroot /host", "crio config | egrep 'log_level", "log_level = \"debug\"", "Mar 11 11:57:28 openshift-worker-0 kata[2241647]: time=\"2023-03-11T11:57:28.587116986Z\" level=info msg=\"Start logging QEMU (qemuPid=2241693)\" name=containerd-shim-v2 pid=2241647 sandbox=d1d4d68efc35e5ccb4331af73da459c13f46269b512774aa6bde7da34db48987 source=virtcontainers/hypervisor subsystem=qemu Mar 11 11:57:28 openshift-worker-0 kata[2241647]: time=\"2023-03-11T11:57:28.607339014Z\" level=error msg=\"qemu-kvm: -machine q35,accel=kvm,kernel_irqchip=split,foo: Expected '=' after parameter 'foo'\" name=containerd-shim-v2 pid=2241647 qemuPid=2241693 sandbox=d1d4d68efc35e5ccb4331af73da459c13f46269b512774aa6bde7da34db48987 source=virtcontainers/hypervisor subsystem=qemu Mar 11 11:57:28 openshift-worker-0 kata[2241647]: time=\"2023-03-11T11:57:28.60890737Z\" level=info msg=\"Stop logging QEMU (qemuPid=2241693)\" name=containerd-shim-v2 pid=2241647 sandbox=d1d4d68efc35e5ccb4331af73da459c13f46269b512774aa6bde7da34db48987 source=virtcontainers/hypervisor subsystem=qemu", "oc debug node/<nodename> -- journalctl -D /host/var/log/journal -t kata -g \"reading guest console\"", "oc debug node/<nodename> -- journalctl -D /host/var/log/journal -t kata", "oc debug node/<nodename> -- journalctl -D /host/var/log/journal -t virtiofsd", "oc debug node/<nodename> -- journalctl -D /host/var/log/journal -t kata -g \"qemuPid=\\d+\"" ]
https://docs.redhat.com/en/documentation/openshift_sandboxed_containers/1.8/html/user_guide/troubleshooting
Chapter 5. Running the AMQ Broker examples
Chapter 5. Running the AMQ Broker examples AMQ Broker ships with many example programs that demonstrate basic and advanced features of the product. You can run these examples to become familiar with the capabilities of AMQ Broker. To run the AMQ Broker examples, you must first set up your machine by installing and configuring Apache Maven and the AMQ Maven repository. Then, you use Maven to run the AMQ Broker example programs. 5.1. Setting up your machine to run the AMQ Broker examples Before you can run the included AMQ Broker example programs, you must first download and install Maven and the AMQ Maven repository, and configure the Maven settings file. 5.1.1. Downloading and installing Maven Maven is required to run the AMQ Broker examples. Procedure Go to the Apache Maven Download page and download the latest distribution for your operating system. Install Maven for your operating system. For more information, see Installing Apache Maven . Additional resources For more information about Maven, see Introduction to Apache Maven . 5.1.2. Downloading and installing the AMQ Maven repository After Maven is installed on your machine, you download and install the AMQ Maven repository. This repository is available on the Red Hat Customer Portal. In a web browser, navigate to https://access.redhat.com/downloads/ and log in. The Product Downloads page is displayed. In the Integration and Automation section, click the Red Hat AMQ Broker link. The Software Downloads page is displayed. Select the desired AMQ Broker version from the Version drop-down menu. On the Releases tab, click the Download link for the AMQ Broker Maven Repository. The AMQ Maven repository file is downloaded as a zip file. On your machine, unzip the AMQ repository file into a directory of your choosing. A new directory is created on your machine, which contains the Maven repository in a subdirectory named maven-repository/ . 5.1.3. Configuring the Maven settings file After downloading and installing the AMQ Maven repository, you must add the repository to the Maven settings file. Procedure Open the Maven settings.xml file. The settings.xml file is typically located in the USD{user.home}/.m2/ directory. For Linux, this is ~/.m2/ For Windows, this is \Documents and Settings\.m2\ or \Users\.m2\ If you do not find a settings.xml file in USD{user.home}/.m2/ , there is a default version located in the conf/ directory of your Maven installation. Copy the default settings.xml file into the USD{user.home}/.m2/ directory. In the <profiles> element, add a profile for the AMQ Maven repository. <!-- Configure the JBoss AMQ Maven repository --> <profile> <id>jboss-amq-maven-repository</id> <repositories> <repository> <id>jboss-amq-maven-repository</id> <url>file:// <JBoss-AMQ-repository-path> </url> 1 <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>jboss-amq-maven-repository</id> <url>file:// <JBoss-AMQ-repository-path> </url> 2 <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> </pluginRepositories> </profile> 1 2 Replace <JBoss-AMQ-repository-path> with the location of the Maven repository that you installed. Typically, this location ends with /maven-repository . 
For example: <url>file:///path/to/repo/amq-broker-7.2.0-maven-repository/maven-repository</url> In the <activeProfiles> element, set the AMQ Maven repository to be active: <activeProfiles> <activeProfile>jboss-amq-maven-repository</activeProfile> ... </activeProfiles> If you copied the default settings.xml from your Maven installation, uncomment the <active-profiles> section if it was commented out by default. Save and close settings.xml . Remove the cached USD{user.home}/.m2/repository/ directory. If your Maven repository contains outdated artifacts, you may encounter one of the following Maven error messages when you build or deploy your project: Missing artifact <artifact-name> [ERROR] Failed to execute goal on project <project-name>; Could not resolve dependencies for <project-name> 5.2. AMQ Broker example programs The AMQ Broker examples demonstrate how to use AMQ Broker features and the supported messaging protocols. To find the example programs, see AMQ Broker example programs . The examples include the following: Features Broker-specific features such as: Clustered - examples showing load balancing and distribution capabilities HA - examples showing failover and reconnection capabilities Perf - examples allowing you to run a few performance tests on the server Standard - examples demonstrating various broker features Sub-modules - examples of integrated external modules Protocols Examples for each of the supported messaging protocols: AMQP MQTT OpenWire STOMP Additional resources For a description of each example program, see Examples in the Apache Artemis documentation. 5.3. Running an AMQ Broker example program Example programs for AMQ Broker demonstrate basic and advanced features of the product. You use Maven to run these programs. Prerequisites Your machine is set up to run the AMQ Broker examples. For more information, see Section 5.1, "Setting up your machine to run the AMQ Broker examples" . You downloaded the AMQ Broker example programs . Procedure Navigate to the directory of the example that you want to run. The following example assumes that you downloaded the examples to a directory called amq-broker-examples . USD cd amq-broker-examples/examples/features/standard/queue Use the mvn clean verify command to run the example program. Maven starts the broker and runs the example program. The first time you run the example program, Maven downloads any missing dependencies, which may take a while to run. In this case, the queue example program is run, which creates a producer, sends a test message, and then creates a consumer that receives the message: USD mvn clean verify [INFO] Scanning for projects... [INFO] [INFO] -------------< org.apache.activemq.examples.broker:queue >-------------- [INFO] Building ActiveMQ Artemis JMS Queue Example 2.6.1.amq-720004-redhat-1 [INFO] --------------------------------[ jar ]--------------------------------- ...
server-out:2018-12-05 16:37:57,023 INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.6.1.amq-720004-redhat-1 [0.0.0.0, nodeID=06f529d3-f8d6-11e8-9bea-0800271b03bd] [INFO] Server started [INFO] [INFO] --- artemis-maven-plugin:2.6.1.amq-720004-redhat-1:runClient (runClient) @ queue --- Sent message: This is a text message Received message: This is a text message [INFO] [INFO] --- artemis-maven-plugin:2.6.1.amq-720004-redhat-1:cli (stop) @ queue --- server-out:2018-12-05 16:37:59,519 INFO [org.apache.activemq.artemis.core.server] AMQ221002: Apache ActiveMQ Artemis Message Broker version 2.6.1.amq-720004-redhat-1 [06f529d3-f8d6-11e8-9bea-0800271b03bd] stopped, uptime 3.734 seconds server-out:Server stopped! [INFO] ------------------------------------------------------------------------ [INFO] BUILD SUCCESS [INFO] ------------------------------------------------------------------------ [INFO] Total time: 48.681 s [INFO] Finished at: 2018-12-05T16:37:59-05:00 [INFO] ------------------------------------------------------------------------ Note Some of the example programs use UDP clustering, and may not work in your environment by default. To run these examples successfully, redirect traffic directed to 224.0.0.0 to the loopback interface: USD sudo route add -net 224.0.0.0 netmask 240.0.0.0 dev lo
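Separately from running a specific example, it can be worth confirming that Maven has actually activated the AMQ repository profile configured in settings.xml; the standard Maven help plugin lists the active profiles (this is a general Maven check rather than an AMQ-specific command):
mvn help:active-profiles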
[ "<!-- Configure the JBoss AMQ Maven repository --> <profile> <id>jboss-amq-maven-repository</id> <repositories> <repository> <id>jboss-amq-maven-repository</id> <url>file:// <JBoss-AMQ-repository-path> </url> 1 <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>jboss-amq-maven-repository</id> <url>file:// <JBoss-AMQ-repository-path> </url> 2 <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> </pluginRepositories> </profile>", "<url>file:///path/to/repo/amq-broker-7.2.0-maven-repository/maven-repository</url>", "<activeProfiles> <activeProfile>jboss-amq-maven-repository</activeProfile> </activeProfiles>", "cd amq-broker-examples/examples/features/standard/queue", "mvn clean verify [INFO] Scanning for projects [INFO] [INFO] -------------< org.apache.activemq.examples.broker:queue >-------------- [INFO] Building ActiveMQ Artemis JMS Queue Example 2.6.1.amq-720004-redhat-1 [INFO] --------------------------------[ jar ]--------------------------------- server-out:2018-12-05 16:37:57,023 INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.6.1.amq-720004-redhat-1 [0.0.0.0, nodeID=06f529d3-f8d6-11e8-9bea-0800271b03bd] [INFO] Server started [INFO] [INFO] --- artemis-maven-plugin:2.6.1.amq-720004-redhat-1:runClient (runClient) @ queue --- Sent message: This is a text message Received message: This is a text message [INFO] [INFO] --- artemis-maven-plugin:2.6.1.amq-720004-redhat-1:cli (stop) @ queue --- server-out:2018-12-05 16:37:59,519 INFO [org.apache.activemq.artemis.core.server] AMQ221002: Apache ActiveMQ Artemis Message Broker version 2.6.1.amq-720004-redhat-1 [06f529d3-f8d6-11e8-9bea-0800271b03bd] stopped, uptime 3.734 seconds server-out:Server stopped! [INFO] ------------------------------------------------------------------------ [INFO] BUILD SUCCESS [INFO] ------------------------------------------------------------------------ [INFO] Total time: 48.681 s [INFO] Finished at: 2018-12-05T16:37:59-05:00 [INFO] ------------------------------------------------------------------------", "sudo route add -net 224.0.0.0 netmask 240.0.0.0 dev lo" ]
https://docs.redhat.com/en/documentation/red_hat_amq_broker/7.12/html/getting_started_with_amq_broker/running-broker-examples-getting-started
Preface
Preface Deploying applications to a container orchestration platform such as Red Hat OpenShift Container Platform provides a number of advantages from an operational perspective. For example, an update to the base image of an application can be made through a simple in-place upgrade with little to no disruption. Upgrading the required operating system of an application deployed to traditional virtual machines can be a much more disruptive and risky process. Although application and operator developers can provide many options to OpenShift Container Platform users regarding the deployment of the application, these configurations must be provided by the end user because they are dependent on the configuration of OpenShift Container Platform. For example, use of node labels in the OpenShift cluster can help ensure different workloads are run on specific nodes. This type of configuration must be provided by the user as the Ansible Automation Platform Operator has no way of inferring this.
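As a sketch of the node label case, a cluster administrator could label a worker node from the command line and then reference that label in the operator's custom resource; the label key and value here are arbitrary examples, and the exact field used for node placement depends on the operator version:
oc label node <worker_node_name> purpose=automation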
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/red_hat_ansible_automation_platform_performance_considerations_for_operator_based_installations/pr01
Chapter 22. Importing projects from Git repositories
Chapter 22. Importing projects from Git repositories Git is a distributed version control system. It implements revisions as commit objects. When you save your changes to a repository, a new commit object in the Git repository is created. Business Central uses Git to store project data, including assets such as rules and processes. When you create a project in Business Central, it is added to a Git repository that is embedded in Business Central. If you have projects in other Git repositories, you can import those projects into the Business Central Git repository through Business Central spaces. Prerequisites Red Hat Process Automation Manager projects exist in an external Git repository. You have the credentials required for read access to that external Git repository. Procedure In Business Central, click Menu Design Projects . Select or create the space into which you want to import the projects. The default space is MySpace . To import a project, do one of the following: Click Import Project . Select Import Project from the drop-down list. In the Import Project window, enter the URL and credentials for the Git repository that contains the projects that you want to import and click Import . The projects are added to the Business Central Git repository and are available from the current space.
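Before importing, it can be useful to confirm from a shell that the URL and credentials actually give read access to the external repository; git prompts for credentials if the repository is private, and the URL below is a placeholder:
git ls-remote https://<git-host>/<project>.git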
null
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/installing_and_configuring_red_hat_process_automation_manager/git-import-proc_install-on-eap
Release notes
Release notes OpenShift Container Platform 4.16 Highlights of what is new and what has changed with this OpenShift Container Platform release Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/release_notes/index
3.3. Install the Maven Repository
3.3. Install the Maven Repository There are three ways to install the required repositories: On your local file system ( Section 3.3.1, "Local File System Repository Installation" ). On Apache Web Server. With a Maven repository manager ( Section 3.3.2, "Maven Repository Manager Installation" ). Use the option that best suits your environment. 3.3.1. Local File System Repository Installation This option is best suited for initial testing in a small team. Follow the outlined procedure to extract the Red Hat JBoss Data Grid and JBoss Enterprise Application Platform Maven repositories to a directory in your local file system: Procedure 3.1. Local File System Repository Installation (JBoss Data Grid) Log Into the Customer Portal In a browser window, navigate to the Customer Portal page ( https://access.redhat.com/home ) and log in. Download the JBoss Data Grid Repository File Download the jboss-datagrid- {VERSION} -maven-repository.zip file from the Red Hat Customer Portal. Unzip the file to a directory on your local file system (for example USDJDG_HOME/projects/maven-repositories/ ). 3.3.2. Maven Repository Manager Installation This option is ideal if you are already using a repository manager. The Red Hat JBoss Data Grid and JBoss Enterprise Application Server repositories can be installed with a Maven repository manager by following that repository manager's documentation. Examples of such repository managers are: Apache Archiva: http://archiva.apache.org/ JFrog Artifactory: http://www.jfrog.com/products.php Sonatype Nexus: http://nexus.sonatype.org/ For details, see Section B.1, "Install the JBoss Enterprise Application Platform Repository Using Nexus" .
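A minimal shell sketch of the local file system option, assuming the repository zip has been downloaded to the current directory and using the example target directory from the procedure above:
mkdir -p $JDG_HOME/projects/maven-repositories/
unzip jboss-datagrid-{VERSION}-maven-repository.zip -d $JDG_HOME/projects/maven-repositories/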
null
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/getting_started_guide/sect-install_the_maven_repository
Chapter 110. Slack
Chapter 110. Slack Both producer and consumer are supported The Slack component allows you to connect to an instance of Slack and delivers a message contained in the message body via a pre established Slack incoming webhook . 110.1. Dependencies When using slack with Red Hat build of Camel Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-slack-starter</artifactId> </dependency> 110.2. URI format To send a message to a channel. To send a direct message to a slackuser. 110.3. Configuring Options Camel components are configured on two levels: Component level Endpoint level 110.3.1. Component Level Options The component level is the highest level. The configurations you define at this level are inherited by all the endpoints. For example, a component can have security settings, credentials for authentication, urls for network connection, and so on. Since components typically have pre-configured defaults for the most common cases, you may need to only configure a few component options, or maybe none at all. You can configure components with Component DSL in a configuration file (application.properties|yaml), or directly with Java code. 110.3.2. Endpoint Level Options At the Endpoint level you have many options, which you can use to configure what you want the endpoint to do. The options are categorized according to whether the endpoint is used as a consumer (from) or as a producer (to) or used for both. You can configure endpoints directly in the endpoint URI as path and query parameters. You can also use Endpoint DSL and DataFormat DSL as type safe ways of configuring endpoints and data formats in Java. When configuring options, use Property Placeholders for urls, port numbers, sensitive information, and other settings. Placeholders allows you to externalize the configuration from your code, giving you more flexible and reusable code. 110.4. Component Options The Slack component supports 5 options, which are listed below. Name Description Default Type bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. 
true boolean token (token) The token to use. String webhookUrl (webhook) The incoming webhook URL. String 110.5. Endpoint Options The Slack endpoint is configured using URI syntax: with the following path and query parameters: 110.5.1. Path Parameters (1 parameters) Name Description Default Type channel (common) Required The channel name (syntax #name) or slackuser (syntax userName) to send a message directly to an user. String 110.5.2. Query Parameters (29 parameters) Name Description Default Type token (common) The token to use. String bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean conversationType (consumer) Type of conversation. Enum values: PUBLIC_CHANNEL PRIVATE_CHANNEL MPIM IM PUBLIC_CHANNEL ConversationType maxResults (consumer) The Max Result for the poll. 10 String naturalOrder (consumer) Create exchanges in natural order (oldest to newest) or not. false boolean sendEmptyMessageWhenIdle (consumer) If the polling consumer did not poll any files, you can enable this option to send an empty message (no body) instead. false boolean serverUrl (consumer) The Server URL of the Slack instance. String exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut InOptionalOut ExchangePattern pollStrategy (consumer (advanced)) A pluggable org.apache.camel.PollingConsumerPollingStrategy allowing you to provide your custom implementation to control error handling usually occurred during the poll operation before an Exchange have been created and being routed in Camel. PollingConsumerPollStrategy iconEmoji (producer) Deprecated Use a Slack emoji as an avatar. String iconUrl (producer) Deprecated The avatar that the component will use when sending message to a channel or user. String lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false boolean username (producer) Deprecated This is the username that the bot will have when sending messages to a channel or user. String webhookUrl (producer) The incoming webhook URL. String backoffErrorThreshold (scheduler) The number of subsequent error polls (failed due some error) that should happen before the backoffMultipler should kick-in. 
int backoffIdleThreshold (scheduler) The number of subsequent idle polls that should happen before the backoffMultipler should kick-in. int backoffMultiplier (scheduler) To let the scheduled polling consumer backoff if there has been a number of subsequent idles/errors in a row. The multiplier is then the number of polls that will be skipped before the actual attempt is happening again. When this option is in use then backoffIdleThreshold and/or backoffErrorThreshold must also be configured. int delay (scheduler) Milliseconds before the poll. 500 long greedy (scheduler) If greedy is enabled, then the ScheduledPollConsumer will run immediately again, if the run polled 1 or more messages. false boolean initialDelay (scheduler) Milliseconds before the first poll starts. 1000 long repeatCount (scheduler) Specifies a maximum limit of number of fires. So if you set it to 1, the scheduler will only fire once. If you set it to 5, it will only fire five times. A value of zero or negative means fire forever. 0 long runLoggingLevel (scheduler) The consumer logs a start/complete log line when it polls. This option allows you to configure the logging level for that. Enum values: TRACE DEBUG INFO WARN ERROR OFF TRACE LoggingLevel scheduledExecutorService (scheduler) Allows for configuring a custom/shared thread pool to use for the consumer. By default each consumer has its own single threaded thread pool. ScheduledExecutorService scheduler (scheduler) To use a cron scheduler from either camel-spring or camel-quartz component. Use value spring or quartz for built in scheduler. none Object schedulerProperties (scheduler) To configure additional properties when using a custom scheduler or any of the Quartz, Spring based scheduler. Map startScheduler (scheduler) Whether the scheduler should be auto started. true boolean timeUnit (scheduler) Time unit for initialDelay and delay options. Enum values: NANOSECONDS MICROSECONDS MILLISECONDS SECONDS MINUTES HOURS DAYS MILLISECONDS TimeUnit useFixedDelay (scheduler) Controls if fixed delay or fixed rate is used. See ScheduledExecutorService in JDK for details. true boolean 110.6. Configuring in Sprint XML The Slack component with XML must be configured as a Spring or Blueprint bean that contains the incoming webhook url or the app token for the integration as a parameter. <bean id="slack" class="org.apache.camel.component.slack.SlackComponent"> <property name="webhookUrl" value="https://hooks.slack.com/services/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"/> <property name="token" value="xoxb-12345678901-1234567890123-xxxxxxxxxxxxxxxxxxxxxxxx"/> </bean> For Java you can configure this using Java code. 110.7. Example A CamelContext with Blueprint could be as: <?xml version="1.0" encoding="UTF-8"?> <blueprint xmlns="http://www.osgi.org/xmlns/blueprint/v1.0.0" default-activation="lazy"> <bean id="slack" class="org.apache.camel.component.slack.SlackComponent"> <property name="webhookUrl" value="https://hooks.slack.com/services/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"/> </bean> <camelContext xmlns="http://camel.apache.org/schema/blueprint"> <route> <from uri="direct:test"/> <to uri="slack:#channel?iconEmoji=:camel:&amp;username=CamelTest"/> </route> </camelContext> </blueprint> 110.8. Producer You can now use a token to send a message instead of WebhookUrl. from("direct:test") .to("slack:#random?token=RAW(<YOUR_TOKEN>)"); You can now use the Slack API model to create blocks. You can read more about it here https://api.slack.com/block-kit . 
public void testSlackAPIModelMessage() { Message message = new Message(); message.setBlocks(Collections.singletonList(SectionBlock .builder() .text(MarkdownTextObject .builder() .text("*Hello from Camel!*") .build()) .build())); template.sendBody(test, message); } 110.9. Consumer You can also use a consumer to receive messages from a channel. from("slack://general?token=RAW(<YOUR_TOKEN>)&maxResults=1") .to("mock:result"); In this way, you'll get the last message from the general channel. The consumer keeps track of the timestamp of the last message consumed, and on each poll it checks for messages newer than that timestamp. You'll need to create a Slack app and install it in your workspace. Use the 'Bot User OAuth Access Token' as the token for the consumer endpoint. Note Add the corresponding history ( channels:history or groups:history or mpim:history or im:history ) and read ( channels:read or groups:read or mpim:read or im:read ) user token scope to your app to grant it permission to view messages in the corresponding channel. You will need to use the conversationType option to set it up too ( PUBLIC_CHANNEL , PRIVATE_CHANNEL , MPIM , IM ). The naturalOrder option allows consuming messages from the oldest to the newest. By default, you would get the newest first and consume backward (message 3 ⇒ message 2 ⇒ message 1). Note You can use the conversationType option to read history and messages from a channel that is not only public ( PUBLIC_CHANNEL , PRIVATE_CHANNEL , MPIM , IM ). 110.10. Spring Boot Auto-Configuration The component supports 6 options, which are listed below. Name Description Default Type camel.component.slack.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.slack.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.slack.enabled Whether to enable auto configuration of the slack component. This is enabled by default. Boolean camel.component.slack.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.slack.token The token to use. String camel.component.slack.webhook-url The incoming webhook URL. String
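As noted in Section 110.6, the component can also be configured in Java code. The following is a minimal sketch of doing this with Spring Boot Java configuration; the SlackComponentConfig class name is illustrative, and the webhook URL and token values are the same placeholders used in the XML examples above, not real credentials.

import org.apache.camel.component.slack.SlackComponent;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class SlackComponentConfig {

    // Java equivalent of the Spring XML/Blueprint bean shown in Section 110.6.
    // Camel binds a bean named "slack" of type SlackComponent to the slack: URI scheme.
    @Bean("slack")
    public SlackComponent slackComponent() {
        SlackComponent slack = new SlackComponent();
        slack.setWebhookUrl("https://hooks.slack.com/services/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");
        slack.setToken("xoxb-12345678901-1234567890123-xxxxxxxxxxxxxxxxxxxxxxxx");
        return slack;
    }
}

Alternatively, on Spring Boot the same values can be supplied through the camel.component.slack.token and camel.component.slack.webhook-url properties listed in the auto-configuration table above.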
[ "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-slack-starter</artifactId> </dependency>", "slack:#channel[?options]", "slack:@userID[?options]", "slack:channel", "<bean id=\"slack\" class=\"org.apache.camel.component.slack.SlackComponent\"> <property name=\"webhookUrl\" value=\"https://hooks.slack.com/services/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\"/> <property name=\"token\" value=\"xoxb-12345678901-1234567890123-xxxxxxxxxxxxxxxxxxxxxxxx\"/> </bean>", "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <blueprint xmlns=\"http://www.osgi.org/xmlns/blueprint/v1.0.0\" default-activation=\"lazy\"> <bean id=\"slack\" class=\"org.apache.camel.component.slack.SlackComponent\"> <property name=\"webhookUrl\" value=\"https://hooks.slack.com/services/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\"/> </bean> <camelContext xmlns=\"http://camel.apache.org/schema/blueprint\"> <route> <from uri=\"direct:test\"/> <to uri=\"slack:#channel?iconEmoji=:camel:&amp;username=CamelTest\"/> </route> </camelContext> </blueprint>", "from(\"direct:test\") .to(\"slack:#random?token=RAW(<YOUR_TOKEN>)\");", "public void testSlackAPIModelMessage() { Message message = new Message(); message.setBlocks(Collections.singletonList(SectionBlock .builder() .text(MarkdownTextObject .builder() .text(\"*Hello from Camel!*\") .build()) .build())); template.sendBody(test, message); }", "from(\"slack://general?token=RAW(<YOUR_TOKEN>)&maxResults=1\") .to(\"mock:result\");" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.0/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-slack-component-starter
Chapter 14. Using Red Hat build of OptaPlanner in an IDE: an employee rostering example
Chapter 14. Using Red Hat build of OptaPlanner in an IDE: an employee rostering example As a business rules developer, you can use an IDE to build, run, and modify the optaweb-employee-rostering starter application that uses the Red Hat build of OptaPlanner functionality. Prerequisites You use an integrated development environment, such as Red Hat CodeReady Studio or IntelliJ IDEA. You have an understanding of the Java language. You have an understanding of React and TypeScript. This requirement is necessary to develop the OptaWeb UI. 14.1. Overview of the employee rostering starter application The employee rostering starter application assigns employees to shifts on various positions in an organization. For example, you can use the application to distribute shifts in a hospital between nurses, guard duty shifts across a number of locations, or shifts on an assembly line between workers. Optimal employee rostering must take a number of variables into account. For example, different skills can be required for shifts in different positions. Also, some employees might be unavailable for some time slots or might prefer a particular time slot. Moreover, an employee can have a contract that limits the number of hours that the employee can work in a single time period. The Red Hat build of OptaPlanner rules for this starter application use both hard and soft constraints. During an optimization, the planning engine may not violate hard constraints, for example, if an employee is unavailable (out sick), or that an employee cannot work two spots in a single shift. The planning engine tries to adhere to soft constraints, such as an employee's preference to not work a specific shift, but can violate them if the optimal solution requires it. 14.2. Building and running the employee rostering starter application You can build the employee rostering starter application from the source code and run it as a JAR file. Alternatively, you can use your IDE, for example, Eclipse (including Red Hat CodeReady Studio), to build and run the application. 14.2.1. Preparing deployment files You must download and prepare the deployment files before building and deploying the application. Procedure Navigate to the Software Downloads page in the Red Hat Customer Portal (login required), and select the product and version from the drop-down options: Product: Process Automation Manager Version: 7.13.5 Download Red Hat Process Automation Manager 7.13.5 Kogito and OptaPlanner 8 Decision Services Quickstarts ( rhpam-7.13.5-kogito-and-optaplanner-quickstarts.zip ). Extract the rhpam-7.13.5-kogito-and-optaplanner-quickstarts.zip file. Download Red Hat Process Automation Manager 7.13 Maven Repository Kogito and OptaPlanner 8 Maven Repository ( rhpam-7.13.5-kogito-maven-repository.zip ). Extract the rhpam-7.13.5-kogito-maven-repository.zip file. Copy the contents of the rhpam-7.13.5-kogito-maven-repository/maven-repository subdirectory into the ~/.m2/repository directory. Navigate to the optaweb-8.13.0.Final-redhat-00013/optaweb-employee-rostering directory. This folder is the base folder in subsequent parts of this document. Note File and folder names might have higher version numbers than specifically noted in this document. 14.2.2. Running the Employee Rostering starter application JAR file You can run the Employee Rostering starter application from a JAR file included in the Red Hat Process Automation Manager 7.13.5 Kogito and OptaPlanner 8 Decision Services Quickstarts download. 
Prerequisites You have downloaded and extracted the rhpam-7.13.5-kogito-and-optaplanner-quickstarts.zip file as described in Section 14.2.1, "Preparing deployment files" . A Java Development Kit is installed. Maven is installed. The host has access to the Internet. The build process uses the Internet for downloading Maven packages from external repositories. Procedure In a command terminal, change to the rhpam-7.13.5-kogito-and-optaplanner-quickstarts/optaweb-8.13.0.Final-redhat-00013/optaweb-employee-rostering directory. Enter the following command: mvn clean install -DskipTests Wait for the build process to complete. Navigate to the rhpam-7.13.5-kogito-and-optaplanner-quickstarts/optaweb-8.13.0.Final-redhat-00013/optaweb-employee-rostering/optaweb-employee-rostering-standalone/target directory. Enter the following command to run the Employee Rostering JAR file: java -jar quarkus-app/quarkus-run.jar Note The value of the quarkus.datasource.db-kind parameter is set to H2 by default at build time. To use a different database, you must rebuild the standalone module and specify the database type on the command line. For example, to use a PostgreSQL database, enter the following command: mvn clean install -DskipTests -Dquarkus.profile=postgres To access the application, enter http://localhost:8080/ in a web browser. 14.2.3. Building and running the Employee Rostering starter application using Maven You can use the command line to build and run the employee rostering starter application. If you use this procedure, the data is stored in memory and is lost when the server is stopped. To build and run the application with a database server for persistent storage, see Section 14.2.4, "Building and running the employee rostering starter application with persistent data storage from the command line" . Prerequisites You have prepared the deployment files as described in Section 14.2.1, "Preparing deployment files" . A Java Development Kit is installed. Maven is installed. The host has access to the Internet. The build process uses the Internet for downloading Maven packages from external repositories. Procedure Navigate to the optaweb-employee-rostering-backend directory. Enter the following command: mvn quarkus:dev Navigate to the optaweb-employee-rostering-frontend directory. Enter the following command: npm start Note If you use npm to start the server, npm monitors code changes. To access the application, enter http://localhost:3000/ in a web browser. 14.2.4. Building and running the employee rostering starter application with persistent data storage from the command line If you use the command line to build the employee rostering starter application and run it, you can provide a database server for persistent data storage. Prerequisites You have prepared the deployment files as described in Section 14.2.1, "Preparing deployment files" . A Java Development Kit is installed. Maven is installed. The host has access to the Internet. The build process uses the Internet for downloading Maven packages from external repositories. You have a deployed MySQL or PostgreSQL database server. Procedure In a command terminal, navigate to the optaweb-employee-rostering-standalone/target directory.
Enter the following command to run the Employee Rostering JAR file: java \ -Dquarkus.datasource.username=<DATABASE_USER> \ -Dquarkus.datasource.password=<DATABASE_PASSWORD> \ -Dquarkus.datasource.jdbc.url=<DATABASE_URL> \ -jar quarkus-app/quarkus-run.jar In this example, replace the following placeholders: <DATABASE_URL> : URL to connect to the database <DATABASE_USER> : The user to connect to the database <DATABASE_PASSWORD> : The password for <DATABASE_USER> Note The value of the quarkus.datasource.db-kind parameter is set to H2 by default at build time. To use a different database, you must rebuild the standalone module and specify the database type on the command line. For example, to use a PostgreSQL database, enter the following command: mvn clean install -DskipTests -Dquarkus.profile=postgres 14.2.5. Building and running the employee rostering starter application using IntelliJ IDEA You can use IntelliJ IDEA to build and run the employee rostering starter application. Prerequisites You have downloaded the Employee Rostering source code, available from the Employee Rostering GitHub page. IntelliJ IDEA, Maven, and Node.js are installed. The host has access to the Internet. The build process uses the Internet for downloading Maven packages from external repositories. Procedure Start IntelliJ IDEA. From the IntelliJ IDEA main menu, select File Open . Select the root directory of the application source and click OK . From the main menu, select Run Edit Configurations . In the window that appears, expand Templates and select Maven . The Maven sidebar appears. In the Maven sidebar, select optaweb-employee-rostering-backend from the Working Directory menu. In Command Line , enter mvn quarkus:dev . To start the back end, click OK . In a command terminal, navigate to the optaweb-employee-rostering-frontend directory. Enter the following command to start the front end: To access the application, enter http://localhost:3000/ in a web browser. 14.3. Overview of the source code of the employee rostering starter application The employee rostering starter application consists of the following principal components: A backend that implements the rostering logic using Red Hat build of OptaPlanner and provides a REST API A frontend module that implements a user interface using React and interacts with the backend module through the REST API You can build and use these components independently. In particular, you can implement a different user interface and use the REST API to call the server. In addition to the two main components, the employee rostering template contains a generator of random source data (useful for demonstration and testing purposes) and a benchmarking application. Modules and key classes The Java source code of the employee rostering template contains several Maven modules. Each of these modules includes a separate Maven project file ( pom.xml ), but they are intended for building in a common project. The modules contain a number of files, including Java classes. This document lists all the modules, as well as the classes and other files that contain the key information for the employee rostering calculations. optaweb-employee-rostering-benchmark module: Contains an additional application that generates random data and benchmarks the solution. optaweb-employee-rostering-distribution module: Contains README files. optaweb-employee-rostering-docs module: Contains documentation files. 
optaweb-employee-rostering-frontend module: Contains the client application with the user interface, developed in React. optaweb-employee-rostering-backend module: Contains the server application that uses OptaPlanner to perform the rostering calculation. src/main/java/org.optaweb.employeerostering.service.roster/rosterGenerator.java : Generates random input data for demonstration and testing purposes. If you change the required input data, change the generator accordingly. src/main/java/org.optaweb.employeerostering.domain.employee/EmployeeAvailability.java : Defines availability information for an employee. For every time slot, an employee can be unavailable, available, or the time slot can be designated a preferred time slot for the employee. src/main/java/org.optaweb.employeerostering.domain.employee/Employee.java : Defines an employee. An employee has a name, a list of skills, and works under a contract. Skills are represented by skill objects. src/main/java/org.optaweb.employeerostering.domain.roster/Roster.java : Defines the calculated rostering information. src/main/java/org.optaweb.employeerostering.domain.shift/Shift.java : Defines a shift to which an employee can be assigned. A shift is defined by a time slot and a spot. For example, in a diner there could be a shift in the Kitchen spot for the February 20 8AM-4PM time slot. Multiple shifts can be defined for a specific spot and time slot. In this case, multiple employees are required for this spot and time slot. src/main/java/org.optaweb.employeerostering.domain.skill/Skill.java : Defines a skill that an employee can have. src/main/java/org.optaweb.employeerostering.domain.spot/Spot.java : Defines a spot where employees can be placed. For example, a Kitchen can be a spot. src/main/java/org.optaweb.employeerostering.domain.contract/Contract.java : Defines a contract that sets limits on work time for an employee in various time periods. src/main/java/org.optaweb.employeerostering.domain.tenant/Tenant.java : Defines a tenant. Each tenant represents an independent set of data. Changes in the data for one tenant do not affect any other tenants. *View.java : Classes related to domain objects that define value sets that are calculated from other information; the client application can read these values through the REST API, but not write them. *Service.java : Interfaces located in the service package that define the REST API. Both the server and the client application separately define implementations of these interfaces. optaweb-employee-rostering-standalone module: Contains the assembly configurations for the standalone application. 14.4. Modifying the employee rostering starter application To modify the employee rostering starter application to suit your needs, you must change the rules that govern the optimization process. You must also ensure that the data structures include the required data and provide the required calculations for the rules. If the required data is not present in the user interface, you must also modify the user interface. The following procedure outlines the general approach to modifying the employee rostering starter application. Prerequisites You have a build environment that successfully builds the application. You can read and modify Java code. Procedure Plan the required changes. Answer the following questions: What are the additional scenarios that must be avoided? These scenarios are hard constraints . What are the additional scenarios that the optimizer must try to avoid when possible? 
These scenarios are soft constraints . What data is required to calculate whether each scenario is happening in a potential solution? Which of the data can be derived from the information that the user enters in the existing version? Which of the data can be hardcoded? Which of the data must be entered by the user and is not entered in the current version? If any required data can be calculated from the current data or can be hardcoded, add the calculations or hardcoding to existing view or utility classes. If the data must be calculated on the server side, add REST API endpoints to read it. If any required data must be entered by the user, add the data to the classes representing the data entities (for example, the Employee class), add REST API endpoints to read and write the data, and modify the user interface to enter the data. When all of the data is available, modify the rules. For most modifications, you must add a new rule. The rules are located in the src/main/java/org/optaweb/employeerostering/service/solver/EmployeeRosteringConstraintProvider.java file of the optaweb-employee-rostering-backend module. A sketch of a typical constraint is shown after this procedure. After modifying the application, build and run it.
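For reference, rules in EmployeeRosteringConstraintProvider.java are written with the OptaPlanner constraint streams API. The following is a minimal sketch of a hard constraint that penalizes assigning one employee to overlapping shifts; the Shift accessors (getEmployee, getStartDateTime, getEndDateTime) and the HardMediumSoftLongScore score type are assumptions based on the starter application's domain and may differ from the actual classes.

import org.optaplanner.core.api.score.buildin.hardmediumsoftlong.HardMediumSoftLongScore;
import org.optaplanner.core.api.score.stream.Constraint;
import org.optaplanner.core.api.score.stream.ConstraintFactory;
import org.optaplanner.core.api.score.stream.Joiners;

// Illustrative constraint method; in the starter application it would be added to
// EmployeeRosteringConstraintProvider and returned from defineConstraints().
Constraint noOverlappingShiftsForOneEmployee(ConstraintFactory constraintFactory) {
    // Hard constraint: penalize every pair of shifts that are assigned to the same
    // employee and whose time windows overlap.
    return constraintFactory
            .forEachUniquePair(Shift.class,
                    Joiners.equal(Shift::getEmployee),
                    Joiners.overlapping(Shift::getStartDateTime, Shift::getEndDateTime))
            .penalize("Employee assigned to overlapping shifts", HardMediumSoftLongScore.ONE_HARD);
}

Soft constraints follow the same pattern but penalize at a soft score level (for example, HardMediumSoftLongScore.ONE_SOFT), so the solver can trade them off against each other.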
[ "mvn clean install -DskipTests", "java -jar quarkus-app/quarkus-run.jar", "mvn quarkus:dev", "npm start", "java -Dquarkus.datasource.username=<DATABASE_USER> -Dquarkus.datasource.password=<DATABASE_PASSWORD> -Dquarkus.datasource.jdbc.url=<DATABASE_URL> -jar quarkus-app/quarkus-run.jar", "npm start" ]
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/developing_solvers_with_red_hat_build_of_optaplanner_in_red_hat_process_automation_manager/assembly-optimizer-modifying-ER-template-IDE
Chapter 53. Desktop
Chapter 53. Desktop Firefox 60.1 ESR fails to start on IBM Z and POWER The JavaScript engine in the Firefox 60.1 Extended Support Release (ESR) browser was changed. As a consequence, Firefox 60.1 ESR on IBM Z and POWER architectures fails to start with a segmentation fault error message. (BZ# 1576289 , BZ#1579705) GV100GL graphics cannot correctly use more than one monitor Due to missing signed firmware for the GV100GL graphics, GV100GL cannot have more than one monitor connected. When a second monitor is connected, it is recognized, and graphics set the correct resolution, but the monitor stays in power-saving mode. To work around this problem, install the NVIDIA binary driver. As a result, the second monitor output works as expected under the described circumstances. (BZ# 1624337 ) The Files application cannot burn disks in the default installation The default installation of the Files application does not include the brasero-nautilus package necessary for burning CDs or DVDs. As a consequence, the Files application allows files to be dragged and dropped into CD or DVD devices but no content is burned to the CD or DVD. As a workaround, install the brasero-nautilus package by running: (BZ# 1600163 ) The on-screen keyboard feature is not visible in GTK applications After enabling the on-screen keyboard feature by using the Settings - Universal Access - Typing - Screen keyboard menu, the on-screen keyboard is not accessible in GIMP Toolkit (GTK) applications, such as gedit . To work around this problem, add the following line to the /etc/environment configuration file, and restart GNOME: (BZ# 1625700 ) 32- and 64-bit fwupd packages cannot be used together when installing or upgrading the system The /usr/lib/systemd/system/fwupd.service file in the fwupd packages is different for 32- and 64-bit architectures. Consequently, it is impossible to install both 32- and 64-bit fwupd packages or to upgrade a Red Hat Enterprise Linux 7.5 system with both 32- and 64-bit fwupd packages to Red Hat Enterprise Linux 7.6. To work around this problem: Either do not install multilibrary fwupd packages. Or remove the 32-bit or the 64-bit fwupd package before upgrading from Red Hat Enterprise Linux 7.5 to Red Hat Enterprise Linux 7.6. (BZ#1623466) Installation in and booting into graphical mode are not possible on Huawei servers When installing RHEL 7.6 in graphical mode on Huawei servers with AMD64 and Intel 64 processors, the screen becomes blurred and the install interface is no longer visible. After finishing the installation in console mode, the operating system cannot be booted into graphical mode. To work around this problem: 1. Add the kernel command-line parameter inst.xdriver=fbdev when installing the system, and install the system as server with GUI . 2. After the installation completes, reboot and add the kernel command-line parameter single to make the system boot into maintenance mode. 3. Run the following commands: (BZ# 1624847 ) X.org server crashes during fast user switching The X.Org X11 qxl video driver does not emulate the leaving virtual terminal event on shutdown. Consequently, the X.Org display server terminates unexpectedly during fast user switching, and the current user session is terminated when switching a user. (BZ# 1640918 ) X.org X11 crashes on Lenovo T580 Due to a bug in the libpciaccess library, the X.org X11 server terminates unexpectedly on Lenovo T580 laptops.
(BZ# 1641044 ) Soft lock-ups might occur during boot in the kernel with i915 On a rare occasion when a GM45 system has an improper firmware configuration, an incorrect DisplayPort hot-plug signal can cause the i915 driver to be overloaded on boot. Consequently, certain GM45 systems might experience very slow boot times while the video driver attempts to work around the problem. In some cases, the kernel might report soft lock-ups occurring. Customers are advised to contact their hardware vendors and request a firmware update to address this problem. (BZ#1608704) System boots to a blank screen when Xinerama is enabled When the Xinerama extension is enabled in /etc/X11/xorg.conf on a system using the nvidia/nouveau driver, the RANDR X extension gets disabled. Consequently, login screen fails to start upon boot due to the RANDR X extension being disabled. To work around this problem, do not enable Xinerama in /etc/X11/xorg.conf . (BZ# 1579257 )
[ "yum install brasero-nautilus", "GTK_IM_MODULE=ibus", "-e xorg-x11-drivers -e xorg-x11-drv-vesa init 5" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.6_release_notes/known_issues_desktop
Chapter 20. Configuring Routes
Chapter 20. Configuring Routes 20.1. Route configuration 20.1.1. Creating an HTTP-based route A route allows you to host your application at a public URL. It can either be secure or unsecured, depending on the network security configuration of your application. An HTTP-based route is an unsecured route that uses the basic HTTP routing protocol and exposes a service on an unsecured application port. The following procedure describes how to create a simple HTTP-based route to a web application, using the hello-openshift application as an example. Prerequisites You installed the OpenShift CLI ( oc ). You are logged in as an administrator. You have a web application that exposes a port and a TCP endpoint listening for traffic on the port. Procedure Create a project called hello-openshift by running the following command: USD oc new-project hello-openshift Create a pod in the project by running the following command: USD oc create -f https://raw.githubusercontent.com/openshift/origin/master/examples/hello-openshift/hello-pod.json Create a service called hello-openshift by running the following command: USD oc expose pod/hello-openshift Create an unsecured route to the hello-openshift application by running the following command: USD oc expose svc hello-openshift Verification To verify that the route resource that you created, run the following command: USD oc get routes -o yaml <name of resource> 1 1 In this example, the route is named hello-openshift . Sample YAML definition of the created unsecured route apiVersion: route.openshift.io/v1 kind: Route metadata: name: hello-openshift spec: host: www.example.com 1 port: targetPort: 8080 2 to: kind: Service name: hello-openshift 1 The host field is an alias DNS record that points to the service. This field can be any valid DNS name, such as www.example.com . The DNS name must follow DNS952 subdomain conventions. If not specified, a route name is automatically generated. 2 The targetPort field is the target port on pods that is selected by the service that this route points to. Note To display your default ingress domain, run the following command: USD oc get ingresses.config/cluster -o jsonpath={.spec.domain} 20.1.2. Creating a route for Ingress Controller sharding A route allows you to host your application at a URL. In this case, the hostname is not set and the route uses a subdomain instead. When you specify a subdomain, you automatically use the domain of the Ingress Controller that exposes the route. For situations where a route is exposed by multiple Ingress Controllers, the route is hosted at multiple URLs. The following procedure describes how to create a route for Ingress Controller sharding, using the hello-openshift application as an example. Ingress Controller sharding is useful when balancing incoming traffic load among a set of Ingress Controllers and when isolating traffic to a specific Ingress Controller. For example, company A goes to one Ingress Controller and company B to another. Prerequisites You installed the OpenShift CLI ( oc ). You are logged in as a project administrator. You have a web application that exposes a port and an HTTP or TLS endpoint listening for traffic on the port. You have configured the Ingress Controller for sharding. 
Procedure Create a project called hello-openshift by running the following command: USD oc new-project hello-openshift Create a pod in the project by running the following command: USD oc create -f https://raw.githubusercontent.com/openshift/origin/master/examples/hello-openshift/hello-pod.json Create a service called hello-openshift by running the following command: USD oc expose pod/hello-openshift Create a route definition called hello-openshift-route.yaml : YAML definition of the created route for sharding apiVersion: route.openshift.io/v1 kind: Route metadata: labels: type: sharded 1 name: hello-openshift-edge namespace: hello-openshift spec: subdomain: hello-openshift 2 tls: termination: edge to: kind: Service name: hello-openshift 1 Both the label key and its corresponding label value must match the ones specified in the Ingress Controller. In this example, the Ingress Controller has the label key and value type: sharded . 2 The route will be exposed using the value of the subdomain field. When you specify the subdomain field, you must leave the hostname unset. If you specify both the host and subdomain fields, then the route will use the value of the host field, and ignore the subdomain field. Use hello-openshift-route.yaml to create a route to the hello-openshift application by running the following command: USD oc -n hello-openshift create -f hello-openshift-route.yaml Verification Get the status of the route with the following command: USD oc -n hello-openshift get routes/hello-openshift-edge -o yaml The resulting Route resource should look similar to the following: Example output apiVersion: route.openshift.io/v1 kind: Route metadata: labels: type: sharded name: hello-openshift-edge namespace: hello-openshift spec: subdomain: hello-openshift tls: termination: edge to: kind: Service name: hello-openshift status: ingress: - host: hello-openshift.<apps-sharded.basedomain.example.net> 1 routerCanonicalHostname: router-sharded.<apps-sharded.basedomain.example.net> 2 routerName: sharded 3 1 The hostname the Ingress Controller, or router, uses to expose the route. The value of the host field is automatically determined by the Ingress Controller, and uses its domain. In this example, the domain of the Ingress Controller is <apps-sharded.basedomain.example.net> . 2 The hostname of the Ingress Controller. 3 The name of the Ingress Controller. In this example, the Ingress Controller has the name sharded . 20.1.3. Configuring route timeouts You can configure the default timeouts for an existing route when you have services in need of a low timeout, which is required for Service Level Availability (SLA) purposes, or a high timeout, for cases with a slow back end. Prerequisites You need a deployed Ingress Controller on a running cluster. Procedure Using the oc annotate command, add the timeout to the route: USD oc annotate route <route_name> \ --overwrite haproxy.router.openshift.io/timeout=<timeout><time_unit> 1 1 Supported time units are microseconds (us), milliseconds (ms), seconds (s), minutes (m), hours (h), or days (d). The following example sets a timeout of two seconds on a route named myroute : USD oc annotate route myroute --overwrite haproxy.router.openshift.io/timeout=2s 20.1.4. HTTP Strict Transport Security HTTP Strict Transport Security (HSTS) policy is a security enhancement, which signals to the browser client that only HTTPS traffic is allowed on the route host. HSTS also optimizes web traffic by signaling HTTPS transport is required, without using HTTP redirects. 
HSTS is useful for speeding up interactions with websites. When an HSTS policy is enforced, HSTS adds a Strict-Transport-Security header to HTTP and HTTPS responses from the site. You can use the insecureEdgeTerminationPolicy value in a route to redirect HTTP to HTTPS. When HSTS is enforced, the client changes all requests from the HTTP URL to HTTPS before the request is sent, eliminating the need for a redirect. Cluster administrators can configure HSTS to do the following: Enable HSTS per-route Disable HSTS per-route Enforce HSTS per-domain, for a set of domains, or use namespace labels in combination with domains Important HSTS works only with secure routes, either edge-terminated or re-encrypt. The configuration is ineffective on HTTP or passthrough routes. 20.1.4.1. Enabling HTTP Strict Transport Security per-route HTTP strict transport security (HSTS) is implemented in the HAProxy template and applied to edge and re-encrypt routes that have the haproxy.router.openshift.io/hsts_header annotation. Prerequisites You are logged in to the cluster with a user with administrator privileges for the project. You installed the OpenShift CLI ( oc ). Procedure To enable HSTS on a route, add the haproxy.router.openshift.io/hsts_header value to the edge-terminated or re-encrypt route. You can use the oc annotate tool to do this by running the following command: USD oc annotate route <route_name> -n <namespace> --overwrite=true "haproxy.router.openshift.io/hsts_header"="max-age=31536000;\ 1 includeSubDomains;preload" 1 In this example, the maximum age is set to 31536000 seconds, which is approximately one year. Note In this example, the equal sign ( = ) is in quotes. This is required to properly execute the annotate command. Example route configured with an annotation apiVersion: route.openshift.io/v1 kind: Route metadata: annotations: haproxy.router.openshift.io/hsts_header: max-age=31536000;includeSubDomains;preload 1 2 3 ... spec: host: def.abc.com tls: termination: "reencrypt" ... wildcardPolicy: "Subdomain" 1 Required. max-age measures the length of time, in seconds, that the HSTS policy is in effect. If set to 0 , it negates the policy. 2 Optional. When included, includeSubDomains tells the client that all subdomains of the host must have the same HSTS policy as the host. 3 Optional. When max-age is greater than 0, you can add preload in haproxy.router.openshift.io/hsts_header to allow external services to include this site in their HSTS preload lists. For example, sites such as Google can construct a list of sites that have preload set. Browsers can then use these lists to determine which sites they can communicate with over HTTPS, even before they have interacted with the site. Without preload set, browsers must have interacted with the site over HTTPS, at least once, to get the header. 20.1.4.2. Disabling HTTP Strict Transport Security per-route To disable HTTP strict transport security (HSTS) per-route, you can set the max-age value in the route annotation to 0 . Prerequisites You are logged in to the cluster with a user with administrator privileges for the project. You installed the OpenShift CLI ( oc ).
Procedure To disable HSTS, set the max-age value in the route annotation to 0 , by entering the following command: USD oc annotate route <route_name> -n <namespace> --overwrite=true "haproxy.router.openshift.io/hsts_header"="max-age=0" Tip You can alternatively set the annotation directly in the route definition: Example of disabling HSTS per-route metadata: annotations: haproxy.router.openshift.io/hsts_header: max-age=0 To disable HSTS for every route in a namespace, enter the following command: USD oc annotate route --all -n <namespace> --overwrite=true "haproxy.router.openshift.io/hsts_header"="max-age=0" Verification To query the annotation for all routes, enter the following command: USD oc get route --all-namespaces -o go-template='{{range .items}}{{if .metadata.annotations}}{{USDa := index .metadata.annotations "haproxy.router.openshift.io/hsts_header"}}{{USDn := .metadata.name}}{{with USDa}}Name: {{USDn}} HSTS: {{USDa}}{{"\n"}}{{else}}{{""}}{{end}}{{end}}{{end}}' Example output Name: routename HSTS: max-age=0 20.1.4.3. Enforcing HTTP Strict Transport Security per-domain To enforce HTTP Strict Transport Security (HSTS) per-domain for secure routes, add a requiredHSTSPolicies record to the Ingress spec to capture the configuration of the HSTS policy. If you configure a requiredHSTSPolicy to enforce HSTS, then any newly created route must be configured with a compliant HSTS policy annotation. Note To handle upgraded clusters with non-compliant HSTS routes, you can update the manifests at the source and apply the updates. Note You cannot use oc expose route or oc create route commands to add a route in a domain that enforces HSTS, because the API for these commands does not accept annotations. Important HSTS cannot be applied to insecure, or non-TLS routes, even if HSTS is requested for all routes globally. Prerequisites You are logged in to the cluster with a user with administrator privileges for the project. You installed the OpenShift CLI ( oc ).
This policy setting allows for a smallest and largest max-age to be enforced. The largestMaxAge value must be between 0 and 2147483647 . It can be left unspecified, which means no upper limit is enforced. The smallestMaxAge value must be between 0 and 2147483647 . Enter 0 to disable HSTS for troubleshooting, otherwise enter 1 if you never want HSTS to be disabled. It can be left unspecified, which means no lower limit is enforced. 5 Optional. Including preload in haproxy.router.openshift.io/hsts_header allows external services to include this site in their HSTS preload lists. Browsers can then use these lists to determine which sites they can communicate with over HTTPS, before they have interacted with the site. Without preload set, browsers need to interact at least once with the site to get the header. preload can be set with one of the following: RequirePreload : preload is required by the RequiredHSTSPolicy . RequireNoPreload : preload is forbidden by the RequiredHSTSPolicy . NoOpinion : preload does not matter to the RequiredHSTSPolicy . 6 Optional. includeSubDomainsPolicy can be set with one of the following: RequireIncludeSubDomains : includeSubDomains is required by the RequiredHSTSPolicy . RequireNoIncludeSubDomains : includeSubDomains is forbidden by the RequiredHSTSPolicy . NoOpinion : includeSubDomains does not matter to the RequiredHSTSPolicy . You can apply HSTS to all routes in the cluster or in a particular namespace by entering the oc annotate command . To apply HSTS to all routes in the cluster, enter the oc annotate command . For example: USD oc annotate route --all --all-namespaces --overwrite=true "haproxy.router.openshift.io/hsts_header"="max-age=31536000" To apply HSTS to all routes in a particular namespace, enter the oc annotate command . For example: USD oc annotate route --all -n my-namespace --overwrite=true "haproxy.router.openshift.io/hsts_header"="max-age=31536000" Verification You can review the HSTS policy you configured. For example: To review the maxAge set for required HSTS policies, enter the following command: USD oc get clusteroperator/ingress -n openshift-ingress-operator -o jsonpath='{range .spec.requiredHSTSPolicies[*]}{.spec.requiredHSTSPolicies.maxAgePolicy.largestMaxAge}{"\n"}{end}' To review the HSTS annotations on all routes, enter the following command: USD oc get route --all-namespaces -o go-template='{{range .items}}{{if .metadata.annotations}}{{USDa := index .metadata.annotations "haproxy.router.openshift.io/hsts_header"}}{{USDn := .metadata.name}}{{with USDa}}Name: {{USDn}} HSTS: {{USDa}}{{"\n"}}{{else}}{{""}}{{end}}{{end}}{{end}}' Example output Name: <_routename_> HSTS: max-age=31536000;preload;includeSubDomains 20.1.5. Throughput issue troubleshooting methods Sometimes applications deployed by using OpenShift Container Platform can cause network throughput issues, such as unusually high latency between specific services. If pod logs do not reveal any cause of the problem, use the following methods to analyze performance issues: Use a packet analyzer, such as ping or tcpdump to analyze traffic between a pod and its node. For example, run the tcpdump tool on each pod while reproducing the behavior that led to the issue. Review the captures on both sides to compare send and receive timestamps to analyze the latency of traffic to and from a pod. Latency can occur in OpenShift Container Platform if a node interface is overloaded with traffic from other pods, storage devices, or the data plane. 
USD tcpdump -s 0 -i any -w /tmp/dump.pcap host <podip 1> && host <podip 2> 1 1 podip is the IP address for the pod. Run the oc get pod <pod_name> -o wide command to get the IP address of a pod. The tcpdump command generates a file at /tmp/dump.pcap containing all traffic between these two pods. You can run the analyzer shortly before the issue is reproduced and stop the analyzer shortly after the issue is finished reproducing to minimize the size of the file. You can also run a packet analyzer between the nodes with: USD tcpdump -s 0 -i any -w /tmp/dump.pcap port 4789 Use a bandwidth measuring tool, such as iperf , to measure streaming throughput and UDP throughput. Locate any bottlenecks by running the tool from the pods first, and then running it from the nodes. For information on installing and using iperf , see this Red Hat Solution . In some cases, the cluster might mark the node with the router pod as unhealthy due to latency issues. Use worker latency profiles to adjust the frequency that the cluster waits for a status update from the node before taking action. If your cluster has designated lower-latency and higher-latency nodes, configure the spec.nodePlacement field in the Ingress Controller to control the placement of the router pod. Additional resources Latency spikes or temporary reduction in throughput to remote workers Ingress Controller configuration parameters 20.1.6. Using cookies to keep route statefulness OpenShift Container Platform provides sticky sessions, which enables stateful application traffic by ensuring all traffic hits the same endpoint. However, if the endpoint pod terminates, whether through restart, scaling, or a change in configuration, this statefulness can disappear. OpenShift Container Platform can use cookies to configure session persistence. The ingress controller selects an endpoint to handle any user requests, and creates a cookie for the session. The cookie is passed back in the response to the request and the user sends the cookie back with the request in the session. The cookie tells the ingress controller which endpoint is handling the session, ensuring that client requests use the cookie so that they are routed to the same pod. Note Cookies cannot be set on passthrough routes, because the HTTP traffic cannot be seen. Instead, a number is calculated based on the source IP address, which determines the backend. If backends change, the traffic can be directed to the wrong server, making it less sticky. If you are using a load balancer, which hides source IP, the same number is set for all connections and traffic is sent to the same pod. 20.1.6.1. Annotating a route with a cookie You can set a cookie name to overwrite the default, auto-generated one for the route. This allows the application receiving route traffic to know the cookie name. Deleting the cookie can force the request to re-choose an endpoint. The result is that if a server is overloaded, that server tries to remove the requests from the client and redistribute them. Procedure Annotate the route with the specified cookie name: USD oc annotate route <route_name> router.openshift.io/cookie_name="<cookie_name>" where: <route_name> Specifies the name of the route. <cookie_name> Specifies the name for the cookie. 
For example, to annotate the route my_route with the cookie name my_cookie : USD oc annotate route my_route router.openshift.io/cookie_name="my_cookie" Capture the route hostname in a variable: USD ROUTE_NAME=USD(oc get route <route_name> -o jsonpath='{.spec.host}') where: <route_name> Specifies the name of the route. Save the cookie, and then access the route: USD curl USDROUTE_NAME -k -c /tmp/cookie_jar Use the cookie saved by the command when connecting to the route: USD curl USDROUTE_NAME -k -b /tmp/cookie_jar 20.1.7. Path-based routes Path-based routes specify a path component that can be compared against a URL, which requires that the traffic for the route be HTTP based. Thus, multiple routes can be served using the same hostname, each with a different path. Routers should match routes based on the most specific path to the least. The following table shows example routes and their accessibility: Table 20.1. Route availability Route When Compared to Accessible www.example.com/test www.example.com/test Yes www.example.com No www.example.com/test and www.example.com www.example.com/test Yes www.example.com Yes www.example.com www.example.com/text Yes (Matched by the host, not the route) www.example.com Yes An unsecured route with a path apiVersion: route.openshift.io/v1 kind: Route metadata: name: route-unsecured spec: host: www.example.com path: "/test" 1 to: kind: Service name: service-name 1 The path is the only added attribute for a path-based route. Note Path-based routing is not available when using passthrough TLS, as the router does not terminate TLS in that case and cannot read the contents of the request. 20.1.8. HTTP header configuration OpenShift Container Platform provides different methods for working with HTTP headers. When setting or deleting headers, you can use specific fields in the Ingress Controller or an individual route to modify request and response headers. You can also set certain headers by using route annotations. The various ways of configuring headers can present challenges when working together. Note You can only set or delete headers within an IngressController or Route CR, you cannot append them. If an HTTP header is set with a value, that value must be complete and not require appending in the future. In situations where it makes sense to append a header, such as the X-Forwarded-For header, use the spec.httpHeaders.forwardedHeaderPolicy field, instead of spec.httpHeaders.actions . 20.1.8.1. Order of precedence When the same HTTP header is modified both in the Ingress Controller and in a route, HAProxy prioritizes the actions in certain ways depending on whether it is a request or response header. For HTTP response headers, actions specified in the Ingress Controller are executed after the actions specified in a route. This means that the actions specified in the Ingress Controller take precedence. For HTTP request headers, actions specified in a route are executed after the actions specified in the Ingress Controller. This means that the actions specified in the route take precedence. For example, a cluster administrator sets the X-Frame-Options response header with the value DENY in the Ingress Controller using the following configuration: Example IngressController spec apiVersion: operator.openshift.io/v1 kind: IngressController # ... 
spec: httpHeaders: actions: response: - name: X-Frame-Options action: type: Set set: value: DENY A route owner sets the same response header that the cluster administrator set in the Ingress Controller, but with the value SAMEORIGIN using the following configuration: Example Route spec apiVersion: route.openshift.io/v1 kind: Route # ... spec: httpHeaders: actions: response: - name: X-Frame-Options action: type: Set set: value: SAMEORIGIN When both the IngressController spec and Route spec are configuring the X-Frame-Options response header, then the value set for this header at the global level in the Ingress Controller takes precedence, even if a specific route allows frames. For a request header, the Route spec value overrides the IngressController spec value. This prioritization occurs because the haproxy.config file uses the following logic, where the Ingress Controller is considered the front end and individual routes are considered the back end. The header value DENY applied to the front end configurations overrides the same header with the value SAMEORIGIN that is set in the back end: frontend public http-response set-header X-Frame-Options 'DENY' frontend fe_sni http-response set-header X-Frame-Options 'DENY' frontend fe_no_sni http-response set-header X-Frame-Options 'DENY' backend be_secure:openshift-monitoring:alertmanager-main http-response set-header X-Frame-Options 'SAMEORIGIN' Additionally, any actions defined in either the Ingress Controller or a route override values set using route annotations. 20.1.8.2. Special case headers The following headers are either prevented entirely from being set or deleted, or allowed under specific circumstances: Table 20.2. Special case header configuration options Header name Configurable using IngressController spec Configurable using Route spec Reason for disallowment Configurable using another method proxy No No The proxy HTTP request header can be used to exploit vulnerable CGI applications by injecting the header value into the HTTP_PROXY environment variable. The proxy HTTP request header is also non-standard and prone to error during configuration. No host No Yes When the host HTTP request header is set using the IngressController CR, HAProxy can fail when looking up the correct route. No strict-transport-security No No The strict-transport-security HTTP response header is already handled using route annotations and does not need a separate implementation. Yes: the haproxy.router.openshift.io/hsts_header route annotation cookie and set-cookie No No The cookies that HAProxy sets are used for session tracking to map client connections to particular back-end servers. Allowing these headers to be set could interfere with HAProxy's session affinity and restrict HAProxy's ownership of a cookie. Yes: the haproxy.router.openshift.io/disable_cookie route annotation the haproxy.router.openshift.io/cookie_name route annotation 20.1.9. Setting or deleting HTTP request and response headers in a route You can set or delete certain HTTP request and response headers for compliance purposes or other reasons. You can set or delete these headers either for all routes served by an Ingress Controller or for specific routes. For example, you might want to enable a web application to serve content in alternate locations for specific routes if that content is written in multiple languages, even if there is a default global location specified by the Ingress Controller serving the routes. 
The following procedure creates a route that sets the Content-Location HTTP request header so that the URL associated with the application, https://app.example.com , directs to the location https://app.example.com/lang/en-us . Directing application traffic to this location means that anyone using that specific route is accessing web content written in American English. Prerequisites You have installed the OpenShift CLI ( oc ). You are logged into an OpenShift Container Platform cluster as a project administrator. You have a web application that exposes a port and an HTTP or TLS endpoint listening for traffic on the port. Procedure Create a route definition and save it in a file called app-example-route.yaml : YAML definition of the created route with HTTP header directives apiVersion: route.openshift.io/v1 kind: Route # ... spec: host: app.example.com tls: termination: edge to: kind: Service name: app-example httpHeaders: actions: 1 response: 2 - name: Content-Location 3 action: type: Set 4 set: value: /lang/en-us 5 1 The list of actions you want to perform on the HTTP headers. 2 The type of header you want to change. In this case, a response header. 3 The name of the header you want to change. For a list of available headers you can set or delete, see HTTP header configuration . 4 The type of action being taken on the header. This field can have the value Set or Delete . 5 When setting HTTP headers, you must provide a value . The value can be a string from a list of available directives for that header, for example DENY , or it can be a dynamic value that will be interpreted using HAProxy's dynamic value syntax. In this case, the value is set to the relative location of the content. Create a route to your existing web application using the newly created route definition: USD oc -n app-example create -f app-example-route.yaml For HTTP request headers, the actions specified in the route definitions are executed after any actions performed on HTTP request headers in the Ingress Controller. This means that any values set for those request headers in a route will take precedence over the ones set in the Ingress Controller. For more information on the processing order of HTTP headers, see HTTP header configuration . 20.1.10. Route-specific annotations The Ingress Controller can set the default options for all the routes it exposes. An individual route can override some of these defaults by providing specific configurations in its annotations. Red Hat does not support adding a route annotation to an operator-managed route. Important To create a whitelist with multiple source IPs or subnets, use a space-delimited list. Any other delimiter type causes the list to be ignored without a warning or error message. Table 20.3. Route annotations Variable Description Environment variable used as default haproxy.router.openshift.io/balance Sets the load-balancing algorithm. Available options are random , source , roundrobin , and leastconn . The default value is source for TLS passthrough routes. For all other routes, the default is random . ROUTER_TCP_BALANCE_SCHEME for passthrough routes. Otherwise, use ROUTER_LOAD_BALANCE_ALGORITHM . haproxy.router.openshift.io/disable_cookies Disables the use of cookies to track related connections. If set to 'true' or 'TRUE' , the balance algorithm is used to choose which back-end serves connections for each incoming HTTP request. router.openshift.io/cookie_name Specifies an optional cookie to use for this route. 
The name must consist of any combination of upper and lower case letters, digits, "_", and "-". The default is the hashed internal key name for the route. haproxy.router.openshift.io/pod-concurrent-connections Sets the maximum number of connections that are allowed to a backing pod from a router. Note: If there are multiple pods, each can have this many connections. If you have multiple routers, there is no coordination among them, each may connect this many times. If not set, or set to 0, there is no limit. haproxy.router.openshift.io/rate-limit-connections Setting 'true' or 'TRUE' enables rate limiting functionality which is implemented through stick-tables on the specific backend per route. Note: Using this annotation provides basic protection against denial-of-service attacks. haproxy.router.openshift.io/rate-limit-connections.concurrent-tcp Limits the number of concurrent TCP connections made through the same source IP address. It accepts a numeric value. Note: Using this annotation provides basic protection against denial-of-service attacks. haproxy.router.openshift.io/rate-limit-connections.rate-http Limits the rate at which a client with the same source IP address can make HTTP requests. It accepts a numeric value. Note: Using this annotation provides basic protection against denial-of-service attacks. haproxy.router.openshift.io/rate-limit-connections.rate-tcp Limits the rate at which a client with the same source IP address can make TCP connections. It accepts a numeric value. Note: Using this annotation provides basic protection against denial-of-service attacks. haproxy.router.openshift.io/timeout Sets a server-side timeout for the route. (TimeUnits) ROUTER_DEFAULT_SERVER_TIMEOUT haproxy.router.openshift.io/timeout-tunnel This timeout applies to a tunnel connection, for example, WebSocket over cleartext, edge, reencrypt, or passthrough routes. With cleartext, edge, or reencrypt route types, this annotation is applied as a timeout tunnel with the existing timeout value. For the passthrough route types, the annotation takes precedence over any existing timeout value set. ROUTER_DEFAULT_TUNNEL_TIMEOUT ingresses.config/cluster ingress.operator.openshift.io/hard-stop-after You can set either an IngressController or the ingress config . This annotation redeploys the router and configures the HA proxy to emit the haproxy hard-stop-after global option, which defines the maximum time allowed to perform a clean soft-stop. ROUTER_HARD_STOP_AFTER router.openshift.io/haproxy.health.check.interval Sets the interval for the back-end health checks. (TimeUnits) ROUTER_BACKEND_CHECK_INTERVAL haproxy.router.openshift.io/ip_whitelist Sets an allowlist for the route. The allowlist is a space-separated list of IP addresses and CIDR ranges for the approved source addresses. Requests from IP addresses that are not in the allowlist are dropped. The maximum number of IP addresses and CIDR ranges directly visible in the haproxy.config file is 61. [ 1 ] haproxy.router.openshift.io/hsts_header Sets a Strict-Transport-Security header for the edge terminated or re-encrypt route. haproxy.router.openshift.io/rewrite-target Sets the rewrite path of the request on the backend. router.openshift.io/cookie-same-site Sets a value to restrict cookies. The values are: Lax : the browser does not send cookies on cross-site requests, but does send cookies when users navigate to the origin site from an external site. This is the default browser behavior when the SameSite value is not specified. 
Strict : the browser sends cookies only for same-site requests. None : the browser sends cookies for both cross-site and same-site requests. This value is applicable to re-encrypt and edge routes only. For more information, see the SameSite cookies documentation . haproxy.router.openshift.io/set-forwarded-headers Sets the policy for handling the Forwarded and X-Forwarded-For HTTP headers per route. The values are: append : appends the header, preserving any existing header. This is the default value. replace : sets the header, removing any existing header. never : never sets the header, but preserves any existing header. if-none : sets the header if it is not already set. ROUTER_SET_FORWARDED_HEADERS If the number of IP addresses and CIDR ranges in an allowlist exceeds 61, they are written into a separate file that is then referenced from haproxy.config . This file is stored in the var/lib/haproxy/router/whitelists folder. Note To ensure that the addresses are written to the allowlist, check that the full list of CIDR ranges are listed in the Ingress Controller configuration file. The etcd object size limit restricts how large a route annotation can be. Because of this, it creates a threshold for the maximum number of IP addresses and CIDR ranges that you can include in an allowlist. Note Environment variables cannot be edited. Router timeout variables TimeUnits are represented by a number followed by the unit: us *(microseconds), ms (milliseconds, default), s (seconds), m (minutes), h *(hours), d (days). The regular expression is: [1-9][0-9]*( us \| ms \| s \| m \| h \| d ). Variable Default Description ROUTER_BACKEND_CHECK_INTERVAL 5000ms Length of time between subsequent liveness checks on back ends. ROUTER_CLIENT_FIN_TIMEOUT 1s Controls the TCP FIN timeout period for the client connecting to the route. If the FIN sent to close the connection does not answer within the given time, HAProxy closes the connection. This is harmless if set to a low value and uses fewer resources on the router. ROUTER_DEFAULT_CLIENT_TIMEOUT 30s Length of time that a client has to acknowledge or send data. ROUTER_DEFAULT_CONNECT_TIMEOUT 5s The maximum connection time. ROUTER_DEFAULT_SERVER_FIN_TIMEOUT 1s Controls the TCP FIN timeout from the router to the pod backing the route. ROUTER_DEFAULT_SERVER_TIMEOUT 30s Length of time that a server has to acknowledge or send data. ROUTER_DEFAULT_TUNNEL_TIMEOUT 1h Length of time for TCP or WebSocket connections to remain open. This timeout period resets whenever HAProxy reloads. ROUTER_SLOWLORIS_HTTP_KEEPALIVE 300s Set the maximum time to wait for a new HTTP request to appear. If this is set too low, it can cause problems with browsers and applications not expecting a small keepalive value. Some effective timeout values can be the sum of certain variables, rather than the specific expected timeout. For example, ROUTER_SLOWLORIS_HTTP_KEEPALIVE adjusts timeout http-keep-alive . It is set to 300s by default, but HAProxy also waits on tcp-request inspect-delay , which is set to 5s . In this case, the overall timeout would be 300s plus 5s . ROUTER_SLOWLORIS_TIMEOUT 10s Length of time the transmission of an HTTP request can take. RELOAD_INTERVAL 5s Allows the minimum frequency for the router to reload and accept new changes. ROUTER_METRICS_HAPROXY_TIMEOUT 5s Timeout for the gathering of HAProxy metrics. A route setting custom timeout apiVersion: route.openshift.io/v1 kind: Route metadata: annotations: haproxy.router.openshift.io/timeout: 5500ms 1 ... 
1 Specifies the new timeout with HAProxy supported units ( us , ms , s , m , h , d ). If the unit is not provided, ms is the default. Note Setting a server-side timeout value for passthrough routes too low can cause WebSocket connections to timeout frequently on that route. A route that allows only one specific IP address metadata: annotations: haproxy.router.openshift.io/ip_whitelist: 192.168.1.10 A route that allows several IP addresses metadata: annotations: haproxy.router.openshift.io/ip_whitelist: 192.168.1.10 192.168.1.11 192.168.1.12 A route that allows an IP address CIDR network metadata: annotations: haproxy.router.openshift.io/ip_whitelist: 192.168.1.0/24 A route that allows both IP an address and IP address CIDR networks metadata: annotations: haproxy.router.openshift.io/ip_whitelist: 180.5.61.153 192.168.1.0/24 10.0.0.0/8 A route specifying a rewrite target apiVersion: route.openshift.io/v1 kind: Route metadata: annotations: haproxy.router.openshift.io/rewrite-target: / 1 ... 1 Sets / as rewrite path of the request on the backend. Setting the haproxy.router.openshift.io/rewrite-target annotation on a route specifies that the Ingress Controller should rewrite paths in HTTP requests using this route before forwarding the requests to the backend application. The part of the request path that matches the path specified in spec.path is replaced with the rewrite target specified in the annotation. The following table provides examples of the path rewriting behavior for various combinations of spec.path , request path, and rewrite target. Table 20.4. rewrite-target examples Route.spec.path Request path Rewrite target Forwarded request path /foo /foo / / /foo /foo/ / / /foo /foo/bar / /bar /foo /foo/bar/ / /bar/ /foo /foo /bar /bar /foo /foo/ /bar /bar/ /foo /foo/bar /baz /baz/bar /foo /foo/bar/ /baz /baz/bar/ /foo/ /foo / N/A (request path does not match route path) /foo/ /foo/ / / /foo/ /foo/bar / /bar Certain special characters in haproxy.router.openshift.io/rewrite-target require special handling because they must be escaped properly. Refer to the following table to understand how these characters are handled. Table 20.5. Special character handling For character Use characters Notes # \# Avoid # because it terminates the rewrite expression % % or %% Avoid odd sequences such as %%% ' \' Avoid ' because it is ignored All other valid URL characters can be used without escaping. 20.1.11. Configuring the route admission policy Administrators and application developers can run applications in multiple namespaces with the same domain name. This is for organizations where multiple teams develop microservices that are exposed on the same hostname. Warning Allowing claims across namespaces should only be enabled for clusters with trust between namespaces, otherwise a malicious user could take over a hostname. For this reason, the default admission policy disallows hostname claims across namespaces. Prerequisites Cluster administrator privileges. Procedure Edit the .spec.routeAdmission field of the ingresscontroller resource variable using the following command: USD oc -n openshift-ingress-operator patch ingresscontroller/default --patch '{"spec":{"routeAdmission":{"namespaceOwnership":"InterNamespaceAllowed"}}}' --type=merge Sample Ingress Controller configuration spec: routeAdmission: namespaceOwnership: InterNamespaceAllowed ... 
Tip You can alternatively apply the following YAML to configure the route admission policy: apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: routeAdmission: namespaceOwnership: InterNamespaceAllowed 20.1.12. Creating a route through an Ingress object Some ecosystem components have an integration with Ingress resources but not with route resources. To cover this case, OpenShift Container Platform automatically creates managed route objects when an Ingress object is created. These route objects are deleted when the corresponding Ingress objects are deleted. Procedure Define an Ingress object in the OpenShift Container Platform console or by entering the oc create command: YAML Definition of an Ingress apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: frontend annotations: route.openshift.io/termination: "reencrypt" 1 route.openshift.io/destination-ca-certificate-secret: secret-ca-cert 2 spec: rules: - host: www.example.com 3 http: paths: - backend: service: name: frontend port: number: 443 path: / pathType: Prefix tls: - hosts: - www.example.com secretName: example-com-tls-certificate 1 The route.openshift.io/termination annotation can be used to configure the spec.tls.termination field of the Route as Ingress has no field for this. The accepted values are edge , passthrough and reencrypt . All other values are silently ignored. When the annotation value is unset, edge is the default route. The TLS certificate details must be defined in the template file to implement the default edge route. 3 When working with an Ingress object, you must specify an explicit hostname, unlike when working with routes. You can use the <host_name>.<cluster_ingress_domain> syntax, for example apps.openshiftdemos.com , to take advantage of the *.<cluster_ingress_domain> wildcard DNS record and serving certificate for the cluster. Otherwise, you must ensure that there is a DNS record for the chosen hostname. If you specify the passthrough value in the route.openshift.io/termination annotation, set path to '' and pathType to ImplementationSpecific in the spec: spec: rules: - host: www.example.com http: paths: - path: '' pathType: ImplementationSpecific backend: service: name: frontend port: number: 443 USD oc apply -f ingress.yaml 2 The route.openshift.io/destination-ca-certificate-secret can be used on an Ingress object to define a route with a custom destination certificate (CA). The annotation references a kubernetes secret, secret-ca-cert that will be inserted into the generated route. To specify a route object with a destination CA from an ingress object, you must create a kubernetes.io/tls or Opaque type secret with a certificate in PEM-encoded format in the data.tls.crt specifier of the secret. List your routes: USD oc get routes The result includes an autogenerated route whose name starts with frontend- : NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD frontend-gnztq www.example.com frontend 443 reencrypt/Redirect None If you inspect this route, it looks this: YAML Definition of an autogenerated route apiVersion: route.openshift.io/v1 kind: Route metadata: name: frontend-gnztq ownerReferences: - apiVersion: networking.k8s.io/v1 controller: true kind: Ingress name: frontend uid: 4e6c59cc-704d-4f44-b390-617d879033b6 spec: host: www.example.com path: / port: targetPort: https tls: certificate: | -----BEGIN CERTIFICATE----- [...] 
-----END CERTIFICATE----- insecureEdgeTerminationPolicy: Redirect key: | -----BEGIN RSA PRIVATE KEY----- [...] -----END RSA PRIVATE KEY----- termination: reencrypt destinationCACertificate: | -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- to: kind: Service name: frontend 20.1.13. Creating a route using the default certificate through an Ingress object If you create an Ingress object without specifying any TLS configuration, OpenShift Container Platform generates an insecure route. To create an Ingress object that generates a secure, edge-terminated route using the default ingress certificate, you can specify an empty TLS configuration as follows. Prerequisites You have a service that you want to expose. You have access to the OpenShift CLI ( oc ). Procedure Create a YAML file for the Ingress object. In this example, the file is called example-ingress.yaml : YAML definition of an Ingress object apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: frontend ... spec: rules: ... tls: - {} 1 1 Use this exact syntax to specify TLS without specifying a custom certificate. Create the Ingress object by running the following command: USD oc create -f example-ingress.yaml Verification Verify that OpenShift Container Platform has created the expected route for the Ingress object by running the following command: USD oc get routes -o yaml Example output apiVersion: v1 items: - apiVersion: route.openshift.io/v1 kind: Route metadata: name: frontend-j9sdd 1 ... spec: ... tls: 2 insecureEdgeTerminationPolicy: Redirect termination: edge 3 ... 1 The name of the route includes the name of the Ingress object followed by a random suffix. 2 In order to use the default certificate, the route should not specify spec.certificate . 3 The route should specify the edge termination policy. 20.1.14. Creating a route using the destination CA certificate in the Ingress annotation The route.openshift.io/destination-ca-certificate-secret annotation can be used on an Ingress object to define a route with a custom destination CA certificate. Prerequisites You may have a certificate/key pair in PEM-encoded files, where the certificate is valid for the route host. You may have a separate CA certificate in a PEM-encoded file that completes the certificate chain. You must have a separate destination CA certificate in a PEM-encoded file. You must have a service that you want to expose. Procedure Create a secret for the destination CA certificate by entering the following command: USD oc create secret generic dest-ca-cert --from-file=tls.crt=<file_path> For example: USD oc -n test-ns create secret generic dest-ca-cert --from-file=tls.crt=tls.crt Example output secret/dest-ca-cert created Add the route.openshift.io/destination-ca-certificate-secret to the Ingress annotations: apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: frontend annotations: route.openshift.io/termination: "reencrypt" route.openshift.io/destination-ca-certificate-secret: secret-ca-cert 1 ... 1 The annotation references a kubernetes secret. The secret referenced in this annotation will be inserted into the generated route. Example output apiVersion: route.openshift.io/v1 kind: Route metadata: name: frontend annotations: route.openshift.io/termination: reencrypt route.openshift.io/destination-ca-certificate-secret: secret-ca-cert spec: ... tls: insecureEdgeTerminationPolicy: Redirect termination: reencrypt destinationCACertificate: | -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- ... 20.1.15. 
Configuring the OpenShift Container Platform Ingress Controller for dual-stack networking If your OpenShift Container Platform cluster is configured for IPv4 and IPv6 dual-stack networking, your cluster is externally reachable by OpenShift Container Platform routes. The Ingress Controller automatically serves services that have both IPv4 and IPv6 endpoints, but you can configure the Ingress Controller for single-stack or dual-stack services. Prerequisites You deployed an OpenShift Container Platform cluster on bare metal. You installed the OpenShift CLI ( oc ). Procedure To have the Ingress Controller serve traffic over IPv4/IPv6 to a workload, you can create a service YAML file or modify an existing service YAML file by setting the ipFamilies and ipFamilyPolicy fields. For example: Sample service YAML file apiVersion: v1 kind: Service metadata: creationTimestamp: yyyy-mm-ddT00:00:00Z labels: name: <service_name> manager: kubectl-create operation: Update time: yyyy-mm-ddT00:00:00Z name: <service_name> namespace: <namespace_name> resourceVersion: "<resource_version_number>" selfLink: "/api/v1/namespaces/<namespace_name>/services/<service_name>" uid: <uid_number> spec: clusterIP: 172.30.0.0/16 clusterIPs: 1 - 172.30.0.0/16 - <second_IP_address> ipFamilies: 2 - IPv4 - IPv6 ipFamilyPolicy: RequireDualStack 3 ports: - port: 8080 protocol: TCP targetport: 8080 selector: name: <namespace_name> sessionAffinity: None type: ClusterIP status: loadbalancer: {} 1 In a dual-stack instance, there are two different clusterIPs provided. 2 For a single-stack instance, enter IPv4 or IPv6 . For a dual-stack instance, enter both IPv4 and IPv6 . 3 For a single-stack instance, enter SingleStack . For a dual-stack instance, enter RequireDualStack . These resources generate corresponding endpoints . The Ingress Controller now watches endpointslices . To view endpoints , enter the following command: USD oc get endpoints To view endpointslices , enter the following command: USD oc get endpointslices Additional resources Specifying an alternative cluster domain using the appsDomain option 20.2. Secured routes Secure routes provide the ability to use several types of TLS termination to serve certificates to the client. The following sections describe how to create re-encrypt, edge, and passthrough routes with custom certificates. Important If you create routes in Microsoft Azure through public endpoints, the resource names are subject to restriction. You cannot create resources that use certain terms. For a list of terms that Azure restricts, see Resolve reserved resource name errors in the Azure documentation. 20.2.1. Creating a re-encrypt route with a custom certificate You can configure a secure route using reencrypt TLS termination with a custom certificate by using the oc create route command. Prerequisites You must have a certificate/key pair in PEM-encoded files, where the certificate is valid for the route host. You may have a separate CA certificate in a PEM-encoded file that completes the certificate chain. You must have a separate destination CA certificate in a PEM-encoded file. You must have a service that you want to expose. Note Password protected key files are not supported. To remove a passphrase from a key file, use the following command: USD openssl rsa -in password_protected_tls.key -out tls.key Procedure This procedure creates a Route resource with a custom certificate and reencrypt TLS termination. 
The following assumes that the certificate/key pair are in the tls.crt and tls.key files in the current working directory. You must also specify a destination CA certificate to enable the Ingress Controller to trust the service's certificate. You may also specify a CA certificate if needed to complete the certificate chain. Substitute the actual path names for tls.crt , tls.key , cacert.crt , and (optionally) ca.crt . Substitute the name of the Service resource that you want to expose for frontend . Substitute the appropriate hostname for www.example.com . Create a secure Route resource using reencrypt TLS termination and a custom certificate: USD oc create route reencrypt --service=frontend --cert=tls.crt --key=tls.key --dest-ca-cert=destca.crt --ca-cert=ca.crt --hostname=www.example.com If you examine the resulting Route resource, it should look similar to the following: YAML Definition of the Secure Route apiVersion: route.openshift.io/v1 kind: Route metadata: name: frontend spec: host: www.example.com to: kind: Service name: frontend tls: termination: reencrypt key: |- -----BEGIN PRIVATE KEY----- [...] -----END PRIVATE KEY----- certificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- caCertificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- destinationCACertificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- See oc create route reencrypt --help for more options. 20.2.2. Creating an edge route with a custom certificate You can configure a secure route using edge TLS termination with a custom certificate by using the oc create route command. With an edge route, the Ingress Controller terminates TLS encryption before forwarding traffic to the destination pod. The route specifies the TLS certificate and key that the Ingress Controller uses for the route. Prerequisites You must have a certificate/key pair in PEM-encoded files, where the certificate is valid for the route host. You may have a separate CA certificate in a PEM-encoded file that completes the certificate chain. You must have a service that you want to expose. Note Password protected key files are not supported. To remove a passphrase from a key file, use the following command: USD openssl rsa -in password_protected_tls.key -out tls.key Procedure This procedure creates a Route resource with a custom certificate and edge TLS termination. The following assumes that the certificate/key pair are in the tls.crt and tls.key files in the current working directory. You may also specify a CA certificate if needed to complete the certificate chain. Substitute the actual path names for tls.crt , tls.key , and (optionally) ca.crt . Substitute the name of the service that you want to expose for frontend . Substitute the appropriate hostname for www.example.com . Create a secure Route resource using edge TLS termination and a custom certificate. USD oc create route edge --service=frontend --cert=tls.crt --key=tls.key --ca-cert=ca.crt --hostname=www.example.com If you examine the resulting Route resource, it should look similar to the following: YAML Definition of the Secure Route apiVersion: route.openshift.io/v1 kind: Route metadata: name: frontend spec: host: www.example.com to: kind: Service name: frontend tls: termination: edge key: |- -----BEGIN PRIVATE KEY----- [...] -----END PRIVATE KEY----- certificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- caCertificate: |- -----BEGIN CERTIFICATE----- [...] 
-----END CERTIFICATE----- See oc create route edge --help for more options. 20.2.3. Creating a passthrough route You can configure a secure route using passthrough termination by using the oc create route command. With passthrough termination, encrypted traffic is sent straight to the destination without the router providing TLS termination. Therefore no key or certificate is required on the route. Prerequisites You must have a service that you want to expose. Procedure Create a Route resource: USD oc create route passthrough route-passthrough-secured --service=frontend --port=8080 If you examine the resulting Route resource, it should look similar to the following: A Secured Route Using Passthrough Termination apiVersion: route.openshift.io/v1 kind: Route metadata: name: route-passthrough-secured 1 spec: host: www.example.com port: targetPort: 8080 tls: termination: passthrough 2 insecureEdgeTerminationPolicy: None 3 to: kind: Service name: frontend 1 The name of the object, which is limited to 63 characters. 2 The termination field is set to passthrough . This is the only required tls field. 3 Optional insecureEdgeTerminationPolicy . The only valid values are None , Redirect , or empty for disabled. The destination pod is responsible for serving certificates for the traffic at the endpoint. This is currently the only method that can support requiring client certificates, also known as two-way authentication. 20.2.4. Creating a route with externally managed certificate Important Securing route with external certificates in TLS secrets is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see Technology Preview Features Support Scope . You can configure OpenShift Container Platform routes with third-party certificate management solutions by using the .spec.tls.externalCertificate field of the route API. You can reference externally managed TLS certificates via secrets, eliminating the need for manual certificate management. Using the externally managed certificate reduces errors ensuring a smoother rollout of certificate updates, enabling the OpenShift router to serve renewed certificates promptly. Note This feature applies to both edge routes and re-encrypt routes. Prerequisites You must enable the RouteExternalCertificate feature gate. You must have the create and update permissions on the routes/custom-host . You must have a secret containing a valid certificate/key pair in PEM-encoded format of type kubernetes.io/tls , which includes both tls.key and tls.crt keys. You must place the referenced secret in the same namespace as the route you want to secure. Procedure Create a role in the same namespace as the secret to allow the router service account read access by running the following command: USD oc create role secret-reader --verb=get,list,watch --resource=secrets --resource-name=<secret-name> \ 1 --namespace=<current-namespace> 2 1 Specify the actual name of your secret. 2 Specify the namespace where both your secret and route reside. 
Create a rolebinding in the same namespace as the secret and bind the router service account to the newly created role by running the following command: USD oc create rolebinding secret-reader-binding --role=secret-reader --serviceaccount=openshift-ingress:router --namespace=<current-namespace> 1 1 Specify the namespace where both your secret and route reside. Create a YAML file that defines the route and specifies the secret containing your certificate using the following example. YAML definition of the secure route apiVersion: route.openshift.io/v1 kind: Route metadata: name: myedge namespace: test spec: host: myedge-test.apps.example.com tls: externalCertificate: name: <secret-name> 1 termination: edge [...] [...] 1 Specify the actual name of your secret. Create a route resource by running the following command: USD oc apply -f <route.yaml> 1 1 Specify the generated YAML filename. If the secret exists and has a certificate/key pair, the router will serve the generated certificate if all prerequisites are met. Note If .spec.tls.externalCertificate is not provided, the router will use default generated certificates. You cannot provide the .spec.tls.certificate field or the .spec.tls.key field when using the .spec.tls.externalCertificate field. Additional resources For troubleshooting routes with externally managed certificates, check the OpenShift Container Platform router pod logs for errors, see Investigating pod issues .
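A minimal verification sketch for the externally managed certificate setup follows. It is illustrative only and assumes the example names used in this section (route myedge in the test namespace, host myedge-test.apps.example.com) plus the <secret-name> placeholder; substitute your own values.

# Confirm that the router service account can read the referenced secret;
# the role and rolebinding created above should make this return "yes".
oc auth can-i get secrets/<secret-name> -n test \
  --as=system:serviceaccount:openshift-ingress:router

# Inspect the admitted route and confirm the externalCertificate reference.
oc get route myedge -n test -o yaml

# Check which certificate is actually served; the subject should match the
# certificate stored in the secret, not the default ingress certificate.
curl -kv https://myedge-test.apps.example.com 2>&1 | grep -i 'subject:'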
[ "oc new-project hello-openshift", "oc create -f https://raw.githubusercontent.com/openshift/origin/master/examples/hello-openshift/hello-pod.json", "oc expose pod/hello-openshift", "oc expose svc hello-openshift", "oc get routes -o yaml <name of resource> 1", "apiVersion: route.openshift.io/v1 kind: Route metadata: name: hello-openshift spec: host: www.example.com 1 port: targetPort: 8080 2 to: kind: Service name: hello-openshift", "oc get ingresses.config/cluster -o jsonpath={.spec.domain}", "oc new-project hello-openshift", "oc create -f https://raw.githubusercontent.com/openshift/origin/master/examples/hello-openshift/hello-pod.json", "oc expose pod/hello-openshift", "apiVersion: route.openshift.io/v1 kind: Route metadata: labels: type: sharded 1 name: hello-openshift-edge namespace: hello-openshift spec: subdomain: hello-openshift 2 tls: termination: edge to: kind: Service name: hello-openshift", "oc -n hello-openshift create -f hello-openshift-route.yaml", "oc -n hello-openshift get routes/hello-openshift-edge -o yaml", "apiVersion: route.openshift.io/v1 kind: Route metadata: labels: type: sharded name: hello-openshift-edge namespace: hello-openshift spec: subdomain: hello-openshift tls: termination: edge to: kind: Service name: hello-openshift status: ingress: - host: hello-openshift.<apps-sharded.basedomain.example.net> 1 routerCanonicalHostname: router-sharded.<apps-sharded.basedomain.example.net> 2 routerName: sharded 3", "oc annotate route <route_name> --overwrite haproxy.router.openshift.io/timeout=<timeout><time_unit> 1", "oc annotate route myroute --overwrite haproxy.router.openshift.io/timeout=2s", "oc annotate route <route_name> -n <namespace> --overwrite=true \"haproxy.router.openshift.io/hsts_header\"=\"max-age=31536000;\\ 1 includeSubDomains;preload\"", "apiVersion: route.openshift.io/v1 kind: Route metadata: annotations: haproxy.router.openshift.io/hsts_header: max-age=31536000;includeSubDomains;preload 1 2 3 spec: host: def.abc.com tls: termination: \"reencrypt\" wildcardPolicy: \"Subdomain\"", "oc annotate route <route_name> -n <namespace> --overwrite=true \"haproxy.router.openshift.io/hsts_header\"=\"max-age=0\"", "metadata: annotations: haproxy.router.openshift.io/hsts_header: max-age=0", "oc annotate route --all -n <namespace> --overwrite=true \"haproxy.router.openshift.io/hsts_header\"=\"max-age=0\"", "oc get route --all-namespaces -o go-template='{{range .items}}{{if .metadata.annotations}}{{USDa := index .metadata.annotations \"haproxy.router.openshift.io/hsts_header\"}}{{USDn := .metadata.name}}{{with USDa}}Name: {{USDn}} HSTS: {{USDa}}{{\"\\n\"}}{{else}}{{\"\"}}{{end}}{{end}}{{end}}'", "Name: routename HSTS: max-age=0", "oc edit ingresses.config.openshift.io/cluster", "apiVersion: config.openshift.io/v1 kind: Ingress metadata: name: cluster spec: domain: 'hello-openshift-default.apps.username.devcluster.openshift.com' requiredHSTSPolicies: 1 - domainPatterns: 2 - '*hello-openshift-default.apps.username.devcluster.openshift.com' - '*hello-openshift-default2.apps.username.devcluster.openshift.com' namespaceSelector: 3 matchLabels: myPolicy: strict maxAge: 4 smallestMaxAge: 1 largestMaxAge: 31536000 preloadPolicy: RequirePreload 5 includeSubDomainsPolicy: RequireIncludeSubDomains 6 - domainPatterns: - 'abc.example.com' - '*xyz.example.com' namespaceSelector: matchLabels: {} maxAge: {} preloadPolicy: NoOpinion includeSubDomainsPolicy: RequireNoIncludeSubDomains", "oc annotate route --all --all-namespaces --overwrite=true 
\"haproxy.router.openshift.io/hsts_header\"=\"max-age=31536000\"", "oc annotate route --all -n my-namespace --overwrite=true \"haproxy.router.openshift.io/hsts_header\"=\"max-age=31536000\"", "oc get clusteroperator/ingress -n openshift-ingress-operator -o jsonpath='{range .spec.requiredHSTSPolicies[*]}{.spec.requiredHSTSPolicies.maxAgePolicy.largestMaxAge}{\"\\n\"}{end}'", "oc get route --all-namespaces -o go-template='{{range .items}}{{if .metadata.annotations}}{{USDa := index .metadata.annotations \"haproxy.router.openshift.io/hsts_header\"}}{{USDn := .metadata.name}}{{with USDa}}Name: {{USDn}} HSTS: {{USDa}}{{\"\\n\"}}{{else}}{{\"\"}}{{end}}{{end}}{{end}}'", "Name: <_routename_> HSTS: max-age=31536000;preload;includeSubDomains", "tcpdump -s 0 -i any -w /tmp/dump.pcap host <podip 1> && host <podip 2> 1", "tcpdump -s 0 -i any -w /tmp/dump.pcap port 4789", "oc annotate route <route_name> router.openshift.io/cookie_name=\"<cookie_name>\"", "oc annotate route my_route router.openshift.io/cookie_name=\"my_cookie\"", "ROUTE_NAME=USD(oc get route <route_name> -o jsonpath='{.spec.host}')", "curl USDROUTE_NAME -k -c /tmp/cookie_jar", "curl USDROUTE_NAME -k -b /tmp/cookie_jar", "apiVersion: route.openshift.io/v1 kind: Route metadata: name: route-unsecured spec: host: www.example.com path: \"/test\" 1 to: kind: Service name: service-name", "apiVersion: operator.openshift.io/v1 kind: IngressController spec: httpHeaders: actions: response: - name: X-Frame-Options action: type: Set set: value: DENY", "apiVersion: route.openshift.io/v1 kind: Route spec: httpHeaders: actions: response: - name: X-Frame-Options action: type: Set set: value: SAMEORIGIN", "frontend public http-response set-header X-Frame-Options 'DENY' frontend fe_sni http-response set-header X-Frame-Options 'DENY' frontend fe_no_sni http-response set-header X-Frame-Options 'DENY' backend be_secure:openshift-monitoring:alertmanager-main http-response set-header X-Frame-Options 'SAMEORIGIN'", "apiVersion: route.openshift.io/v1 kind: Route spec: host: app.example.com tls: termination: edge to: kind: Service name: app-example httpHeaders: actions: 1 response: 2 - name: Content-Location 3 action: type: Set 4 set: value: /lang/en-us 5", "oc -n app-example create -f app-example-route.yaml", "apiVersion: route.openshift.io/v1 kind: Route metadata: annotations: haproxy.router.openshift.io/timeout: 5500ms 1", "metadata: annotations: haproxy.router.openshift.io/ip_whitelist: 192.168.1.10", "metadata: annotations: haproxy.router.openshift.io/ip_whitelist: 192.168.1.10 192.168.1.11 192.168.1.12", "metadata: annotations: haproxy.router.openshift.io/ip_whitelist: 192.168.1.0/24", "metadata: annotations: haproxy.router.openshift.io/ip_whitelist: 180.5.61.153 192.168.1.0/24 10.0.0.0/8", "apiVersion: route.openshift.io/v1 kind: Route metadata: annotations: haproxy.router.openshift.io/rewrite-target: / 1", "oc -n openshift-ingress-operator patch ingresscontroller/default --patch '{\"spec\":{\"routeAdmission\":{\"namespaceOwnership\":\"InterNamespaceAllowed\"}}}' --type=merge", "spec: routeAdmission: namespaceOwnership: InterNamespaceAllowed", "apiVersion: operator.openshift.io/v1 kind: IngressController metadata: name: default namespace: openshift-ingress-operator spec: routeAdmission: namespaceOwnership: InterNamespaceAllowed", "apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: frontend annotations: route.openshift.io/termination: \"reencrypt\" 1 route.openshift.io/destination-ca-certificate-secret: secret-ca-cert 2 spec: rules: - host: 
www.example.com 3 http: paths: - backend: service: name: frontend port: number: 443 path: / pathType: Prefix tls: - hosts: - www.example.com secretName: example-com-tls-certificate", "spec: rules: - host: www.example.com http: paths: - path: '' pathType: ImplementationSpecific backend: service: name: frontend port: number: 443", "oc apply -f ingress.yaml", "oc get routes", "NAME HOST/PORT PATH SERVICES PORT TERMINATION WILDCARD frontend-gnztq www.example.com frontend 443 reencrypt/Redirect None", "apiVersion: route.openshift.io/v1 kind: Route metadata: name: frontend-gnztq ownerReferences: - apiVersion: networking.k8s.io/v1 controller: true kind: Ingress name: frontend uid: 4e6c59cc-704d-4f44-b390-617d879033b6 spec: host: www.example.com path: / port: targetPort: https tls: certificate: | -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- insecureEdgeTerminationPolicy: Redirect key: | -----BEGIN RSA PRIVATE KEY----- [...] -----END RSA PRIVATE KEY----- termination: reencrypt destinationCACertificate: | -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- to: kind: Service name: frontend", "apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: frontend spec: rules: tls: - {} 1", "oc create -f example-ingress.yaml", "oc get routes -o yaml", "apiVersion: v1 items: - apiVersion: route.openshift.io/v1 kind: Route metadata: name: frontend-j9sdd 1 spec: tls: 2 insecureEdgeTerminationPolicy: Redirect termination: edge 3", "oc create secret generic dest-ca-cert --from-file=tls.crt=<file_path>", "oc -n test-ns create secret generic dest-ca-cert --from-file=tls.crt=tls.crt", "secret/dest-ca-cert created", "apiVersion: networking.k8s.io/v1 kind: Ingress metadata: name: frontend annotations: route.openshift.io/termination: \"reencrypt\" route.openshift.io/destination-ca-certificate-secret: secret-ca-cert 1", "apiVersion: route.openshift.io/v1 kind: Route metadata: name: frontend annotations: route.openshift.io/termination: reencrypt route.openshift.io/destination-ca-certificate-secret: secret-ca-cert spec: tls: insecureEdgeTerminationPolicy: Redirect termination: reencrypt destinationCACertificate: | -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE-----", "apiVersion: v1 kind: Service metadata: creationTimestamp: yyyy-mm-ddT00:00:00Z labels: name: <service_name> manager: kubectl-create operation: Update time: yyyy-mm-ddT00:00:00Z name: <service_name> namespace: <namespace_name> resourceVersion: \"<resource_version_number>\" selfLink: \"/api/v1/namespaces/<namespace_name>/services/<service_name>\" uid: <uid_number> spec: clusterIP: 172.30.0.0/16 clusterIPs: 1 - 172.30.0.0/16 - <second_IP_address> ipFamilies: 2 - IPv4 - IPv6 ipFamilyPolicy: RequireDualStack 3 ports: - port: 8080 protocol: TCP targetport: 8080 selector: name: <namespace_name> sessionAffinity: None type: ClusterIP status: loadbalancer: {}", "oc get endpoints", "oc get endpointslices", "openssl rsa -in password_protected_tls.key -out tls.key", "oc create route reencrypt --service=frontend --cert=tls.crt --key=tls.key --dest-ca-cert=destca.crt --ca-cert=ca.crt --hostname=www.example.com", "apiVersion: route.openshift.io/v1 kind: Route metadata: name: frontend spec: host: www.example.com to: kind: Service name: frontend tls: termination: reencrypt key: |- -----BEGIN PRIVATE KEY----- [...] -----END PRIVATE KEY----- certificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- caCertificate: |- -----BEGIN CERTIFICATE----- [...] 
-----END CERTIFICATE----- destinationCACertificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE-----", "openssl rsa -in password_protected_tls.key -out tls.key", "oc create route edge --service=frontend --cert=tls.crt --key=tls.key --ca-cert=ca.crt --hostname=www.example.com", "apiVersion: route.openshift.io/v1 kind: Route metadata: name: frontend spec: host: www.example.com to: kind: Service name: frontend tls: termination: edge key: |- -----BEGIN PRIVATE KEY----- [...] -----END PRIVATE KEY----- certificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE----- caCertificate: |- -----BEGIN CERTIFICATE----- [...] -----END CERTIFICATE-----", "oc create route passthrough route-passthrough-secured --service=frontend --port=8080", "apiVersion: route.openshift.io/v1 kind: Route metadata: name: route-passthrough-secured 1 spec: host: www.example.com port: targetPort: 8080 tls: termination: passthrough 2 insecureEdgeTerminationPolicy: None 3 to: kind: Service name: frontend", "oc create role secret-reader --verb=get,list,watch --resource=secrets --resource-name=<secret-name> \\ 1 --namespace=<current-namespace> 2", "oc create rolebinding secret-reader-binding --role=secret-reader --serviceaccount=openshift-ingress:router --namespace=<current-namespace> 1", "apiVersion: route.openshift.io/v1 kind: Route metadata: name: myedge namespace: test spec: host: myedge-test.apps.example.com tls: externalCertificate: name: <secret-name> 1 termination: edge [...] [...]", "oc apply -f <route.yaml> 1" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/networking/configuring-routes
Chapter 10. Configuring AWS STS for Red Hat Quay
Chapter 10. Configuring AWS STS for Red Hat Quay Support for Amazon Web Services (AWS) Security Token Service (STS) is available for standalone Red Hat Quay deployments and Red Hat Quay on OpenShift Container Platform. AWS STS is a web service for requesting temporary, limited-privilege credentials for AWS Identity and Access Management (IAM) users and for users that you authenticate, or federated users . This feature is useful for clusters using Amazon S3 as an object storage, allowing Red Hat Quay to use STS protocols to authenticate with Amazon S3, which can enhance the overall security of the cluster and help to ensure that access to sensitive data is properly authenticated and authorized. Configuring AWS STS is a multi-step process that requires creating an AWS IAM user, creating an S3 role, and configuring your Red Hat Quay config.yaml file to include the proper resources. Use the following procedures to configure AWS STS for Red Hat Quay. 10.1. Creating an IAM user Use the following procedure to create an IAM user. Procedure Log in to the Amazon Web Services (AWS) console and navigate to the Identity and Access Management (IAM) console. In the navigation pane, under Access management click Users . Click Create User and enter the following information: Enter a valid username, for example, quay-user . For Permissions options , click Add user to group . On the review and create page, click Create user . You are redirected to the Users page. Click the username, for example, quay-user . Copy the ARN of the user, for example, arn:aws:iam::123492922789:user/quay-user . On the same page, click the Security credentials tab. Navigate to Access keys . Click Create access key . On the Access key best practices & alternatives page, click Command Line Interface (CLI) , then, check the confirmation box. Then click . Optional. On the Set description tag - optional page, enter a description. Click Create access key . Copy and store the access key and the secret access key. Important This is the only time that the secret access key can be viewed or downloaded. You cannot recover it later. However, you can create a new access key any time. Click Done . 10.2. Creating an S3 role Use the following procedure to create an S3 role for AWS STS. Prerequisites You have created an IAM user and stored the access key and the secret access key. Procedure If you are not already, navigate to the IAM dashboard by clicking Dashboard . In the navigation pane, click Roles under Access management . Click Create role . Click Custom Trust Policy , which shows an editable JSON policy. By default, it shows the following information: { "Version": "2012-10-17", "Statement": [ { "Sid": "Statement1", "Effect": "Allow", "Principal": {}, "Action": "sts:AssumeRole" } ] } Under the Principal configuration field, add your AWS ARN information. For example: { "Version": "2012-10-17", "Statement": [ { "Sid": "Statement1", "Effect": "Allow", "Principal": { "AWS": "arn:aws:iam::123492922789:user/quay-user" }, "Action": "sts:AssumeRole" } ] } Click . On the Add permissions page, type AmazonS3FullAccess in the search box. Check the box to add that policy to the S3 role, then click . On the Name, review, and create page, enter the following information: Enter a role name, for example, example-role . Optional. Add a description. Click the Create role button. You are navigated to the Roles page. Under Role name , the newly created S3 should be available. 10.3. 
Configuring Red Hat Quay on OpenShift Container Platform to use AWS STS Use the following procedure to edit your Red Hat Quay on OpenShift Container Platform config.yaml file to use AWS STS. Note You can also edit and re-deploy your Red Hat Quay on OpenShift Container Platform config.yaml file directly instead of using the OpenShift Container Platform UI. Prerequisites You have configured a Role ARN. You have generated a User Access Key. You have generated a User Secret Key. Procedure On the Home page of your OpenShift Container Platform deployment, click Operators Installed Operators . Click Red Hat Quay . Click Quay Registry and then the name of your Red Hat Quay registry. Under Config Bundle Secret , click the name of your registry configuration bundle, for example, quay-registry-config-bundle-qet56 . On the configuration bundle page, click Actions to reveal a drop-down menu. Then click Edit Secret . Update the DISTRIBUTED_STORAGE_CONFIG fields of your config.yaml file with the following information: # ... DISTRIBUTED_STORAGE_CONFIG: default: - STSS3Storage - sts_role_arn: <role_arn> 1 s3_bucket: <s3_bucket_name> 2 storage_path: <storage_path> 3 s3_region: <region> 4 sts_user_access_key: <s3_user_access_key> 5 sts_user_secret_key: <s3_user_secret_key> 6 # ... 1 The unique Amazon Resource Name (ARN) required when configuring AWS STS. 2 The name of your s3 bucket. 3 The storage path for data. Usually /datastorage . 4 Optional. The Amazon Web Services region. Defaults to us-east-1 . 5 The generated AWS S3 user access key required when configuring AWS STS. 6 The generated AWS S3 user secret key required when configuring AWS STS. Click Save . Verification Tag a sample image, for example, busybox , that will be pushed to the repository. For example: USD podman tag docker.io/library/busybox <quay-server.example.com>/<organization_name>/busybox:test Push the sample image by running the following command: USD podman push <quay-server.example.com>/<organization_name>/busybox:test Verify that the push was successful by navigating to the Organization that you pushed the image to in your Red Hat Quay registry Tags . Navigate to the Amazon Web Services (AWS) console and locate your s3 bucket. Click the name of your s3 bucket. On the Objects page, click datastorage/ . On the datastorage/ page, the following resources should be seen: sha256/ uploads/ These resources indicate that the push was successful, and that AWS STS is properly configured.
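The console steps in this chapter can also be scripted. The following sketch uses the AWS CLI to create the same IAM user, access key, and S3 role; it is an illustration rather than part of the documented procedure, assumes the AWS CLI is installed and authenticated, and reuses the example names from this chapter (quay-user, example-role, account 123492922789), which you must replace with your own values.

# Create the IAM user and an access key; store the returned keys securely.
aws iam create-user --user-name quay-user
aws iam create-access-key --user-name quay-user

# Write the custom trust policy that lets the user assume the role.
cat > trust-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Statement1",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::123492922789:user/quay-user" },
      "Action": "sts:AssumeRole"
    }
  ]
}
EOF

# Create the S3 role with that trust policy and attach full S3 access.
aws iam create-role --role-name example-role \
  --assume-role-policy-document file://trust-policy.json
aws iam attach-role-policy --role-name example-role \
  --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess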
[ "{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Sid\": \"Statement1\", \"Effect\": \"Allow\", \"Principal\": {}, \"Action\": \"sts:AssumeRole\" } ] }", "{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Sid\": \"Statement1\", \"Effect\": \"Allow\", \"Principal\": { \"AWS\": \"arn:aws:iam::123492922789:user/quay-user\" }, \"Action\": \"sts:AssumeRole\" } ] }", "DISTRIBUTED_STORAGE_CONFIG: default: - STSS3Storage - sts_role_arn: <role_arn> 1 s3_bucket: <s3_bucket_name> 2 storage_path: <storage_path> 3 s3_region: <region> 4 sts_user_access_key: <s3_user_access_key> 5 sts_user_secret_key: <s3_user_secret_key> 6", "podman tag docker.io/library/busybox <quay-server.example.com>/<organization_name>/busybox:test", "podman push <quay-server.example.com>/<organization_name>/busybox:test" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3.13/html/red_hat_quay_operator_features/configuring-aws-sts-quay
Networking
Networking OpenShift Container Platform 4.7 Configuring and managing cluster networking Red Hat OpenShift Documentation Team
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/networking/index
Chapter 6. Configuring the Image service (glance)
Chapter 6. Configuring the Image service (glance) The Image service (glance) provides discovery, registration, and delivery services for disk and server images. It provides the ability to copy or store a snapshot of a server image. You can use stored images as templates to commission new servers quickly and more consistently than installing a server operating system and individually configuring services. You can configure the following back ends (stores) for the Image service: RADOS Block Device (RBD) is the default back end when you use Red Hat Ceph Storage. For more information, see Configuring the control plane to use the Red Hat Ceph Storage cluster . Block Storage (cinder). Object Storage (swift). NFS. RBD multistore. You can use multiple stores with distributed edge architecture so that you can have an image pool at every edge site. Prerequisites You have the oc command line tool installed on your workstation. You are logged on to a workstation that has access to the RHOSO control plane as a user with cluster-admin privileges. 6.1. Configuring a Block Storage back end for the Image service You can configure the Image service (glance) with the Block Storage service (cinder) as the storage back end. Prerequisites You have planned networking for storage to ensure connectivity between the storage back end, the control plane, and the Compute nodes on the data plane. For more information, see Storage networks in Planning your deployment and Preparing networks for Red Hat OpenStack Services on OpenShift in Deploying Red Hat OpenStack Services on OpenShift . Ensure that placement, network, and transport protocol requirements are met. For example, if your Block Storage service back end is Fibre Channel (FC), the nodes on which the Image service API is running must have a host bus adapter (HBA). For FC, iSCSI, and NVMe over Fabrics (NVMe-oF), configure the nodes to support the protocol and use multipath. For more information, see Configuring transport protocols . Procedure Open your OpenStackControlPlane CR file, openstack_control_plane.yaml , and add the following parameters to the glance template to configure the Block Storage service as the back end: Set replicas to 3 for high availability across APIs. Update the control plane: Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status: The OpenStackControlPlane resources are created when the status is "Setup complete". Tip Append the -w option to the end of the get command to track deployment progress. Additional resources Parameters for configuring the Block Storage back end 6.1.1. Enabling the creation of multiple instances or volumes from a volume-backed image When using the Block Storage service (cinder) as the back end for the Image service (glance), each image is stored as a volume (image volume) ideally in the Block Storage service project owned by the glance user. When a user wants to create multiple instances or volumes from a volume-backed image, the Image service host must attach to the image volume to copy the data multiple times. But this causes performance issues and some of these instances or volumes will not be created, because, by default, Block Storage volumes cannot be attached multiple times to the same host. However, most Block Storage back ends support the volume multi-attach property, which enables a volume to be attached multiple times to the same host. 
Therefore, you can prevent these performance issues by creating a Block Storage volume type for the Image service back end that enables this multi-attach property and configuring the Image service to use this multi-attach volume type. Note By default, only the Block Storage project administrator can create volume types. Procedure Access the remote shell for the OpenStackClient pod from your workstation: Create a Block Storage volume type for the Image service back end that enables the multi-attach property, as follows: If you do not specify a back end for this volume type, then the Block Storage scheduler service determines which back end to use when creating each image volume, therefore these volumes might be saved on different back ends. You can specify the name of the back end by adding the volume_backend_name property to this volume type. You might need to ask your Block Storage administrator for the correct volume_backend_name for your multi-attach volume type. For this example, we are using iscsi as the back-end name. Exit the openstackclient pod: USD exit Open your OpenStackControlPlane CR file, openstack_control_plane.yaml . In the glance template, add the following parameter to the end of the customServiceConfig , [default_backend] section to configure the Image service to use the Block Storage multi-attach volume type: Update the control plane: Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status: The OpenStackControlPlane resources are created when the status is "Setup complete". Tip Append the -w option to the end of the get command to track deployment progress. Additional resources Parameters for configuring the Block Storage back end 6.1.2. Parameters for configuring the Block Storage back end You can add the following parameters to the end of the customServiceConfig , [default_backend] section of the glance template in your OpenStackControlPlane CR file. Table 6.1. Block Storage back-end parameters for the Image service Parameter = Default value Type Description of use cinder_use_multipath = False boolean value Set to True when multipath is supported for your deployment. cinder_enforce_multipath = False boolean value Set to True to abort the attachment of volumes for image transfer when multipath is not running. cinder_mount_point_base = /var/lib/glance/mnt string value Specify a string representing the absolute path of the mount point, the directory where the Image service mounts the NFS share. Note This parameter is only applicable when using an NFS Block Storage back end for the Image service. cinder_do_extend_attached = False boolean value Set to True when the images are > 1 GB to optimize the Block Storage process of creating the required volume sizes for each image. The Block Storage service creates an initial 1 GB volume and extends the volume size in 1 GB increments until it contains the data of the entire image. When this parameter is either not added or set to False , the incremental process of extending the volume is very time-consuming, requiring the volume to be subsequently detached, extended by 1 GB if it is still smaller than the image size and then reattached. By setting this parameter to True, this process is optimized by performing these consecutive 1 GB volume extensions while the volume is attached. Note This parameter requires your Block Storage back end to support the extension of attached (in-use) volumes. 
See your back-end driver documentation for information on which features are supported. cinder_volume_type = __DEFAULT__ string value Specify the name of the Block Storage volume type that can be optimized for creating volumes for images. For example, you can create a volume type that enables the creation of multiple instances or volumes from a volume-backed image. For more information, see Creating a multi-attach volume type . When this parameter is not used, volumes are created by using the default Block Storage volume type. 6.2. Configuring an Object Storage back end You can configure the Image service (glance) with the Object Storage service (swift) as the storage back end. Prerequisites You have planned networking for storage to ensure connectivity between the storage back end, the control plane, and the Compute nodes on the data plane. For more information, see Storage networks in Planning your deployment and Preparing networks for Red Hat OpenStack Services on OpenShift in Deploying Red Hat OpenStack Services on OpenShift . Procedure Open your OpenStackControlPlane CR file, openstack_control_plane.yaml , and add the following parameters to the glance template to configure the Object Storage service as the back end: Set replicas to 3 for high availability across APIs. Update the control plane: Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status: The OpenStackControlPlane resources are created when the status is "Setup complete". Tip Append the -w option to the end of the get command to track deployment progress. 6.3. Configuring an NFS back end You can configure the Image service (glance) with NFS as the storage back end. NFS is not native to the Image service. When you mount an NFS share to use for the Image service, the Image service writes data to the file system but does not validate the availability of the NFS share. If you use NFS as a back end for the Image service, refer to the following best practices to mitigate risk: Use a reliable production-grade NFS back end. Make sure the network is available to the Red Hat OpenStack Services on OpenShift (RHOSO) control plane where the Image service is deployed, and that the Image service has a NetworkAttachmentDefinition custom resource (CR) that points to the network. This configuration ensures that the Image service pods can reach the NFS server. Set export permissions. Write permissions must be present in the shared file system that you use as a store. Limitations In Red Hat OpenStack Services on OpenShift (RHOSO), you cannot set client-side NFS mount options in a pod spec. You can set NFS mount options in one of the following ways: Set server-side mount options. Use /etc/nfsmount.conf . Mount NFS volumes by using PersistentVolumes, which have mount options. Procedure Open your OpenStackControlPlane CR file, openstack_control_plane.yaml , and add the extraMounts parameter in the spec section to add the export path and IP address of the NFS share. The path is mapped to /var/lib/glance/images , where the Image service API stores and retrieves images: Replace <nfs_export_path> with the export path of your NFS share. Replace <nfs_ip_address> with the IP address of your NFS share. This IP address must be part of the overlay network that is reachable by the Image service. Add the following parameters to the glance template to configure NFS as the back end: Set replicas to 3 for high availability across APIs. 
Note When you configure an NFS back end, you must set the type to single . By default, the Image service has a split deployment type for an external API service, which is accessible through the public and administrator endpoints for the Identity service (keystone), and an internal API service, which is accessible only through the internal endpoint for the Identity service. The split deployment type is invalid for a file back end because different pods access the same file share. Update the control plane: Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status: The OpenStackControlPlane resources are created when the status is "Setup complete". Tip Append the -w option to the end of the get command to track deployment progress. 6.4. Configuring multistore for a single Image service API instance You can configure the Image service (glance) with multiple storage back ends. To configure multiple back ends for a single Image service API ( glanceAPI ) instance, you set the enabled_backends parameter with key-value pairs. The key is the identifier for the store and the value is the type of store. The following values are valid: file http rbd swift cinder Prerequisites You have planned networking for storage to ensure connectivity between the storage back ends, the control plane, and the Compute nodes on the data plane. For more information, see Storage networks in Planning your deployment and Preparing networks for Red Hat OpenStack Services on OpenShift in Deploying Red Hat OpenStack Services on OpenShift . Procedure Open your OpenStackControlPlane CR file, openstack_control_plane.yaml , and add the parameters to the glance template to configure the back ends. In the following example, there are two Ceph RBD stores and one Object Storage service (swift) store: Specify the back end to use as the default back end. In the following example, the default back end is ceph-1 : Add the configuration for each back end type you want to use: Add the configuration for the first Ceph RBD store, ceph-0 : Add the configuration for the second Ceph RBD store, ceph-1 : Add the configuration for the Object Storage service store, swift-0 : Update the control plane: Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status: The OpenStackControlPlane resources are created when the status is "Setup complete". Tip Append the -w option to the end of the get command to track deployment progress. 6.5. Configuring multiple Image service API instances You can deploy multiple Image service API ( glanceAPI ) instances to serve different workloads, for example in an edge deployment. When you deploy multiple glanceAPI instances, they are orchestrated by the same glance-operator , but you can connect them to a single back end or to different back ends. Multiple glanceAPI instances inherit the same configuration from the main customServiceConfig parameter in your OpenStackControlPlane CR file. You use the extraMounts parameter to connect each instance to a back end. For example, you can connect each instance to a single Red Hat Ceph Storage cluster or to different Red Hat Ceph Storage clusters. You can also deploy multiple glanceAPI instances in an availability zone (AZ) to serve different workloads in that AZ. 
Note You can only register one glanceAPI instance as an endpoint for OpenStack CLI operations in the Keystone catalog, but you can change the default endpoint by updating the keystoneEndpoint parameter in your OpenStackControlPlane CR file. For information about adding and decommissioning glanceAPIs , see Performing operations with the Image service (glance) . Procedure Open your OpenStackControlPlane CR file, openstack_control_plane.yaml , and add the glanceAPIs parameter to the glance template to configure multiple glanceAPI instances. In the following example, you create three glanceAPI instances that are named api0 , api1 , and api2 : api0 is registered in the Keystone catalog and is the default endpoint for OpenStack CLI operations. api1 and api2 are not default endpoints, but they are active APIs that users can use for image uploads by specifying the --os-image-url parameter when they upload an image. You can update the keystoneEndpoint parameter to change the default endpoint in the Keystone catalog. Add the extraMounts parameter to connect the three glanceAPI instances to a different back end. In the following example, you connect api0 , api1 , and api2 to three different Ceph Storage clusters that are named ceph0 , ceph1 , and ceph2 : Replace <secret_name> with the name of the secret associated to the Ceph Storage cluster that you are using as the back end for the specific glanceAPI , for example, ceph-conf-files-0 for the ceph0 cluster. Update the control plane: Wait until RHOCP creates the resources related to the OpenStackControlPlane CR. Run the following command to check the status: The OpenStackControlPlane resources are created when the status is "Setup complete". Tip Append the -w option to the end of the get command to track deployment progress. 6.6. Split and single Image service API layouts By default, the Image service (glance) has a split deployment type: An external API service, which is accessible through the public and administrator endpoints for the Identity service (keystone) An internal API service, which is accessible only through the internal endpoint for the Identity service The split deployment type is invalid for an NFS or file back end because different pods access the same file share. When you configure an NFS or file back end, you must set the type to single in your OpenStackControlPlane CR. Split layout example In the following example of a split layout type in an edge deployment, two glanceAPI instances are deployed in an availability zone (AZ) to serve different workloads in that AZ. Single layout example In the following example of a single layout type in an NFS back-end configuration, different pods access the same file share: Set replicas to 3 for high availability across APIs. 6.7. Configuring multistore with edge architecture When you use multiple stores with distributed edge architecture, you can have a Ceph RADOS Block Device (RBD) image pool at every edge site. You can copy images between the central site, which is also known as the hub site, and the edge sites. The image metadata contains the location of each copy. For example, an image present on two edge sites is exposed as a single UUID with three locations: the central site plus the two edge sites. This means you can have copies of image data that share a single UUID on many stores. With an RBD image pool at every edge site, you can launch instances quickly by using Ceph RBD copy-on-write (COW) and snapshot layering technology. 
This means that you can launch instances from volumes and have live migration. For more information about layering with Ceph RBD, see Ceph block device layering in the Red Hat Ceph Storage Block Device Guide . When you launch an instance at an edge site, the required image is copied to the local Image service (glance) store automatically. However, you can copy images in advance from the central Image service store to edge sites to save time during instance launch. Refer to the following requirements to use images with edge sites: A copy of each image must exist in the Image service at the central location. You must copy images from an edge site to the central location before you can copy them to other edge sites. You must use raw images when deploying a Distributed Compute Node (DCN) architecture with Red Hat Ceph Storage. RBD must be the storage driver for the Image, Compute, and Block Storage services. For more information about using images with DCN, see Deploying a Distributed Compute Node (DCN) architecture .
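As an illustration only, copying an existing image from the central store to edge stores is typically done with the interoperable image import workflow. The store names (edge0, edge1) and image ID in the following sketch are assumptions for your deployment, and the exact client options can vary between releases, so verify them with glance help image-import before use:

# List the stores that the Image service exposes for this deployment
glance stores-info

# Copy an image that already exists in the central store to two edge stores
glance image-import <image_id> --import-method copy-image --stores edge0,edge1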
[ "apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane spec: glance: template: glanceAPIs: default: replicas: 3 # Configure back end; set to 3 when deploying service customServiceConfig: | [DEFAULT] enabled_backends = default_backend:cinder [glance_store] default_backend = default_backend [default_backend] rootwrap_config = /etc/glance/rootwrap.conf description = Default cinder backend cinder_store_user_name = {{ .ServiceUser }} cinder_store_password = {{ .ServicePassword }} cinder_store_project_name = servicecinder_catalog_info volumev3::publicURL", "oc apply -f openstack_control_plane.yaml -n openstack", "oc get openstackcontrolplane -n openstack", "oc rsh -n openstack openstackclient", "openstack volume type create glance-multiattach openstack volume type set --property multiattach=\"<is> True\" glance-multiattach", "openstack volume type set glance-multiattach --property volume_backend_name=iscsi", "exit", "apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane spec: glance: template: customServiceConfig: | [default_backend] cinder_volume_type = glance-multiattach", "oc apply -f openstack_control_plane.yaml -n openstack", "oc get openstackcontrolplane -n openstack", "apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane spec: glance: template: glanceAPIs: default: replicas: 3 # Configure back end; set to 3 when deploying service customServiceConfig: | [DEFAULT] enabled_backends = default_backend:swift [glance_store] default_backend = default_backend [default_backend] swift_store_create_container_on_put = True swift_store_auth_version = 3 swift_store_auth_address = {{ .KeystoneInternalURL }} swift_store_key = {{ .ServicePassword }} swift_store_user = service:glance swift_store_endpoint_type = internalURL", "oc apply -f openstack_control_plane.yaml -n openstack", "oc get openstackcontrolplane -n openstack", "apiVersion: core.openstack.org/v1beta1 kind: OpenStackControlPlane metadata: name: openstack spec: extraMounts: - extraVol: - extraVolType: Nfs mounts: - mountPath: /var/lib/glance/images name: nfs propagation: - Glance volumes: - name: nfs nfs: path: <nfs_export_path> server: <nfs_ip_address> name: r1 region: r1", "spec: extraMounts: glance: template: glanceAPIs: default: type: single replicas: 3 # Configure back end; set to 3 when deploying service customServiceConfig: | [DEFAULT] enabled_backends = default_backend:file [glance_store] default_backend = default_backend [default_backend] filesystem_store_datadir = /var/lib/glance/images databaseInstance: openstack", "oc apply -f openstack_control_plane.yaml -n openstack", "oc get openstackcontrolplane -n openstack", "spec: glance: template: customServiceConfig: | [DEFAULT] debug=True enabled_backends = ceph-0:rbd,ceph-1:rbd,swift-0:swift", "customServiceConfig: | [DEFAULT] debug=True enabled_backends = ceph-0:rbd,ceph-1:rbd,swift-0:swift [glance_store] default_backend = ceph-1", "customServiceConfig: | [DEFAULT] [ceph-0] rbd_store_ceph_conf = /etc/ceph/ceph-0.conf store_description = \"RBD backend\" rbd_store_pool = images rbd_store_user = openstack", "customServiceConfig: | [DEFAULT] [ceph-0] [ceph-1] rbd_store_ceph_conf = /etc/ceph/ceph-1.conf store_description = \"RBD backend 1\" rbd_store_pool = images rbd_store_user = openstack", "customServiceConfig: | [DEFAULT] [ceph-0] [ceph-1] [swift-0] swift_store_create_container_on_put = True swift_store_auth_version = 3 swift_store_auth_address = {{ .KeystoneInternalURL }} swift_store_key = {{ .ServicePassword }} swift_store_user = service:glance 
swift_store_endpoint_type = internalURL", "oc apply -f openstack_control_plane.yaml -n openstack", "oc get openstackcontrolplane -n openstack", "spec: glance: template: customServiceConfig: | [DEFAULT] enabled_backends = default_backend:rbd [glance_store] default_backend = default_backend [default_backend] rbd_store_ceph_conf = /etc/ceph/ceph.conf store_description = \"RBD backend\" rbd_store_pool = images rbd_store_user = openstack databaseInstance: openstack databaseUser: glance keystoneEndpoint: api0 glanceAPIs: api0: replicas: 1 api1: replicas: 1 api2: replicas: 1", "spec: glance: template: customServiceConfig: | [DEFAULT] extraMounts: - name: api0 region: r1 extraVol: - propagation: - api0 volumes: - name: ceph0 secret: secretName: <secret_name> mounts: - name: ceph0 mountPath: \"/etc/ceph\" readOnly: true - name: api1 region: r1 extraVol: - propagation: - api1 volumes: - name: ceph1 secret: secretName: <secret_name> mounts: - name: ceph1 mountPath: \"/etc/ceph\" readOnly: true - name: api2 region: r1 extraVol: - propagation: - api2 volumes: - name: ceph2 secret: secretName: <secret_name> mounts: - name: ceph2 mountPath: \"/etc/ceph\" readOnly: true", "oc apply -f openstack_control_plane.yaml -n openstack", "oc get openstackcontrolplane -n openstack", "spec: glance: template: customServiceConfig: | [DEFAULT] keystoneEndpoint: api0 glanceAPIs: api0: customServiceConfig: | [DEFAULT] enabled_backends = default_backend:rbd replicas: 1 type: split api1: customServiceConfig: | [DEFAULT] enabled_backends = default_backend:swift replicas: 1 type: split", "spec: extraMounts: glance: template: glanceAPIs: default: type: single replicas: 3 # Configure back end; set to 3 when deploying service customServiceConfig: | [DEFAULT] enabled_backends = default_backend:file [glance_store] default_backend = default_backend [default_backend] filesystem_store_datadir = /var/lib/glance/images databaseInstance: openstack glanceAPIs:" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/configuring_persistent_storage/assembly_glance-configuring-glance_image
Chapter 5. Modifying the DM Multipath configuration file
Chapter 5. Modifying the DM Multipath configuration file By default, DM Multipath provides configuration values for the most common uses of multipathing. In addition, DM Multipath includes support for the most common storage arrays that themselves support DM Multipath. You can override the default configuration values for DM Multipath by editing the /etc/multipath.conf configuration file. If necessary, you can also add an unsupported by default storage array to the configuration file. For information about the default configuration values, including supported devices, run either of the following commands: Note If you run multipath from the initramfs file system and you make any changes to the multipath configuration files, you must rebuild the initramfs file system for the changes to take effect In the multipath configuration file, you need to specify only the sections that you need for your configuration, or that you need to change from the default values. If there are sections of the file that are not relevant to your environment or for which you do not need to override the default values, you can leave them commented out, as they are in the initial file. The configuration file allows regular expression description syntax. 5.1. Configuration file overview The multipath configuration file is divided into the following sections: blacklist Listing of specific devices that will not be considered for multipath. blacklist_exceptions Listing of multipath devices that would otherwise be ignored according to the parameters of the blacklist section. defaults General default settings for DM Multipath. multipaths Settings for the characteristics of individual multipath devices. These values overwrite what is specified in the overrides , devices , and defaults sections of the configuration file. devices Settings for the individual storage controllers. These values overwrite what is specified in the defaults section of the configuration file. If you are using a storage array that is not supported by default, you may need to create a devices subsection for your array. overrides Settings that are applied to all devices. These values overwrite what is specified in the devices and defaults sections of the configuration file. When the system determines the attributes of a multipath device, it checks the settings of the separate sections from the multipath.conf file in the following order: multipaths section overrides section devices section defaults section 5.2. Configuration file defaults The /etc/multipath.conf configuration file contains a defaults section. This section includes the default configuration of Device Mapper (DM) Multipath. The default values might differ based on your initial device settings. The following are the ways to view the default configurations: If you install your machine on a multipath device, the default multipath configuration applies automatically. The default configuration includes the following: For a complete list of the default configuration values, execute either multipath -t or multipathd show config command. For a list of configuration options with descriptions, see the multipath.conf man page. If you did not set up multipathing during installation, execute the mpathconf --enable command to get the default configuration. The following table describes the attributes, set in the defaults section of the multipath.conf configuration file. Attributes specified in the multipaths section have higher priority over values in the devices section. 
Attributes specified in the devices section have higher priority over the default values. Use the overrides section to set attribute values for all device types, even if those device types have a builtin configuration entry in the devices section. The overrides section has no mandatory attributes. However, any attribute set in this section takes precedence over values in the devices or defaults sections. Table 5.1. Multipath configuration defaults Attribute Description polling_interval Specifies the interval between two path checks in seconds. For properly functioning paths, the interval between checks gradually increases to max_polling_interval . The default value is 5 . max_polling_interval Specifies the maximum length of the interval between two path checks in seconds. The default value is 4 * polling_interval . find_multipaths Defines the mode for setting up multipath devices. Available values include: no : If find_multipaths is set to no , multipath applies rules as with the strict value and the multipathd daemon applies rules as with the greedy value. yes : If there are at least two devices that are not on the blacklist with the same World Wide Identifier (WWID), or if multipath created a multipath device with a device WWID before (even if that multipath device is no longer present), then the device is treated as a multipath device path. greedy : Both multipathd and multipath treat every non-blacklisted device as a multipath device path. smart : Multipath automatically considers that every non-blacklisted device is a multipath device path. If a second path, with the same WWID does not appear within the time set for find_multipaths_timeout , multipath releases the device and enables it for use by the rest of the system. The multipathd daemon applies rules as with the yes value. strict : This value only treats a device as a multipath path, if you create a multipath device with the device WWID. The default value is off . The default multipath.conf file sets find_multipaths to yes . find_multipaths_timeout This represents the timeout in seconds, to wait for additional paths after detecting the first one, if find_multipaths smart is set. Possible values include: Positive value : If set with a positive value, the timeout applies for all non-blacklisted devices. Negative value : If set with a negative value, the timeout applies only to known devices that have an entry in the multipath hardware table, either in the built-in table, or in a device section. Other unknown devices use a timeout of only 1 second to avoid booting delays. 0 : The system applies the built-in default for this attribute. The default value for known hardware is -10 . This means that known devices have a 10 second timeout. Unknown devices have a 1 second timeout. If the find_multipaths attribute has a value other than smart , this attribute has no effect. uxsock_timeout Set the timeout of multipathd interactive commands in milliseconds. For systems with a large number of devices, multipathd interactive commands might timeout and fail. If this happens, increase this timeout to resolve the issue. The default value is 4000 . reassign_maps Enable reassigning of device-mapper maps. With this option, the multipathd daemon remaps existing device-mapper maps to always point to the multipath device, not the underlying block devices. Possible values are yes and no . The default value is no . verbosity The default verbosity value is 2 . Higher values increase the verbosity level. Valid levels are between 0 and 4 . 
path_selector Specifies the default algorithm to use in determining what path to use for the I/O operation. Possible values include: round-robin 0 : Loop through every path in the path group, sending the same number of I/O requests, determined by rr_min_io or rr_min_io_rq , to each. queue-length 0 : Send the group of I/O requests down the path with the least number of outstanding I/O requests. service-time 0 : Send the group of I/O requests down the path with the shortest estimated service time. This is determined by dividing the total size of the outstanding I/O to each path by the relative throughput. The default value is service-time 0 . path_grouping_policy Specifies the default path grouping policy to apply to unspecified multipaths. Possible values include: failover : 1 path per priority group. multibus : All valid paths in 1 priority group. group_by_serial : 1 priority group per detected serial number. group_by_prio : 1 priority group per path priority value. Priorities are determined by the prio attribute. group_by_node_name : 1 priority group per target node name. The /sys/class/fc_transport/target*/node_name directory includes target node names. The default value is failover . uid_attrs Set this option to activate merging uevents by WWID. This action might improve uevent processing efficiency. It is also an alternative method to configure the udev properties to use for determining unique path identifiers (WWIDs). The value of this option is a space separated list of records like type:ATTR , where type is matched against the beginning of the device node name, and ATTR is the name of the udev property to use for matching devices. If you configure this option and it matches the device node name of a device, it overrides any other configured methods for determining the WWID for this device. You can enable uevent merging by setting this value to sd:ID_SERIAL dasd:ID_UID nvme:ID_WWN . The default is unset . prio Specifies the default function to call to obtain a path priority value. For example, the ALUA bits in SPC-3 provide an exploitable prio value. Possible values include: const : Set a priority of 1 to all paths. emc : Generate the path priority for EMC arrays. sysfs : Generate the path priority from sysfs . This prioritizer accepts the optional prio_arg value exclusive_pref_bit . The sysfs value uses the sysfs attributes access_state and preferred_path . alua : Generate the path priority based on the SCSI-3 ALUA settings. If you specify prio alua and prio_args exclusive_pref_bit in your device configuration, multipath creates a path group that contains only the path with the exclusive_pref_bit set, and assigns that path group the highest priority. Refer to the multipath.conf(5) man page for more information about this type of cases. ontap : Generate the path priority for NetApp arrays. rdac : Generate the path priority for LSI/Engenio RDAC controller. hp_sw : Generate the path priority for Compaq/HP controller in active/standby mode. hds : Generate the path priority for Hitachi HDS Modular storage arrays. random : Generate a random priority between 1 and 10. weightedpath : Generate the path priority based on the regular expression and the provided priority as an argument. Requires a prio_args keyword. path_latency : Generate the path priority based on a latency algorithm. Requires a prio_args keyword. ana : Generate the path priority based on the NVMe ANA settings. This priority routine is hardware dependent. datacore : Generate the path priority for some DataCore storage arrays. 
Requires a prio_args keyword. This priority routine is hardware dependent. iet : Generate path priority for iSCSI targets based on IP their address. Requires a prio_args keyword. This priority routine is available only with iSCSI. The default value depends on the detect_prio setting. If detect_prio is set to yes , then the default priority algorithm is sysfs . The only exception is for NetAPP E-Series, where the default is alua . If detect_prio is set to no , the default priority algorithm is const . prio_args Arguments to pass to the prio function. This applies only to the following prioritizers: weighted : Needs a value of the form <hbtl,devname,serial,wwn> <regex1> <prio1> <regex2> <prio2> hbtl : The Regex value can be of SCSI H:B:T:L format. For example: 1:0:.:. , *:0:0: . devname : The Regex value can be in device name format. For example: sda , sd.e . serial : The Regex value can be in serial number format. Look up serial through sysfs , or by running the command multipathd show paths format "%z" . wwn : The Regex value can be in the form host_wwnn:host_wwpn:target_wwnn:target_wwpn . These values can be looked up through sysfs or by running the command multipathd show paths format %N:%R:%n:%r" . path_latency : Requires a value in the form io_num= <integer> base_num=<integer> . io_num : The number of read IOs, continuously sent to the current path. This value helps calculate the average path latency. Valid values include Integer , [2, 200] . base_num : The base number value of logarithmic scale. This value helps to partition different priority ranks. Valid values include Integer , [2, 10] . The maximum average latency value is 100s and the minimum average latency value is 1us . alua : If the exclusive_pref_bit value is set, paths with the preferred_path_bit set always create their own path group. sysfs : If the exclusive_pref_bit value is set, paths with the preferred_path_bit set always create their own path group. datacore : Requires a value of the form timeout=<milliseconds> preferredsds=<name> . preferredsds : This value is mandatory and it represents the preferred SDS name. timeout : This value is optional. Set the timeout for the inquiry in milliseconds. iet : Requires a value of the form preferredip=<ip_address> . preferredip : This value is mandatory. This is the preferred IP address, in dotted decimal notation, for iSCSI targets. The default value is unset . features The default extra features of multipath devices, using the format: "number_of_features_plus_arguments feature1 ... " . Possible values for features include: queue_if_no_path : The same as setting no_path_retry to queue . pg_init_retries n : Retry path group initialization up to n times before failing. The number must be between 1 and 50. pg_init_delay_msecs msecs : Number of milliseconds before pg_init retry initiates. The number must be between 0 and 60000. queue_mode mode : Select the queueing mode per multipath device. The mode value options are bio , rq or mq . These correspond to bio-based, request-based, and block-multiqueue request-based ( blk-mq ), respectively. By default, the value is unset . The default can also depend on the kernel parameter dm_mod.use_blk_mq . The two options are mq if it is already set in the parameter, or rq otherwise. path_checker Specifies the default method to determine the state of the paths. Possible values include: readsector0 : Read the first sector of the device. tur : Issue a TEST UNIT READY command to the device. 
emc_clariion : Query the EMC Clariion specific EVPD page 0xC0 to determine the path. hp_sw : Check the path state for HP storage arrays with Active/Standby firmware. rdac : Check the path state for LSI/Engenio RDAC storage controller. directio : Read the first sector with direct I/O. cciss_tur : Check the path state for HP/COMPAQ Smart Array(CCISS) controllers. This is hardware dependent. none : Does not check the device. Falls back to use values retrieved from sysfs . The default value is tur . alias_prefix This attribute represents the user_friendly_names prefix. The default value is mpath . failback Manages path group failback. Possible values include: immediate : Specifies immediate failback to the highest priority path group that contains active paths. manual : Specifies that there is no immediate failback, but that failback can happen only with operator intervention. followover : Specifies that automatic failback can only be performed when the first path of a path group becomes active. This keeps a node from automatically failing back, when another node requested the failover. A numeric value greater than zero, specifies deferred failback, and is expressed in seconds. The default value is manual . rr_min_io Specifies the number of I/O requests to route to a path before switching to the path in the current path group. This setting is only for systems running kernels older than 2.6.31. Newer systems should use rr_min_io_rq . The default value is 1000 . rr_min_io_rq Specifies the number of I/O requests to route to a path, before switching to the path in the current path group. Uses a request-based device-mapper-multipath. This setting can be used on systems running current kernels. On systems running kernels older than 2.6.31, use rr_min_io . The default value is 1 . no_path_retry A numeric value for this attribute specifies the number of times that the path checker must fail for all paths in a multipath device, before disabling queuing. A value of fail indicates immediate failure, without queuing. A value of queue indicates that queuing should not stop until the path is fixed. The default value is fail . user_friendly_names Possible values include: yes : Specifies that the system can use the /etc/multipath/bindings file to assign a persistent and unique alias to the multipath, in the form of mpath<n> . no : The system uses the WWID as the alias for the multipath. Any device-specific alias you set in the multipaths section of the configuration file, overrides this name. The default value is no . queue_without_daemon If set to no , the multipathd daemon disables queuing for all devices, when it is shut down. The default value is no . flush_on_last_del If set to yes , the multipathd daemon disables queuing when the last path to a device is deleted. The default value is no. max_fds Sets the maximum number of open file descriptors that can be opened by multipath and the multipathd daemon. This is equivalent to the ulimit -n command. The default value is max , which sets this to the system limit from /proc/sys/fs/nr_open . checker_timeout The timeout to use for prioritizers and path checkers that issue SCSI commands with an explicit timeout, in seconds. The sys/block/sd<x>/device/timeout directory contains the default value. fast_io_fail_tmo The number of seconds the SCSI layer waits after a problem is detected on an FC remote port, before failing I/O to devices on that remote port. This value must be smaller than the value of dev_loss_tmo . Setting this to off disables the timeout. 
The default value is 5 . The fast_io_fail_tmo option overrides the values of the recovery_tmo and replacement_timeout options of the underlying path devices. dev_loss_tmo The number of seconds the SCSI layer waits after a problem is detected on an FC remote port, before removing it from the system. Setting this to infinity will set this to 2147483647 seconds, or 68 years. The OS determines the default value. eh_deadline Specifies the maximum number of seconds the SCSI layer spends performing error handling, when SCSI devices fail. After this timeout, the scsi layer performs a full HBA reset. Setting this is necessary in cases where the rport is never lost, so fast_io_fail_tmo and dev_loss_tmo never trigger, but scsi commands still hang. When the SCSI error handler performs the HBA reset, this affects all target paths on that HBA. The eh_deadline value should only be set in cases where all targets on the affected HBAs are multipathed. The default value is unset . detect_prio If this is set to yes , multipath detects if the device is a SCSI device that supports Asymmetric Logical Unit Access (ALUA), or a NVMe device that supports Asymmetric Namespace Access (ANA). If the device supports ALUA, multipath automatically assigns it the alua prioritizer. If the device supports ANA, multipath automatically assigns it the ana prioritizer. If detect_prio is set to no , or if the device does not support ALUA or ANA, the prio attribute sets the prioritizer. The default value is yes . uid_attribute Specifies the udev attribute to use for the device WWID. The default value is device dependent: ID_SERIAL for SCSI devices, ID_UID for DASD devices, and ID_WWN for NVMe devices. force_sync If set to yes , this parameter prevents path checkers from running in async mode. This means that only one checker runs at a time. This is useful in cases where many multipathd checkers run in parallel, and can cause significant CPU pressure. The default value is no . strict_timing If set to yes , the multipathd daemon starts a new path checker loop after exactly one second, so that each path check occurs at the exactly set seconds for polling_interval . On busy systems, path checks might take longer than one second. The missing ticks are accounted for in the round. A warning prints if path checks take longer than the set seconds for polling_interval . The default value is no . retrigger_tries , retrigger_delay Use the retrigger_tries and retrigger_delay parameters in conjunction to make multipathd retrigger uevents. If udev fails to completely process the original uevents , this leaves multipath unable to use the device. The retrigger_tries parameter sets the number of times that multipath tries to retrigger a uevent , in case a device is not completely set up. The retrigger_delay parameter sets the number of seconds between retries. Both of these options accept numbers greater than or equal to 0 . Setting the retrigger_tries parameter to 0 disables retries. Setting the retrigger_delay parameter to 0 causes the uevent to be reissued on the loop of the path checker. The default value of retrigger_tries is 3 . The default value of retrigger_delay is 10. missing_uev_wait_timeout This attribute controls the number of seconds the multipathd daemon waits to receive a change event from udev for a newly created multipath device. After that it automatically enables device reloads. In most cases, multipathd delays reloads on a device, until it receives a change uevent from the initial table load. The default value is 30 . 
deferred_remove If set to yes , multipathd performs a deferred remove, instead of a regular remove, when the last path device is deleted. This ensures that if a multipathed device is in use when a regular remove is performed and the remove fails, the device is automatically removed, when the last user closes the device. The default value is no . san_path_err_threshold , san_path_err_forget_rate , san_path_err_recovery_time If you set all three of these attributes to integers greater than zero, they enable the multipathd daemon to keep shaky paths from reinstating, by monitoring how frequently the path checker fails. If a path checker fails a path more than the value in the san_path_err_threshold attribute, within san_path_err_forget_rate checks, then the multipathd daemon does not reinstate the path until the value of the san_path_err_recovery_time attribute in seconds passes, without any path checker failures. See the Shaky paths detection section of the multipath.conf(5) for more information. The default value is no . marginal_path_double_failed_time , marginal_path_err_sample_time , marginal_path_err_rate_threshold , marginal_path_err_recheck_gap_time If marginal_path_double_failed_time , marginal_path_err_rate_threshold , and marginal_path_err_recheck_gap_time are set to integers greater than 0 and marginal_path_err_sample_time is set to an integer greater than 120 , they enable the multipathd daemon to keep shaky paths from reinstating, by testing the I/O failure rate of paths that repeatedly fail. If a path fails twice within the value set in the marginal_path_double_failed_time attribute in seconds, the multipathd daemon does not immediately reinstate it, when the path checker determines that it is back up. Instead, multipathd issues a steady stream of read I/Os to the path for the value set in the marginal_path_err_sample_time attribute in seconds. If there are more than the value set in the marginal_path_err_rate_threshold attribute number of errors per thousand I/Os, multipathd waits for marginal_path_err_recheck_gap_time seconds, and then starts another cycle of testing the path with read I/Os. Otherwise, multipathd reinstates the path. See the Shaky paths detection section of the multipath.conf(5) for more information. The default value is no . marginal_pathgroups Possible values include: on : When one of the marginal path detecting methods determines that a path is marginal, the system reinstates the path and places it in a separate pathgroup. This group comes into effect only after all the non-marginal path groups are tried first. This prevents the possibility of IO errors occurring while the system can still use some marginal paths. The path returns to a regular path group as soon as it passes monitoring for a configured time. off : The delay_*_checks , marginal_path_* , and san_path_err_* attributes keep the system from reinstating any marginal , or shaky paths, until they are monitored for a configured time. fpin : The multipathd daemon receives fpin notifications, sets path states to marginal , and regroups paths, as described for the on value. The marginal_path_* and san_path_err_* attributes are implicitly set to no . See the Shaky paths detection section of the multipath.conf(5) for more information. The default value is no . log_checker_err If set to once , multipathd logs the first path checker error at verbosity level 2. The system logs any further errors at verbosity level 3, until the device is restored. 
If the log_checker_err parameter is set to always , multipathd always logs the path checker error at verbosity level 2. The default value is always . skip_kpartx If set to yes , kpartx does not automatically create partitions on the device. This enables you to create a multipath device, without creating partitions, even if the device has a partition table. The default value of this option is no . max_sectors_kb Using this option, you can set the max_sectors_kb device queue parameter to the specified value on all underlying paths of a multipath device, before the first activation of a multipath device. Whenever the system creates a new multipath device, the device inherits the max_sectors_kb value from the path devices. Manually raising this value for the multipath device, or lowering this value for the path devices, can cause multipath to create I/O operations larger than the path devices allow. Using the max_sectors_kb parameter is an easy way to set these values, before the creation of a multipath device on top of the path devices, and prevent passing any invalid-sized I/O operations. If you do not set this parameter, the path devices driver sets it automatically, and the multipath device inherits it from the path devices. ghost_delay This attribute sets the number of seconds that multipath waits after creating a device with only ghost paths, before marking it ready for use in systemd . This gives the active paths time to appear before the multipath runs the hardware handler to switch the ghost paths to active ones. Setting this to 0 or no makes multipath immediately mark a device with only ghost paths as ready. The default value is no . enable_foreign This attribute enables or disables foreign libraries. The value is a regular expression. Foreign libraries are loaded if their name matches the expression. By default, all libraries are enabled. However, the default configuration file also sets this attribute to "^USD" , which disables all foreign libraries. recheck_wwid If set to yes , when a failed path is restored, the multipathd daemon rechecks the path WWID. If there is a change in the WWID, the path is removed from the current multipath device, and added again as a new path. The multipathd daemon also checks the path WWID again if it is manually re-added. This option only works for SCSI devices with configuration to use the default uid_attribute , ID_SERIAL , or sysfs , for getting their WWID. The default value is no . remove_retries This option sets the number of times multipath retries removing a device that is in use. Between each attempt, multipath becomes inactive for 1 second. The default value is 0 , which means that multipath does not retry the remove. detect_checker If set to yes , multipath checks if the device supports ALUA or Redundant Disk Array Controller (RDAC). If the device supports ALUA, multipath assigns it the tur path_checker . If the device supports RDAC, the multipathd daemon assigns it the rdac path_checker . If the device does not support ALUA or RDAC, or the detect_checker is set to no , the path_checker attribute sets the path checker. The default value is yes . reservation_key The mpathpersist parameter uses this service action reservation key. It must be set for all multipath devices using persistent reservations, and it must be the same as the RESERVATION KEY field of the PERSISTENT RESERVE OUT parameter list, which contains an 8-byte value provided by the application client to the device server to identify the I_T nexus. 
If you use the --param-aptpl option when registering the key with mpathpersist , you must append :aptpl to the end of the reservation key. This parameter can also be set to file , which causes mpathpersist to automatically store the RESERVATION KEY used to register the multipath device in the prkeys file. The multipathd daemon then uses this key to register additional paths as they appear. When you remove the registration, this automatically removes the RESERVATION KEY from the prkeys file. It is unset by default. If persistent reservations are necessary, it is recommended to set this attribute to file . all_tg_pt If this option is set to yes when mpathpersist registers keys, it treats a registered key from one host to one target port, as going from one host to all target ports. This must be set to yes to successfully use mpathpersist on arrays that automatically set and clear registration keys on all target ports from a host, instead of per target port per host. The default value is no . Additional resources multipath.conf(5) man page 5.3. Configuration file multipaths section Set attributes of individual multipath devices by using the multipaths section of the multipath.conf configuration file. Device Mapper (DM) Multipath uses these attributes to override all other configuration settings, including those from the overrides section. Refer to Configuration file overrides section for a list of attributes from the overrides section. The multipaths section recognizes only the multipath subsection as an attribute. The following table shows the attributes that you can set in the multipath subsection, for each specific multipath device. These attributes apply only to one specified multipath. If several multipath subsections match a specific device World Wide Identifier (WWID), the contents of those subsections merge. The settings from latest entries have priority over any versions. Table 5.2. Multipath subsection attributes Attribute Description wwid Specifies the WWID of the multipath device, to which the multipath attributes apply. This parameter is mandatory for this section of the multipath.conf file. alias Specifies the symbolic name for the multipath device, to which the multipath attributes apply. If you are using user_friendly_names , do not set this value to mpath <n> . This might cause conflicts with an automatically assigned user friendly name, and give you incorrect device node names. The attributes in the following list are optional. If you do not set them, default values from the overrides , devices , or defaults sections apply. Refer to Configuration file defaults for a full description of these attributes. path_grouping_policy path_selector prio prio_args failback no_path_retry rr_min_io rr_min_io_rq flush_on_last_del features reservation_key user_friendly_names deferred_remove san_path_err_threshold san_path_err_forget_rate san_path_err_recovery_time marginal_path_err_sample_time marginal_path_err_rate_threshold marginal_path_err_recheck_gap_time marginal_path_double_failed_time delay_watch_checks delay_wait_checks skip_kpartx max_sectors_kb ghost_delay The following example shows multipath attributes specified in the configuration file for two specific multipath devices. The first device has a WWID of 3600508b4000156d70001200000b0000 and a symbolic name of yellow . The second multipath device in the example has a WWID of 1DEC _ 321816758474 and a symbolic name of red . Example 5.1. 
Multipath attributes specification Additional resources multipath.conf(5) man page Configuration file defaults Configuration file overrides section 5.4. Configuration file devices section Use the devices section of the multipath.conf configuration file to define settings for individual storage controller types. Values set in this section overwrite specified values in the defaults section. The system identifies the storage controller types by the vendor , product , and revision keywords. These keywords are regular expressions and must match the sysfs information about the specific device. The devices section recognizes only the device subsection as an attribute. If there are multiple keyword matches for a device, the attributes of all matching entries apply to it. If an attribute is specified in several matching device subsections, later versions of entries have priority over any entries. Important Configuration attributes in the latest version of the device subsections override attributes in any devices subsections and from the defaults section. The following table shows the attributes that you can set in the device subsection. Table 5.3. Devices section attributes Attribute Description vendor Specifies the regular expression to match the device vendor name. This is a mandatory attribute. product Specifies the regular expression to match the device product name. This is a mandatory attribute. revision Specifies the regular expression to match the device product revision. If the revision attribute is missing, all device revisions match. product_blacklist Multipath uses this attribute to create a device blacklist entry that has a vendor attribute that matches the vendor attribute of this device entry, and a product attribute that matches this product_blacklist attribute. vpd_vendor Shows the vendor specific Vital Product Data (VPD) page information, using the VPD page abbreviation. The multipathd daemon uses this information to gather device specific information. Currently only the hp3par VPD page is supported. hardware_handler Specifies the hardware handler to use for a particular device type. All possible values are hardware dependent and include: emc : Hardware handler for DGC class arrays, as CLARiiON CX/AX and EMC VNX and Unity families. rdac : Hardware handler for LSI/Engenio/NetApp RDAC class, as NetApp SANtricity E/EF Series, and OEM arrays from IBM DELL SGI STK and SUN. hp_sw : Hardware handler for HP/COMPAQ/DEC HSG80 and MSA/HSV arrays with Active/Standby mode exclusively. alua : Hardware handler for SCSI-3 ALUA compatible arrays. ana : Hardware handler for NVMe ANA compatible arrays. The default value is unset . Important Linux kernels, versions 4.3 and newer, automatically attach a device handler to known devices. This includes all devices supporting SCSI-3 ALUA). The kernel does not enable changing the handler later on. Setting the hardware_handler attribute for such devices on these kernels takes no effect. The attributes in the following list are optional. If you do not set them, the default values from the defaults sections apply. Refer to Configuration file defaults for a full description of these attributes. 
path_grouping_policy uid_attribute getuid_callout path_selector path_checker prio prio_args failback alias_prefix no_path_retry rr_min_io rr_min_io_rq flush_on_last_del features reservation_key user_friendly_names deferred_remove san_path_err_threshold san_path_err_forget_rate san_path_err_recovery_time marginal_path_err_sample_time marginal_path_err_rate_threshold marginal_path_err_recheck_gap_time marginal_path_double_failed_time delay_watch_checks delay_wait_checks skip_kpartx max_sectors_kb ghost_delay all_tg_pt Additional resources multipath.conf(5) man page Configuration file defaults 5.5. Configuration file overrides section The overrides section recognizes the optional protocol subsection, and can contain multiple protocol subsections. The system matches path devices against the protocol subsection, using the mandatory type attribute. Attributes in a matching protocol subsection have priority over attributes in the rest of the overrides section. If there are multiple matching protocol subsections, later entries have higher priority. The attributes in the following list are optional. If you do not set them, default values from the devices or defaults sections apply. path_grouping_policy uid_attribute getuid_callout path_selector path_checker alias_prefix features prio prio_args failback no_path_retry rr_min_io rr_min_io_rq flush_on_last_del fast_io_fail_tmo dev_loss_tmo eh_deadline user_friendly_names retain_attached_hw_handler detect_prio detect_checker deferred_remove san_path_err_threshold san_path_err_forget_rate san_path_err_recovery_time marginal_path_err_sample_time marginal_path_err_rate_threshold marginal_path_err_recheck_gap_time marginal_path_double_failed_time delay_watch_checks delay_wait_checks skip_kpartx max_sectors_kb ghost_delay all_tg_pt The protocol subsection recognizes the following mandatory attribute: Table 5.4. Protocol subsection attribute Attribute Description type Specifies the protocol string of the path device. Possible values include: scsi:fcp , scsi:spi , scsi:ssa , scsi:sbp , scsi:srp , scsi:iscsi , scsi:sas , scsi:adt , scsi:ata , scsi:unspec , ccw , cciss , nvme , undef This attribute is not a regular expression. The path device protocol string must match exactly. The attributes in the following list are optional for the protocol subsection. If you do not set them, default values from the overrides , devices or defaults sections apply. fast_io_fail_tmo dev_loss_tmo eh_deadline Additional resources multipath.conf(5) man page Configuration file defaults 5.6. DM Multipath overrides of the device timeout The recovery_tmo sysfs option controls the timeout for a particular iSCSI device. The following options globally override the recovery_tmo values: The replacement_timeout configuration option globally overrides the recovery_tmo value for all iSCSI devices. For all iSCSI devices that are managed by DM Multipath, the fast_io_fail_tmo option in DM Multipath globally overrides the recovery_tmo value. The fast_io_fail_tmo option in DM Multipath also overrides the fast_io_fail_tmo option in Fibre Channel devices. The DM Multipath fast_io_fail_tmo option takes precedence over replacement_timeout . Red Hat does not recommend using replacement_timeout to override recovery_tmo in devices managed by DM Multipath because DM Multipath always resets recovery_tmo , when the multipathd service reloads. 5.7. 
Modifying multipath configuration file defaults The /etc/multipath.conf configuration file includes a defaults section that sets the user_friendly_names parameter to yes , as follows. This overwrites the default value of the user_friendly_names parameter. The default values that are set in the defaults section on the multipath.conf file , are used by DM Multipath unless they are overwritten by the attributes specified in the devices, multipath, or overrides sections of the multipath.conf file. Procedure View the /etc/multipath.conf configuration file, which includes a template of configuration defaults: Overwrite the default value for any of the configuration parameters. You can copy the relevant line from this template into the defaults section and uncomment it. For example, to overwrite the path_grouping_policy parameter to multibus instead of the default value of failover , copy the appropriate line from the template to the initial defaults section of the configuration file, and uncomment it, as follows: Validate the /etc/multipath.conf file after modifying the multipath configuration file by running one of the following commands: To display any configuration errors, run: To display the new configuration with the changes added, run: Reload the /etc/multipath.conf file and reconfigure the multipathd daemon for changes to take effect: Additional resources multipath.conf(5) and multipathd(8) man pages 5.8. Modifying multipath settings for specific devices In the multipaths section of the multipath.conf configuration file, you can add configurations that are specific to an individual multipath device, referenced by the mandatory WWID parameter. These defaults are used by DM Multipath and override attributes set in the overrides , defaults , and devices sections of the multipath.conf file. There can be any number of multipath subsections in the multipaths section. Procedure Modify the multipaths section for specific multipath device. The following example shows multipath attributes specified in the configuration file for two specific multipath devices: The first device has a WWID of 3600508b4000156d70001200000b0000 and a symbolic name of yellow . The second multipath device in the example has a WWID of 1DEC_321816758474 and a symbolic name of red . In this example, the rr_weight attribute is set to priorities . Validate the /etc/multipath.conf file after modifying the multipath configuration file by running one of the following commands: To display any configuration errors, run: To display the new configuration with the changes added, run: Reload the /etc/multipath.conf file and reconfigure the multipathd daemon for changes to take effect: Additional resources multipath.conf(5) man page 5.9. Modifying the multipath configuration for specific devices with protocol You can configure multipath device paths, based on their transport protocol. By using the protocol subsection of the overrides section in the /etc/multipath.conf file, you can override the multipath configuration settings on certain paths. This enables access to multipath devices over multiple transport protocols, like Fiber Channel (FC) or Internet Small Computer Systems Interface (iSCSI). Options set in the protocol subsection override values in the overrides , devices and defaults sections. These options apply only to devices using a transport protocol which matches the type parameter of the subsection. Prerequisites You have configured Device Mapper (DM) multipath in your system. 
You have multipath devices where not all paths use the same transport protocol. Procedure View the specific path protocol by running the following: Edit the overrides section of the /etc/multipath.conf file, by adding protocol subsections for each multipath type. Settings for path devices, which use the scsi:fcp protocol: Settings for path devices, which use the scsi:iscsi protocol: Settings for path devices, which use all other protocols: The overrides section can include multiple protocol subsections. Important The protocol subsection must include a type parameter. The configuration of all paths with a matching type parameter is then updated with the rest of the parameters listed in the protocol subsection. Additional resources multipath.conf(5) man page 5.10. Modifying multipath settings for storage controllers The devices section of the multipath.conf configuration file sets attributes for individual storage devices. These attributes are used by DM Multipath unless they are overwritten by the attributes specified in the multipaths or overrides sections of the multipath.conf file for paths that contain the device. These attributes override the attributes set in the defaults section of the multipath.conf file. Procedure View the information about the default configuration value, including supported devices: Many devices that support multipathing are included by default in a multipath configuration. Optional: If you need to modify the default configuration values, you can overwrite the default values by including an entry in the configuration file for the device that overwrites those values. You can copy the device configuration defaults for the device that the multipathd show config command displays and override the values that you want to change. Add a device that is not configured automatically by default to the devices section of the configuration file by setting the vendor and product parameters. Find these values by opening the /sys/block/ device_name /device/vendor and /sys/block/ device_name /device/model files where device_name is the device to be multipathed, as mentioned in the following example: Optional: Specify the additional parameters depending on your specific device: active/active device Usually there is no need to set additional parameters in this case. If required, you might set path_grouping_policy to multibus . Other parameters you may need to set are no_path_retry and rr_min_io . active/passive device If it automatically switches paths with I/O to the passive path, you need to change the checker function to one that does not send I/O to the path to test if it is working, otherwise, your device will keep failing over. This means that you have set the path_checker to tur , which works for all SCSI devices that support the Test Unit Ready command, which most do. If the device needs a special command to switch paths, then configuring this device for multipath requires a hardware handler kernel module. The current available hardware handler is emc . If this is not sufficient for your device, you might not be able to configure the device for multipath. 
The following example shows a device entry in the multipath configuration file: Validate the /etc/multipath.conf file after modifying the multipath configuration file by running one of the following commands: To display any configuration errors, run: To display the new configuration with the changes added, run: Reload the /etc/multipath.conf file and reconfigure the multipathd daemon for changes to take effect: Additional resources multipath.conf(5) and multipathd(8) man pages 5.11. Setting multipath values for all devices Using the overrides section of the multipath.conf configuration file, you can set a configuration value for all of your devices. This section supports all attributes that are supported by both the devices and defaults section of the multipath.conf configuration file, which is all of the devices section attributes except vendor , product , and revision . DM Multipath uses these attributes for all devices unless they are overwritten by the attributes specified in the multipaths section of the multipath.conf file for paths that contain the device. These attributes override the attributes set in the devices and defaults sections of the multipath.conf file. Procedure Override device specific settings. For example, you might want all devices to set no_path_retry to fail . Use the following command to turn off queueing, when all paths have failed. This overrides any device specific settings. Validate the /etc/multipath.conf file after modifying the multipath configuration file by running one of the following commands: To display any configuration errors, run: To display the new configuration with the changes added, run: Reload the /etc/multipath.conf file and reconfigure the multipathd daemon for changes to take effect: Additional resources multipath.conf(5) man page
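As a consolidated illustration of the precedence rules described in this chapter (multipaths over overrides, overrides over devices, devices over defaults), a /etc/multipath.conf file might combine the sections as in the following sketch. The WWID, alias, and vendor and product strings are example values taken from the examples above, not recommendations for any particular array:

defaults {
        user_friendly_names yes
        find_multipaths yes
}

blacklist {
        # devices listed here are never considered for multipath
}

devices {
        device {
                vendor  "WINSYS"
                product "SF2372"
                path_grouping_policy multibus
                path_checker tur
        }
}

overrides {
        # applies to all devices unless a multipaths entry overrides it
        no_path_retry fail
}

multipaths {
        multipath {
                wwid 3600508b4000156d70001200000b0000
                alias yellow
                no_path_retry 5   # overrides the value set in the overrides section for this device only
        }
}

After editing the file, validate it with multipath -t and reload the configuration with service multipathd reload, as described in the procedures above.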
[ "multipathd show config multipath -t", "multipaths { multipath { wwid 3600508b4000156d70001200000b0000 alias yellow path_grouping_policy multibus path_selector \"round-robin 0\" failback manual no_path_retry 5 } multipath { wwid 1DEC _ 321816758474 alias red } }", "defaults { user_friendly_names yes }", "#defaults { polling_interval 10 path_selector \"round-robin 0\" path_grouping_policy multibus uid_attribute ID_SERIAL prio alua path_checker readsector0 rr_min_io 100 max_fds 8192 rr_weight priorities failback immediate no_path_retry fail user_friendly_names yes #}", "defaults { user_friendly_names yes path_grouping_policy multibus }", "multipath -t > /dev/null", "multipath -t", "service multipathd reload", "multipaths { multipath { wwid 3600508b4000156d70001200000b0000 alias yellow path_grouping_policy multibus path_selector \"round-robin 0\" failback manual rr_weight priorities no_path_retry 5 } multipath { wwid 1DEC _ 321816758474 alias red rr_weight priorities } }", "multipath -t > /dev/null", "multipath -t", "service multipathd reload", "multipathd show paths format \"%d %P\" dev protocol sda scsi:ata sdb scsi:fcp sdc scsi:fcp", "overrides { dev_loss_tmo 60 fast_io_fail_tmo 8 protocol { type \"scsi:fcp\" dev_loss_tmo 70 fast_io_fail_tmo 10 eh_deadline 360 } }", "overrides { dev_loss_tmo 60 fast_io_fail_tmo 8 protocol { type \"scsi:iscsi\" dev_loss_tmo 60 fast_io_fail_tmo 120 } }", "overrides { dev_loss_tmo 60 fast_io_fail_tmo 8 protocol { type \" <type of protocol> \" dev_loss_tmo 60 fast_io_fail_tmo 8 } }", "multipathd show config multipath -t", "cat /sys/block/sda/device/vendor WINSYS cat /sys/block/sda/device/model SF2372", "# } # device { # vendor \"COMPAQ \" # product \"MSA1000 \" # path_grouping_policy multibus # path_checker tur # rr_weight priorities # } #}", "multipath -t > /dev/null", "multipath -t", "service multipathd reload", "overrides { no_path_retry fail }", "multipath -t > /dev/null", "multipath -t", "service multipathd reload" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/configuring_device_mapper_multipath/modifying-the-dm-multipath-configuration-file_configuring-device-mapper-multipath
Chapter 1. Planning for automation mesh in your Red Hat Ansible Automation Platform environment
Chapter 1. Planning for automation mesh in your Red Hat Ansible Automation Platform environment The following topics contain information to help plan an automation mesh deployment in your Ansible Automation Platform environment. The following sections explain the concepts that comprise automation mesh in addition to providing examples on how you can design automation mesh topologies. Simple to complex topology examples are included to illustrate the various ways you can deploy automation mesh. 1.1. About automation mesh Automation mesh is an overlay network intended to ease the distribution of work across a large and dispersed collection of workers through nodes that establish peer-to-peer connections with each other using existing networks. Red Hat Ansible Automation Platform 2 replaces Ansible Tower and isolated nodes with automation controller and automation hub. Automation controller provides the control plane for automation through its UI, Restful API, RBAC, workflows and CI/CD integration, while automation mesh can be used for setting up, discovering, changing or modifying the nodes that form the control and execution layers. Automation mesh introduces: Dynamic cluster capacity that scales independently, allowing you to create, register, group, ungroup and deregister nodes with minimal downtime. Control and execution plane separation that enables you to scale playbook execution capacity independently from control plane capacity. Deployment choices that are resilient to latency, reconfigurable without outage, and that dynamically re-reroute to choose a different path when outages may exist. mesh routing changes. Connectivity that includes bi-directional, multi-hopped mesh communication possibilities which are Federal Information Processing Standards (FIPS) compliant. 1.2. Control and execution planes Automation mesh makes use of unique node types to create both the control and execution plane. Learn more about the control and execution plane and their node types before designing your automation mesh topology. 1.2.1. Control plane The control plane consists of hybrid and control nodes. Instances in the control plane run persistent automation controller services such as the the web server and task dispatcher, in addition to project updates, and management jobs. Hybrid nodes - this is the default node type for control plane nodes, responsible for automation controller runtime functions like project updates, management jobs and ansible-runner task operations. Hybrid nodes are also used for automation execution. Control nodes - control nodes run project and inventory updates and system jobs, but not regular jobs. Execution capabilities are disabled on these nodes. 1.2.2. Execution plane The execution plane consists of execution nodes that execute automation on behalf of the control plane and have no control functions. Hop nodes serve to communicate. Nodes in the execution plane only run user-space jobs, and may be geographically separated, with high latency, from the control plane. Execution nodes - Execution nodes run jobs under ansible-runner with podman isolation. This node type is similar to isolated nodes. This is the default node type for execution plane nodes. Hop nodes - similar to a jump host, hop nodes will route traffic to other execution nodes. Hop nodes cannot execute automation. 1.2.3. Peers Peer relationships define node-to-node connections. 
You can define peers within the [automationcontroller] and [execution_nodes] groups or using the [automationcontroller:vars] or [execution_nodes:vars] groups 1.2.4. Defining automation mesh node types The examples in this section demonstrate how to set the node type for the hosts in your inventory file. You can set the node_type for single nodes in the control plane or execution plane inventory groups. To define the node type for an entire group of nodes, set the node_type in the vars stanza for the group. The allowed values for node_type in the control plane [automationcontroller] group are hybrid (default) and control . The allowed values for node_type in the [execution_nodes] group are execution (default) and hop . Hybrid node The following inventory consists of a single hybrid node in the control plane: [automationcontroller] control-plane-1.example.com Control node The following inventory consists of a single control node in the control plane: [automationcontroller] control-plane-1.example.com node_type=control If you set node_type to control in the vars stanza for the control plane nodes, then all of the nodes in control plane are control nodes. [automationcontroller] control-plane-1.example.com [automationcontroller:vars] node_type=control Execution node The following stanza defines a single execution node in the execution plane: [execution_nodes] execution-plane-1.example.com Hop node The following stanza defines a single hop node and an execution node in the execution plane. The node_type variable is set for every individual node. [execution_nodes] execution-plane-1.example.com node_type=hop execution-plane-2.example.com If you want to set the node-type at the group level, you must create separate groups for the execution nodes and the hop nodes. [execution_nodes] execution-plane-1.example.com execution-plane-2.example.com [execution_group] execution-plane-2.example.com [execution_group:vars] node_type=execution [hop_group] execution-plane-1.example.com [hop_group:vars] node_type=hop Peer connections Create node-to-node connections using the peers= host variable. The following example connects control-plane-1.example.com to execution-node-1.example.com and execution-node-1.example.com to execution-node-2.example.com : [automationcontroller] control-plane-1.example.com peers=execution-node-1.example.com [automationcontroller:vars] node_type=control [execution_nodes] execution-node-1.example.com peers=execution-node-2.example.com execution-node-2.example.com Additional resources See the example automation mesh topologies in this guide for more examples of how to implement mesh nodes.
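As an illustrative sketch only, the documented stanzas can be combined into a single inventory that defines control, hop, and execution nodes together with their peer connections. The host names are the hypothetical example.com hosts used throughout this section, not required values.

```bash
# Write a combined automation mesh inventory using the syntax shown above
cat > mesh_inventory.ini <<'EOF'
[automationcontroller]
control-plane-1.example.com peers=execution-node-1.example.com

[automationcontroller:vars]
node_type=control

[execution_nodes]
execution-node-1.example.com node_type=hop peers=execution-node-2.example.com
execution-node-2.example.com
EOF
```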
[ "[automationcontroller] control-plane-1.example.com", "[automationcontroller] control-plane-1.example.com node_type=control", "[automationcontroller] control-plane-1.example.com [automationcontroller:vars] node_type=control", "[execution_nodes] execution-plane-1.example.com", "[execution_nodes] execution-plane-1.example.com node_type=hop execution-plane-2.example.com", "[execution_nodes] execution-plane-1.example.com execution-plane-2.example.com [execution_group] execution-plane-2.example.com [execution_group:vars] node_type=execution [hop_group] execution-plane-1.example.com [hop_group:vars] node_type=hop", "[automationcontroller] control-plane-1.example.com peers=execution-node-1.example.com [automationcontroller:vars] node_type=control [execution_nodes] execution-node-1.example.com peers=execution-node-2.example.com execution-node-2.example.com" ]
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/red_hat_ansible_automation_platform_automation_mesh_guide/assembly-planning-mesh
B.16. dhcp
B.16. dhcp B.16.1. RHSA-2010:0923 - Moderate: dhcp security update Updated dhcp packages that fix one security issue are now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having moderate security impact. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available for each vulnerability from the CVE link(s) associated with each description below. The Dynamic Host Configuration Protocol (DHCP) is a protocol that allows individual devices on an IP network to get their own network configuration information, including an IP address, a subnet mask, and a broadcast address. DHCPv6 is the DHCP protocol version for IPv6 networks. CVE-2010-3611 A NULL pointer dereference flaw was discovered in the way the dhcpd daemon parsed DHCPv6 packets. A remote attacker could use this flaw to crash dhcpd via a specially-crafted DHCPv6 packet, if dhcpd was running as a DHCPv6 server. Users running dhcpd as a DHCPv6 server should upgrade to these updated packages, which contain a backported patch to correct this issue. After installing this update, all DHCP servers will be restarted automatically.
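A hedged sketch of how this update is typically applied on an affected Red Hat Enterprise Linux 6 DHCPv6 server; the exact package version installed depends on the channel content available to the system.

```bash
# Apply the updated dhcp packages and confirm the installed version
yum update dhcp
rpm -q dhcp

# The advisory notes that dhcpd is restarted automatically; verify it is running
service dhcpd status
```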
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.0_technical_notes/dhcp
Installation Guide
Installation Guide Red Hat Ceph Storage 5 Installing Red Hat Ceph Storage on Red Hat Enterprise Linux Red Hat Ceph Storage Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/installation_guide/index
Chapter 3. Reusing bricks and restoring configuration from backups
Chapter 3. Reusing bricks and restoring configuration from backups 3.1. Host replacement prerequisites Determine which node to use as the Ansible controller node (the node from which all Ansible playbooks are executed). Red Hat recommends using a healthy node in the same cluster as the failed node as the Ansible controller node. If possible, locate a recent backup or create a new backup of the important files (disk configuration or inventory files). See Backing up important files for details. Stop brick processes and unmount file systems on the failed host, to avoid file system inconsistency issues. Check which operating system is running on your hyperconverged hosts by running the following command: Reinstall the same operating system on the failed hyperconverged host. 3.2. Preparing the cluster for host replacement Verify host state in the Administrator Portal. Log in to the Red Hat Virtualization Administrator Portal. The host is listed as NonResponsive in the Administrator Portal. Virtual machines that previously ran on this host are in the Unknown state. Click Compute Hosts and click the Action menu (...). Click Confirm host has been rebooted and confirm the operation. Verify that the virtual machines are now listed with a state of Down . Update the SSH fingerprint for the failed node. Log in to the Ansible controller node as the root user. Remove the existing SSH fingerprint for the failed node. Copy the public key from the Ansible controller node to the freshly installed node. Verify that you can log in to all hosts in the cluster, including the Ansible controller node, using key-based SSH authentication without a password. Test access using all network addresses. The following example assumes that the Ansible controller node is host1 . Use ssh-copy-id to copy the public key to any host you cannot log into without a password using this method. 3.3. Restoring disk configuration from backups Prerequisites This procedure assumes you have already performed the backup process in Chapter 2, Backing up important files and know the location of your backup files and the address of the backup host. Procedure If the new host does not have multipath configuration, blacklist the devices. Create an inventory file for the new host that defines the devices to blacklist. Run the gluster_deployment.yml playbook on this inventory file using the blacklistdevices tag. Copy backed up configuration details to the new host. Create an inventory file for host restoration. Change into the hc-ansible-deployment directory and back up the default archive_config_inventory.yml file. Edit the archive_config_inventory.yml file with details of the cluster you want to back up. hosts The backend FQDN of the host that you want to restore (this host). backup_dir The directory in which to store extracted backup files. nbde_setup If you use Network-Bound Disk Encryption, set this to true . Otherwise, set to false . upgrade Set to false . For example: Execute the archive_config.yml playbook. Run the archive_config.yml playbook using your updated inventory file with the restorefiles tag. (Optional) Configure Network-Bound Disk Encryption (NBDE) on the root disk. Create an inventory file for the new host that defines devices to encrypt. See Understanding the luks_tang_inventory.yml file for more information about these parameters. Run the luks_tang_setup.yml playbook using your inventory file and the bindtang tag. 3.4. 
Creating the node_replace_inventory.yml file Define your cluster hosts by creating a node_replacement_inventory.yml file. Procedure Back up the node_replace_inventory.yml file. Edit the node_replace_inventory.yml file to define your cluster. See Appendix C, Understanding the node_replace_inventory.yml file for more information about this inventory file and its parameters. 3.5. Executing the replace_node.yml playbook file The replace_node.yml playbook reconfigures a Red Hat Hyperconverged Infrastructure for Virtualization cluster to use a new node after an existing cluster node has failed. Procedure Execute the playbook. 3.6. Finalizing host replacement After you have replaced a failed host with a new host, follow these steps to ensure that the cluster is connected to the new host and properly activated. Procedure Activate the host. Log in to the Red Hat Virtualization Administrator Portal. Click Compute Hosts and observe that the replacement host is listed with a state of Maintenance . Select the host and click Management Activate . Wait for the host to reach the Up state. Attach the gluster network to the host. Click Compute Hosts and select the host. Click Network Interfaces Setup Host Networks . Drag and drop the newly created network to the correct interface. Ensure that the Verify connectivity between Host and Engine checkbox is checked. Ensure that the Save network configuration checkbox is checked. Click OK to save. Verify the health of the network. Click the Network Interfaces tab and check the state of the host's network. If the network interface enters an "Out of sync" state or does not have an IP Address, click Management Refresh Capabilities . 3.7. Verifying healing in progress After replacing a failed host with a new host, verify that your storage is healing as expected. Procedure Verify that healing is in progress. Run the following command on any hyperconverged host: The output shows a summary of healing activity on each brick in each volume, for example: Depending on brick size, volumes can take a long time to heal. You can still run and migrate virtual machines using this node while the underlying storage heals.
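A small sketch for keeping an eye on healing progress after the replacement, based on the heal summary loop shown above; the 60-second interval is an arbitrary choice, not a requirement.

```bash
# Re-run the per-volume heal summary every 60 seconds on any hyperconverged host
watch -n 60 'for vol in $(gluster volume list); do gluster volume heal "$vol" info summary; done'
```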
[ "pkill glusterfsd umount /gluster_bricks/{engine,vmstore,data}", "nodectl info", "sed -i `/ failed-host-frontend.example.com /d` /root/.ssh/known_hosts sed -i `/ failed-host-backend.example.com /d` /root/.ssh/known_hosts", "ssh-copy-id root@ new-host-backend.example.com ssh-copy-id root@ new-host-frontend.example.com", "ssh root@ host1-backend.example.com ssh root@ host1-frontend.example.com ssh root@ host2-backend.example.com ssh root@ host2-frontend.example.com ssh root@ new-host-backend.example.com ssh root@ new-host-frontend.example.com", "ssh-copy-id root@ host-frontend.example.com ssh-copy-id root@ host-backend.example.com", "hc_nodes: hosts: new-host-backend-fqdn.example.com : blacklist_mpath_devices: - sda - sdb - sdc - sdd", "ansible-playbook -i blacklist-inventory.yml /etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment/tasks/gluster_deployment.yml --tags=blacklistdevices", "mkdir /rhhi-backup scp backup-host.example.com :/backups/rhvh-node-host1-backend.example.com-backup.tar.gz /rhhi-backup tar -xvf /rhhi-backup/rhvh-node-host1-backend.example.com-backup.tar.gz -C /rhhi-backup", "cd /etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment cp archive_config_inventory.yml archive_config_inventory.yml.bk", "all: hosts: host1-backend.example.com : vars: backup_dir: /rhhi-backup nbde_setup: true upgrade: false", "ansible-playbook -i archive_config_inventory.yml archive_config.yml --tags=restorefiles", "hc_nodes: hosts: new-node-frontend-fqdn.example.com : blacklist_mpath_devices: - sda - sdb - sdc rootpassphrase: stronGpa55 rootdevice: /dev/sda2 networkinterface: eth1 vars: ip_version: IPv4 ip_config_method: dhcp gluster_infra_tangservers: - url: http:// tang-server.example.com : 80", "ansible-playbook -i inventory.yml /etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment/tasks/luks_tang_setup.yml --tags=bindtang", "cd /etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment cp node_replace_inventory.yml node_replace_inventory.yml.bk", "cd /etc/ansible/roles/gluster.ansible/playbooks/hc-ansible-deployment/ ansible-playbook -i node_replace_inventory.yml tasks/replace_node.yml --tags=restorepeer", "for vol in `gluster volume list`; do gluster volume heal USDvol info summary; done", "Brick brick1 Status: Connected Total Number of entries: 3 Number of entries in heal pending: 2 Number of entries in split-brain: 1 Number of entries possibly healing: 0" ]
https://docs.redhat.com/en/documentation/red_hat_hyperconverged_infrastructure_for_virtualization/1.8/html/replacing_failed_hosts/replacing-hosts_same-fqdn-backup-config
probe::netdev.set_promiscuity
probe::netdev.set_promiscuity Name probe::netdev.set_promiscuity - Called when the device enters/leaves promiscuity Synopsis Values dev_name The device that is entering/leaving promiscuity mode enable If the device is entering promiscuity mode inc Count the number of promiscuity openers disable If the device is leaving promiscuity mode
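A minimal SystemTap one-liner, offered as a sketch, that attaches to this probe and prints its documented variables; the output formatting is illustrative.

```bash
# Print a line whenever a device enters or leaves promiscuous mode
stap -e 'probe netdev.set_promiscuity {
  printf("%s: enable=%d disable=%d inc=%d\n", dev_name, enable, disable, inc)
}'
```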
[ "netdev.set_promiscuity" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-netdev-set-promiscuity
5.338. tzdata
5.338. tzdata 5.338.1. RHEA-2012:1488 - tzdata enhancement update A new tzdata package that updates Daylight Saving Time observations for several countries is now available. The tzdata packages contain data files with rules for various time zones around the world. This updated package adds the following time-zone changes to the zone info database: Bug Fix BZ# 871993 , 871791 , 871994 , 871995 On October 24 2012, the Jordanian Cabinet rescinded a 2012-10-14 instruction to switch from daylight saving time (DST) to standard time on 2012-10-26. Instead, Jordan will remain on local DST (ITC +3) for the 2012-2013 Jordanian winter. Cuba, which was scheduled to move back to standard time on 2012-11-12, switched to standard time on 2012-11-04. BZ# 871993 , 871791 , 871994 , 871995 In Brazil, the North Region state, Tocantins, will observe DST in 2012-2013. This is the first time Tocantins has observed DST since 2003. By contrast, Bahia, a Northeast Region state, will not observe DST in 2012-2013. Like Tocantins, Bahia stopped observing DST in 2003. Bahia re-introduced DST on October 16 2011. On October 17 2012, however, Bahia Governor, Jaques Wagner, announced DST would not be observed in 2012, citing public surveys showing most Bahia residents were opposed to it. BZ# 871993 , 871791 , 871994 , 871995 Israel has new DST rules as of 2013. DST now starts at 02:00 on the Friday before the last Sunday in March. DST now ends at 02:00 on the first Sunday after October 1, unless this day is also the second day of (Rosh Hashanah). In this case, DST ends a day later, at 02:00 on the first Monday after October 2. The Palestinian territories, which were scheduled to move back to standard time on 2012-09-28, switched to standard time on 2012-09-21. Although Western Samoa has observed DST for two consecutive seasons (2010-2011 and 2011-2012), there is no official indication of DST continuing according to a set pattern for the foreseeable future. On 2012-09-04, the Samoan Ministry of Commerce, Industry, and Labour announced Samoa would observe DST from Sunday, 2012-09-30 until Sunday 2012-04-07. All users, especially those in the locale affected by these time changes, and users interacting with people or systems in the affected locale, are advised to upgrade to this updated package, which includes these updates. 5.338.2. RHEA-2012:1101 - tzdata enhancement update Updated tzdata packages that add one enhancement are now available for Red Hat Enterprise Linux. The tzdata packages contain data files with rules for various time zones around the world. Enhancement BZ# 839271 , BZ# 839934 , BZ# 839937 , BZ# 839938 Daylight Saving Time will be interrupted during the holy month of Ramadan in Morocco (that is July 20 - August 19, 2012 in the Gregorian Calendar). This update incorporates the exception so that Daylight Saving Time is turned off and the time setting returned back to the standard time during Ramadan. All users of tzdata are advised to upgrade to these updated packages, which add this enhancement. 5.338.3. RHEA-2013:0182 - tzdata enhancement update New tzdata packages that add one enhancement are now available for Red Hat Enterprise Linux 3, 4, 5, and 6. The tzdata packages contain data files with rules for time zones. Enhancement BZ# 894030 , BZ# 894044 , BZ# 894045 , BZ# 894046 On Nov 10, 2012, Libya changed to the time zone UTC+1. Therefore, starting from the year 2013 Libya will be switching to daylight saving time on the last Friday of March and back to the standard time on the last Friday of October. 
The time zone setting and the daylight saving time settings for the Africa/Tripoli time zone have been updated accordingly. All users of tzdata are advised to upgrade to these updated packages, which add this enhancement. 5.338.4. RHEA-2012:1338 - tzdata enhancement update Updated tzdata packages that add two enhancements are now available for Red Hat Enterprise Linux. The tzdata packages contain data files with rules for various time zones around the world. Enhancements BZ# 857904 , BZ# 857905 , BZ# 857906 , BZ# 857907 Daylight saving time in Fiji will start at 2:00 a.m. on Sunday, 21st October 2012, and end at 3 am on Sunday, 20th January 2013. BZ# 857904 , BZ# 857905 , BZ# 857906 , BZ# 857907 Tokelau was listed in an incorrect time zone for as long as the Zoneinfo project was in existence. The actual zone was supposed to be GMT-11 hours before Tokelau was moved to the other side of the International Date Line at the end of year 2011. The local time in Tokelau is now GMT+13. All users of tzdata are advised to upgrade to these updated packages, which add these enhancements. 5.338.5. RHEA-2013:0674 - tzdata enhancement update Updated tzdata packages that add one enhancement are now available for Red Hat Enterprise Linux 3, 4, 5 and 6. The tzdata packages contain data files with rules for various time zones. Enhancement BZ# 921173 , BZ# 921174 , BZ# 919628 , BZ# 921176 Time zone rules of tzdata have been modified to reflect the following changes: The period of Daylight Saving Time (DST) in Paraguay will end on March 24 instead of April 14. Haiti will use US daylight-saving rules in the year 2013. Morocco does not observe DST during Ramadan. Therefore, Morocco is expected to switch to Western European Time (WET) on July 9 and resume again to Western European Summer Time (WEST) on August 8. Also, the tzdata packages now provide rules for several new time zones: Asia/Khandyga, Asia/Ust-Nera, and Europe/Busingen. All users of tzdata are advised to upgrade to these updated packages, which add this enhancement. 5.338.6. RHEA-2013:1432 - tzdata enhancement update Updated tzdata packages that add one enhancement are now available for Red Hat Enterprise Linux 3, 4, 5, and 6. The tzdata packages contain data files with rules for various time zones. Enhancement BZ# 1013527 , BZ# 1013875 , BZ# 1013876 , BZ# 1014720 Morocco extended DST by one month requiring an update to these packages. This update includes resynchronization with the latest upstream release in order to pick up the Moroccan DST change. All users of tzdata are advised to upgrade to these updated packages, which add this enhancement. 5.338.7. RHEA-2013:0880 - tzdata enhancement update Updated tzdata packages that add various enhancements are now available for Red Hat Enterprise Linux 3, 4, 5, and 6. The tzdata packages contain data files with rules for various time zones. Enhancement BZ# 928461 , BZ# 928462 , BZ# 928463 , BZ# 928464 The Gaza Strip and the West Bank entered Daylight Saving Time on March 28 at midnight local time. All users of tzdata are advised to upgrade to these updated packages, which add these enhancements. 5.338.8. RHEA-2013:1025 - tzdata enhancement update Updated tzdata packages that add one enhancement are now available for Red Hat Enterprise Linux 3, 4, 5, and 6. The tzdata packages contain data files with rules for various time zones. Enhancement BZ# 980805 , BZ# 980807 , BZ# 981019 , BZ# 981020 Morocco does not observe DST during Ramadan. 
Therefore, Morocco is expected to switch to Western European Time (WET) on July 7 and resume again to Western European Summer Time (WEST) on August 10. Also, the period of DST in Israel has been extended until the last Sunday in October from the year 2013 onwards. All users of tzdata are advised to upgrade to these updated packages, which add this enhancement. 5.338.9. RHEA-2013:0615 - tzdata enhancement update Updated tzdata packages that add one enhancement are now available for Red Hat Enterprise Linux 3, 4, 5 and 6. The tzdata packages contain data files with rules for various time zones. Enhancement BZ# 912521 , BZ# 916272 , BZ# 916273 , BZ# 916274 The Chilean Government is extending the period of Daylight Saving Time (DST) in the year 2013 until April the 27th. Then, Chile Standard Time (CLT) and Easter Island Standard Time (EAST) will be in effect until September the 7th when switching again to DST. With this update, the rules used for Chile time zones have been adjusted accordingly. All users of tzdata are advised to upgrade to these updated packages, which add this enhancement.
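One hedged way to confirm that the installed tzdata package reflects a particular zone change described in these advisories is to dump the transition table for the affected zone, for example the 2013 Chilean DST extension.

```bash
# Check the installed tzdata version
rpm -q tzdata

# List the 2013 daylight saving transitions recorded for Chile (America/Santiago)
zdump -v America/Santiago | grep 2013
```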
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/tzdata
18.13. Network & Hostname
18.13. Network & Hostname To configure essential networking features for your system, select Network & Hostname at the Installation Summary screen. Locally accessible interfaces are automatically detected by the installation program and cannot be manually added or deleted. The detected interfaces are listed in the left pane. Click an interface in the list to display more details about in on the right. To activate or deactivate a network interface, move the switch in the top right corner of the screen to either ON or OFF . Note There are several types of network device naming standards used to identify network devices with persistent names such as em1 or wl3sp0 . For information about these standards, see the Red Hat Enterprise Linux 7 Networking Guide . Figure 18.9. Network & Hostname Configuration Screen Below the list of connections, enter a host name for this computer in the Hostname input field. The host name can be either a fully-qualified domain name (FQDN) in the format hostname . domainname or a short host name in the format hostname . Many networks have a Dynamic Host Configuration Protocol (DHCP) service that automatically supplies connected systems with a domain name. To allow the DHCP service to assign the domain name to this machine, only specify the short host name. The value localhost.localdomain means that no specific static host name for target system is configured, and the actual host name of installed system will be configured during process of network configuration (for example, by NetworkManager using DHCP or DNS). Important If you want to manually assign the host name, make sure you do not use a domain name that is not delegated to you, as this can result in network resources becoming unavailable. For more information, see the recommended naming practices in the Red Hat Enterprise Linux 7 Networking Guide . Change the default setting localhost . localdomain to a unique host name for each of your Linux instances. Once you have finished network configuration, click Done to return to the Installation Summary screen. 18.13.1. Edit Network Connections All network connections on IBM Z are listed in the Network & Hostname screen. By default, the list contains the connection configured earlier in the booting phase and is either OSA, LCS, or HiperSockets. All of these interface types use names in the form of enccw device_id , for example enccw0.0.0a00 . Note that on IBM Z, you cannot add a new connection because the network subchannels need to be grouped and set online beforehand, and this is currently only done in the booting phase. See Chapter 16, Booting the Installation on IBM Z for details. Usually, the network connection configured earlier in the booting phase does not need to be modified during the rest of the installation. However, if you do need to modify the existing connection, click the Configure button. A NetworkManager dialog appears with a set of tabs appropriate to wired connections, as described below. Here, you can configure network connections for the system, not all of which are relevant to IBM Z. This section only details the most important settings for a typical wired connection used during installation. Many of the available options do not have to be changed in most installation scenarios and are not carried over to the installed system. Configuration of other types of network is broadly similar, although the specific configuration parameters are necessarily different. 
To learn more about network configuration after installation, see the Red Hat Enterprise Linux 7 Networking Guide . To configure a network connection manually, click the Configure button in the lower right corner of the screen. A dialog appears that allows you to configure the selected connection. If required, see the Networking Guide for more detailed information on network settings. The most useful network configuration options to consider during installation are: Mark the Automatically connect to this network when it is available check box if you want to use the connection every time the system boots. You can use more than one connection that will connect automatically. This setting will carry over to the installed system. Figure 18.10. Network Auto-Connection Feature By default, IPv4 parameters are configured automatically by the DHCP service on the network. At the same time, the IPv6 configuration is set to the Automatic method. This combination is suitable for most installation scenarios and usually does not require any changes. Figure 18.11. IP Protocol Settings When you have finished editing network settings, click Save to save the new configuration. If you reconfigured a device that was already active during installation, you must restart the device in order to use the new configuration in the installation environment. Use the ON/OFF switch on the Network & Host Name screen to restart the device. 18.13.2. Advanced Network Interfaces Advanced network interfaces are also available for installation. This includes virtual local area networks ( VLAN s) and three methods to use aggregated links. Detailed description of these interfaces is beyond the scope of this document; read the Red Hat Enterprise Linux 7 Networking Guide for more information. To create an advanced network interface, click the + button in the lower left corner of the Network & Hostname screen. A dialog appears with a drop-down menu with the following options: Bond - represents NIC ( Network Interface Controller ) Bonding, a method to bind multiple network interfaces together into a single, bonded, channel. Bridge - represents NIC Bridging, a method to connect multiple separate network into one aggregate network. Team - represents NIC Teaming, a new implementation to aggregate links, designed to provide a small kernel driver to implement the fast handling of packet flows, and various applications to do everything else in user space. VLAN - represents a method to create multiple distinct broadcast domains, which are mutually isolated. Figure 18.12. Advanced Network Interface Dialog Note Note that locally accessible interfaces, wired or wireless, are automatically detected by the installation program and cannot be manually added or deleted by using these controls. Once you have selected an option and clicked the Add button, another dialog appears for you to configure the new interface. See the respective chapters in the Red Hat Enterprise Linux 7 Networking Guide for detailed instructions. To edit configuration on an existing advanced interface, click the Configure button in the lower right corner of the screen. You can also remove a manually-added interface by clicking the - button.
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/installation_guide/sect-network-hostname-configuration-s390
Chapter 5. Setting up the test environment
Chapter 5. Setting up the test environment To certify your product, you must first set up the environment where you can run the tests. The test environment consists of a host system. A test host is a workstation used as a medium for accessing the OpenShift cluster. Note Red Hat recommends that partners enable FIPS mode on both control plane and data plane nodes. Additional resources For more information about RHOSO deployment, see RHOSO Deployment Guide . For more information about enabling FIPS, see Installing a cluster in FIPS mode . 5.1. Setting up the test host You can use the test host to start a test run on the OpenShift cluster, display the progress of the tests, and present the final result file after gathering results. Prerequisites Install or use an existing RHEL 9 system. Have access to an OpenShift cluster hosting a RHOSO control plane. Procedure Use your RHN credentials to register your system by using Red Hat Subscription Management. Display the list of available subscriptions for your system. Search for the subscription that provides the Red Hat Certification (for RHEL Server) repository. Note the subscription and its Pool ID. Attach the subscription to your system. Note You do not have to attach the subscription to your system if you enable the option Simple content access for Red Hat Subscription Management . Replace the pool_ID with the Pool ID of the subscription. Subscribe to the Red Hat Certification channel. Install the certification RPMs. Additional resources For more information about enabling Simple content access for Red Hat Subscription Management, see How do I enable Simple Content Access for Red Hat Subscription Management?
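A short verification sketch for the steps above; the repository and package names are the ones used in this procedure.

```bash
# Confirm the certification repository is enabled
subscription-manager repos --list-enabled | grep cert-1-for-rhel-9-x86_64-rpms

# Confirm the certification packages are installed
rpm -q redhat-certification redhat-certification-rhoso
```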
[ "subscription-manager register", "subscription-manager list --available*", "subscription-manager attach --pool=<pool_ID>", "subscription-manager repos --enable=cert-1-for-rhel-9-x86_64-rpms", "dnf install redhat-certification dnf install redhat-certification-rhoso" ]
https://docs.redhat.com/en/documentation/red_hat_software_certification/2025/html/red_hat_openstack_services_on_openshift_certification_workflow_guide/setting-up-the-test-environment_for-openstack-infrastructure-non-containerized-applications
Chapter 12. Converting a connected cluster to a disconnected cluster
Chapter 12. Converting a connected cluster to a disconnected cluster There might be some scenarios where you need to convert your OpenShift Container Platform cluster from a connected cluster to a disconnected cluster. A disconnected cluster, also known as a restricted cluster, does not have an active connection to the internet. As such, you must mirror the contents of your registries and installation media. You can create this mirror registry on a host that can access both the internet and your closed network, or copy images to a device that you can move across network boundaries. For information on how to convert your cluster, see the Converting a connected cluster to a disconnected cluster procedure in the Disconnected environments section.
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/postinstallation_configuration/converting-to-disconnected
Chapter 5. LVM Configuration Examples
Chapter 5. LVM Configuration Examples This chapter provides some basic LVM configuration examples. 5.1. Creating an LVM Logical Volume on Three Disks This example creates an LVM logical volume called new_logical_volume that consists of the disks at /dev/sda1 , /dev/sdb1 , and /dev/sdc1 . 5.1.1. Creating the Physical Volumes To use disks in a volume group, you label them as LVM physical volumes. Warning This command destroys any data on /dev/sda1 , /dev/sdb1 , and /dev/sdc1 . 5.1.2. Creating the Volume Group The following command creates the volume group new_vol_group . You can use the vgs command to display the attributes of the new volume group. 5.1.3. Creating the Logical Volume The following command creates the logical volume new_logical_volume from the volume group new_vol_group . This example creates a logical volume that uses 2GB of the volume group. 5.1.4. Creating the File System The following command creates a GFS file system on the logical volume. The following commands mount the logical volume and report the file system disk space usage.
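After running the commands in this example, a quick sketch to confirm the new physical volumes, volume group, logical volume, and mounted file system; the names and devices are the ones created above.

```bash
# Review the LVM objects created in this example
pvs /dev/sda1 /dev/sdb1 /dev/sdc1
vgs new_vol_group
lvs new_vol_group

# Confirm the mounted file system and its usage
df -h /mnt
```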
[ "pvcreate /dev/sda1 /dev/sdb1 /dev/sdc1 Physical volume \"/dev/sda1\" successfully created Physical volume \"/dev/sdb1\" successfully created Physical volume \"/dev/sdc1\" successfully created", "vgcreate new_vol_group /dev/sda1 /dev/sdb1 /dev/sdc1 Volume group \"new_vol_group\" successfully created", "vgs VG #PV #LV #SN Attr VSize VFree new_vol_group 3 0 0 wz--n- 51.45G 51.45G", "lvcreate -L2G -n new_logical_volume new_vol_group Logical volume \"new_logical_volume\" created", "gfs_mkfs -plock_nolock -j 1 /dev/new_vol_group/new_logical_volume This will destroy any data on /dev/new_vol_group/new_logical_volume. Are you sure you want to proceed? [y/n] y Device: /dev/new_vol_group/new_logical_volume Blocksize: 4096 Filesystem Size: 491460 Journals: 1 Resource Groups: 8 Locking Protocol: lock_nolock Lock Table: Syncing All Done", "mount /dev/new_vol_group/new_logical_volume /mnt df Filesystem 1K-blocks Used Available Use% Mounted on /dev/new_vol_group/new_logical_volume 1965840 20 1965820 1% /mnt" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_logical_volume_manager/LVM_examples
10.2.4. User and System Connections
10.2.4. User and System Connections NetworkManager connections are always either user connections or system connections . Depending on the system-specific policy that the administrator has configured, users may need root privileges to create and modify system connections. NetworkManager 's default policy enables users to create and modify user connections, but requires them to have root privileges to add, modify or delete system connections. User connections are so-called because they are specific to the user who creates them. In contrast to system connections, whose configurations are stored under the /etc/sysconfig/network-scripts/ directory (mainly in ifcfg- <network_type> interface configuration files), user connection settings are stored in the GConf configuration database and the GNOME keyring, and are only available during login sessions for the user who created them. Thus, logging out of the desktop session causes user-specific connections to become unavailable. Note Because NetworkManager uses the GConf and GNOME keyring applications to store user connection settings, and because these settings are specific to your desktop session, it is highly recommended to configure your personal VPN connections as user connections. If you do so, other Non- root users on the system cannot view or access these connections in any way. System connections, on the other hand, become available at boot time and can be used by other users on the system without first logging in to a desktop session. NetworkManager can quickly and conveniently convert user to system connections and vice versa. Converting a user connection to a system connection causes NetworkManager to create the relevant interface configuration files under the /etc/sysconfig/network-scripts/ directory, and to delete the GConf settings from the user's session. Conversely, converting a system to a user-specific connection causes NetworkManager to remove the system-wide configuration files and create the corresponding GConf/GNOME keyring settings. Figure 10.5. The Available to all users check box controls whether connections are user-specific or system-wide Procedure 10.2. Changing a Connection to be User-Specific instead of System-Wide, or Vice-Versa Note Depending on the system's policy, you may need root privileges on the system in order to change whether a connection is user-specific or system-wide. Right-click on the NetworkManager applet icon in the Notification Area and click Edit Connections . The Network Connections window appears. If needed, select the arrow head (on the left hand side) to hide and reveal the types of available network connections. Select the specific connection that you want to configure and click Edit . Check the Available to all users check box to ask NetworkManager to make the connection a system-wide connection. Depending on system policy, you may then be prompted for the root password by the PolicyKit application. If so, enter the root password to finalize the change. Conversely, uncheck the Available to all users check box to make the connection user-specific.
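A brief sketch showing where system connection configuration ends up after converting a user connection to a system-wide connection, as described above; the eth0 interface name is illustrative.

```bash
# System-wide connections are written as ifcfg files under network-scripts
ls /etc/sysconfig/network-scripts/ifcfg-*

# Inspect one of the generated interface configuration files
cat /etc/sysconfig/network-scripts/ifcfg-eth0
```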
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sec-User_and_System_Connections
Chapter 5. Minimum hardware recommendations for containerized Ceph
Chapter 5. Minimum hardware recommendations for containerized Ceph Ceph can run on non-proprietary commodity hardware. Small production clusters and development clusters can run without performance optimization with modest hardware. NOTE: * Consider future growth and usage patterns when allocating disk space. Regularly monitor disk usage and performance to address any storage limitations. Configuring swap space for virtual memory for daemons is generally not recommended in modern systems. This approach can degrade performance, and your Ceph cluster may operate more effectively with a daemon that crashes rather than one that becomes unresponsive. Important Hardware accelerated compression in Ceph Object Gateway requires RHEL 9.4 on a Sapphire or Emerald Rapids Xeon CPU (or newer) with QAT devices. For more information, see Intel Ark . Process Criteria Minimum Recommended ceph-osd-container Processor 1x AMD64 or Intel 64 CPU CORE per OSD container RAM Minimum of 5 GB of RAM per OSD container OS Disk 1x OS disk per host OSD Storage 1x storage drive per OSD container. Cannot be shared with OS Disk. block.db Optional, but Red Hat recommended, 1x SSD or NVMe or Optane partition or lvm per daemon. Sizing is 4% of block.data for BlueStore for object, file and mixed workloads and 1% of block.data for the BlueStore for Block Device, Openstack cinder, and Openstack cinder workloads. block.wal Optionally, 1x SSD or NVMe or Optane partition or logical volume per daemon. Use a small size, for example 10 GB, and only if it's faster than the block.db device. Network 2x 10 GB Ethernet NICs ceph-mon-container Processor 1x AMD64 or Intel 64 CPU CORE per mon-container RAM 3 GB per mon-container Disk Space Allocation Create a dedicated Storage Partition Size for /var/lib/ceph with a minimum size of 120 GB; 240 GB is recommended. If a dedicated partition is not feasible, ensure that /var is on a dedicated partition with at least the above-mentioned free space. Monitor Disk Optionally, 1x SSD disk for Monitor rocksdb data Network 2x 1GB Ethernet NICs, 10 GB Recommended ceph-mgr-container Processor 1x AMD64 or Intel 64 CPU CORE per mgr-container RAM 3 GB per mgr-container Network 2x 1GB Ethernet NICs, 10 GB Recommended ceph-radosgw-container Processor 1x AMD64 or Intel 64 CPU CORE per radosgw-container RAM 1 GB per daemon Disk Space Allocation 5 GB per daemon. The space refers to the allocation on /var/lib/ceph. Network 1x 1GB Ethernet NICs ceph-mds-container Processor 1x AMD64 or Intel 64 CPU CORE per mds-container RAM 3 GB per mds-container This number is highly dependent on the configurable MDS cache size. The RAM requirement is typically twice as much as the amount set in the mds_cache_memory_limit configuration setting. Note also that this is the memory for your daemon, not the overall system memory. Disk Space Allocation As a best practice, create a dedicated partition for /var/log with a minimum of 20 GB of free space for this service. If a dedicated partition is not possible, ensure that /var is on a dedicated partition with at least the above-mentioned free space. Network 2x 1GB Ethernet NICs, 10 GB Recommended Note that this is the same network as the OSD containers. If you have a 10 GB network on your OSDs you should use the same on your MDS so that the MDS is not disadvantaged when it comes to latency.
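A small arithmetic sketch of the block.db sizing guidance above; the 4096 GB data-device size is an assumption chosen only for illustration.

```bash
# block.db sizing: 4% of block.data for object, file, and mixed workloads,
# 1% of block.data for block-device workloads
data_gb=4096
echo "object/file/mixed workloads: block.db >= $(( data_gb * 4 / 100 )) GB"
echo "block device workloads:      block.db >= $(( data_gb * 1 / 100 )) GB"
```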
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html/hardware_guide/minimum-hardware-recommendations-for-containerized-ceph_hw
14.7.5. Displaying CPU Statistics
14.7.5. Displaying CPU Statistics The nodecpustats command displays statistical information about the specified CPU, if the CPU is given. If not, it will display the CPU status of the node. If a percent is specified, it will display the percentage of each type of CPU statistic that was recorded over a one (1) second interval. This example shows no CPU specified: This example shows the statistical percentages for CPU number 2: You can control the behavior of the rebooting guest virtual machine by modifying the on_reboot element in the guest virtual machine's configuration file.
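A hedged sketch that samples the percentage statistics for several CPUs in turn, using the same syntax as the example above; the CPU numbers are illustrative and depend on the host.

```bash
# Show one-second utilization percentages for CPUs 0 through 3
for cpu in 0 1 2 3; do
  echo "CPU ${cpu}:"
  virsh nodecpustats "${cpu}" --percent
done
```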
[ "virsh nodecpustats user: 1056442260000000 system: 401675280000000 idle: 7549613380000000 iowait: 94593570000000", "virsh nodecpustats 2 --percent usage: 2.0% user: 1.0% system: 1.0% idle: 98.0% iowait: 0.0%" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sub-sect-numa_node_management-displaying_cpu_statistics
function::print_ustack
function::print_ustack Name function::print_ustack - Print out stack for the current task from string. EXPERIMENTAL! Synopsis Arguments stk String with list of hexadecimal addresses for the current task. Description Perform a symbolic lookup of the addresses in the given string, which is assumed to be the result of a prior call to ubacktrace for the current task. Print one line per address, including the address, the name of the function containing the address, and an estimate of its position within that function. Return nothing.
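An illustrative one-liner combining ubacktrace with print_ustack, as the description suggests; it is a sketch only, the probed program is an assumption, and userspace debuginfo for the target binary is required for symbolic names.

```bash
# Print a symbolic user-space stack each time /bin/ls enters main()
stap -e 'probe process("/bin/ls").function("main") { print_ustack(ubacktrace()) }' -c /bin/ls
```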
[ "function print_ustack(stk:string)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-print-ustack
Chapter 51. ReportService
Chapter 51. ReportService 51.1. RunReport POST /v1/report/run/{id} 51.1.1. Description 51.1.2. Parameters 51.1.2.1. Path Parameters Name Description Required Default Pattern id X null 51.1.3. Return Type Object 51.1.4. Content Type application/json 51.1.5. Responses Table 51.1. HTTP Response Codes Code Message Datatype 200 A successful response. Object 0 An unexpected error response. GooglerpcStatus 51.1.6. Samples 51.1.7. Common object reference 51.1.7.1. GooglerpcStatus Field Name Required Nullable Type Description Format code Integer int32 message String details List of ProtobufAny 51.1.7.2. ProtobufAny Any contains an arbitrary serialized protocol buffer message along with a URL that describes the type of the serialized message. Protobuf library provides support to pack/unpack Any values in the form of utility functions or additional generated methods of the Any type. Example 1: Pack and unpack a message in C++. Example 2: Pack and unpack a message in Java. The pack methods provided by protobuf library will by default use 'type.googleapis.com/full.type.name' as the type URL and the unpack methods only use the fully qualified type name after the last '/' in the type URL, for example "foo.bar.com/x/y.z" will yield type name "y.z". 51.1.7.2.1. JSON representation The JSON representation of an Any value uses the regular representation of the deserialized, embedded message, with an additional field @type which contains the type URL. Example: If the embedded message type is well-known and has a custom JSON representation, that representation will be embedded adding a field value which holds the custom JSON in addition to the @type field. Example (for message [google.protobuf.Duration][]): Field Name Required Nullable Type Description Format @type String A URL/resource name that uniquely identifies the type of the serialized protocol buffer message. This string must contain at least one \"/\" character. The last segment of the URL's path must represent the fully qualified name of the type (as in path/google.protobuf.Duration ). The name should be in a canonical form (e.g., leading \".\" is not accepted). In practice, teams usually precompile into the binary all types that they expect it to use in the context of Any. However, for URLs which use the scheme http , https , or no scheme, one can optionally set up a type server that maps type URLs to message definitions as follows: * If no scheme is provided, https is assumed. * An HTTP GET on the URL must yield a [google.protobuf.Type][] value in binary format, or produce an error. * Applications are allowed to cache lookup results based on the URL, or have them precompiled into a binary to avoid any lookup. Therefore, binary compatibility needs to be preserved on changes to types. (Use versioned type names to manage breaking changes.) Note: this functionality is not currently available in the official protobuf release, and it is not used for type URLs beginning with type.googleapis.com. As of May 2023, there are no widely used type server implementations and no plans to implement one. Schemes other than http , https (or the empty scheme) might be used with implementation specific semantics.
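A hedged curl sketch for the endpoint above; the environment variables and the report configuration ID are placeholders for illustration, not values defined by this API reference.

```bash
# Trigger an on-demand report run for a given report configuration ID
curl -k -X POST \
  -H "Authorization: Bearer ${ROX_API_TOKEN}" \
  "https://${ROX_CENTRAL_ADDRESS}/v1/report/run/<report-config-id>"
```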
[ "Foo foo = ...; Any any; any.PackFrom(foo); if (any.UnpackTo(&foo)) { }", "Foo foo = ...; Any any = Any.pack(foo); if (any.is(Foo.class)) { foo = any.unpack(Foo.class); } // or if (any.isSameTypeAs(Foo.getDefaultInstance())) { foo = any.unpack(Foo.getDefaultInstance()); }", "Example 3: Pack and unpack a message in Python.", "foo = Foo(...) any = Any() any.Pack(foo) if any.Is(Foo.DESCRIPTOR): any.Unpack(foo)", "Example 4: Pack and unpack a message in Go", "foo := &pb.Foo{...} any, err := anypb.New(foo) if err != nil { } foo := &pb.Foo{} if err := any.UnmarshalTo(foo); err != nil { }", "package google.profile; message Person { string first_name = 1; string last_name = 2; }", "{ \"@type\": \"type.googleapis.com/google.profile.Person\", \"firstName\": <string>, \"lastName\": <string> }", "{ \"@type\": \"type.googleapis.com/google.protobuf.Duration\", \"value\": \"1.212s\" }" ]
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.6/html/api_reference/reportservice
Chapter 9. Restricting the desktop session
Chapter 9. Restricting the desktop session You can restrict and control various functionalities on the GNOME desktop environment. You can enforce specific configurations and restrictions to maintain system integrity and prevent unauthorized access. 9.1. Disabling user logout and user switching Disabling user logout and user switching can improve security, prevent user errors, and enforce a specific workflow. This can mitigate unauthorized access to sensitive data and disruptions to the workflow caused by users accidentally logging out or switching to another user. Prerequisites Administrative access. Procedure Create a plain text /etc/dconf/db/local.d/00-logout keyfile in the /etc/dconf/db/local.d/ directory with the following content: Create a new file under the /etc/dconf/db/local.d/locks/ directory and list the keys or subpaths you want to lock down: Apply the changes to the system databases: 9.2. Disabling printing Disabling printing can prevent unauthorized access to sensitive documents and potential breaches and safeguard confidential information. Prerequisites Administrative access. Procedure Create a plain text /etc/dconf/db/local.d/00-printing keyfile in the /etc/dconf/db/local.d/ directory with the following content: Create a new file under the /etc/dconf/db/local.d/locks/ directory and list the keys or subpaths you want to lock down: Apply the changes to the system databases: 9.3. Disabling filesaving Disabling file saving can help to protect sensitive data from unauthorized access and protect against potential data leaks. Prerequisites Administrative access. Procedure Create a plain text /etc/dconf/db/local.d/00-filesaving keyfile in the /etc/dconf/db/local.d/ directory with the following content: Create a new file under the /etc/dconf/db/local.d/locks/ directory and list the keys or subpaths you want to lock down: Apply the changes to the system databases: 9.4. Disabling the command prompt Disabling the command prompt can simplify user interactions with the system, prevent inexperienced users from executing potentially harmful commands that might cause system instability or data loss, and reduce the risk of unauthorized changes to system settings or configurations. Prerequisites Administrative access. Procedure Create a plain text /etc/dconf/db/local.d/00-lockdown keyfile in the /etc/dconf/db/local.d/ directory with the following content: Create a new file under the /etc/dconf/db/local.d/locks/ directory and list the keys or subpaths you want to lock down: Apply the changes to the system databases: For this settings to take effect, users needs to log out and log back in. 9.5. Disabling repartitioning You can override the default system settings that control disk management. Important Avoid modifying the /usr/share/polkit-1/actions/org.freedesktop.udisks2.policy file directly. Any changes you make will be replaced during the package update. Prerequisites Administrative access. 
Procedure Copy the /usr/share/polkit-1/actions/org.freedesktop.udisks2.policy file under the /etc/polkit-1/actions/ directory: In the /etc/polkit-1/actions/org.freedesktop.udisks2.policy file, delete any actions that you do not need and add the following lines: <action id="org.freedesktop.udisks2.modify-device"> <message>Authentication is required to modify the disks settings</message> <defaults> <allow_any>no</allow_any> <allow_inactive>no</allow_inactive> <allow_active>yes</allow_active> </defaults> </action> If you want to restrict access only to the root user, replace <allow_any>no</allow_any> with <allow_any>auth_admin</allow_any> .
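One possible way to confirm the resulting policy is to query polkit for the action edited above; this is a hedged sketch using the standard polkit command-line tool.

```bash
# Show the implicit authorizations now in effect for the udisks2 modify-device action
pkaction --action-id org.freedesktop.udisks2.modify-device --verbose
```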
[ "Disable user logut disable-log-out=true Disable user switching disable-user-switching=true", "Lock user logout /org/gnome/desktop/lockdown/disable-log-out Lock user switching /org/gnome/desktop/lockdown/disable-user-switching", "dconf update", "Disable printing disable-printing=true", "Lock printing /org/gnome/desktop/lockdown/disable-printing", "dconf update", "Disable saving files on disk disable-save-to-disk=true", "Lock file saving /org/gnome/desktop/lockdown/disable-save-to-disk", "dconf update", "Disable command prompt disable-command-line=true", "Lock command prompt /org/gnome/desktop/lockdown/disable-command-line", "dconf update", "cp /usr/share/polkit-1/actions/org.freedesktop.udisks2.policy /etc/share/polkit-1/actions/org.freedesktop.udisks2.policy", "<action id=\"org.freedesktop.udisks2.modify-device\"> <message>Authentication is required to modify the disks settings</message> <defaults> <allow_any>no</allow_any> <allow_inactive>no</allow_inactive> <allow_active>yes</allow_active> </defaults> </action>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/customizing_the_gnome_desktop_environment/restricting-the-desktop-session_customizing-the-gnome-desktop-environment
Release Notes
Release Notes Red Hat Trusted Artifact Signer 1.1 Release notes for Red Hat's Trusted Artifact Signer 1.1.1 Red Hat Trusted Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_trusted_artifact_signer/1/html/release_notes/index
1.4. Document Overview
1.4. Document Overview The remainder of this document includes the following chapters: Chapter 2, LVM Components describes the components that make up an LVM logical volume. Chapter 3, LVM Administration Overview provides an overview of the basic steps you perform to configure LVM logical volumes, whether you are using the LVM Command Line Interface (CLI) commands or the LVM Graphical User Interface (GUI). Chapter 4, LVM Administration with CLI Commands summarizes the individual administrative tasks you can perform with the LVM CLI commands to create and maintain logical volumes. Chapter 5, LVM Configuration Examples provides a variety of LVM configuration examples. Chapter 6, LVM Troubleshooting provides instructions for troubleshooting a variety of LVM issues. Chapter 7, LVM Administration with the LVM GUI summarizes the operation of the LVM GUI. Appendix A, The Device Mapper describes the Device Mapper that LVM uses to map logical and physical volumes. Appendix B, The LVM Configuration Files describes the LVM configuration files. Appendix C, LVM Object Tags describes LVM object tags and host tags. Appendix D, LVM Volume Group Metadata describes LVM volume group metadata, and includes a sample copy of metadata for an LVM volume group.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/cluster_logical_volume_manager/doc_organization
Release notes
Release notes Red Hat OpenShift AI Cloud Service 1 Features, enhancements, resolved issues, and known issues associated with this release
null
https://docs.redhat.com/en/documentation/red_hat_openshift_ai_cloud_service/1/html/release_notes/index
B.2.2. Installing and Upgrading
B.2.2. Installing and Upgrading RPM packages typically have file names like tree-1.5.3-2.el6.x86_64.rpm . The file name includes the package name ( tree ), version ( 1.5.3 ), release ( 2 ), operating system major version ( el6 ) and CPU architecture ( x86_64 ). You can use rpm 's -U option to: upgrade an existing but older package on the system to a newer version, or install the package even if an older version is not already installed. That is, rpm -U <rpm_file> is able to perform the function of either upgrading or installing as is appropriate for the package. Assuming the tree-1.5.3-2.el6.x86_64.rpm package is in the current directory, log in as root and type the following command at a shell prompt to either upgrade or install the tree package as determined by rpm : Note The -v and -h options (which are combined with -U ) cause rpm to print more verbose output and display a progress meter using hash signs. If the upgrade/installation is successful, the following output is displayed: Warning rpm provides two different options for installing packages: the aforementioned -U option (which historically stands for upgrade ), and the -i option, historically standing for install . Because the -U option subsumes both install and upgrade functions, we recommend to use rpm -Uvh with all packages except kernel packages. You should always use the -i option to install a new kernel package instead of upgrading it. This is because using the -U option to upgrade a kernel package removes the (older) kernel package, which could render the system unable to boot if there is a problem with the new kernel. Therefore, use the rpm -i <kernel_package> command to install a new kernel without replacing any older kernel packages. For more information on installing kernel packages, see Chapter 30, Manually Upgrading the Kernel . The signature of a package is checked automatically when installing or upgrading a package. The signature confirms that the package was signed by an authorized party. For example, if the verification of the signature fails, an error message such as the following is displayed: If it is a new, header-only, signature, an error message such as the following is displayed: If you do not have the appropriate key installed to verify the signature, the message contains the word NOKEY : See Section B.3, "Checking a Package's Signature" for more information on checking a package's signature. B.2.2.1. Package Already Installed If a package of the same name and version is already installed, the following output is displayed: However, if you want to install the package anyway, you can use the --replacepkgs option, which tells RPM to ignore the error: This option is helpful if files installed from the RPM were deleted or if you want the original configuration files from the RPM to be installed. B.2.2.2. Conflicting Files If you attempt to install a package that contains a file which has already been installed by another package, the following is displayed: To make RPM ignore this error, use the --replacefiles option: B.2.2.3. Unresolved Dependency RPM packages may sometimes depend on other packages, which means that they require other packages to be installed to run properly. If you try to install a package which has an unresolved dependency, output similar to the following is displayed: If you are installing a package from the Red Hat Enterprise Linux installation media, such as from a CD-ROM or DVD, the dependencies may be available. 
Find the suggested package(s) on the Red Hat Enterprise Linux installation media or on one of the active Red Hat Enterprise Linux mirrors and add it to the command: If installation of both packages is successful, output similar to the following is displayed: You can try the --whatprovides option to determine which package contains the required file. If the package that contains bar.so.3 is in the RPM database, the name of the package is displayed: Warning Although we can force rpm to install a package that gives us a Failed dependencies error (using the --nodeps option), this is not recommended, and will usually result in the installed package failing to run. Installing or removing packages with rpm --nodeps can cause applications to misbehave and/or crash, and can cause serious package management problems or, possibly, system failure. For these reasons, it is best to heed such warnings; the package manager-whether RPM , Yum or PackageKit -shows us these warnings and suggests possible fixes because accounting for dependencies is critical. The Yum package manager can perform dependency resolution and fetch dependencies from online repositories, making it safer, easier and smarter than forcing rpm to carry out actions without regard to resolving dependencies.
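To make the alternatives discussed above concrete, the following shell sketch reuses the hypothetical package names from this section; it assumes a configured yum repository that can supply the missing dependency.

# Let yum resolve and download the dependency instead of forcing rpm with --nodeps
yum localinstall foo-1.0-1.el6.x86_64.rpm
# Ask the RPM database which installed package provides a required library
rpm -q --whatprovides "bar.so.3"
# Install a new kernel alongside the old one rather than upgrading in place
rpm -ivh <kernel_package>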
[ "-Uvh tree-1.5.3-2.el6.x86_64.rpm", "Preparing... ########################################### [100%] 1:tree ########################################### [100%]", "error: tree-1.5.3-2.el6.x86_64.rpm: Header V3 RSA/SHA256 signature: BAD, key ID d22e77f2", "error: tree-1.5.3-2.el6.x86_64.rpm: Header V3 RSA/SHA256 signature: BAD, key ID d22e77f2", "warning: tree-1.5.3-2.el6.x86_64.rpm: Header V3 RSA/SHA1 signature: NOKEY, key ID 57bbccba", "Preparing... ########################################### [100%] package tree-1.5.3-2.el6.x86_64 is already installed", "-Uvh --replacepkgs tree-1.5.3-2.el6.x86_64.rpm", "Preparing... ################################################## file /usr/bin/foobar from install of foo-1.0-1.el6.x86_64 conflicts with file from package bar-3.1.1.el6.x86_64", "-Uvh --replacefiles foo-1.0-1.el6.x86_64.rpm", "error: Failed dependencies: bar.so.3()(64bit) is needed by foo-1.0-1.el6.x86_64", "-Uvh foo-1.0-1.el6.x86_64.rpm bar-3.1.1.el6.x86_64.rpm", "Preparing... ########################################### [100%] 1:foo ########################################### [ 50%] 2:bar ########################################### [100%]", "-q --whatprovides \"bar.so.3\"", "bar-3.1.1.el6.i586.rpm" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sec-Installing_and_Upgrading
A.2. Cluster Creation with Pacemaker in Red Hat Enterprise Linux Release 6.5 and Red Hat Enterprise Linux Release 6.6 (and later)
A.2. Cluster Creation with Pacemaker in Red Hat Enterprise Linux Release 6.5 and Red Hat Enterprise Linux Release 6.6 (and later) To create a Pacemaker cluster in Red Hat Enterprise Linux 6.5, you must create the cluster and start the cluster services on each node in the cluster. For example, to create a cluster named my_cluster that consists of nodes z1-rhel65.example.com and z2-rhel65.example.com and start cluster services on those nodes, run the following commands from both z1-rhel65.example.com and z2-rhel65.example.com . In Red Hat Enterprise Linux 6.6 and later, you run the cluster creation command from one node of the cluster. The following command, run from one node only, creates the cluster named my_cluster that consists of nodes z1-rhel66.example.com and z2-rhel66.example.com and starts cluster services on those nodes.
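After the setup command finishes, a quick status check confirms that both nodes joined and that cluster services are running. The following sketch reuses the node names from the example above; run it from any cluster node.

# Show overall cluster, node, and resource status
pcs status
# Show only the cluster and node membership details
pcs cluster status
# If you created the cluster without --start, start services on all nodes afterwards
pcs cluster start --all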
[ "pcs cluster setup --name my_cluster z1-rhel65.example.com z2-rhel65.example.com pcs cluster start", "pcs cluster setup --name my_cluster z1-rhel65.example.com z2-rhel65.example.com pcs cluster start", "pcs cluster setup --start --name my_cluster z1-rhel66.example.com z2-rhel66.example.com" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/configuring_the_red_hat_high_availability_add-on_with_pacemaker/s1-pacemaker65-66-HAAR
Chapter 11. Accessing the CUPS logs in the systemd journal
Chapter 11. Accessing the CUPS logs in the systemd journal By default, CUPS stores log messages in the systemd journal. This includes: Error messages Access log entries Page log entries Prerequisites CUPS is installed . Procedure Display the log entries: To display all log entries, enter: To display the log entries for a specific print job, enter: To display log entries within a specific time frame, enter: Replace YYYY with the year, MM with the month, and DD with the day. Additional resources journalctl(1) man page on your system
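Beyond the commands above, journalctl accepts the usual filtering options, which can be handy when troubleshooting printing problems; the lines below are a sketch of common combinations.

# Follow new CUPS log entries as they arrive
journalctl -u cups -f
# Show only error-priority messages and above
journalctl -u cups -p err
# Combine a priority filter with a relative time window
journalctl -u cups -p warning --since today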
[ "journalctl -u cups", "journalctl -u cups JID= <print_job_id>", "journalctl -u cups --since= <YYYY-MM-DD> --until= <YYYY-MM-DD>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/configuring_and_using_a_cups_printing_server/accessing-the-cups-logs-in-the-systemd-journal_configuring-printing
Chapter 11. Intercepting Messages
Chapter 11. Intercepting Messages With AMQ Broker you can intercept packets entering or exiting the broker, allowing you to audit packets or filter messages. Interceptors can change the packets they intercept, which makes them powerful, but also potentially dangerous. You can develop interceptors to meet your business requirements. Interceptors are protocol specific and must implement the appropriate interface. Interceptors must implement the intercept() method, which returns a boolean value. If the value is true , the message packet continues onward. If false , the process is aborted, no other interceptors are called, and the message packet is not processed further. 11.1. Creating Interceptors You can create your own incoming and outgoing interceptors. All interceptors are protocol specific and are called for any packet entering or exiting the server respectively. This allows you to create interceptors to meet business requirements such as auditing packets. Interceptors can change the packets they intercept. This makes them powerful as well as potentially dangerous, so be sure to use them with caution. Interceptors and their dependencies must be placed in the Java classpath of the broker. You can use the <broker_instance_dir> /lib directory since it is part of the classpath by default. Procedure The following examples demonstrate how to create an interceptor that checks the size of each packet passed to it. Note that the examples implement a specific interface for each protocol. Implement the appropriate interface and override its intercept() method. If you are using the AMQP protocol, implement the org.apache.activemq.artemis.protocol.amqp.broker.AmqpInterceptor interface. package com.example; import org.apache.activemq.artemis.protocol.amqp.broker.AMQPMessage; import org.apache.activemq.artemis.protocol.amqp.broker.AmqpInterceptor; import org.apache.activemq.artemis.spi.core.protocol.RemotingConnection; public class MyInterceptor implements AmqpInterceptor { private final int ACCEPTABLE_SIZE = 1024; @Override public boolean intercept(final AMQPMessage message, RemotingConnection connection) { int size = message.getEncodeSize(); if (size <= ACCEPTABLE_SIZE) { System.out.println("This AMQPMessage has an acceptable size."); return true; } return false; } } If you are using Core Protocol, your interceptor must implement the org.apache.artemis.activemq.api.core.Interceptor interface. package com.example; import org.apache.artemis.activemq.api.core.Interceptor; import org.apache.activemq.artemis.core.protocol.core.Packet; import org.apache.activemq.artemis.spi.core.protocol.RemotingConnection; public class MyInterceptor implements Interceptor { private final int ACCEPTABLE_SIZE = 1024; @Override boolean intercept(Packet packet, RemotingConnection connection) throws ActiveMQException { int size = packet.getPacketSize(); if (size <= ACCEPTABLE_SIZE) { System.out.println("This Packet has an acceptable size."); return true; } return false; } } If you are using the MQTT protocol, implement the org.apache.activemq.artemis.core.protocol.mqtt.MQTTInterceptor interface. 
package com.example; import org.apache.activemq.artemis.core.protocol.mqtt.MQTTInterceptor; import io.netty.handler.codec.mqtt.MqttMessage; import org.apache.activemq.artemis.spi.core.protocol.RemotingConnection; public class MyInterceptor implements MQTTInterceptor { private final int ACCEPTABLE_SIZE = 1024; @Override public boolean intercept(MqttMessage mqttMessage, RemotingConnection connection) throws ActiveMQException { byte[] msg = (mqttMessage.toString()).getBytes(); int size = msg.length; if (size <= ACCEPTABLE_SIZE) { System.out.println("This MqttMessage has an acceptable size."); return true; } return false; } } If you are using the STOMP protocol, implement the org.apache.activemq.artemis.core.protocol.stomp.StompFrameInterceptor interface. package com.example; import org.apache.activemq.artemis.core.protocol.stomp.StompFrameInterceptor; import org.apache.activemq.artemis.core.protocol.stomp.StompFrame; import org.apache.activemq.artemis.spi.core.protocol.RemotingConnection; public class MyInterceptor implements StompFrameInterceptor { private final int ACCEPTABLE_SIZE = 1024; @Override public boolean intercept(StompFrame stompFrame, RemotingConnection connection) throws ActiveMQException { int size = stompFrame.getEncodedSize(); if (size <= ACCEPTABLE_SIZE) { System.out.println("This StompFrame has an acceptable size."); return true; } return false; } } 11.2. Configuring the Broker to Use Interceptors Once you have created an interceptor, you must configure the broker to use it. Prerequisites You must create an interceptor class and add it (and its dependencies) to the Java classpath of the broker before you can configure it for use by the broker. You can use the <broker_instance_dir> /lib directory since it is part of the classpath by default. Procedure Configure the broker to use an interceptor by adding configuration to <broker_instance_dir> /etc/broker.xml . If your interceptor is intended for incoming messages, add its class-name to the list of remoting-incoming-interceptors . <configuration> <core> ... <remoting-incoming-interceptors> <class-name>org.example.MyIncomingInterceptor</class-name> </remoting-incoming-interceptors> ... </core> </configuration> If your interceptor is intended for outgoing messages, add its class-name to the list of remoting-outgoing-interceptors . <configuration> <core> ... <remoting-outgoing-interceptors> <class-name>org.example.MyOutgoingInterceptor</class-name> </remoting-outgoing-interceptors> </core> </configuration> Additional resources To learn how to configure interceptors in the AMQ Core Protocol JMS client, see Using message interceptors in the AMQ Core Protocol JMS documentation.
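As a practical note on the classpath prerequisite above, the interceptor class has to be compiled against the broker libraries and copied into the instance lib directory before the broker.xml change takes effect. The following shell sketch is only an outline: the directory placeholders, the jar name, and the use of the bundled libraries as the compile classpath are assumptions to adapt to your installation.

# Compile the interceptor against the broker's bundled libraries
javac -cp "<broker_install_dir>/lib/*" com/example/MyInterceptor.java
# Package it and place it on the instance classpath
jar cf my-interceptor.jar com/example/MyInterceptor.class
cp my-interceptor.jar <broker_instance_dir>/lib/
# Restart the broker instance so the new jar and the broker.xml changes are picked up
<broker_instance_dir>/bin/artemis-service restart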
[ "package com.example; import org.apache.activemq.artemis.protocol.amqp.broker.AMQPMessage; import org.apache.activemq.artemis.protocol.amqp.broker.AmqpInterceptor; import org.apache.activemq.artemis.spi.core.protocol.RemotingConnection; public class MyInterceptor implements AmqpInterceptor { private final int ACCEPTABLE_SIZE = 1024; @Override public boolean intercept(final AMQPMessage message, RemotingConnection connection) { int size = message.getEncodeSize(); if (size <= ACCEPTABLE_SIZE) { System.out.println(\"This AMQPMessage has an acceptable size.\"); return true; } return false; } }", "package com.example; import org.apache.artemis.activemq.api.core.Interceptor; import org.apache.activemq.artemis.core.protocol.core.Packet; import org.apache.activemq.artemis.spi.core.protocol.RemotingConnection; public class MyInterceptor implements Interceptor { private final int ACCEPTABLE_SIZE = 1024; @Override boolean intercept(Packet packet, RemotingConnection connection) throws ActiveMQException { int size = packet.getPacketSize(); if (size <= ACCEPTABLE_SIZE) { System.out.println(\"This Packet has an acceptable size.\"); return true; } return false; } }", "package com.example; import org.apache.activemq.artemis.core.protocol.mqtt.MQTTInterceptor; import io.netty.handler.codec.mqtt.MqttMessage; import org.apache.activemq.artemis.spi.core.protocol.RemotingConnection; public class MyInterceptor implements Interceptor { private final int ACCEPTABLE_SIZE = 1024; @Override boolean intercept(MqttMessage mqttMessage, RemotingConnection connection) throws ActiveMQException { byte[] msg = (mqttMessage.toString()).getBytes(); int size = msg.length; if (size <= ACCEPTABLE_SIZE) { System.out.println(\"This MqttMessage has an acceptable size.\"); return true; } return false; } }", "package com.example; import org.apache.activemq.artemis.core.protocol.stomp.StompFrameInterceptor; import org.apache.activemq.artemis.core.protocol.stomp.StompFrame; import org.apache.activemq.artemis.spi.core.protocol.RemotingConnection; public class MyInterceptor implements Interceptor { private final int ACCEPTABLE_SIZE = 1024; @Override boolean intercept(StompFrame stompFrame, RemotingConnection connection) throws ActiveMQException { int size = stompFrame.getEncodedSize(); if (size <= ACCEPTABLE_SIZE) { System.out.println(\"This StompFrame has an acceptable size.\"); return true; } return false; } }", "<configuration> <core> <remoting-incoming-interceptors> <class-name>org.example.MyIncomingInterceptor</class-name> </remoting-incoming-interceptors> </core> </configuration>", "<configuration> <core> <remoting-outgoing-interceptors> <class-name>org.example.MyOutgoingInterceptor</class-name> </remoting-outgoing-interceptors> </core> </configuration>" ]
https://docs.redhat.com/en/documentation/red_hat_amq_broker/7.12/html/configuring_amq_broker/interceptors
Chapter 2. Setting up your environment
Chapter 2. Setting up your environment This tutorial walks you through the process of creating a Fuse Integration project. The project includes an initial route and a default CamelContext. A route is a chain of processors through which a message travels. A CamelContext is a single routing rule base that defines the context for configuring routes, and specifies the policies to use during message exchanges between endpoints (message sources and targets). You must complete this tutorial before you follow any of the other tutorials. Goals In this tutorial you complete the following tasks: Create a Fuse Integration project Download test messages (XML files) for your project View the test messages Before you begin Before you can set up a Fuse Integration project, you must install Red Hat CodeReady Studio with Fuse Tooling. For information on how to install CodeReady Studio, go to the Red Hat customer portal for the installation guide for your platform. Before you can follow the steps in the Chapter 10, Publishing your project to Red Hat Fuse tutorial, you must install Java 8. Creating a Fuse Integration project Open Red Hat CodeReady Studio. When you start CodeReady Studio for the first time, it opens in the JBoss perspective: Otherwise, it opens in the perspective that you were using in your CodeReady Studio session. From the menu, select File New Fuse Integration Project to open the New Fuse Integration Project wizard: In the Project Name field, enter ZooOrderApp . Leave the Use default workspace location option checked. Click Next to open the Select a Target Runtime page: Select Standalone for the deployment platform. Choose Karaf/Fuse on Karaf and accept None selected for the runtime. Note You add the runtime later in the Chapter 10, Publishing your project to Red Hat Fuse tutorial . Accept the default Apache Camel version . Click Next to open the Advanced Project Setup page, and then select the Empty - Blueprint DSL template: Click Finish . Fuse Tooling starts downloading, from the Maven repository, all of the files that it needs to build the project, and then it adds the new project to the Project Explorer view. If CodeReady Studio is not already showing the Fuse Integration perspective, it asks whether you want to switch to it now: Click Yes . The new ZooOrderApp project opens in the Fuse Integration perspective: The ZooOrderApp project contains all of the files that you need to create and run routes, including: ZooOrderApp/pom.xml - A Maven project file. ZooOrderApp/src/main/resources/OSGI-INF/blueprint/blueprint.xml - A Blueprint XML file that contains a Camel routing context and an initial empty route. To view the preliminary routing context, open the blueprint.xml file in the Editor view, and then click the Source tab. Setting component labels to display ID values To ensure that the labels of the patterns and components that you place on the Design canvas are the same as the labels shown in the Tooling Tutorials: Open the Editor preferences page: On Linux and Windows machines, select Windows Preferences Fuse Tooling Editor . On OS X, select CodeReady Studio Preferences Fuse Tooling Editor . Check the Use ID values for all component labels option. Click Apply and Close . Downloading test messages for your project Sample XML message files are provided so that you can test your ZooOrderApp project as you work through the Tooling Tutorials. The messages contain order information for zoo animals. For example, an order of five wombats for the Chicago zoo.
To download and copy the provided test messages (XML files) to your project: In the CodeReady Studio Project Explorer view, create a folder to contain the test messages: Right-click the ZooOrderApp/src folder and then select New Folder . The New Folder wizard opens. For Folder name , type data . Click Finish . Click here to open a web browser to the location of the provided Tooling Tutorial resource Fuse-tooling-tutorials-jbds-10.3.zip file. Download the Fuse-tooling-tutorials-jbds-10.3.zip file to a convenient location that is external to the ZooOrderApp project's workspace, and then unzip it. It contains two folders as described in Chapter 1, About the Fuse Tooling Tutorials . From the messages folder, copy the six XML files to your ZooOrderApp project's src/data folder (a command-line sketch for this copy, plus a quick well-formedness check, follows at the end of this section). Note You can safely ignore the warning icons on the XML files. Viewing the test messages Each XML message file contains an order from a zoo (a customer) for a quantity of animals. For example, the 'message1.xml' file contains an order from the Bronx Zoo for 12 wombats. You can open any of the message XML files in the Editor view to examine the contents. In the Project Explorer view, right-click a message file. From the popup menu, select Open . Click the Source tab. The XML file opens in the Editor view. For example, the contents of the message1.xml file show an order from the Bronx Zoo for 12 wombats: Note You can safely ignore the warning on the first line of the newly created message1.xml file, which advises you that there are no grammar constraints (DTD or XML Schema) referenced by the document. The following table provides a summary of the contents of all six message files: Table 2.1. Provided test messages msg# <name> <city> <country> <animal> <quantity> 1 Bronx Zoo Bronx NY USA wombat 12 2 San Diego Zoo San Diego CA USA giraffe 3 3 Sea Life Centre Munich Germany penguin 15 4 Berlin Zoo Berlin Germany emu 6 5 Philadelphia Zoo Philapelphia PA USA giraffe 2 6 St Louis Zoo St Loius MO USA penguin 10 Next steps Now that you have set up your CodeReady Studio project, you can continue to the Chapter 3, Defining a Route tutorial in which you define the route that processes the XML messages.
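If you prefer a shell to drag-and-drop for the copy step above, the commands below sketch the same result plus a quick well-formedness check; the extraction path is a placeholder, the xmllint tool from libxml2 is assumed to be installed, and the project should be refreshed in CodeReady Studio afterwards so the new files appear.

# Copy the provided test messages into the project's data folder
cp <extracted_zip_dir>/messages/message*.xml <workspace>/ZooOrderApp/src/data/
# Confirm all six files are well-formed XML (prints nothing on success)
xmllint --noout <workspace>/ZooOrderApp/src/data/message*.xml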
[ "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <order> <customer> <name>Bronx Zoo</name> <city>Bronx NY</city> <country>USA</country> </customer> <orderline> <animal>wombat</animal> <quantity>12</quantity> </orderline> </order>" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/tooling_tutorials/RiderTutorialSetup
Chapter 11. Enabling encryption on a vSphere cluster
Chapter 11. Enabling encryption on a vSphere cluster You can encrypt your virtual machines after installing OpenShift Container Platform 4.16 on vSphere by draining and shutting down your nodes one at a time. While each virtual machine is shutdown, you can enable encryption in the vCenter web interface. 11.1. Encrypting virtual machines You can encrypt your virtual machines with the following process. You can drain your virtual machines, power them down and encrypt them using the vCenter interface. Finally, you can create a storage class to use the encrypted storage. Prerequisites You have configured a Standard key provider in vSphere. For more information, see Adding a KMS to vCenter Server . Important The Native key provider in vCenter is not supported. For more information, see vSphere Native Key Provider Overview . You have enabled host encryption mode on all of the ESXi hosts that are hosting the cluster. For more information, see Enabling host encryption mode . You have a vSphere account which has all cryptographic privileges enabled. For more information, see Cryptographic Operations Privileges . Procedure Drain and cordon one of your nodes. For detailed instructions on node management, see "Working with Nodes". Shutdown the virtual machine associated with that node in the vCenter interface. Right-click on the virtual machine in the vCenter interface and select VM Policies Edit VM Storage Policies . Select an encrypted storage policy and select OK . Start the encrypted virtual machine in the vCenter interface. Repeat steps 1-5 for all nodes that you want to encrypt. Configure a storage class that uses the encrypted storage policy. For more information about configuring an encrypted storage class, see "VMware vSphere CSI Driver Operator". 11.2. Additional resources Working with nodes vSphere encryption Requirements for encrypting virtual machines
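The drain and cordon step in the procedure above maps to standard oc adm commands. The following per-node cycle is a sketch; the node name is a placeholder and the drain flags shown are common choices rather than the only valid ones.

# Mark the node unschedulable and evict its workloads
oc adm cordon <node_name>
oc adm drain <node_name> --ignore-daemonsets --delete-emptydir-data --force
# ... shut the VM down, apply the encrypted storage policy in vCenter, power it back on ...
# Return the node to service and confirm it is Ready before encrypting the next one
oc adm uncordon <node_name>
oc get nodes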
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/installing_on_vsphere/vsphere-post-installation-encryption
5.181. mailman
5.181. mailman 5.181.1. RHBA-2012:1474 - mailman bug fix update Updated mailman packages that fix multiple bugs are now available for Red Hat Enterprise Linux 6. Mailman is a program used to help manage e-mail discussion lists. Bug Fixes BZ# 772998 The reset_pw.py script contained a typo, which could cause the mailman utility to fail with a traceback. The typo has been corrected, and mailman now works as expected. BZ# 799323 The "urlhost" argument was not handled in the newlist script. When running the "newlist" command with the "--urlhost" argument specified, the contents of the index archive page was not created using proper URLs; the hostname was used instead. With this update, "urlhost" is now handled in the newlist script. If the "--urlhost" argument is specified on the command line, the host URL is used when creating the index archive page instead of the hostname. BZ# 832920 Previously, long lines in e-mails were not wrapped in the web archive, sometimes requiring excessive horizontal scrolling. The "white-space: pre-wrap;" CSS style has been added to all templates, so that long lines are now wrapped in browsers that support that style. BZ# 834023 The "From" string in the e-mail body was not escaped properly. A message containing the "From" string at the beginning of a line was split and displayed in the web archive as two or more messages. The "From" string is now correctly escaped, and messages are no longer split in the described scenario. All users of mailman are advised to upgrade to these updated packages, which fix these bugs.
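For reference, the --urlhost argument mentioned in BZ#799323 is passed on the newlist command line as shown below; the list and host names are illustrative, and on Red Hat Enterprise Linux 6 the script typically lives under /usr/lib/mailman/bin/.

# Create a list whose archive index pages use the given host URL
/usr/lib/mailman/bin/newlist --urlhost=lists.example.com mylist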
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.3_technical_notes/mailman
Using the Automation Calculator
Using the Automation Calculator Red Hat Ansible Automation Platform 2.3 Evaluate costs and automated processes that determine the return on investment automation brings to your organization. Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.3/html/using_the_automation_calculator/index
1.5. Pacemaker Configuration and Management Tools
1.5. Pacemaker Configuration and Management Tools Pacemaker features two configuration tools for cluster deployment, monitoring, and management. pcs pcs can control all aspects of Pacemaker and the Corosync heartbeat daemon. A command-line based program, pcs can perform the following cluster management tasks: Create and configure a Pacemaker/Corosync cluster Modify configuration of the cluster while it is running Remotely configure both Pacemaker and Corosync, as well as start, stop, and display status information of the cluster pcsd Web UI A graphical user interface to create and configure Pacemaker/Corosync clusters, with the same features and abilities as the command-line based pcs utility.
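To give a flavor of the command-line workflow that pcs provides, the lines below show a few representative invocations; the resource name and IP address are placeholders.

# Display the full cluster and resource configuration
pcs config
# Show the current cluster-wide properties
pcs property list
# Create a floating IP address resource
pcs resource create my_vip ocf:heartbeat:IPaddr2 ip=192.168.0.99 cidr_netmask=24
# Stop cluster services on every node
pcs cluster stop --all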
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_overview/s1-pacemakertools-haao
Providing feedback on Red Hat documentation
Providing feedback on Red Hat documentation We appreciate your input on our documentation. Do let us know how we can make it better. To give feedback, create a Bugzilla ticket: Go to the Bugzilla website. In the Component section, choose documentation . Fill in the Description field with your suggestion for improvement. Include a link to the relevant part(s) of documentation. Click Submit Bug .
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/configuring_openshift_data_foundation_disaster_recovery_for_openshift_workloads/providing-feedback-on-red-hat-documentation_common
3.4. Configuring IP Networking with GNOME GUI
3.4. Configuring IP Networking with GNOME GUI In Red Hat Enterprise Linux 7, NetworkManager does not have its own graphical user interface (GUI). The network connection icon on the top right of the desktop is provided as part of the GNOME Shell and the Network settings configuration tool is provided as part of the new GNOME control-center GUI which supports the wired, wireless, vpn connections. The nm-connection-editor is the main tool for GUI configuration. Besides control-center 's features, it also applies the functionality which is not provided by the GNOME control-center such as configuring bond, team, bridge connections. In this section, you can configure a network interface using: the GNOME control-center application the GNOME nm-connection-editor application 3.4.1. Connecting to a Network Using the control-center GUI There are two ways to access the Network settings window of the control-center application: Press the Super key to enter the Activities Overview, type Settings and then press Enter . Then, select the Network tab on the left-hand side, and the Network settings tool appears. Proceed to the section called "Configuring New Connections with control-center" . Click on the GNOME Shell network connection icon in the top right-hand corner of the screen to open its menu. Figure 3.5. Network Configuration using the control-center application When you click on the GNOME Shell network connection icon, you are presented with: A list of categorized networks you are currently connected to (such as Wired and Wi-Fi ). A list of all Available Networks that NetworkManager has detected. Options for connecting to any configured Virtual Private Networks (VPNs) and An option for selecting the Network Settings menu entry. If you are connected to a network, this is indicated by a black bullet on the left of the connection name. If you click on Network Settings , the Network settings tool appears. Proceed to the section called "Configuring New Connections with control-center" . 3.4.2. Configuring New and Editing Existing Connections Using a GUI As a system administrator, you can configure a network connection. This enables users to apply or change settings of an interface. For doing that, you can use one of the following two ways: the GNOME control-center application the GNOME nm-connection-editor application 3.4.2.1. Configuring New and Editing Existing Connections Using control-center You can create and configure a network connection using the GNOME control-center application. Configuring New Connections with control-center To configure a new wired, wireless, vpn connection using the control-center application, proceed as follows: Press the Super key to enter the Activities Overview, type Settings and then press Enter . Then, select the Network tab on the left-hand side. The Network settings tool appears on the right-hand side menu: Figure 3.6. Opening the Network Settings Window Click the plus button to add a new connection. To configure: Wired connections , click the plus button to Wired entry and proceed to Section 3.4.6, "Configuring a Wired (Ethernet) Connection with a GUI" . 
VPN connections , click the plus button to VPN entry and proceed to Section 3.4.8.1, "Establishing a VPN Connection with control-center" For Wi-Fi connections , click the Wi-fi entry in the Settings menu and proceed to Section 3.4.7, "Configuring a Wi-Fi Connection with a GUI" Editing an Existing Connection with control-center Clicking on the gear wheel icon of an existing connection profile in the Network settings window opens the Details window, from where you can perform most network configuration tasks such as IP addressing, DNS , and routing configuration. Figure 3.7. Configure Networks Using the Network Connection Details Window For any connection type you add or configure, you can choose NetworkManager to connect to that network automatically when it is available. For doing that, select Connect automatically to cause NetworkManager to auto-connect to the connection whenever NetworkManager detects that it is available. Clear the check box if you do not want NetworkManager to connect automatically. If the check box is clear, you will have to select that connection manually in the network connection icon's menu to cause it to connect. To make a connection available to other users, select the Make available to other users check box. To apply changes after a connection modification, you can click the Apply button in the top right-hand corner of the connection window. You can delete a connection by clicking the Remove Connection Profile red box. 3.4.2.2. Configuring New and Editing Existing Connections Using nm-connection-editor Using the nm-connection-editor GUI application, you can configure any connection you want with additional features than control-center provides. In addition, nm-connection-editor applies the functionality which is not provided by the GNOME control-center such as configuring bond, bridge, VLAN, team connections. Configuring a New Connection with nm-connection-editor To add a new connection type using nm-connection-editor : Procedure Enter nm-connection-editor in a terminal: The Network Connections window appears. Click the plus button to choose a connection type: Figure 3.8. Adding a connection type using nm-connection-editor Figure 3.9. Choosing a connection type with nm-connection-editor To create and configure: Bond connections , click the Bond entry and proceed to Section 7.8.1, "Establishing a Bond Connection" ; Bridge connections , click the Bridge entry and proceed to Section 9.4.1, "Establishing a Bridge Connection with a GUI" ; VLAN connections , click the VLAN entry and proceed to Section 10.5.1, "Establishing a VLAN Connection" ; or, Team connections , click the Team entry and proceed to Section 8.14, "Creating a Network Team Using a GUI" . Editing an Existing Connection with nm-connection-editor For an existing connection type, click the gear wheel icon from the Network Connections dialog, see the section called "Configuring a New Connection with nm-connection-editor" . 3.4.3. Common Configuration Options Using nm-connection-editor If you use the nm-connection-editor utility, there are five common configuration options to the most connection types (ethernet, wifi, mobile broadband, DSL) following the procedure below: Procedure Enter nm-connection-editor in a terminal: The Network Connections window appears. Click the plus button to choose a connection type or the gear wheel icon to edit an existing connection. Select the General tab in the Editing dialog: Figure 3.10. 
Configuration options in nm-connection-editor Connection name - Enter a descriptive name for your network connection. This name is used to list this connection in the menu of the Network window. Connection priority for auto-activation - If the connection is set to autoconnect, the number is activated ( 0 by default). The higher number means higher priority. Automatically connect to this network when it is available - Select this box if you want NetworkManager to auto-connect to this connection when it is available. See the section called "Editing an Existing Connection with control-center" for more information. All users may connect to this network - Select this box to create a connection available to all users on the system. Changing this setting may require root privileges. See Section 3.4.5, "Managing System-wide and Private Connection Profiles with a GUI" for details. Automatically connect to VPN when using this connection - Select this box if you want NetworkManager to auto-connect to a VPN connection when it is available. Select the VPN from the drop-down menu. Firewall Zone - Select the firewall zone from the drop-down menu. See the Red Hat Enterprise Linux 7 Security Guide for more information on firewall zones. Note For the VPN connection type, only three of the above configuration options are available: Connection name , All users may connect to this network and Firewall Zone . 3.4.4. Connecting to a Network Automatically with a GUI For any connection type you add or configure, you can choose whether you want NetworkManager to try to connect to that network automatically when it is available. You can use one of the following ways: the GNOME control-center application the GNOME nm-connection-editor application 3.4.4.1. Connecting to a Network Automatically with control-center You can connect to a network automatically using control-center : Procedure Press the Super key to enter the Activities Overview, type Settings and then press Enter . Then, select the Network tab on the left-hand side. The Network settings tool appears on the right-hand side menu, see the section called "Configuring New Connections with control-center" . Select the network interface from the right-hand-side menu. Click on the gear wheel icon of a connection profile on the right-hand side menu. The Network details window appears. Select the Details menu entry, see the section called "Editing an Existing Connection with control-center" . Select Connect automatically to cause NetworkManager to auto-connect to the connection whenever NetworkManager detects that it is available. Clear the check box if you do not want NetworkManager to connect automatically. If the check box is clear, you will have to select that connection manually in the network connection icon's menu to cause it to connect. 3.4.4.2. Connecting to a Network Automatically with nm-connection-editor You can also use the GNOME nm-connection-editor application for connecting to a network automatically. For doing that, follow the procedure descibed in Section 3.4.3, "Common Configuration Options Using nm-connection-editor" , and check the Automatically connect to this network when it is available check box in the General tab. 3.4.5. Managing System-wide and Private Connection Profiles with a GUI NetworkManager stores all connection profiles . A profile is a named collection of settings that can be applied to an interface. NetworkManager stores these connection profiles for system-wide use ( system connections ), as well as all user connection profiles. 
Access to the connection profiles is controlled by permissions which are stored by NetworkManager . See the nm-settings(5) man page for more information on the connection settings permissions property. You can control access to a connection profile using the following graphical user interface tools: the nm-connection-editor application the control-center application 3.4.5.1. Managing Permissions for a Connection Profile with nm-connection-editor To create a connection available to all users on the system, follow the procedure descibed in Section 3.4.3, "Common Configuration Options Using nm-connection-editor" , and check the All users may connect to this network check box in the General tab. 3.4.5.2. Managing Permissions for a Connection Profile with control-center To make a connection available to other users, follow the procedure described in the section called "Editing an Existing Connection with control-center" , and select the Make available to other users check box in the GNOME control-center Network settings Details window. Conversely, clear the Make available to other users check box to make the connection user-specific instead of system-wide. Note Depending on the system's policy, you may need root privileges on the system in order to change whether a connection is user-specific or system-wide. NetworkManager 's default policy is to allow all users to create and modify system-wide connections. Profiles that are available at boot time cannot be private because they will not be visible until the user logs in. For example, if a user creates a connection profile user-em2 with the Connect Automatically check box selected but with the Make available to other users not selected, then the connection will not be available at boot time. To restrict connections and networking, there are two options which can be used alone or in combination: Clear the Make available to other users check box, which changes the connection to be modifiable and usable only by the user doing the changing. Use the polkit framework to restrict permissions of general network operations on a per-user basis. The combination of these two options provides fine-grained security and control over networking. See the polkit(8) man page for more information on polkit . Note that VPN connections are always created as private-per-user, since they are assumed to be more private than a Wi-Fi or Ethernet connection. 3.4.6. Configuring a Wired (Ethernet) Connection with a GUI You can configure a wired connection using GUI in two ways: the control-center application the nm-connection-editor application 3.4.6.1. Configuring a Wired Connection Using control-center Procedure Press the Super key to enter the Activities Overview, type Settings and then press Enter . Then, select the Network menu entry on the left-hand side, and the Network settings tool appears, see the section called "Configuring New Connections with control-center" . Select the Wired network interface if it is not already highlighted. The system creates and configures a single wired connection profile called Wired by default. A profile is a named collection of settings that can be applied to an interface. More than one profile can be created for an interface and applied as needed. The default profile cannot be deleted but its settings can be changed. Edit the default Wired profile by clicking the gear wheel icon. Basic Configuration Options You can see the following configuration settings in the Wired dialog, by selecting the Identity menu entry: Figure 3.11. 
Basic Configuration options of a Wired Connection Name - Enter a descriptive name for your network connection. This name will be used to list this connection in the menu of the Network window. MAC Address - Select the MAC address of the interface this profile must be applied to. Cloned Address - If required, enter a different MAC address to use. MTU - If required, enter a specific maximum transmission unit ( MTU ) to use. The MTU value represents the size in bytes of the largest packet that the link layer will transmit. This value defaults to 1500 and does not generally need to be specified or changed. Making Further Wired Configurations You can further configure an existing connection in the editing dialog. To configure: IPv4 settings for the connection, click the IPv4 menu entry and proceed to Section 5.4, "Configuring IPv4 Settings" or IPv6 settings for the connection, click the IPv6 menu entry and proceed to Section 5.5, "Configuring IPv6 Settings" . port-based Network Access Control (PNAC) , click the 802.1X Security menu entry and proceed to Section 5.2, "Configuring 802.1X Security" ; Saving Your New (or Modified) Wired Connection Once you have finished editing your wired connection, click the Apply button to save your customized configuration. If the profile was in use while being edited, restart the connection to make NetworkManager apply the changes. If the profile is OFF, set it to ON or select it in the network connection icon's menu. See Section 3.4.1, "Connecting to a Network Using the control-center GUI" for information on using your new or altered connection. Creating a New Wired Connection To create a new wired connection profile, click the plus button, see the section called "Configuring New Connections with control-center" . When you add a new connection by clicking the plus button, NetworkManager creates a new configuration file for that connection and then opens the same dialog that is used for editing an existing connection, see the section called "Editing an Existing Connection with control-center" . The difference between these dialogs is that an existing connection profile has a Details menu entry. 3.4.6.2. Configuring a Wired Connection with nm-connection-editor The nm-connection-editor GUI application provides more configuration options than the control-center GUI application. To configure a wired connection using nm-connection-editor : Enter the nm-connection-editor in a terminal. The Network Connections window appears. Select the ethernet connection you want to edit and click the gear wheel icon: Figure 3.12. Edit a wired connection The Editing dialog appears. To connect to a network automatically and restrict connections, click the General tab, see Section 3.4.3, "Common Configuration Options Using nm-connection-editor" . To configure the networking settings, click the Ethernet tab, see the section called "Configuring 802.3 Link Settings with nm-connection-editor" . To configure 802.1X Security for a wired connection, click the 802.1X Security tab, see Section 5.2.4, "Configuring 802.1X Security for Wired with nm-connection-editor" . To configure the IPV4 settings, click the IPV4 Settings tab, see the section called "Setting the Method for IPV4 Using nm-connection-editor" . To configure the IPV6 settings, click the IPV6 Settings tab, see Section 5.5, "Configuring IPv6 Settings" . 3.4.7. 
Configuring a Wi-Fi Connection with a GUI This section explains how to use NetworkManager to configure a Wi-Fi (also known as wireless or 802.11 a/b/g/n ) connection to an Access Point. An Access Point is a device that allows wireless devices to connect to a network. To configure a mobile broadband (such as 3G) connection, see Section 3.4.9, "Configuring a Mobile Broadband Connection with a GUI" . Connecting Quickly to an Available Access Point Procedure Click on the network connection icon to activate the network connection icon's menu, see Section 3.4.1, "Connecting to a Network Using the control-center GUI" . Locate the Service Set Identifier ( SSID ) of the access point in the list of Wi-Fi networks. Click on the SSID of the network. A padlock symbol indicates the access point requires authentication. If the access point is secured, a dialog prompts you for an authentication key or password. NetworkManager tries to auto-detect the type of security used by the access point. If there are multiple possibilities, NetworkManager guesses the security type and presents it in the Wi-Fi security drop-down menu. For WPA-PSK security (WPA with a passphrase) no choice is necessary. For WPA Enterprise (802.1X) you have to specifically select the security, because that cannot be auto-detected. Note that if you are unsure, try connecting to each type in turn. Enter the key or passphrase in the Password field. Certain password types, such as a 40-bit WEP or 128-bit WPA key, are invalid unless they are of a requisite length. The Connect button will remain inactive until you enter a key of the length required for the selected security type. To learn more about wireless security, see Section 5.2, "Configuring 802.1X Security" . If NetworkManager connects to the access point successfully, the network connection icon will change into a graphical indicator of the wireless connection's signal strength. You can also edit the settings for one of these auto-created access point connections just as if you had added it yourself. The Wi-Fi page of the Network window has a History button. Clicking it reveals a list of all the connections you have ever tried to connect to. See the section called "Editing an Existing Wi-Fi Connection" Connecting to a Hidden Wi-Fi Network All access points have a Service Set Identifier ( SSID ) to identify them. However, an access point may be configured not to broadcast its SSID, in which case it is hidden , and will not show up in NetworkManager 's list of Available networks. You can still connect to a wireless access point that is hiding its SSID as long as you know its SSID, authentication method, and secrets. To connect to a hidden wireless network: Procedure Press the Super key to enter the Activities Overview, type Settings and then press Enter . Then, select the Wi-Fi menu entry on the left-hand side. Select Connect to Hidden Network . There are two options: If you have connected to the hidden network before: Use the Connection drop-down to select the network. Click Connect . If not, proceed as follows: Leave the Connection drop-down as New . Enter the SSID of the hidden network. Select its Wi-Fi security method. Enter the correct authentication secrets. Click Connect . For more information on wireless security settings, see Section 5.2, "Configuring 802.1X Security" . Configuring a New Wi-Fi Connection Procedure Select the Wi-Fi menu entry of Settings . Click the Wi-Fi connection name that you want to connect to (by default, the same as the SSID). 
If the SSID is not in range, see the section called "Connecting to a Hidden Wi-Fi Network" for more information. If the SSID is in range, click the Wi-Fi connection profile on the right-hand side menu. A padlock symbol indicates a key or password is required. If requested, enter the authentication details. Editing an Existing Wi-Fi Connection You can edit an existing connection that you have tried or succeeded in connecting to in the past. Procedure Press the Super key to enter the Activities Overview, type Settings and press Enter . Select Wi-Fi from the left-hand-side menu entry. Select the gear wheel icon to the right of the Wi-Fi connection name that you want to edit, and the editing connection dialog appears. Note that if the network is not currently in range, click History to display past connections. The Details window shows the connection details. Basic Configuration Options for a Wi-Fi Connection To edit a Wi-Fi connection's settings, select Identity from the editing connection dialog. The following settings are available: Figure 3.13. Basic Configuration Options for a Wi-Fi Connection SSID The Service Set Identifier ( SSID ) of the access point (AP). BSSID The Basic Service Set Identifier ( BSSID ) is the MAC address, also known as a hardware address , of the specific wireless access point you are connecting to when in Infrastructure mode. This field is blank by default, and you are able to connect to a wireless access point by SSID without having to specify its BSSID . If the BSSID is specified, it will force the system to associate to a specific access point only. For ad-hoc networks, the BSSID is generated randomly by the mac80211 subsystem when the ad-hoc network is created. It is not displayed by NetworkManager MAC address Select the MAC address, also known as a hardware address , of the Wi-Fi interface to use. A single system could have one or more wireless network adapters connected to it. The MAC address field therefore allows you to associate a specific wireless adapter with a specific connection (or connections). Cloned Address A cloned MAC address to use in place of the real hardware address. Leave blank unless required. The following settings are common to the most connection types: Connect automatically - Select this box if you want NetworkManager to auto-connect to this connection when it is available. See the section called "Editing an Existing Connection with control-center" for more information. Make available to other users - Select this box to create a connection available to all users on the system. Changing this setting may require root privileges. See Section 3.4.5, "Managing System-wide and Private Connection Profiles with a GUI" for details. Making Further Wi-Fi Configurations You can further configure an existing connection in the editing dialog. To configure: security authentication for the wireless connection, click Security and proceed to Section 5.2, "Configuring 802.1X Security" . IPv4 settings for the connection, click IPv4 and proceed to Section 5.4, "Configuring IPv4 Settings" or IPv6 settings for the connection, click IPv6 and proceed to Section 5.5, "Configuring IPv6 Settings" . Saving Your New (or Modified) Connection Once you have finished editing the wireless connection, click the Apply button to save your configuration. Given a correct configuration, you can connect to your modified connection by selecting it from the network connection icon's menu. 
See Section 3.4.1, "Connecting to a Network Using the control-center GUI" for details on selecting and connecting to a network. 3.4.8. Configuring a VPN Connection with a GUI IPsec , provided by Libreswan , is the preferred method for creating a VPN. Libreswan is an open-source, user-space IPsec implementation for VPN. Configuring an IPsec VPN using the command line is documented in the Red Hat Enterprise Linux 7 Security Guide . 3.4.8.1. Establishing a VPN Connection with control-center IPsec , provided by Libreswan , is the preferred method for creating a VPN in Red Hat Enterprise Linux 7. For more information, see Section 3.4.8, "Configuring a VPN Connection with a GUI" . The GNOME graphical user interface tool described below requires the NetworkManager-libreswan-gnome package. To install the package, run the following command as root : See Red Hat Enterprise Linux System Administrator's Guide for more information on how to install new packages in Red Hat Enterprise Linux. Establishing a Virtual Private Network (VPN) enables communication between your Local Area Network (LAN), and another, remote LAN. This is done by setting up a tunnel across an intermediate network such as the Internet. The VPN tunnel that is set up typically uses authentication and encryption. After successfully establishing a VPN connection using a secure tunnel, a VPN router or gateway performs the following actions upon the packets you transmit: it adds an Authentication Header for routing and authentication purposes; it encrypts the packet data; and, it encloses the data in packets according to the Encapsulating Security Payload (ESP) protocol, which constitutes the decryption and handling instructions. The receiving VPN router strips the header information, decrypts the data, and routes it to its intended destination (either a workstation or other node on a network). Using a network-to-network connection, the receiving node on the local network receives the packets already decrypted and ready for processing. The encryption and decryption process in a network-to-network VPN connection is therefore transparent to clients. Because they employ several layers of authentication and encryption, VPNs are a secure and effective means of connecting multiple remote nodes to act as a unified intranet. Adding a New IPsec VPN Connection Procedure Press the Super key to enter the Activities Overview, type Settings and press Enter . Then, select the Network menu entry and the Network settings tool appears, see the section called "Configuring New Connections with control-center" . Click the plus button in the VPN entry. The Add VPN window appears. For manually configuration, select IPsec based VPN . Figure 3.14. Configuring VPN on IPsec mode In the Identity configuration form, you can specify the fields in the General and Advanced sections: Figure 3.15. General and Advanced sections In General section, you can specify: Gateway The name or IP address of the remote VPN gateway. User name If required, enter the user name associated with the VPN user's identity for authentication. User password If required, enter the password associated with the VPN user's identity for authentication. Group name The name of a VPN group configured on the remote gateway. In case it is blank, the IKEv1 Main mode is used instead of the default Aggressive mode. Secret It is a pre-shared key which is used to initialize the encryption before the user's authentication. If required, enter the password associated with the group name. 
The following configuration settings are available under the Advanced section: Phase1 Algorithms If required, enter the algorithms to be used to authenticate and set up an encrypted channel. Phase2 Algorithms If required, enter the algorithms to be used for the IPsec negotiations. Domain If required, enter the Domain Name. Note Configuring an IPsec VPN without using NetworkManager , see Section 3.4.8, "Configuring a VPN Connection with a GUI" . Editing an Existing VPN Connection Procedure Press the Super key to enter the Activities Overview, type Settings and press Enter . Then, select the Network menu entry and the Network settings tool appears, see the section called "Configuring New Connections with control-center" . Select the VPN connection you want to edit and click the gear wheel icon and edit the General and Advanced sections, see Section 3.4.8.1, "Establishing a VPN Connection with control-center" . Saving Your New (or Modified) Connection and Making Further Configurations Once you have finished editing your new VPN connection, click the Save button to save your customized configuration. If the profile was in use while being edited, power cycle the connection to make NetworkManager apply the changes. If the profile is OFF, set it to ON or select it in the network connection icon's menu. See Section 3.4.1, "Connecting to a Network Using the control-center GUI" for information on using your new or altered connection. You can further configure an existing connection by selecting it in the Network window and clicking Configure to return to the Editing dialog. Then, to configure: IPv4 settings for the connection, click the IPv4 Settings tab and proceed to Section 5.4, "Configuring IPv4 Settings" . 3.4.8.2. Configuring a VPN Connection with nm-connection-editor You can also use nm-connection-editor to add and configure a VPN connection. For doing that, proceed as follows: Procedure Enter nm-connection-editor in a terminal. The Network Connections window appears, see Section 3.4.3, "Common Configuration Options Using nm-connection-editor" . Click the plus button. The Choose a Connection Type menu opens. Select from the VPN menu entry, the IPsec based VPN option. Click Create to open the Editing dialog and proceed to the section called "Adding a New IPsec VPN Connection" to edit the General and Advanced sections. 3.4.9. Configuring a Mobile Broadband Connection with a GUI You can use NetworkManager 's mobile broadband connection abilities to connect to the following 2G and 3G services: 2G - GPRS ( General Packet Radio Service ), EDGE ( Enhanced Data Rates for GSM Evolution ), or CDMA (Code Division Multiple Access). 3G - UMTS ( Universal Mobile Telecommunications System ), HSPA ( High Speed Packet Access ), or EVDO (EVolution Data-Only). Your computer must have a mobile broadband device (modem), which the system has discovered and recognized, in order to create the connection. Such a device may be built into your computer (as is the case on many notebooks and netbooks), or may be provided separately as internal or external hardware. Examples include PC card, USB Modem or Dongle, mobile or cellular telephone capable of acting as a modem. 3.4.9.1. Configuring a Mobile Broadband Connection with nm-connection-editor You can configure a mobile broadband connection using the GNOME nm-connection-editor . Adding a New Mobile Broadband Connection Procedure Enter nm-connection-editor in a terminal. 
The Network Connections window appears, see Section 3.4.3, "Common Configuration Options Using nm-connection-editor" . Click the plus button. The Choose a Connection Type menu opens. Select the Mobile Broadband menu entry. Click Create to open the Set up a Mobile Broadband Connection assistant. Under Create a connection for this mobile broadband device , choose the 2G- or 3G-capable device you want to use with the connection. If the drop-down menu is inactive, this indicates that the system was unable to detect a device capable of mobile broadband. In this case, click Cancel , ensure that you do have a mobile broadband-capable device attached and recognized by the computer and then retry this procedure. Click the Continue button. Select the country where your service provider is located from the list and click the Continue button. Select your provider from the list or enter it manually. Click the Continue button. Select your payment plan from the drop-down menu and confirm the Access Point Name ( APN ) is correct. Click the Continue button. Review and confirm the settings and then click the Apply button. Edit the mobile broadband-specific settings by referring to the section called "Configuring the Mobile Broadband Tab" Editing an Existing Mobile Broadband Connection Procedure Enter nm-connection-editor in a terminal. The Network Connections window appears. Select the Mobile Broadband tab. Select the connection you want to edit and click the gear wheel icon. See Section 3.4.3, "Common Configuration Options Using nm-connection-editor" for more information. Edit the mobile broadband-specific settings by referring to the section called "Configuring the Mobile Broadband Tab" Configuring the Mobile Broadband Tab If you have already added a new mobile broadband connection using the assistant (see the section called "Adding a New Mobile Broadband Connection" for instructions), you can edit the Mobile Broadband tab to disable roaming if home network is not available, assign a network ID, or instruct NetworkManager to prefer a certain technology (such as 3G or 2G) when using the connection. Number The number that is dialed to establish a PPP connection with the GSM-based mobile broadband network. This field may be automatically populated during the initial installation of the broadband device. You can usually leave this field blank and enter the APN instead. Username Enter the user name used to authenticate with the network. Some providers do not provide a user name, or accept any user name when connecting to the network. Password Enter the password used to authenticate with the network. Some providers do not provide a password, or accept any password. APN Enter the Access Point Name ( APN ) used to establish a connection with the GSM-based network. Entering the correct APN for a connection is important because it often determines: how the user is billed for their network usage; whether the user has access to the Internet, an intranet, or a subnetwork. Network ID Entering a Network ID causes NetworkManager to force the device to register only to a specific network. This can be used to ensure the connection does not roam when it is not possible to control roaming directly. Type Any - The default value of Any leaves the modem to select the fastest network. 3G (UMTS/HSPA) - Force the connection to use only 3G network technologies. 2G (GPRS/EDGE) - Force the connection to use only 2G network technologies. 
Prefer 3G (UMTS/HSPA) - First attempt to connect using a 3G technology such as HSPA or UMTS, and fall back to GPRS or EDGE only upon failure. Prefer 2G (GPRS/EDGE) - First attempt to connect using a 2G technology such as GPRS or EDGE, and fall back to HSPA or UMTS only upon failure. Allow roaming if home network is not available Uncheck this box if you want NetworkManager to terminate the connection rather than transition from the home network to a roaming one, thereby avoiding possible roaming charges. If the box is checked, NetworkManager will attempt to maintain a good connection by transitioning from the home network to a roaming one, and vice versa. PIN If your device's SIM ( Subscriber Identity Module ) is locked with a PIN ( Personal Identification Number ), enter the PIN so that NetworkManager can unlock the device. NetworkManager must unlock the SIM if a PIN is required in order to use the device for any purpose. CDMA and EVDO have fewer options. They do not have the APN , Network ID , or Type options. Saving Your New (or Modified) Connection and Making Further Configurations Once you have finished editing your mobile broadband connection, click the Apply button to save your customized configuration. If the profile was in use while being edited, power cycle the connection to make NetworkManager apply the changes. If the profile is OFF, set it to ON or select it in the network connection icon's menu. See Section 3.4.1, "Connecting to a Network Using the control-center GUI" for information on using your new or altered connection. You can further configure an existing connection by selecting it in the Network Connections window and clicking Edit to return to the Editing dialog. Then, to configure: Point-to-point settings for the connection, click the PPP Settings tab and proceed to Section 5.6, "Configuring PPP (Point-to-Point) Settings" ; IPv4 settings for the connection, click the IPv4 Settings tab and proceed to Section 5.4, "Configuring IPv4 Settings" ; or, IPv6 settings for the connection, click the IPv6 Settings tab and proceed to Section 5.5, "Configuring IPv6 Settings" . 3.4.10. Configuring a DSL Connection with a GUI This section is intended for those installations which have a DSL card fitted within a host rather than the external combined DSL modem router combinations typical of private consumer or SOHO installations. 3.4.10.1. Configuring a DSL Connection with nm-connection-editor You can configure a DSL connection using the GNOME nm-connection-editor . Adding a New DSL Connection Procedure Enter nm-connection-editor in a terminal. The Network Connections window appears, see Section 3.4.3, "Common Configuration Options Using nm-connection-editor" . Click the plus button. The Choose a Connection Type list appears. Select DSL and press the Create button. The Editing DSL Connection 1 window appears. Editing an Existing DSL Connection Procedure Enter nm-connection-editor in a terminal. The Network Connections window appears. Select the connection you want to edit and click the gear wheel icon. See Section 3.4.3, "Common Configuration Options Using nm-connection-editor" for more information. Configuring the DSL Tab Username Enter the user name used to authenticate with the service provider. Service Leave blank unless otherwise directed by your service provider. Password Enter the password supplied by the service provider. 
Saving Your New (or Modified) Connection and Making Further Configurations Once you have finished editing your DSL connection, click the Apply button to save your customized configuration. If the profile was in use while being edited, power cycle the connection to make NetworkManager apply the changes. If the profile is OFF, set it to ON or select it in the network connection icon's menu. See Section 3.4.1, "Connecting to a Network Using the control-center GUI" for information on using your new or altered connection. You can further configure an existing connection by selecting it in the Network Connections window and clicking Edit to return to the Editing dialog. To configure: The MAC address and MTU settings, click the Wired tab and proceed to the section called "Basic Configuration Options " . Point-to-point settings for the connection, click the PPP Settings tab and proceed to Section 5.6, "Configuring PPP (Point-to-Point) Settings" . IPv4 settings for the connection, click the IPv4 Settings tab and proceed to Section 5.4, "Configuring IPv4 Settings" .
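NetworkManager's nmcli tool can create equivalent mobile broadband and DSL profiles from the command line. This is a minimal sketch rather than part of the procedures above; the interface names, APN, and account details are placeholders that must be replaced with values from your provider:
# GSM mobile broadband profile with an APN
nmcli connection add type gsm ifname cdc-wdm0 con-name my-broadband gsm.apn internet.example.com
# DSL (PPPoE) profile over a wired interface
nmcli connection add type pppoe ifname enp1s0 con-name my-dsl pppoe.username provider-user pppoe.password provider-password
# Activate or deactivate a profile to apply changes
nmcli connection up my-dsl
nmcli connection down my-dsl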
[ "~]USD nm-connection-editor", "~]USD nm-connection-editor", "~]USD nm-connection-editor", "~]# yum install NetworkManager-libreswan-gnome" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/networking_guide/sec-configuring_ip_networking_with_gnome_gui
Appendix B. Preparing a Local Manually Configured PostgreSQL Database
Appendix B. Preparing a Local Manually Configured PostgreSQL Database Use this procedure to set up the Manager database. Set up this database before you configure the Manager; you must supply the database credentials during engine-setup . Note The engine-setup and engine-backup --mode=restore commands only support system error messages in the en_US.UTF8 locale, even if the system locale is different. The locale settings in the postgresql.conf file must be set to en_US.UTF8 . Important The database name must contain only numbers, underscores, and lowercase letters. Enabling the Red Hat Virtualization Manager Repositories You need to log in and register the Manager machine with Red Hat Subscription Manager, attach the Red Hat Virtualization Manager subscription, and enable the Manager repositories. Procedure Register your system with the Content Delivery Network, entering your Customer Portal user name and password when prompted: # subscription-manager register Note If you are using an IPv6 network, use an IPv6 transition mechanism to access the Content Delivery Network and subscription manager. Find the Red Hat Virtualization Manager subscription pool and record the pool ID: # subscription-manager list --available Use the pool ID to attach the subscription to the system: # subscription-manager attach --pool= pool_id Note To view currently attached subscriptions: # subscription-manager list --consumed To list all enabled repositories: # dnf repolist Configure the repositories: # subscription-manager repos \ --disable='*' \ --enable=rhel-8-for-x86_64-baseos-eus-rpms \ --enable=rhel-8-for-x86_64-appstream-eus-rpms \ --enable=rhv-4.4-manager-for-rhel-8-x86_64-rpms \ --enable=fast-datapath-for-rhel-8-x86_64-rpms \ --enable=jb-eap-7.4-for-rhel-8-x86_64-rpms \ --enable=openstack-16.2-cinderlib-for-rhel-8-x86_64-rpms \ --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms \ --enable=rhel-8-for-x86_64-appstream-tus-rpms \ --enable=rhel-8-for-x86_64-baseos-tus-rpms Set the RHEL version to 8.6: # subscription-manager release --set=8.6 Enable version 12 of the postgresql module. # dnf module -y enable postgresql:12 Enable version 14 of the nodejs module: # dnf module -y enable nodejs:14 Synchronize installed packages to update them to the latest available versions. # dnf distro-sync --nobest Additional resources For information on modules and module streams, see the following sections in Installing, managing, and removing user-space components Module streams Selecting a stream before installation of packages Resetting module streams Switching to a later stream Initializing the PostgreSQL Database Install the PostgreSQL server package: # dnf install postgresql-server postgresql-contrib Initialize the PostgreSQL database instance: Start the postgresql service, and ensure that this service starts on boot: Connect to the psql command line interface as the postgres user: Create a default user. The Manager's default user is engine and Data Warehouse's default user is ovirt_engine_history : postgres=# create role user_name with login encrypted password ' password '; Create a database. 
The Manager's default database name is engine and Data Warehouse's default database name is ovirt_engine_history : postgres=# create database database_name owner user_name template template0 encoding 'UTF8' lc_collate 'en_US.UTF-8' lc_ctype 'en_US.UTF-8'; Connect to the new database: postgres=# \c database_name Add the uuid-ossp extension: database_name =# CREATE EXTENSION "uuid-ossp"; Add the plpgsql language if it does not exist: database_name =# CREATE LANGUAGE plpgsql; Quit the psql interface: database_name =# \q Edit the /var/lib/pgsql/data/pg_hba.conf file to enable md5 client authentication, so the engine can access the database locally. Add the following lines immediately below the line that starts with local at the bottom of the file: host database_name user_name 0.0.0.0/0 md5 host database_name user_name ::0/0 md5 Update the PostgreSQL server's configuration. Edit the /var/lib/pgsql/data/postgresql.conf file and add the following lines to the bottom of the file: autovacuum_vacuum_scale_factor=0.01 autovacuum_analyze_scale_factor=0.075 autovacuum_max_workers=6 maintenance_work_mem=65536 max_connections=150 work_mem=8192 Restart the postgresql service: # systemctl restart postgresql Optionally, set up SSL to secure database connections. Return to Configuring the Manager , and answer Local and Manual when asked about the database.
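After restarting the postgresql service, you can verify that the new role and database accept md5-authenticated connections before you return to engine-setup. This is a minimal sketch; engine and password stand in for the user, database, and password you created above:
# Check that the engine user can log in to the engine database over TCP
PGPASSWORD=password psql -h localhost -U engine -d engine -c 'SELECT version();'
# Confirm that the uuid-ossp extension and the plpgsql language are present
PGPASSWORD=password psql -h localhost -U engine -d engine -c '\dx'
PGPASSWORD=password psql -h localhost -U engine -d engine -c '\dL'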
[ "subscription-manager register", "subscription-manager list --available", "subscription-manager attach --pool= pool_id", "subscription-manager list --consumed", "dnf repolist", "subscription-manager repos --disable='*' --enable=rhel-8-for-x86_64-baseos-eus-rpms --enable=rhel-8-for-x86_64-appstream-eus-rpms --enable=rhv-4.4-manager-for-rhel-8-x86_64-rpms --enable=fast-datapath-for-rhel-8-x86_64-rpms --enable=jb-eap-7.4-for-rhel-8-x86_64-rpms --enable=openstack-16.2-cinderlib-for-rhel-8-x86_64-rpms --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms --enable=rhel-8-for-x86_64-appstream-tus-rpms --enable=rhel-8-for-x86_64-baseos-tus-rpms", "subscription-manager release --set=8.6", "dnf module -y enable postgresql:12", "dnf module -y enable nodejs:14", "dnf distro-sync --nobest", "dnf install postgresql-server postgresql-contrib", "postgresql-setup --initdb", "systemctl enable postgresql systemctl start postgresql", "su - postgres -c psql", "postgres=# create role user_name with login encrypted password ' password ';", "postgres=# create database database_name owner user_name template template0 encoding 'UTF8' lc_collate 'en_US.UTF-8' lc_ctype 'en_US.UTF-8';", "postgres=# \\c database_name", "database_name =# CREATE EXTENSION \"uuid-ossp\";", "database_name =# CREATE LANGUAGE plpgsql;", "database_name =# \\q", "host database_name user_name 0.0.0.0/0 md5 host database_name user_name ::0/0 md5", "autovacuum_vacuum_scale_factor=0.01 autovacuum_analyze_scale_factor=0.075 autovacuum_max_workers=6 maintenance_work_mem=65536 max_connections=150 work_mem=8192", "systemctl restart postgresql" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/installing_red_hat_virtualization_as_a_standalone_manager_with_local_databases/preparing_a_local_manually-configured_postgresql_database_sm_localdb_deploy
7.5. Deleting a Template
7.5. Deleting a Template If you have used a template to create a virtual machine using the thin provisioning storage allocation option, the template cannot be deleted as the virtual machine needs it to continue running. However, cloned virtual machines do not depend on the template they were cloned from and the template can be deleted. Deleting a Template Click Compute Templates and select a template. Click Remove . Click OK .
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/virtual_machine_management_guide/deleting_a_template
Chapter 12. Management of Ceph object gateway using the dashboard
Chapter 12. Management of Ceph object gateway using the dashboard As a storage administrator, you can use the Ceph Object Gateway functions of the dashboard to manage and monitor the Ceph Object Gateway. You can also create the Ceph Object Gateway services with Secure Sockets Layer (SSL) using the dashboard. For example, monitoring functions allow you to view details about a gateway daemon such as its zone name, or performance graphs of GET and PUT rates. Management functions allow you to view, create, and edit both users and buckets. Ceph object gateway functions are divided between user functions and bucket functions. 12.1. Manually adding Ceph object gateway login credentials to the dashboard The Red Hat Ceph Storage Dashboard can manage the Ceph Object Gateway, also known as the RADOS Gateway, or RGW. When Ceph Object Gateway is deployed with cephadm , the Ceph Object Gateway credentials used by the dashboard are automatically configured. You can also manually force the Ceph object gateway credentials to the Ceph dashboard using the command-line interface. Prerequisites A running Red Hat Ceph Storage cluster. Dashboard is installed. Ceph Object Gateway is installed. Procedure Log into the Cephadm shell: Example Set up the credentials manually: Example This creates a Ceph Object Gateway user with UID dashboard for each realm in the system. Optional: If you have configured a custom admin resource in your Ceph Object Gateway admin API, you must also set the admin resource: Syntax Example Optional: If you are using HTTPS with a self-signed certificate, disable certificate verification in the dashboard to avoid refused connections. Refused connections can happen when the certificate is signed by an unknown Certificate Authority, or if the host name used does not match the host name in the certificate. Syntax Example Optional: If the Object Gateway takes too long to process requests and the dashboard runs into timeouts, you can set the timeout value: Syntax The default value is 45 seconds. Example 12.2. Creating the Ceph Object Gateway services with SSL using the dashboard After installing a Red Hat Ceph Storage cluster, you can create the Ceph Object Gateway service with SSL using two methods: Using the command-line interface. Using the dashboard. Prerequisites A running Red Hat Ceph Storage cluster. Dashboard is installed. SSL key from Certificate Authority (CA). Note Obtain the SSL certificate from a CA that matches the hostname of the gateway host. Red Hat recommends obtaining a certificate from a CA that has subject alternate name fields and a wildcard for use with S3-style subdomains. Procedure Log in to the Dashboard. From the Cluster drop-down menu, select Services . Click +Create . In the Create Service window, select rgw service. Select SSL and upload the Certificate in .pem format. Figure 12.1. Creating Ceph Object Gateway service Click Create Service . Check that the Ceph Object Gateway service is up and running. Additional Resources See the Configuring SSL for Beast section in the Red Hat Ceph Storage Object Gateway Guide . 12.3. Configuring high availability for the Ceph Object Gateway on the dashboard The ingress service provides a highly available endpoint for the Ceph Object Gateway. You can create and configure the ingress service using the Ceph Dashboard. Prerequisites A running Red Hat Ceph Storage cluster. A minimum of two Ceph Object Gateway daemons running on different hosts. Dashboard is installed. A running rgw service. Procedure Log in to the Dashboard. 
From the Cluster drop-down menu, select Services . Click +Create . In the Create Service window, select ingress service. Select backend service and edit the required parameters. Figure 12.2. Creating ingress service Click Create Service . You get a notification that the ingress service was created successfully. Additional Resources See High availability for the Ceph Object Gateway for more information about the ingress service. 12.4. Management of Ceph object gateway users on the dashboard As a storage administrator, the Red Hat Ceph Storage Dashboard allows you to view and manage Ceph Object Gateway users. 12.4.1. Prerequisites A running Red Hat Ceph Storage cluster. Dashboard is installed. The Ceph Object Gateway is installed. Object gateway login credentials are added to the dashboard. 12.4.2. Creating Ceph object gateway users on the dashboard You can create Ceph object gateway users on the Red Hat Ceph Storage once the credentials are set-up using the CLI. Prerequisites A running Red Hat Ceph Storage cluster. Dashboard is installed. The Ceph Object Gateway is installed. Object gateway login credentials are added to the dashboard. Procedure Log in to the Dashboard. On the navigation bar, click Object Gateway . Click Users and then Click Create . In the Create User window, set the following parameters: Set the user name, full name, and edit the maximum number of buckets if required. Optional: Set an email address or suspended status. Optional: Set a custom access key and secret key by unchecking Auto-generate key . Optional: Set a user quota. Check Enabled under User quota . Uncheck Unlimited size or Unlimited objects . Enter the required values for Max. size or Max. objects . Optional: Set a bucket quota. Check Enabled under Bucket quota . Uncheck Unlimited size or Unlimited objects : Enter the required values for Max. size or Max. objects : Click Create User . Figure 12.3. Create Ceph object gateway user You get a notification that the user was created successfully. Additional Resources See the Manually adding Ceph object gateway login credentials to the dashboard section in the Red Hat Ceph Storage Dashboard guide for more information. See the Red Hat Ceph Storage Object Gateway Guide for more information. 12.4.3. Creating Ceph object gateway subusers on the dashboard A subuser is associated with a user of the S3 interface. You can create a sub user for a specific Ceph object gateway user on the Red Hat Ceph Storage dashboard. Prerequisites A running Red Hat Ceph Storage cluster. Dashboard is installed. The Ceph Object Gateway is installed. Object gateway login credentials are added to the dashboard. Object gateway user is created. Procedure Log in to the Dashboard. On the navigation bar, click Object Gateway . Click Users . Select the user by clicking its row. From Edit drop-down menu, select Edit . In the Edit User window, click +Create Subuser . In the Create Subuser dialog box, enter the user name and select the appropriate permissions. Check the Auto-generate secret box and then click Create Subuser . Figure 12.4. Create Ceph object gateway subuser Note By clicking Auto-generate-secret checkbox, the secret key for object gateway is generated automatically. In the Edit User window, click the Edit user button You get a notification that the user was updated successfully. 12.4.4. Editing Ceph object gateway users on the dashboard You can edit Ceph object gateway users on the Red Hat Ceph Storage once the credentials are set-up using the CLI. 
Prerequisites A running Red Hat Ceph Storage cluster. Dashboard is installed. The Ceph Object Gateway is installed. Object gateway login credentials are added to the dashboard. A Ceph object gateway user is created. Procedure Log in to the Dashboard. On the navigation bar, click Object Gateway . Click Users . To edit the user capabilities, click its row. From the Edit drop-down menu, select Edit . In the Edit User window, edit the required parameters. Click Edit User . Figure 12.5. Edit Ceph object gateway user You get a notification that the user was updated successfully. Additional Resources See the Manually adding Ceph object gateway login credentials to the dashboard section in the Red Hat Ceph Storage Dashboard guide for more information. See the Red Hat Ceph Storage Object Gateway Guide for more information. 12.4.5. Deleting Ceph object gateway users on the dashboard You can delete Ceph object gateway users on the Red Hat Ceph Storage Dashboard once the credentials are set up using the CLI. Prerequisites A running Red Hat Ceph Storage cluster. Dashboard is installed. The Ceph Object Gateway is installed. Object gateway login credentials are added to the dashboard. A Ceph object gateway user is created. Procedure Log in to the Dashboard. On the navigation bar, click Object Gateway . Click Users . To delete the user, click its row. From the Edit drop-down menu, select Delete . In the Delete user dialog window, click the Yes, I am sure box and then click Delete User to save the settings. Figure 12.6. Delete Ceph object gateway user Additional Resources See the Manually adding Ceph object gateway login credentials to the dashboard section in the Red Hat Ceph Storage Dashboard guide for more information. See the Red Hat Ceph Storage Object Gateway Guide for more information. 12.5. Management of Ceph object gateway buckets on the dashboard As a storage administrator, the Red Hat Ceph Storage Dashboard allows you to view and manage Ceph Object Gateway buckets. 12.5.1. Prerequisites A running Red Hat Ceph Storage cluster. Dashboard is installed. The Ceph Object Gateway is installed. At least one Ceph object gateway user is created. Object gateway login credentials are added to the dashboard. 12.5.2. Creating Ceph object gateway buckets on the dashboard You can create Ceph object gateway buckets on the Red Hat Ceph Storage Dashboard once the credentials are set up using the CLI. Prerequisites A running Red Hat Ceph Storage cluster. Dashboard is installed. The Ceph Object Gateway is installed. Object gateway login credentials are added to the dashboard. Object gateway user is created and not suspended. Procedure Log in to the Dashboard. On the navigation bar, click Object Gateway . Click Buckets and then click Create . In the Create Bucket window, enter a value for Name and select a user that is not suspended. Select a placement target. Figure 12.7. Create Ceph object gateway bucket Note A bucket's placement target is selected on creation and cannot be modified. Optional: Enable Locking for the objects in the bucket. Locking can only be enabled while creating a bucket. Once locking is enabled, you also have to choose the lock mode, Compliance or Governance and the lock retention period in either days or years, not both. Click Create bucket . You get a notification that the bucket was created successfully. 12.5.3. 
Editing Ceph object gateway buckets on the dashboard You can edit Ceph object gateway buckets on the Red Hat Ceph Storage Dashboard once the credentials are set up using the CLI. Prerequisites A running Red Hat Ceph Storage cluster. Dashboard is installed. The Ceph Object Gateway is installed. Object gateway login credentials are added to the dashboard. Object gateway user is created and not suspended. A Ceph Object Gateway bucket is created. Procedure Log in to the Dashboard. On the navigation bar, click Object Gateway . Click Buckets . To edit the bucket, click its row. From the Edit drop-down select Edit . In the Edit bucket window, edit the Owner by selecting the user from the dropdown. Figure 12.8. Edit Ceph object gateway bucket Optional: Enable Versioning if you want to enable versioning state for all the objects in an existing bucket. To enable versioning, you must be the owner of the bucket. If Locking is enabled during bucket creation, you cannot disable the versioning. All objects added to the bucket will receive a unique version ID. If the versioning state has not been set on a bucket, then the bucket will not have a versioning state. Optional: Check Delete enabled for Multi-Factor Authentication . Multi-Factor Authentication (MFA) ensures that users need to use a one-time password (OTP) when removing objects on certain buckets. Enter a value for Token Serial Number and Token PIN . Note The buckets must be configured with versioning and MFA enabled which can be done through the S3 API. Click Edit Bucket . You get a notification that the bucket was updated successfully. 12.5.4. Deleting Ceph object gateway buckets on the dashboard You can delete Ceph object gateway buckets on the Red Hat Ceph Storage Dashboard once the credentials are set up using the CLI. Prerequisites A running Red Hat Ceph Storage cluster. Dashboard is installed. The Ceph Object Gateway is installed. Object gateway login credentials are added to the dashboard. Object gateway user is created and not suspended. A Ceph Object Gateway bucket is created. Procedure Log in to the Dashboard. On the navigation bar, click Object Gateway . Click Buckets . To delete the bucket, click its row. From the Edit drop-down select Delete . In the Delete Bucket dialog box, click the Yes, I am sure box and then click Delete bucket to save the settings. Figure 12.9. Delete Ceph object gateway bucket 12.6. Monitoring multisite object gateway configuration on the Ceph dashboard The Red Hat Ceph Storage dashboard supports monitoring the users and buckets of one zone in another zone in a multisite object gateway configuration. For example, if the users and buckets are created in a zone in the primary site, you can monitor those users and buckets in the secondary zone in the secondary site. Prerequisites At least one running Red Hat Ceph Storage cluster deployed on both the sites. Dashboard is installed. The multi-site object gateway is configured on the primary and secondary sites. Object gateway login credentials of the primary and secondary sites are added to the dashboard. Object gateway users are created on the primary site. Object gateway buckets are created on the primary site. Procedure On the Dashboard landing page of the secondary site, in the vertical menu bar, click Object Gateway drop-down list. Select Buckets . You can see those object gateway buckets on the secondary landing page that were created for the object gateway users on the primary site. Figure 12.10. 
Multisite object gateway monitoring Additional Resources For more information on configuring multisite, see the Multi-site configuration and administration section of the Red Hat Ceph Storage Dashboard Guide . For more information on adding Ceph Object Gateway login credentials to the dashboard, see the Manually adding Ceph object gateway login credentials to the dashboard section in the Red Hat Ceph Storage Dashboard Guide . For more information on creating Ceph Object Gateway users on the dashboard, see the Creating object gateway users on the Ceph dashboard section in the Red Hat Ceph Storage Dashboard Guide . For more information on creating Ceph Object Gateway buckets on the dashboard, see the Creating object gateway buckets on the Ceph dashboard section in the Red Hat Ceph Storage Dashboard Guide . 12.7. Management of buckets of a multisite object configuration on the Ceph dashboard As a storage administrator, you can edit buckets of one zone in another zone on the Red Hat Ceph Storage Dashboard. However, you can delete buckets of secondary sites in the primary site. You cannot delete the buckets of master zones of primary sites in other sites. For example, if the buckets are created in a zone in the secondary site, you can edit and delete those buckets in the master zone in the primary site. 12.7.1. Prerequisites At least one running Red Hat Ceph Storage cluster deployed on both the sites. Dashboard is installed. The multi-site object gateway is configured on the primary and secondary sites. Object gateway login credentials of the primary and secondary sites are added to the dashboard. Object gateway users are created on the primary site. Object gateway buckets are created on the primary site. At least rgw-manager level of access on the Ceph dashboard. 12.7.2. Editing buckets of a multisite object gateway configuration on the Ceph dashboard You can edit and update the details of the buckets of one zone in another zone on the Red Hat Ceph Storage Dashboard in a multisite object gateway configuration. You can edit the owner, versioning, multi-factor authentication and locking features of the buckets with this feature of the dashboard. Prerequisites At least one running Red Hat Ceph Storage cluster deployed on both the sites. Dashboard is installed. The multi-site object gateway is configured on the primary and secondary sites. Object gateway login credentials of the primary and secondary sites are added to the dashboard. Object gateway users are created on the primary site. Object gateway buckets are created on the primary site. At least rgw-manager level of access on the Ceph dashboard. Procedure On the Dashboard landing page of the secondary site, in the vertical menu bar, click Object Gateway drop-down list. Select Buckets . You can see those object gateway buckets on the secondary landing page that were created for the object gateway users on the primary site. Figure 12.11. Monitoring object gateway monitoring Click the row of the bucket that you want to edit. From the Edit drop-down menu, select Edit . In the Edit Bucket window, edit the required parameters and click Edit Bucket . Figure 12.12. Edit buckets in a multisite Verification You get a notification that the bucket was updated successfully. Additional Resources For more information on configuring multisite, see the Multi-site configuration and administration section of the Red Hat Ceph Storage Object Gateway guide. 
For more information on adding Ceph Object Gateway login credentials to the dashboard, see the Manually adding Ceph object gateway login credentials to the dashboard section in the Red Hat Ceph Storage Dashboard guide. For more information on creating Ceph Object Gateway users on the dashboard, see the Creating Ceph object gateway users on the dashboard section in the Red Hat Ceph Storage Dashboard guide. For more information on creating Ceph Object Gateway buckets on the dashboard, see the Creating Ceph object gateway buckets on the dashboard section in the Red Hat Ceph Storage Dashboard guide. For more information on system roles, see the User roles and permissions on the Ceph dashboard section in the Red Hat Ceph Storage Dashboard Guide . 12.7.3. Deleting buckets of a multisite object gateway configuration on the Ceph dashboard You can delete buckets of secondary sites in primary sites on the Red Hat Ceph Storage Dashboard in a multisite object gateway configuration. Important: Red Hat does not recommend deleting buckets of the primary site from secondary sites. Prerequisites At least one running Red Hat Ceph Storage cluster deployed on both the sites. Dashboard is installed. The multi-site object gateway is configured on the primary and secondary sites. Object gateway login credentials of the primary and secondary sites are added to the dashboard. Object gateway users are created on the primary site. Object gateway buckets are created on the primary site. At least rgw-manager level of access on the Ceph dashboard. Procedure On the Dashboard landing page of the primary site, in the vertical menu bar, click Object Gateway drop-down list. Select Buckets . You can see those object gateway buckets of the secondary site here. Click the row of the bucket that you want to delete. From the Edit drop-down menu, select Delete . In the Delete Bucket dialog box, select the Yes, I am sure checkbox, and click Delete Bucket . Verification The selected row of the bucket is deleted successfully. Additional Resources For more information on configuring multisite, see the Multi-site configuration and administration section of the Red Hat Ceph Storage Object Gateway guide. For more information on adding Ceph Object Gateway login credentials to the dashboard, see the Manually adding Ceph object gateway login credentials to the dashboard section in the Red Hat Ceph Storage Dashboard guide. For more information on creating Ceph Object Gateway users on the dashboard, see the Creating Ceph object gateway users on the dashboard section in the Red Hat Ceph Storage Dashboard guide. For more information on creating Ceph Object Gateway buckets on the dashboard, see the Creating Ceph object gateway buckets on the dashboard section in the Red Hat Ceph Storage Dashboard guide. For more information on system roles, see the User roles and permissions on the Ceph dashboard section in the Red Hat Ceph Storage Dashboard Guide .
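The user, bucket, and multisite information shown on the dashboard can also be cross-checked from the command line inside the Cephadm shell with radosgw-admin. This is a minimal sketch; the user ID testuser and bucket name testbucket are placeholders:
# Confirm the dashboard credentials and list Ceph Object Gateway users
radosgw-admin user info --uid=dashboard
radosgw-admin user list
# Inspect a user's buckets and the statistics of a single bucket
radosgw-admin bucket list --uid=testuser
radosgw-admin bucket stats --bucket=testbucket
# In a multisite configuration, check the replication status between zones
radosgw-admin sync status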
[ "cephadm shell", "ceph dashboard set-rgw-credentials", "ceph dashboard set-rgw-api-admin-resource RGW_API_ADMIN_RESOURCE", "ceph dashboard set-rgw-api-admin-resource admin Option RGW_API_ADMIN_RESOURCE updated", "ceph dashboard set-rgw-api-ssl-verify false", "ceph dashboard set-rgw-api-ssl-verify False Option RGW_API_SSL_VERIFY updated", "ceph dashboard set-rest-requests-timeout _TIME_IN_SECONDS_", "ceph dashboard set-rest-requests-timeout 240" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/5/html/dashboard_guide/management-of-ceph-object-gateway-using-the-dashboard
14.2.6. Extending a Volume Group
14.2.6. Extending a Volume Group In this example, the objective was to extend the new volume group to include an uninitialized entity (partition). Doing so increases the size or number of extents for the volume group. To extend the volume group, ensure that on the left pane the Physical View option is selected within the desired Volume Group. Then click on the Extend Volume Group button. This will display the 'Extend Volume Group' window as illustrated below. On the 'Extend Volume Group' window, you can select disk entities (partitions) to add to the volume group. Ensure that you check the contents of any 'Uninitialized Disk Entities' (partitions) to avoid deleting any critical data (see Figure 14.13, "Uninitialized hard disk" ). In the example, the disk entity (partition) /dev/hda6 was selected as illustrated below. Figure 14.15. Select disk entities Once added, the new volume will be added as 'Unused Space' in the volume group. The figure below illustrates the logical and physical view of the volume group after it was extended. Figure 14.16. Logical and physical view of an extended volume group
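The same extension can be performed with the command-line LVM tools. This is a minimal sketch run as root, using the partition from the example above and a placeholder volume group name VolGroup00:
# Initialize the partition as a physical volume
pvcreate /dev/hda6
# Add the new physical volume to the existing volume group
vgextend VolGroup00 /dev/hda6
# Verify the new size and free space of the volume group
vgs VolGroup00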
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/storage_administration_guide/s1-system-config-lvm-ext-volumegrp
18.3. Defining a Different Attribute Value for a User Account on Different Hosts
18.3. Defining a Different Attribute Value for a User Account on Different Hosts An administrator can create multiple ID views that override an attribute value used by a user account and apply these ID views to different client hosts. Example: A service account is configured to use different SSH public keys when authenticating on different hosts. This section includes the following procedures: Section 18.3.1, "Web UI: Overriding an Attribute Value for a Specific Host" Section 18.3.2, "Command Line: Overriding an Attribute Value for a Specific Host" The procedures show how to create an ID view for a client host named host1.example.com . To override the attribute values on the other hosts as well, use the procedures to create multiple ID views, one for each host. In the following procedures: user is the user account whose attribute needs to be overridden host1.example.com is the host on which the ID view will be applied Important After you create a new ID view, restart SSSD on all clients where the ID view is applied. If the new ID view changes a UID or GID, clear the SSSD cache on these clients as well. 18.3.1. Web UI: Overriding an Attribute Value for a Specific Host To manage ID views, first log in to the IdM web UI as an IdM administrator. Creating a New ID View Under the Identity tab, select the ID Views subtab. Click Add and provide a name for the ID view. Figure 18.1. Adding an ID View Click Add to confirm. The new ID view is now displayed in the list of ID views. Figure 18.2. List of ID Views Adding a User Override to the ID View In the list of ID views, click the name of the ID view. Figure 18.3. Editing an ID View Under the Users tab, click Add to add the user override. Select the user account whose attribute value you want to override, and click Add . The user override is now displayed on the example_for_host1 ID view page. Figure 18.4. List of Overrides Specifying the Attribute to Override Click the override that you want to use to change the attribute value. Figure 18.5. Editing an Override Define the new value for the attribute. For example, to override the SSH public key used by the user account: Click SSH public keys: Add . Figure 18.6. Adding an SSH Public Key Paste in the public key. Note For details on adding SSH keys to IdM, see Section 22.5, "Managing Public SSH Keys for Users" . Click Save to update the override. Applying the ID View to a Specific Host In the list of ID views, click the name of the ID view. Figure 18.7. Editing an ID View Under the Hosts tab, click Apply to hosts . Select the host1.example.com host, and move it to the Prospective column. Click Apply . The host is now displayed in the list of hosts to which the ID view applies. Figure 18.8. Listing Hosts to Which an ID View Applies 18.3.2. Command Line: Overriding an Attribute Value for a Specific Host Before managing ID views, obtain a Kerberos ticket as an IdM administrator. For example: Create a new ID view. For example, to create an ID view named example_for_host1 : Add a user override to the example_for_host1 ID view. The ipa idoverrideuser-add command requires the name of the ID view and the user to override. To specify the new attribute value, add the corresponding command-line option as well. For a list of the available options, run ipa idoverrideuser-add --help . For example, use the --sshpubkey option to override the SSH public key value: Note For details on adding SSH keys to IdM, see Section 22.5, "Managing Public SSH Keys for Users" . 
The ipa idoverrideuser-add --certificate command replaces all existing certificates for the account in the specified ID view. To append an additional certificate, use the ipa idoverrideuser-add-cert command instead: Using the ipa idoverrideuser-mod command, you can also specify new attribute values for an existing user override. Use the ipa idoverrideuser-del command to delete a user override. Note If you use this command to delete SSH keys overrides, it does not delete the SSH keys from the cache immediately. With the default cache timeout value ( entry_cache_timeout = 5400 ), the keys remain in cache for one and a half hours. Apply example_for_host1 to the host1.example.com host: Note The ipa idview-apply command also accepts the --hostgroups option. The option applies the ID view to hosts that belong to the specified host group, but does not associate the ID view with the host group itself. Instead, the --hostgroups option expands the members of the specified host group and applies the --hosts option individually to every one of them.
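As noted in the Important admonition above, restart SSSD on every client where the ID view is applied, and clear the SSSD cache if the view changes a UID or GID. A minimal sketch of these client-side steps, run as root on host1.example.com:
# Invalidate the entire SSSD cache (needed if the ID view changes a UID or GID)
sss_cache -E
# Restart SSSD so the client picks up the applied ID view
systemctl restart sssd
# Confirm that the overridden attributes are now visible for the user
id user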
[ "kinit admin", "ipa idview-add example_for_host1 --------------------------- Added ID View \"example_for_host1\" --------------------------- ID View Name: example_for_host1", "ipa idoverrideuser-add example_for_host1 user --sshpubkey=\" ssh-rsa AAAAB3NzaC1yrRqFE...gWRL71/miPIZ [email protected] \" ----------------------------- Added User ID override \"user\" ----------------------------- Anchor to override: user SSH public key: ssh-rsa AAAB3NzaC1yrRqFE...gWRL71/miPIZ [email protected]", "ipa idoverrideuser-add-cert example_for_host1 user --certificate=\"MIIEATCC...\"", "ipa idview-apply example_for_host1 --hosts=host1.example.com ----------------------------- Applied ID View \"example_for_host1\" ----------------------------- hosts: host1.example.com --------------------------------------------- Number of hosts the ID View was applied to: 1 ---------------------------------------------" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/id-views-different
Chapter 2. Training a model
Chapter 2. Training a model RHEL AI can use your taxonomy tree and synthetic data to create a newly trained model with your domain-specific knowledge or skills using multi-phase training and evaluation. You can run the full training and evaluation process using the synthetic dataset you generated. The LAB optimized technique of multi-phase training is a type of LLM training that goes through multiple stages of training and evaluation. In these various stages, RHEL AI runs the training process and produces model checkpoints. The best checkpoint is selected for the phase. This process creates many checkpoints and selects the best scored checkpoint. This best scored checkpoint is your newly trained LLM. The entire process creates a newly generated model that is trained and evaluated using the synthetic data from your taxonomy tree. 2.1. Training the model on your data You can use Red Hat Enterprise Linux AI to train a model with your synthetically generated data. The following procedures show how to do this using the LAB multi-phase training strategy. Important Red Hat Enterprise Linux AI general availability does not support training and inference serving at the same time. If you have an inference server running, you must close it before you start the training process. Prerequisites You installed RHEL AI with the bootable container image. You downloaded the granite-7b-starter model. You created a custom qna.yaml file with knowledge data. You ran the synthetic data generation (SDG) process. You downloaded the prometheus-8x7b-v2-0 judge model. You have root user access on your machine. Procedure You can run multi-phase training and evaluation by running the following command with the data files generated from SDG. Note You can use the --enable-serving-output flag with the ilab model train command to display the training logs. $ ilab model train --strategy lab-multiphase \ --phased-phase1-data ~/.local/share/instructlab/datasets/<knowledge-train-messages-jsonl-file> \ --phased-phase2-data ~/.local/share/instructlab/datasets/<skills-train-messages-jsonl-file> where <knowledge-train-messages-file> The location of the knowledge_messages.jsonl file generated during SDG. RHEL AI trains the student model granite-7b-starter using the data from this .jsonl file. Example path: ~/.local/share/instructlab/datasets/knowledge_train_msgs_2024-08-13T20_54_21.jsonl . <skills-train-messages-file> The location of the skills_messages.jsonl file generated during SDG. RHEL AI trains the student model granite-7b-starter using the data from the .jsonl file. Example path: ~/.local/share/instructlab/datasets/skills_train_msgs_2024-08-13T20_54_21.jsonl . Important This process can be time consuming depending on your hardware specifications. The first phase trains the model using the synthetic data from your knowledge contribution. Example output of training knowledge Training Phase 1/2... 
TrainingArgs for current phase: TrainingArgs(model_path='/opt/app-root/src/.cache/instructlab/models/granite-7b-starter', chat_tmpl_path='/opt/app-root/lib64/python3.11/site-packages/instructlab/training/chat_templates/ibm_generic_tmpl.py', data_path='/tmp/jul19-knowledge-26k.jsonl', ckpt_output_dir='/tmp/e2e/phase1/checkpoints', data_output_dir='/opt/app-root/src/.local/share/instructlab/internal', max_seq_len=4096, max_batch_len=55000, num_epochs=2, effective_batch_size=128, save_samples=0, learning_rate=2e-05, warmup_steps=25, is_padding_free=True, random_seed=42, checkpoint_at_epoch=True, mock_data=False, mock_data_len=0, deepspeed_options=DeepSpeedOptions(cpu_offload_optimizer=False, cpu_offload_optimizer_ratio=1.0, cpu_offload_optimizer_pin_memory=False, save_samples=None), disable_flash_attn=False, lora=LoraOptions(rank=0, alpha=32, dropout=0.1, target_modules=('q_proj', 'k_proj', 'v_proj', 'o_proj'), quantize_data_type=<QuantizeDataType.NONE: None>)) Then, RHEL AI selects the best checkpoint to use for the phase. The phase trains the model using the synthetic data from the skills data. Example output of training skills Training Phase 2/2... TrainingArgs for current phase: TrainingArgs(model_path='/tmp/e2e/phase1/checkpoints/hf_format/samples_52096', chat_tmpl_path='/opt/app-root/lib64/python3.11/site-packages/instructlab/training/chat_templates/ibm_generic_tmpl.py', data_path='/usr/share/instructlab/sdg/datasets/skills.jsonl', ckpt_output_dir='/tmp/e2e/phase2/checkpoints', data_output_dir='/opt/app-root/src/.local/share/instructlab/internal', max_seq_len=4096, max_batch_len=55000, num_epochs=2, effective_batch_size=3840, save_samples=0, learning_rate=2e-05, warmup_steps=25, is_padding_free=True, random_seed=42, checkpoint_at_epoch=True, mock_data=False, mock_data_len=0, deepspeed_options=DeepSpeedOptions(cpu_offload_optimizer=False, cpu_offload_optimizer_ratio=1.0, cpu_offload_optimizer_pin_memory=False, save_samples=None), disable_flash_attn=False, lora=LoraOptions(rank=0, alpha=32, dropout=0.1, target_modules=('q_proj', 'k_proj', 'v_proj', 'o_proj'), quantize_data_type=<QuantizeDataType.NONE: None>)) Then, RHEL AI evaluates all of the checkpoints from phase 2 model training using the Multi-turn Benchmark (MT-Bench) and returns the best performing checkpoint as the fully trained output model. Example output of evaluating skills MT-Bench evaluation for Phase 2... Using gpus from --gpus or evaluate config and ignoring --tensor-parallel-size configured in serve vllm_args INFO 2024-08-15 10:04:51,065 instructlab.model.backends.backends:437: Trying to connect to model server at http://127.0.0.1:8000/v1 INFO 2024-08-15 10:04:53,580 instructlab.model.backends.vllm:208: vLLM starting up on pid 79388 at http://127.0.0.1:54265/v1 INFO 2024-08-15 10:04:53,580 instructlab.model.backends.backends:450: Starting a temporary vLLM server at http://127.0.0.1:54265/v1 INFO 2024-08-15 10:04:53,580 instructlab.model.backends.backends:465: Waiting for the vLLM server to start at http://127.0.0.1:54265/v1, this might take a moment... Attempt: 1/300 INFO 2024-08-15 10:04:58,003 instructlab.model.backends.backends:465: Waiting for the vLLM server to start at http://127.0.0.1:54265/v1, this might take a moment... Attempt: 2/300 INFO 2024-08-15 10:05:02,314 instructlab.model.backends.backends:465: Waiting for the vLLM server to start at http://127.0.0.1:54265/v1, this might take a moment... Attempt: 3/300 moment... 
Attempt: 3/300 INFO 2024-08-15 10:06:07,611 instructlab.model.backends.backends:472: vLLM engine successfully started at http://127.0.0.1:54265/v1 After training is complete, a confirmation appears and displays your best-performing checkpoint. Example output of a complete multi-phase training run Make a note of this checkpoint because the path is necessary for evaluation and serving. Verification When training a model with ilab model train , multiple checkpoints are saved with the samples_ prefix based on how many data points they have been trained on. These are saved to the ~/.local/share/instructlab/phase/ directory. $ ls ~/.local/share/instructlab/phase/<phase1-or-phase2>/checkpoints/ Example output of the new models samples_1711 samples_1945 samples_1456 samples_1462 samples_1903 2.1.1. Continuing or restarting a training run RHEL AI allows you to continue a training run that may have failed during multi-phase training. There are a few ways a training run can fail: The vLLM server may not start correctly. An accelerator or GPU may freeze, causing training to abort. There may be an error in your InstructLab config.yaml file. When you run multi-phase training for the first time, the initial training data gets saved into a journalfile.yaml file. If necessary, this metadata in the file can be used to restart a failed training. You can also restart a training run, which clears the training data, by following the CLI prompts when running multi-phase training. Prerequisites You ran multi-phase training with your synthetic data and it failed. Procedure Run the multi-phase training command again. $ ilab model train --strategy lab-multiphase \ --phased-phase1-data ~/.local/share/instructlab/datasets/<knowledge-train-messages-jsonl-file> \ --phased-phase2-data ~/.local/share/instructlab/datasets/<skills-train-messages-jsonl-file> The Red Hat Enterprise Linux AI CLI checks whether the journalfile.yaml file exists and continues the training run from that point. The CLI prompts you to either continue the previous training run or start from the beginning. Type n in your shell to continue from your previous training run. Metadata (checkpoints, the training journal) may have been saved from a previous training run. By default, training will resume from this metadata if it exists Alternatively, the metadata can be cleared, and training can start from scratch Would you like to START TRAINING FROM THE BEGINNING? n Type y into the terminal to restart a training run. Metadata (checkpoints, the training journal) may have been saved from a previous training run. By default, training will resume from this metadata if it exists Alternatively, the metadata can be cleared, and training can start from scratch Would you like to START TRAINING FROM THE BEGINNING? y Restarting also clears your system's cache of checkpoints, journal files, and other training data.
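When training finishes, you can serve and chat with the best checkpoint reported in the output. The following is a minimal sketch rather than an exact command reference; the checkpoint name samples_1945 and the phase2 path are examples taken from the output above and will differ on your system:
# Serve the best-performing checkpoint from phase 2
ilab model serve --model-path ~/.local/share/instructlab/phase/phase2/checkpoints/samples_1945
# In a second shell, chat with the newly trained model
ilab model chat --model ~/.local/share/instructlab/phase/phase2/checkpoints/samples_1945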
[ "ilab model train --strategy lab-multiphase --phased-phase1-data ~/.local/share/instructlab/datasets/<knowledge-train-messages-jsonl-file> --phased-phase2-data ~/.local/share/instructlab/datasets/<skills-train-messages-jsonl-file>", "Training Phase 1/2 TrainingArgs for current phase: TrainingArgs(model_path='/opt/app-root/src/.cache/instructlab/models/granite-7b-starter', chat_tmpl_path='/opt/app-root/lib64/python3.11/site-packages/instructlab/training/chat_templates/ibm_generic_tmpl.py', data_path='/tmp/jul19-knowledge-26k.jsonl', ckpt_output_dir='/tmp/e2e/phase1/checkpoints', data_output_dir='/opt/app-root/src/.local/share/instructlab/internal', max_seq_len=4096, max_batch_len=55000, num_epochs=2, effective_batch_size=128, save_samples=0, learning_rate=2e-05, warmup_steps=25, is_padding_free=True, random_seed=42, checkpoint_at_epoch=True, mock_data=False, mock_data_len=0, deepspeed_options=DeepSpeedOptions(cpu_offload_optimizer=False, cpu_offload_optimizer_ratio=1.0, cpu_offload_optimizer_pin_memory=False, save_samples=None), disable_flash_attn=False, lora=LoraOptions(rank=0, alpha=32, dropout=0.1, target_modules=('q_proj', 'k_proj', 'v_proj', 'o_proj'), quantize_data_type=<QuantizeDataType.NONE: None>))", "Training Phase 2/2 TrainingArgs for current phase: TrainingArgs(model_path='/tmp/e2e/phase1/checkpoints/hf_format/samples_52096', chat_tmpl_path='/opt/app-root/lib64/python3.11/site-packages/instructlab/training/chat_templates/ibm_generic_tmpl.py', data_path='/usr/share/instructlab/sdg/datasets/skills.jsonl', ckpt_output_dir='/tmp/e2e/phase2/checkpoints', data_output_dir='/opt/app-root/src/.local/share/instructlab/internal', max_seq_len=4096, max_batch_len=55000, num_epochs=2, effective_batch_size=3840, save_samples=0, learning_rate=2e-05, warmup_steps=25, is_padding_free=True, random_seed=42, checkpoint_at_epoch=True, mock_data=False, mock_data_len=0, deepspeed_options=DeepSpeedOptions(cpu_offload_optimizer=False, cpu_offload_optimizer_ratio=1.0, cpu_offload_optimizer_pin_memory=False, save_samples=None), disable_flash_attn=False, lora=LoraOptions(rank=0, alpha=32, dropout=0.1, target_modules=('q_proj', 'k_proj', 'v_proj', 'o_proj'), quantize_data_type=<QuantizeDataType.NONE: None>))", "MT-Bench evaluation for Phase 2 Using gpus from --gpus or evaluate config and ignoring --tensor-parallel-size configured in serve vllm_args INFO 2024-08-15 10:04:51,065 instructlab.model.backends.backends:437: Trying to connect to model server at http://127.0.0.1:8000/v1 INFO 2024-08-15 10:04:53,580 instructlab.model.backends.vllm:208: vLLM starting up on pid 79388 at http://127.0.0.1:54265/v1 INFO 2024-08-15 10:04:53,580 instructlab.model.backends.backends:450: Starting a temporary vLLM server at http://127.0.0.1:54265/v1 INFO 2024-08-15 10:04:53,580 instructlab.model.backends.backends:465: Waiting for the vLLM server to start at http://127.0.0.1:54265/v1, this might take a moment... Attempt: 1/300 INFO 2024-08-15 10:04:58,003 instructlab.model.backends.backends:465: Waiting for the vLLM server to start at http://127.0.0.1:54265/v1, this might take a moment... Attempt: 2/300 INFO 2024-08-15 10:05:02,314 instructlab.model.backends.backends:465: Waiting for the vLLM server to start at http://127.0.0.1:54265/v1, this might take a moment... Attempt: 3/300 moment... Attempt: 3/300 INFO 2024-08-15 10:06:07,611 instructlab.model.backends.backends:472: vLLM engine successfully started at http://127.0.0.1:54265/v1", "Training finished! 
Best final checkpoint: samples_1945 with score: 6.813759384", "ls ~/.local/share/instructlab/phase/<phase1-or-phase2>/checkpoints/", "samples_1711 samples_1945 samples_1456 samples_1462 samples_1903", "ilab model train --strategy lab-multiphase --phased-phase1-data ~/.local/share/instructlab/datasets/<knowledge-train-messages-jsonl-file> --phased-phase2-data ~/.local/share/instructlab/datasets/<skills-train-messages-jsonl-file>", "Metadata (checkpoints, the training journal) may have been saved from a previous training run. By default, training will resume from this metadata if it exists Alternatively, the metadata can be cleared, and training can start from scratch Would you like to START TRAINING FROM THE BEGINNING? n", "Metadata (checkpoints, the training journal) may have been saved from a previous training run. By default, training will resume from this metadata if it exists Alternatively, the metadata can be cleared, and training can start from scratch Would you like to START TRAINING FROM THE BEGINNING? y" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_ai/1.3/html/generating_a_custom_llm_using_rhel_ai/train_and_eval
Chapter 3. Options for embedding applications in a RHEL for Edge image
Chapter 3. Options for embedding applications in a RHEL for Edge image You can embed microservices-based workloads and applications in a Red Hat Enterprise Linux for Edge (RHEL for Edge) image to run in a MicroShift cluster. Embedded applications can be installed directly on edge devices to run in disconnected or offline environments. 3.1. Adding application RPMs to an rpm-ostree image If you have an application that includes APIs, container images, and configuration files for deployment such as manifests, you can build application RPMs. You can then add the RPMs to your RHEL for Edge system image. The following is an outline of the procedures to embed applications or workloads in a fully self-contained operating system image: Build your own RPM that includes your application manifest. Add the RPM to the blueprint you used to install Red Hat build of MicroShift. Add the workload container images to the same blueprint. Create a bootable ISO. For a step-by-step tutorial about preparing and embedding applications in a RHEL for Edge image, use the following tutorial: Embedding applications tutorial 3.2. Adding application manifests to an image for offline use If you have a simple application that includes a few files for deployment such as manifests, you can add those manifests directly to a RHEL for Edge system image. See the "Create a custom file blueprint customization" section of the following RHEL for Edge documentation for an example: Create a custom file blueprint customization 3.3. Embedding applications for offline use If you have an application that includes more than a few files, you can embed the application for offline use. See the following procedure: Embedding applications for offline use 3.4. Additional resources Embedding Red Hat build of MicroShift in an RPM-OSTree image Composing, installing, and managing RHEL for Edge images Preparing for image building Meet Red Hat Device Edge Composing a RHEL for Edge image using image builder command-line Image Builder system requirements
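To make the build outline in section 3.1 more concrete, the following shell sketch shows the general shape of an image build with the image builder command line. It is an illustration only: the blueprint file name, blueprint name, and image type are assumptions, and the linked tutorial and RHEL for Edge documentation remain the authoritative steps.
# Push a blueprint that lists your application RPM and its container images (hypothetical file name)
composer-cli blueprints push microshift-app.toml
# Start an edge commit build from that blueprint
composer-cli compose start microshift-app edge-commit
# Watch the build and download the result when it finishes
composer-cli compose status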
null
https://docs.redhat.com/en/documentation/red_hat_build_of_microshift/4.18/html/running_applications/microshift-embedded-apps-on-rhel-edge
Chapter 7. Admission plugins
Chapter 7. Admission plugins Admission plugins are used to help regulate how Red Hat OpenShift Service on AWS functions. 7.1. About admission plugins Admission plugins intercept requests to the master API to validate resource requests. After a request is authenticated and authorized, the admission plugins ensure that any associated policies are followed. For example, they are commonly used to enforce security policy, resource limitations or configuration requirements. Admission plugins run in sequence as an admission chain. If any admission plugin in the sequence rejects a request, the whole chain is aborted and an error is returned. Red Hat OpenShift Service on AWS has a default set of admission plugins enabled for each resource type. These are required for proper functioning of the cluster. Admission plugins ignore resources that they are not responsible for. In addition to the defaults, the admission chain can be extended dynamically through webhook admission plugins that call out to custom webhook servers. There are two types of webhook admission plugins: a mutating admission plugin and a validating admission plugin. The mutating admission plugin runs first and can both modify resources and validate requests. The validating admission plugin validates requests and runs after the mutating admission plugin so that modifications triggered by the mutating admission plugin can also be validated. Calling webhook servers through a mutating admission plugin can produce side effects on resources related to the target object. In such situations, you must take steps to validate that the end result is as expected. Warning Dynamic admission should be used cautiously because it impacts cluster control plane operations. When calling webhook servers through webhook admission plugins in Red Hat OpenShift Service on AWS 4, ensure that you have read the documentation fully and tested for side effects of mutations. Include steps to restore resources back to their original state prior to mutation, in the event that a request does not pass through the entire admission chain. 7.2. Default admission plugins Default validating and admission plugins are enabled in Red Hat OpenShift Service on AWS 4. These default plugins contribute to fundamental control plane functionality, such as ingress policy, cluster resource limit override and quota policy. Important Do not run workloads in or share access to default projects. Default projects are reserved for running core cluster components. The following default projects are considered highly privileged: default , kube-public , kube-system , openshift , openshift-infra , openshift-node , and other system-created projects that have the openshift.io/run-level label set to 0 or 1 . Functionality that relies on admission plugins, such as pod security admission, security context constraints, cluster resource quotas, and image reference resolution, does not work in highly privileged projects. The following lists contain the default admission plugins: Example 7.1. 
Validating admission plugins LimitRanger ServiceAccount PodNodeSelector Priority PodTolerationRestriction OwnerReferencesPermissionEnforcement PersistentVolumeClaimResize RuntimeClass CertificateApproval CertificateSigning CertificateSubjectRestriction autoscaling.openshift.io/ManagementCPUsOverride authorization.openshift.io/RestrictSubjectBindings scheduling.openshift.io/OriginPodNodeEnvironment network.openshift.io/ExternalIPRanger network.openshift.io/RestrictedEndpointsAdmission image.openshift.io/ImagePolicy security.openshift.io/SecurityContextConstraint security.openshift.io/SCCExecRestrictions route.openshift.io/IngressAdmission config.openshift.io/ValidateAPIServer config.openshift.io/ValidateAuthentication config.openshift.io/ValidateFeatureGate config.openshift.io/ValidateConsole operator.openshift.io/ValidateDNS config.openshift.io/ValidateImage config.openshift.io/ValidateOAuth config.openshift.io/ValidateProject config.openshift.io/DenyDeleteClusterConfiguration config.openshift.io/ValidateScheduler quota.openshift.io/ValidateClusterResourceQuota security.openshift.io/ValidateSecurityContextConstraints authorization.openshift.io/ValidateRoleBindingRestriction config.openshift.io/ValidateNetwork operator.openshift.io/ValidateKubeControllerManager ValidatingAdmissionWebhook ResourceQuota quota.openshift.io/ClusterResourceQuota Example 7.2. Mutating admission plugins NamespaceLifecycle LimitRanger ServiceAccount NodeRestriction TaintNodesByCondition PodNodeSelector Priority DefaultTolerationSeconds PodTolerationRestriction DefaultStorageClass StorageObjectInUseProtection RuntimeClass DefaultIngressClass autoscaling.openshift.io/ManagementCPUsOverride scheduling.openshift.io/OriginPodNodeEnvironment image.openshift.io/ImagePolicy security.openshift.io/SecurityContextConstraint security.openshift.io/DefaultSecurityContextConstraints MutatingAdmissionWebhook 7.3. Webhook admission plugins In addition to Red Hat OpenShift Service on AWS default admission plugins, dynamic admission can be implemented through webhook admission plugins that call webhook servers, to extend the functionality of the admission chain. Webhook servers are called over HTTP at defined endpoints. There are two types of webhook admission plugins in Red Hat OpenShift Service on AWS: During the admission process, the mutating admission plugin can perform tasks, such as injecting affinity labels. At the end of the admission process, the validating admission plugin can be used to make sure an object is configured properly, for example ensuring affinity labels are as expected. If the validation passes, Red Hat OpenShift Service on AWS schedules the object as configured. When an API request comes in, mutating or validating admission plugins use the list of external webhooks in the configuration and call them in parallel: If all of the webhooks approve the request, the admission chain continues. If any of the webhooks deny the request, the admission request is denied and the reason for doing so is based on the first denial. If more than one webhook denies the admission request, only the first denial reason is returned to the user. If an error is encountered when calling a webhook, the request is either denied or the webhook is ignored depending on the error policy set. If the error policy is set to Ignore , the request is unconditionally accepted in the event of a failure. If the policy is set to Fail , failed requests are denied. Using Ignore can result in unpredictable behavior for all clients. 
The following diagram illustrates the sequential admission chain process within which multiple webhook servers are called. Figure 7.1. API admission chain with mutating and validating admission plugins An example webhook admission plugin use case is where all pods must have a common set of labels. In this example, the mutating admission plugin can inject labels and the validating admission plugin can check that labels are as expected. Red Hat OpenShift Service on AWS would subsequently schedule pods that include required labels and reject those that do not. Some common webhook admission plugin use cases include: Namespace reservation. Limiting custom network resources managed by the SR-IOV network device plugin. Pod priority class validation. Note The maximum default webhook timeout value in Red Hat OpenShift Service on AWS is 13 seconds, and it cannot be changed. 7.4. Types of webhook admission plugins Cluster administrators can call out to webhook servers through the mutating admission plugin or the validating admission plugin in the API server admission chain. 7.4.1. Mutating admission plugin The mutating admission plugin is invoked during the mutation phase of the admission process, which allows modification of resource content before it is persisted. One example webhook that can be called through the mutating admission plugin is the Pod Node Selector feature, which uses an annotation on a namespace to find a label selector and add it to the pod specification. Sample mutating admission plugin configuration apiVersion: admissionregistration.k8s.io/v1beta1 kind: MutatingWebhookConfiguration 1 metadata: name: <webhook_name> 2 webhooks: - name: <webhook_name> 3 clientConfig: 4 service: namespace: default 5 name: kubernetes 6 path: <webhook_url> 7 caBundle: <ca_signing_certificate> 8 rules: 9 - operations: 10 - <operation> apiGroups: - "" apiVersions: - "*" resources: - <resource> failurePolicy: <policy> 11 sideEffects: None 1 Specifies a mutating admission plugin configuration. 2 The name for the MutatingWebhookConfiguration object. Replace <webhook_name> with the appropriate value. 3 The name of the webhook to call. Replace <webhook_name> with the appropriate value. 4 Information about how to connect to, trust, and send data to the webhook server. 5 The namespace where the front-end service is created. 6 The name of the front-end service. 7 The webhook URL used for admission requests. Replace <webhook_url> with the appropriate value. 8 A PEM-encoded CA certificate that signs the server certificate that is used by the webhook server. Replace <ca_signing_certificate> with the appropriate certificate in base64 format. 9 Rules that define when the API server should use this webhook admission plugin. 10 One or more operations that trigger the API server to call this webhook admission plugin. Possible values are create , update , delete or connect . Replace <operation> and <resource> with the appropriate values. 11 Specifies how the policy should proceed if the webhook server is unavailable. Replace <policy> with either Ignore (to unconditionally accept the request in the event of a failure) or Fail (to deny the failed request). Using Ignore can result in unpredictable behavior for all clients. Important In Red Hat OpenShift Service on AWS 4, objects created by users or control loops through a mutating admission plugin might return unexpected results, especially if values set in an initial request are overwritten, which is not recommended. 7.4.2. 
Validating admission plugin A validating admission plugin is invoked during the validation phase of the admission process. This phase allows the enforcement of invariants on particular API resources to ensure that the resource does not change again. The Pod Node Selector is also an example of a webhook which is called by the validating admission plugin, to ensure that all nodeSelector fields are constrained by the node selector restrictions on the namespace. Sample validating admission plugin configuration apiVersion: admissionregistration.k8s.io/v1beta1 kind: ValidatingWebhookConfiguration 1 metadata: name: <webhook_name> 2 webhooks: - name: <webhook_name> 3 clientConfig: 4 service: namespace: default 5 name: kubernetes 6 path: <webhook_url> 7 caBundle: <ca_signing_certificate> 8 rules: 9 - operations: 10 - <operation> apiGroups: - "" apiVersions: - "*" resources: - <resource> failurePolicy: <policy> 11 sideEffects: Unknown 1 Specifies a validating admission plugin configuration. 2 The name for the ValidatingWebhookConfiguration object. Replace <webhook_name> with the appropriate value. 3 The name of the webhook to call. Replace <webhook_name> with the appropriate value. 4 Information about how to connect to, trust, and send data to the webhook server. 5 The namespace where the front-end service is created. 6 The name of the front-end service. 7 The webhook URL used for admission requests. Replace <webhook_url> with the appropriate value. 8 A PEM-encoded CA certificate that signs the server certificate that is used by the webhook server. Replace <ca_signing_certificate> with the appropriate certificate in base64 format. 9 Rules that define when the API server should use this webhook admission plugin. 10 One or more operations that trigger the API server to call this webhook admission plugin. Possible values are create , update , delete or connect . Replace <operation> and <resource> with the appropriate values. 11 Specifies how the policy should proceed if the webhook server is unavailable. Replace <policy> with either Ignore (to unconditionally accept the request in the event of a failure) or Fail (to deny the failed request). Using Ignore can result in unpredictable behavior for all clients. 7.5. Additional resources Pod priority names
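As a quick, read-only check of which webhook admission plugin configurations are currently registered on a cluster, you can list the corresponding API objects. These are standard oc commands rather than anything specific to this chapter, and the names returned depend on what is installed on your cluster.
# List registered validating and mutating webhook configurations
oc get validatingwebhookconfigurations
oc get mutatingwebhookconfigurations
# Inspect a single configuration, including its rules and failurePolicy
oc get validatingwebhookconfigurations <webhook_name> -o yaml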
[ "apiVersion: admissionregistration.k8s.io/v1beta1 kind: MutatingWebhookConfiguration 1 metadata: name: <webhook_name> 2 webhooks: - name: <webhook_name> 3 clientConfig: 4 service: namespace: default 5 name: kubernetes 6 path: <webhook_url> 7 caBundle: <ca_signing_certificate> 8 rules: 9 - operations: 10 - <operation> apiGroups: - \"\" apiVersions: - \"*\" resources: - <resource> failurePolicy: <policy> 11 sideEffects: None", "apiVersion: admissionregistration.k8s.io/v1beta1 kind: ValidatingWebhookConfiguration 1 metadata: name: <webhook_name> 2 webhooks: - name: <webhook_name> 3 clientConfig: 4 service: namespace: default 5 name: kubernetes 6 path: <webhook_url> 7 caBundle: <ca_signing_certificate> 8 rules: 9 - operations: 10 - <operation> apiGroups: - \"\" apiVersions: - \"*\" resources: - <resource> failurePolicy: <policy> 11 sideEffects: Unknown" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/architecture/admission-plug-ins
Chapter 3. Upgrading the Red Hat build of Keycloak server
Chapter 3. Upgrading the Red Hat build of Keycloak server You upgrade the server before you upgrade the adapters. 3.1. Preparing for upgrading Perform the following steps before you upgrade the server. Procedure Back up the old installation, including its configuration, themes, and so on. Handle any open transactions and delete the data/tx-object-store/ transaction directory. Back up the database using instructions in the documentation for your relational database. The database will no longer be compatible with the old server after you upgrade the server. If you need to revert the upgrade, first restore the old installation, and then restore the database from the backup copy. Warning After you upgrade Red Hat build of Keycloak, all user sessions except offline user sessions are lost. Users will have to log in again. 3.2. Downloading the Red Hat build of Keycloak server Once you have prepared for the upgrade, you can download the server. Procedure Download and extract rhbk-24.0.10.zip from the Red Hat build of Keycloak website. After extracting this file, you should have a directory that is named rhbk-24.0.10 . Move this directory to the desired location. Copy conf/ , providers/ , and themes/ from the previous installation to the new installation. 3.3. Migrating the database Red Hat build of Keycloak can automatically migrate the database schema, or you can choose to do it manually. By default the database is automatically migrated when you start the new installation for the first time. 3.3.1. Automatic relational database migration To perform an automatic migration, start the server connected to the desired database. If the database schema has changed for the new server version, the migration starts automatically. However, creating an index on tables with millions of records can be time-consuming and cause a major service disruption. Therefore, a threshold of 300000 records exists for automatic index creation. If the number of records in a table exceeds this threshold, the index is not created automatically. Instead, you find a warning in the server logs with the SQL commands that you can apply manually. To change the threshold, set the index-creation-threshold property value for the default connections-liquibase provider: kc.[sh|bat] start --spi-connections-liquibase-quarkus-index-creation-threshold=300000 3.3.2. Manual relational database migration To enable manual upgrading of the database schema, set the migration-strategy property value to "manual" for the default connections-jpa provider: kc.[sh|bat] start --spi-connections-jpa-quarkus-migration-strategy=manual When you start the server with this configuration, the server checks if the database needs to be migrated. The required changes are written to the bin/keycloak-database-update.sql SQL file that you can review and manually run against the database. To change the path and name of the exported SQL file, set the migration-export property for the default connections-jpa provider: kc.[sh|bat] start --spi-connections-jpa-quarkus-migration-export=<path>/<file.sql> For further details on how to apply this file to the database, see the documentation for your relational database. After the changes have been written to the file, the server exits. 3.4. Migrating themes If you created custom themes, those themes must be migrated to the new server. Also, any changes to the built-in themes might need to be reflected in your custom themes, depending on which aspects you customized.
Procedure Copy your custom themes from the old server themes directory to the new server themes directory. Use the following sections to migrate templates, messages, and styles. If you customized any of the updated templates listed in Migration Changes , compare the template from the base theme to check for any changes you need to apply. If you customized messages, you might need to change the key or value, or add additional messages. If you customized any styles and you are extending the Red Hat build of Keycloak themes, review the changes to the styles. If you are extending the base theme, you can skip this step. 3.4.1. Migrating templates If you customized any template, review the new version to decide whether to update your customized template. If you made minor changes, you can compare the updated base template to your customized template. However, if you made many changes, consider comparing the old and new versions of the base template instead, so you can see exactly what changed in the base theme. Either comparison shows you what changes you need to make. You can use a diff tool to compare the templates. The following screenshot compares the info.ftl template from the Login theme and an example custom theme: Updated version of a Login theme template versus a custom Login theme template This comparison shows that the first change ( Hello world!! ) is a customization, while the second change ( if pageRedirectUri ) is a change to the base theme. By copying the second change to your custom template, you have successfully updated your customized template. In an alternative approach, the following screenshot compares the info.ftl template from the old installation with the updated info.ftl template from the new installation: Login theme template from the old installation versus the updated Login theme template This comparison shows what has been changed in the base template. You can then manually make the same changes to your modified template. Since this approach is more complex, use it only if the first approach is not feasible. 3.4.2. Migrating messages If you added support for another language, you need to apply all the changes listed above. If you have not added support for another language, you might not need to change anything. You need to make changes only if you have changed an affected message in your theme. Procedure For added values, review the value of the message in the base theme to determine if you need to customize that message. For renamed keys, rename the key in your custom theme. For changed values, check the value in the base theme to determine if you need to make changes to your custom theme. 3.4.3. Migrating styles You might need to update your custom styles to reflect changes made to the styles from the built-in themes. Consider using a diff tool to compare the changes to stylesheets between the old server installation and the new server installation. For example: USD diff RHSSO_HOME_OLD/themes/keycloak/login/resources/css/login.css \ RHSSO_HOME_NEW/themes/keycloak/login/resources/css/login.css Review the changes and determine if they affect your custom styling.
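To tie the database steps from section 3.3.2 together, the following shell sketch shows one way the manual migration flow might look end to end. The use of psql and the connection details are assumptions for illustration only; use the client and options appropriate for your own relational database.
# Start the new server with manual migration enabled; it writes the SQL file and exits
bin/kc.sh start --spi-connections-jpa-quarkus-migration-strategy=manual
# Review the generated schema changes before applying them
less bin/keycloak-database-update.sql
# Apply the changes with your database client (psql shown here only as an example)
psql -h <db-host> -U <db-user> -d <keycloak-db> -f bin/keycloak-database-update.sql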
[ "kc.[sh|bat] start --spi-connections-liquibase-quarkus-index-creation-threshold=300000", "kc.[sh|bat] start --spi-connections-jpa-quarkus-migration-strategy=manual", "kc.[sh|bat] start --spi-connections-jpa-quarkus-migration-export=<path>/<file.sql>", "diff RHSSO_HOME_OLD/themes/keycloak/login/resources/css/login.css RHSSO_HOME_NEW/themes/keycloak/login/resources/css/login.css" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/24.0/html/upgrading_guide/upgrading
Chapter 19. JMX Navigator
Chapter 19. JMX Navigator The JMX Navigator view, shown in Figure 19.1, "JMX Navigator view" , displays all processes that are running in your application and it drives all interactions with the monitoring and testing features. Other areas of the Fuse Integration perspective adapt to display information related to the node selected in the JMX Navigator view. The context menu in the JMX Navigator view provides the commands needed to activate route tracing and to add JMS destinations. Figure 19.1. JMX Navigator view By default, the JMX Navigator view discovers all JMX servers running on the local machine and lists them under the following categories: Local Processes Server Connections User-Defined Connections Note You can add other JMX servers by using a server's JMX URL. For details, see Section 19.2, "Adding a JMX server" . 19.1. Viewing Processes in JMX Overview The JMX Navigator view lists all known processes in a series of trees. The root for each tree is a JMX server. The first tree in the list is a special Local Processes tree that contains all JMX servers that are running on the local machine. You must connect to one of the JMX servers to see the processes it contains. Viewing processes in a local JMX server To view information about processes in a local JMX server: In the JMX Navigator view, expand Local Processes . Under Local Processes , double-click one of the top-level entries to connect to it. Click the icon that appears next to the entry to display a list of its components that are running in the JVM. Viewing processes in alternate JMX servers To view information about processes in an alternate JMX server: Add the JMX server to the JMX Navigator view, as described in Section 19.2, "Adding a JMX server" . In the JMX Navigator view, expand the server's entry by using the icon that appears next to the entry. This displays a list of that JMX server's components that are running in the JVM. 19.2. Adding a JMX server Overview In the JMX Navigator view, under the Local Processes branch of the tree, you can see a list of all local JMX servers. You may need to connect to specific JMX servers to see components deployed on other machines. To add a JMX server, you must know the JMX URL of the server you want to add. Procedure To add a JMX server to the JMX Navigator view: In the JMX Navigator view, click New Connection . In the Create a new JMX connection wizard, select Default JMX Connection . Click Next . Select the Advanced tab. In the Name field, enter a name for the JMX server. The name can be any string. It is used to label the entry in the JMX Navigator tree. In the JMX URL field, enter the JMX URL of the server. If the JMX server requires authentication, enter your user name and password in the Username and Password fields. Click Finish . The new JMX server appears as a branch in the User-Defined Connections tree.
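The JMX URL that the wizard asks for follows the standard JMX service URL syntax. As an illustration only, a Java process started with the usual remote-management system properties, as sketched below, is typically reachable at a URL of the form shown in the comment; the port, host name, and application JAR are placeholders, and disabling authentication and SSL is suitable only for development.
# Start the target JVM with remote JMX enabled (illustrative flags, development settings)
java -Dcom.sun.management.jmxremote.port=9999 \
     -Dcom.sun.management.jmxremote.authenticate=false \
     -Dcom.sun.management.jmxremote.ssl=false \
     -jar my-fuse-app.jar
# The corresponding JMX URL to enter in the wizard then has this form:
# service:jmx:rmi:///jndi/rmi://<host>:9999/jmxrmi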
null
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/tooling_user_guide/fidejmxexplore
Chapter 52. EntityOperatorSpec schema reference
Chapter 52. EntityOperatorSpec schema reference Used in: KafkaSpec Property Property type Description topicOperator EntityTopicOperatorSpec Configuration of the Topic Operator. userOperator EntityUserOperatorSpec Configuration of the User Operator. tlsSidecar TlsSidecar The tlsSidecar property has been deprecated. TLS sidecar was removed in Streams for Apache Kafka 2.8. This property is ignored. TLS sidecar configuration. template EntityOperatorTemplate Template for Entity Operator resources. The template allows users to specify how the Deployment and Pod are generated.
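For orientation, these properties are set under spec.entityOperator in a Kafka custom resource. The excerpt below is a minimal illustration rather than a complete manifest: the empty objects enable the Topic Operator and User Operator with their default settings, and the template section is optional. Merge it into an existing Kafka resource for your cluster.
# Excerpt from a Kafka custom resource (not a complete manifest)
spec:
  entityOperator:
    topicOperator: {}   # deploy the Topic Operator with default settings
    userOperator: {}    # deploy the User Operator with default settings
    # template:         # optionally customize the generated Deployment and Pod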
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-EntityOperatorSpec-reference
Chapter 6. CronJob [batch/v1]
Chapter 6. CronJob [batch/v1] Description CronJob represents the configuration of a single cron job. Type object 6.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta Standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object CronJobSpec describes how the job execution will look like and when it will actually run. status object CronJobStatus represents the current state of a cron job. 6.1.1. .spec Description CronJobSpec describes how the job execution will look like and when it will actually run. Type object Required schedule jobTemplate Property Type Description concurrencyPolicy string Specifies how to treat concurrent executions of a Job. Valid values are: - "Allow" (default): allows CronJobs to run concurrently; - "Forbid": forbids concurrent runs, skipping run if run hasn't finished yet; - "Replace": cancels currently running job and replaces it with a new one Possible enum values: - "Allow" allows CronJobs to run concurrently. - "Forbid" forbids concurrent runs, skipping run if hasn't finished yet. - "Replace" cancels currently running job and replaces it with a new one. failedJobsHistoryLimit integer The number of failed finished jobs to retain. Value must be non-negative integer. Defaults to 1. jobTemplate object JobTemplateSpec describes the data a Job should have when created from a template schedule string The schedule in Cron format, see https://en.wikipedia.org/wiki/Cron . startingDeadlineSeconds integer Optional deadline in seconds for starting the job if it misses scheduled time for any reason. Missed job executions will be counted as failed ones. successfulJobsHistoryLimit integer The number of successful finished jobs to retain. Value must be non-negative integer. Defaults to 3. suspend boolean This flag tells the controller to suspend subsequent executions, it does not apply to already started executions. Defaults to false. timeZone string The time zone name for the given schedule, see https://en.wikipedia.org/wiki/List_of_tz_database_time_zones . If not specified, this will default to the time zone of the kube-controller-manager process. The set of valid time zone names and the time zone offset is loaded from the system-wide time zone database by the API server during CronJob validation and the controller manager during execution. If no system-wide time zone database can be found a bundled version of the database is used instead. If the time zone name becomes invalid during the lifetime of a CronJob or due to a change in host configuration, the controller will stop creating new Jobs and will create a system event with the reason UnknownTimeZone. More information can be found in https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/#time-zones 6.1.2.
.spec.jobTemplate Description JobTemplateSpec describes the data a Job should have when created from a template Type object Property Type Description metadata ObjectMeta Standard object's metadata of the jobs created from this template. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata spec object JobSpec describes how the job execution will look like. 6.1.3. .spec.jobTemplate.spec Description JobSpec describes how the job execution will look like. Type object Required template Property Type Description activeDeadlineSeconds integer Specifies the duration in seconds relative to the startTime that the job may be continuously active before the system tries to terminate it; value must be positive integer. If a Job is suspended (at creation or through an update), this timer will effectively be stopped and reset when the Job is resumed again. backoffLimit integer Specifies the number of retries before marking this job failed. Defaults to 6 backoffLimitPerIndex integer Specifies the limit for the number of retries within an index before marking this index as failed. When enabled the number of failures per index is kept in the pod's batch.kubernetes.io/job-index-failure-count annotation. It can only be set when Job's completionMode=Indexed, and the Pod's restart policy is Never. The field is immutable. This field is beta-level. It can be used when the JobBackoffLimitPerIndex feature gate is enabled (enabled by default). completionMode string completionMode specifies how Pod completions are tracked. It can be NonIndexed (default) or Indexed . NonIndexed means that the Job is considered complete when there have been .spec.completions successfully completed Pods. Each Pod completion is homologous to each other. Indexed means that the Pods of a Job get an associated completion index from 0 to (.spec.completions - 1), available in the annotation batch.kubernetes.io/job-completion-index. The Job is considered complete when there is one successfully completed Pod for each index. When value is Indexed , .spec.completions must be specified and .spec.parallelism must be less than or equal to 10^5. In addition, The Pod name takes the form USD(job-name)-USD(index)-USD(random-string) , the Pod hostname takes the form USD(job-name)-USD(index) . More completion modes can be added in the future. If the Job controller observes a mode that it doesn't recognize, which is possible during upgrades due to version skew, the controller skips updates for the Job. Possible enum values: - "Indexed" is a Job completion mode. In this mode, the Pods of a Job get an associated completion index from 0 to (.spec.completions - 1). The Job is considered complete when a Pod completes for each completion index. - "NonIndexed" is a Job completion mode. In this mode, the Job is considered complete when there have been .spec.completions successfully completed Pods. Pod completions are homologous to each other. completions integer Specifies the desired number of successfully finished pods the job should be run with. Setting to null means that the success of any pod signals the success of all pods, and allows parallelism to have any positive value. Setting to 1 means that parallelism is limited to 1 and the success of that pod signals the success of the job. More info: https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/ manualSelector boolean manualSelector controls generation of pod labels and pod selectors. 
Leave manualSelector unset unless you are certain what you are doing. When false or unset, the system picks labels unique to this job and appends those labels to the pod template. When true, the user is responsible for picking unique labels and specifying the selector. Failure to pick a unique label may cause this and other jobs to not function correctly. However, you may see manualSelector=true in jobs that were created with the old extensions/v1beta1 API. More info: https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/#specifying-your-own-pod-selector maxFailedIndexes integer Specifies the maximal number of failed indexes before marking the Job as failed, when backoffLimitPerIndex is set. Once the number of failed indexes exceeds this number the entire Job is marked as Failed and its execution is terminated. When left as null the job continues execution of all of its indexes and is marked with the Complete Job condition. It can only be specified when backoffLimitPerIndex is set. It can be null or up to completions. It is required and must be less than or equal to 10^4 when completions is greater than 10^5. This field is beta-level. It can be used when the JobBackoffLimitPerIndex feature gate is enabled (enabled by default). parallelism integer Specifies the maximum desired number of pods the job should run at any given time. The actual number of pods running in steady state will be less than this number when ((.spec.completions - .status.successful) < .spec.parallelism), i.e. when the work left to do is less than max parallelism. More info: https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/ podFailurePolicy object PodFailurePolicy describes how failed pods influence the backoffLimit. podReplacementPolicy string podReplacementPolicy specifies when to create replacement Pods. Possible values are: - TerminatingOrFailed means that we recreate pods when they are terminating (has a metadata.deletionTimestamp) or failed. - Failed means to wait until a previously created Pod is fully terminated (has phase Failed or Succeeded) before creating a replacement Pod. When using podFailurePolicy, Failed is the only allowed value. TerminatingOrFailed and Failed are allowed values when podFailurePolicy is not in use. This is a beta field. To use this, enable the JobPodReplacementPolicy feature toggle. This is on by default. Possible enum values: - "Failed" means to wait until a previously created Pod is fully terminated (has phase Failed or Succeeded) before creating a replacement Pod. - "TerminatingOrFailed" means that we recreate pods when they are terminating (has a metadata.deletionTimestamp) or failed. selector LabelSelector A label query over pods that should match the pod count. Normally, the system sets this field for you. More info: https://kubernetes.io/docs/concepts/overview/working-with-objects/labels/#label-selectors suspend boolean suspend specifies whether the Job controller should create Pods or not. If a Job is created with suspend set to true, no Pods are created by the Job controller. If a Job is suspended after creation (i.e. the flag goes from false to true), the Job controller will delete all active Pods associated with this Job. Users must design their workload to gracefully handle this. Suspending a Job will reset the StartTime field of the Job, effectively resetting the ActiveDeadlineSeconds timer too. Defaults to false. template PodTemplateSpec Describes the pod that will be created when executing a job. 
The only allowed template.spec.restartPolicy values are "Never" or "OnFailure". More info: https://kubernetes.io/docs/concepts/workloads/controllers/jobs-run-to-completion/ ttlSecondsAfterFinished integer ttlSecondsAfterFinished limits the lifetime of a Job that has finished execution (either Complete or Failed). If this field is set, ttlSecondsAfterFinished after the Job finishes, it is eligible to be automatically deleted. When the Job is being deleted, its lifecycle guarantees (e.g. finalizers) will be honored. If this field is unset, the Job won't be automatically deleted. If this field is set to zero, the Job becomes eligible to be deleted immediately after it finishes. 6.1.4. .spec.jobTemplate.spec.podFailurePolicy Description PodFailurePolicy describes how failed pods influence the backoffLimit. Type object Required rules Property Type Description rules array A list of pod failure policy rules. The rules are evaluated in order. Once a rule matches a Pod failure, the remaining of the rules are ignored. When no rule matches the Pod failure, the default handling applies - the counter of pod failures is incremented and it is checked against the backoffLimit. At most 20 elements are allowed. rules[] object PodFailurePolicyRule describes how a pod failure is handled when the requirements are met. One of onExitCodes and onPodConditions, but not both, can be used in each rule. 6.1.5. .spec.jobTemplate.spec.podFailurePolicy.rules Description A list of pod failure policy rules. The rules are evaluated in order. Once a rule matches a Pod failure, the remaining of the rules are ignored. When no rule matches the Pod failure, the default handling applies - the counter of pod failures is incremented and it is checked against the backoffLimit. At most 20 elements are allowed. Type array 6.1.6. .spec.jobTemplate.spec.podFailurePolicy.rules[] Description PodFailurePolicyRule describes how a pod failure is handled when the requirements are met. One of onExitCodes and onPodConditions, but not both, can be used in each rule. Type object Required action Property Type Description action string Specifies the action taken on a pod failure when the requirements are satisfied. Possible values are: - FailJob: indicates that the pod's job is marked as Failed and all running pods are terminated. - FailIndex: indicates that the pod's index is marked as Failed and will not be restarted. This value is beta-level. It can be used when the JobBackoffLimitPerIndex feature gate is enabled (enabled by default). - Ignore: indicates that the counter towards the .backoffLimit is not incremented and a replacement pod is created. - Count: indicates that the pod is handled in the default way - the counter towards the .backoffLimit is incremented. Additional values are considered to be added in the future. Clients should react to an unknown action by skipping the rule. Possible enum values: - "Count" This is an action which might be taken on a pod failure - the pod failure is handled in the default way - the counter towards .backoffLimit, represented by the job's .status.failed field, is incremented. - "FailIndex" This is an action which might be taken on a pod failure - mark the Job's index as failed to avoid restarts within this index. This action can only be used when backoffLimitPerIndex is set. This value is beta-level. - "FailJob" This is an action which might be taken on a pod failure - mark the pod's job as Failed and terminate all running pods. 
- "Ignore" This is an action which might be taken on a pod failure - the counter towards .backoffLimit, represented by the job's .status.failed field, is not incremented and a replacement pod is created. onExitCodes object PodFailurePolicyOnExitCodesRequirement describes the requirement for handling a failed pod based on its container exit codes. In particular, it lookups the .state.terminated.exitCode for each app container and init container status, represented by the .status.containerStatuses and .status.initContainerStatuses fields in the Pod status, respectively. Containers completed with success (exit code 0) are excluded from the requirement check. onPodConditions array Represents the requirement on the pod conditions. The requirement is represented as a list of pod condition patterns. The requirement is satisfied if at least one pattern matches an actual pod condition. At most 20 elements are allowed. onPodConditions[] object PodFailurePolicyOnPodConditionsPattern describes a pattern for matching an actual pod condition type. 6.1.7. .spec.jobTemplate.spec.podFailurePolicy.rules[].onExitCodes Description PodFailurePolicyOnExitCodesRequirement describes the requirement for handling a failed pod based on its container exit codes. In particular, it lookups the .state.terminated.exitCode for each app container and init container status, represented by the .status.containerStatuses and .status.initContainerStatuses fields in the Pod status, respectively. Containers completed with success (exit code 0) are excluded from the requirement check. Type object Required operator values Property Type Description containerName string Restricts the check for exit codes to the container with the specified name. When null, the rule applies to all containers. When specified, it should match one the container or initContainer names in the pod template. operator string Represents the relationship between the container exit code(s) and the specified values. Containers completed with success (exit code 0) are excluded from the requirement check. Possible values are: - In: the requirement is satisfied if at least one container exit code (might be multiple if there are multiple containers not restricted by the 'containerName' field) is in the set of specified values. - NotIn: the requirement is satisfied if at least one container exit code (might be multiple if there are multiple containers not restricted by the 'containerName' field) is not in the set of specified values. Additional values are considered to be added in the future. Clients should react to an unknown operator by assuming the requirement is not satisfied. Possible enum values: - "In" - "NotIn" values array (integer) Specifies the set of values. Each returned container exit code (might be multiple in case of multiple containers) is checked against this set of values with respect to the operator. The list of values must be ordered and must not contain duplicates. Value '0' cannot be used for the In operator. At least one element is required. At most 255 elements are allowed. 6.1.8. .spec.jobTemplate.spec.podFailurePolicy.rules[].onPodConditions Description Represents the requirement on the pod conditions. The requirement is represented as a list of pod condition patterns. The requirement is satisfied if at least one pattern matches an actual pod condition. At most 20 elements are allowed. Type array 6.1.9. 
.spec.jobTemplate.spec.podFailurePolicy.rules[].onPodConditions[] Description PodFailurePolicyOnPodConditionsPattern describes a pattern for matching an actual pod condition type. Type object Required type status Property Type Description status string Specifies the required Pod condition status. To match a pod condition it is required that the specified status equals the pod condition status. Defaults to True. type string Specifies the required Pod condition type. To match a pod condition it is required that specified type equals the pod condition type. 6.1.10. .status Description CronJobStatus represents the current state of a cron job. Type object Property Type Description active array (ObjectReference) A list of pointers to currently running jobs. lastScheduleTime Time Information when was the last time the job was successfully scheduled. lastSuccessfulTime Time Information when was the last time the job successfully completed. 6.2. API endpoints The following API endpoints are available: /apis/batch/v1/cronjobs GET : list or watch objects of kind CronJob /apis/batch/v1/watch/cronjobs GET : watch individual changes to a list of CronJob. deprecated: use the 'watch' parameter with a list operation instead. /apis/batch/v1/namespaces/{namespace}/cronjobs DELETE : delete collection of CronJob GET : list or watch objects of kind CronJob POST : create a CronJob /apis/batch/v1/watch/namespaces/{namespace}/cronjobs GET : watch individual changes to a list of CronJob. deprecated: use the 'watch' parameter with a list operation instead. /apis/batch/v1/namespaces/{namespace}/cronjobs/{name} DELETE : delete a CronJob GET : read the specified CronJob PATCH : partially update the specified CronJob PUT : replace the specified CronJob /apis/batch/v1/watch/namespaces/{namespace}/cronjobs/{name} GET : watch changes to an object of kind CronJob. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. /apis/batch/v1/namespaces/{namespace}/cronjobs/{name}/status GET : read status of the specified CronJob PATCH : partially update status of the specified CronJob PUT : replace status of the specified CronJob 6.2.1. /apis/batch/v1/cronjobs HTTP method GET Description list or watch objects of kind CronJob Table 6.1. HTTP responses HTTP code Reponse body 200 - OK CronJobList schema 401 - Unauthorized Empty 6.2.2. /apis/batch/v1/watch/cronjobs HTTP method GET Description watch individual changes to a list of CronJob. deprecated: use the 'watch' parameter with a list operation instead. Table 6.2. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 6.2.3. /apis/batch/v1/namespaces/{namespace}/cronjobs HTTP method DELETE Description delete collection of CronJob Table 6.3. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 6.4. HTTP responses HTTP code Reponse body 200 - OK Status schema 401 - Unauthorized Empty HTTP method GET Description list or watch objects of kind CronJob Table 6.5. HTTP responses HTTP code Reponse body 200 - OK CronJobList schema 401 - Unauthorized Empty HTTP method POST Description create a CronJob Table 6.6. 
Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.7. Body parameters Parameter Type Description body CronJob schema Table 6.8. HTTP responses HTTP code Reponse body 200 - OK CronJob schema 201 - Created CronJob schema 202 - Accepted CronJob schema 401 - Unauthorized Empty 6.2.4. /apis/batch/v1/watch/namespaces/{namespace}/cronjobs HTTP method GET Description watch individual changes to a list of CronJob. deprecated: use the 'watch' parameter with a list operation instead. Table 6.9. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 6.2.5. /apis/batch/v1/namespaces/{namespace}/cronjobs/{name} Table 6.10. Global path parameters Parameter Type Description name string name of the CronJob HTTP method DELETE Description delete a CronJob Table 6.11. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed Table 6.12. HTTP responses HTTP code Reponse body 200 - OK Status schema 202 - Accepted Status schema 401 - Unauthorized Empty HTTP method GET Description read the specified CronJob Table 6.13. HTTP responses HTTP code Reponse body 200 - OK CronJob schema 401 - Unauthorized Empty HTTP method PATCH Description partially update the specified CronJob Table 6.14. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. 
- Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.15. HTTP responses HTTP code Reponse body 200 - OK CronJob schema 201 - Created CronJob schema 401 - Unauthorized Empty HTTP method PUT Description replace the specified CronJob Table 6.16. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.17. Body parameters Parameter Type Description body CronJob schema Table 6.18. HTTP responses HTTP code Reponse body 200 - OK CronJob schema 201 - Created CronJob schema 401 - Unauthorized Empty 6.2.6. /apis/batch/v1/watch/namespaces/{namespace}/cronjobs/{name} Table 6.19. Global path parameters Parameter Type Description name string name of the CronJob HTTP method GET Description watch changes to an object of kind CronJob. deprecated: use the 'watch' parameter with a list operation instead, filtered to a single item with the 'fieldSelector' parameter. Table 6.20. HTTP responses HTTP code Reponse body 200 - OK WatchEvent schema 401 - Unauthorized Empty 6.2.7. /apis/batch/v1/namespaces/{namespace}/cronjobs/{name}/status Table 6.21. Global path parameters Parameter Type Description name string name of the CronJob HTTP method GET Description read status of the specified CronJob Table 6.22. HTTP responses HTTP code Reponse body 200 - OK CronJob schema 401 - Unauthorized Empty HTTP method PATCH Description partially update status of the specified CronJob Table 6.23. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. 
Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.24. HTTP responses HTTP code Reponse body 200 - OK CronJob schema 201 - Created CronJob schema 401 - Unauthorized Empty HTTP method PUT Description replace status of the specified CronJob Table 6.25. Query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default in v1.23+ - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. Table 6.26. Body parameters Parameter Type Description body CronJob schema Table 6.27. HTTP responses HTTP code Reponse body 200 - OK CronJob schema 201 - Created CronJob schema 401 - Unauthorized Empty
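To connect the schema above to a concrete object, the following manifest is a minimal, illustrative CronJob that exercises several of the documented spec fields. The container image and command are placeholders chosen for the example; any image available to your cluster works. You can create it with oc apply -f <file> or by sending it to the POST endpoint listed above.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: example-cronjob
spec:
  schedule: "*/10 * * * *"        # run every 10 minutes
  concurrencyPolicy: Forbid       # skip a run if the previous one is still active
  successfulJobsHistoryLimit: 3
  failedJobsHistoryLimit: 1
  jobTemplate:
    spec:
      backoffLimit: 6
      template:
        spec:
          restartPolicy: Never    # only Never or OnFailure are allowed here
          containers:
          - name: hello
            image: registry.access.redhat.com/ubi9/ubi-minimal
            command: ["echo", "hello from the cron job"]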
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/workloads_apis/cronjob-batch-v1
Chapter 11. Integrating with Amazon S3
Chapter 11. Integrating with Amazon S3 You can integrate Red Hat Advanced Cluster Security for Kubernetes with Amazon S3 to enable data backups. You can use these backups for data restoration in the case of an infrastructure disaster or corrupt data. After you integrate with Amazon S3, you can schedule daily or weekly backups and do manual on-demand backups. The backup includes the entire Red Hat Advanced Cluster Security for Kubernetes database, which includes all configurations, resources, events, and certificates. Make sure that backups are stored securely. Important If you are using Red Hat Advanced Cluster Security for Kubernetes version 3.0.53 or older, the backup does not include certificates. If your Amazon S3 is part of an air-gapped environment, you must add your AWS root CA as a trusted certificate authority in Red Hat Advanced Cluster Security for Kubernetes. 11.1. Configuring Amazon S3 integration in Red Hat Advanced Cluster Security for Kubernetes To configure Amazon S3 backups, create a new integration in Red Hat Advanced Cluster Security for Kubernetes. Prerequisites An existing S3 Bucket. To create a new bucket with required permissions, see the Amazon documentation topic Creating a bucket . Read , write , and delete permissions for the S3 bucket, the Access key ID , and the Secret access key . If you are using KIAM , kube2iam or another proxy, then an IAM role that has the read , write , and delete permissions. Procedure In the RHACS portal, go to Platform Configuration Integrations . Scroll down to the External backups section and select Amazon S3 . Click New Integration ( add icon). Enter a name for Integration Name . Enter the number of backups to retain in the Backups To Retain box. For Schedule , select the backup frequency as daily or weekly and the time to run the backup process. Enter the Bucket name where you want to store the backup. Optionally, enter an Object Prefix if you want to save the backups in a specific folder structure. For more information, see the Amazon documentation topic Working with object metadata . Enter the Endpoint for the bucket if you are using a non-public S3 instance, otherwise leave it blank. Enter the Region for the bucket. Turn on the Use Container IAM Role toggle or enter the Access Key ID , and the Secret Access Key . Select Test to confirm that the integration with Amazon S3 is working. Select Create to generate the configuration. Once configured, Red Hat Advanced Cluster Security for Kubernetes automatically backs up all data according to the specified schedule. 11.2. Performing on-demand backups on Amazon S3 Use the RHACS portal to trigger manual backups of Red Hat Advanced Cluster Security for Kubernetes on Amazon S3. Prerequisites You must have already integrated Red Hat Advanced Cluster Security for Kubernetes with Amazon S3. Procedure In the RHACS portal, go to Platform Configuration Integrations . Under the External backups section, click Amazon S3 . Select the integration name for the S3 bucket where you want to do a backup. Click Trigger Backup . Note Currently, when you select the Trigger Backup option, there is no notification. However, Red Hat Advanced Cluster Security for Kubernetes begins the backup task in the background. 11.3. Additional resources Backing up Red Hat Advanced Cluster Security for Kubernetes Restoring from a backup
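The prerequisites above require read, write, and delete permissions on the S3 bucket. The following is a minimal sketch of how such a policy could be attached with the AWS CLI; the bucket name rhacs-backups, the IAM user rhacs-backup, and the policy name are hypothetical placeholders, and your organization may require a more tightly scoped policy.

# Hypothetical user, policy name, and bucket; replace with your own values.
aws iam put-user-policy --user-name rhacs-backup --policy-name rhacs-s3-backup --policy-document '{
  "Version": "2012-10-17",
  "Statement": [
    { "Effect": "Allow", "Action": ["s3:ListBucket"], "Resource": "arn:aws:s3:::rhacs-backups" },
    { "Effect": "Allow", "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"], "Resource": "arn:aws:s3:::rhacs-backups/*" }
  ]
}'

If you use the Use Container IAM Role toggle instead of access keys, attach an equivalent policy to the IAM role assumed by the pod.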
https://docs.redhat.com/en/documentation/red_hat_advanced_cluster_security_for_kubernetes/4.7/html/integrating/integrate-with-amazon-s3
10.5.37. LogFormat
10.5.37. LogFormat The LogFormat directive configures the format of the various Web server log files. The actual LogFormat used depends on the settings given in the CustomLog directive (refer to Section 10.5.38, " CustomLog " ). The following are the format options if the CustomLog directive is set to combined : %h (remote host's IP address or hostname) Lists the remote IP address of the requesting client. If HostnameLookups is set to on , the client hostname is recorded unless it is not available from DNS. %l (rfc931) Not used. A hyphen - appears in the log file for this field. %u (authenticated user) Lists the username of the user recorded if authentication was required. Usually, this is not used, so a hyphen - appears in the log file for this field. %t (date) Lists the date and time of the request. %r (request string) Lists the request string exactly as it came from the browser or client. %s (status) Lists the HTTP status code which was returned to the client host. %b (bytes) Lists the size of the document. \"%{Referer}i\" (referrer) Lists the URL of the webpage which referred the client host to the Web server. \"%{User-Agent}i\" (user-agent) Lists the type of Web browser making the request.
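For reference, the combined format described by these options is typically defined in the Web server configuration along the following lines. This is a sketch of a commonly shipped default rather than a listing from this guide; the log file path is an assumption, and many default configurations record the final status with %>s rather than %s.

LogFormat "%h %l %u %t \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" combined
CustomLog logs/access_log combined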
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-apache-logformat
Chapter 2. Requirements for scaling storage
Chapter 2. Requirements for scaling storage Before you proceed to scale the storage nodes, refer to the following sections to understand the node requirements for your specific Red Hat OpenShift Data Foundation instance: Platform requirements Resource requirements Storage device requirements Dynamic storage devices Local storage devices Capacity planning Important Always ensure that you have plenty of storage capacity. If storage ever fills completely, it is not possible to add capacity or delete or migrate content away from the storage to free up space. Completely full storage is very difficult to recover. Capacity alerts are issued when cluster storage capacity reaches 75% (near-full) and 85% (full) of total capacity. Always address capacity warnings promptly, and review your storage regularly to ensure that you do not run out of storage space. If storage capacity reaches 85% full state, Ceph may report HEALTH_ERR and prevent IO operations. In this case, you can increase the full ratio temporarily so that cluster rebalance can take place. For steps to increase the full ratio, see Setting Ceph OSD full thresholds using the ODF CLI tool . If you do run out of storage space completely, contact Red Hat Customer Support .
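As a hedged illustration of the temporary full-ratio increase mentioned above: the supported procedure uses the ODF CLI tool linked in the note, but at the Ceph level the adjustment looks roughly like the following, run from a toolbox pod. The toolbox availability and the 0.87 value are assumptions; revert to the documented threshold once capacity has been added and rebalancing completes.

# Run inside the rook-ceph toolbox pod (assumed to be deployed).
ceph osd set-full-ratio 0.87   # temporarily raise the full threshold
# ... add capacity and wait for the rebalance to finish ...
ceph osd set-full-ratio 0.85   # restore the previous threshold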
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html/scaling_storage/requirements-for-scaling-storage-nodes
Serverless
Serverless OpenShift Container Platform 4.12 Create and deploy serverless, event-driven applications using OpenShift Serverless Red Hat OpenShift Documentation Team
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html-single/serverless/index
Chapter 11. Configuring certificate profiles
Chapter 11. Configuring certificate profiles As part of the installation process of a CA, the certificate enrollment profiles can be modified directly on the file system by modifying the profiles' configuration files. Default files exist for the default profiles at installation; when new profiles are needed, new profile configuration files are to be created. The configuration files are stored in the CA profile directory, instance_directory /ca/profiles/ca/ , such as /var/lib/pki/pki-ca/ca/profiles/ca/ . The file is named profile_name .cfg . All of the parameters for profile rules can be set or modified in those profile configuration files. Profile rules can be inputs, outputs, authentication, authorization, defaults, and constraints. The enrollment profiles for the CA certificates are located in the /var/lib/pki/instance_name/ca/conf directory with the name *.profile . Note For audit reasons, use this method only during the CA installation prior to deployment. Restart the server after editing the profile configuration file for the changes to take effect. Section 11.1.1, "Profile configuration parameters" Section 11.1.2, "Modifying certificate extensions directly on the file system" Section 11.1.3, "Adding profile inputs directly on the file system" 11.1. Configuring non-CA system certificate profiles 11.1.1. Profile configuration parameters All of the parameters for a profile rule - defaults, inputs, outputs, and constraints - are configured within a single policy set. A policy set for a profile has the name policyset. policyName.policyNumber . For example: The common profile configuration parameters are described in Table 11.1, "Profile configuration file parameters" . Table 11.1. Profile configuration file parameters Parameter Description desc Gives a free text description of the certificate profile, which is shown on the end-entities page. For example, desc=This certificate profile is for enrolling server certificates with agent authentication. enable Sets whether the profile is enabled, and therefore accessible through the end-entities page. For example, enable=true . auth.instance_id Sets which authentication manager plugin to use to authenticate the certificate request submitted through the profile. For automatic enrollment, the CA issues a certificate immediately if the authentication is successful. If authentication fails or there is no authentication plugin specified, the request is queued to be manually approved by an agent. For example, auth.instance_id=CMCAuth . The authentication method must be one of the registered authentication instances from CS.cfg . authz.acl Specifies the authorization constraint. Most commonly, this is used to set the group evaluation ACL. For example, this caCMCUserCert parameter requires that the signer of the CMC request belong to the Certificate Manager Agents group: authz.acl=group="Certificate Manager Agents" In directory-based user certificate renewal, this option is used to ensure that the original requester and the currently-authenticated user are the same. An entity must authenticate (bind or, essentially, log into the system) before authorization can be evaluated. The authorization method specified must be one of the registered authorization instances from CS.cfg . name Gives the name of the profile. For example, name=Agent-Authenticated Server Certificate Enrollment . This name is displayed in the end users enrollment or renewal page. input.list Lists the allowed inputs for the profile by name. For example, input.list=i1,i2 . input.
input_id .class_id Gives the java class name for the input by input ID (the name of the input listed in input.list ). For example, input.i1.class_id=cmcCertReqInputImpl . output.list Lists the possible output formats for the profile by name. For example, output.list=o1 . output. output_id .class_id Gives the java class name for the output format named in output.list . For example, output.o1.class_id=certOutputImpl . policyset.list Lists the configured profile rules. For dual certificates, one set of rules applies to the signing key and the other to the encryption key. Single certificates use only one set of profile rules. For example, policyset.list=serverCertSet . policyset. policyset_id .list Lists the policies within the policy set configured for the profile by policy ID number in the order in which they should be evaluated. For example, policyset.serverCertSet.list=1,2,3,4,5,6,7,8 . policyset._policyset_id.policy_number._constraint.class_id Gives the java class name of the constraint plugin set for the default configured in the profile rule. For example, policyset.serverCertSet.1.constraint.class_id=subjectNameConstraintImpl . policyset._policyset_id.policy_number._constraint.name Gives the user-defined name of the constraint. For example, policyset.serverCertSet.1.constraint.name=Subject Name Constraint . policyset. policyset_id.policy_number._constraint.params._attribute Specifies a value for an allowed attribute for the constraint. The possible attributes vary depending on the type of constraint. For example, policyset.serverCertSet.1.constraint.params.pattern=CN=.* . policyset._policyset_id.policy_number._default.class_id Gives the java class name for the default set in the profile rule. For example, policyset.serverCertSet.1.default.class_id=userSubjectNameDefaultImpl policyset._policyset_id.policy_number._default.name Gives the user-defined name of the default. For example, policyset.serverCertSet.1.default.name=Subject Name Default policyset. policyset_id.policy_number._default.params._attribute Specifies a value for an allowed attribute for the default. The possible attributes vary depending on the type of default. For example, policyset.serverCertSet.1.default.params.name=CN=(Name)USDrequest.requestor_nameUSD . 11.1.2. Modifying certificate extensions directly on the file system Changing constraints changes the restrictions on the type of information which can be supplied. Changing the defaults and constraints can also add, delete, or modify the extensions which are accepted or required from a certificate request. For example, the default caFullCMCUserCert profile is set to create a Key Usage extension from information in the request. The default is updated to allow user-supplied key extensions: This sets the server to accept the extension OID 2.5.29.15 in the certificate request. Other constraints and defaults can be changed similarly. Make sure that any required constraints are included with appropriate defaults, that defaults are changed when a different constraint is required, and that only allowed constraints are used with the default. For more information, see the B.1 Defaults Reference and B.2 Constraints Reference reference sections in the Administration Guide (Common Criteria Edition) . 11.1.2.1. key usage and extended key usage consistency Red Hat Certificate System provides a flexible infrastructure for administrators to create customized enrollment profiles to meet the requirements of their environment. 
However, it is important that profiles do not allow issuing certificates that violate the requirements defined in RFC 5280. When creating an enrollment profile where both Key Usage (KU) and Extended Key Usage (EKU) extensions are present, it is important to make sure that the consistency between the two extensions is maintained as per section 4.2.1.12. Extended Key Usage of RFC 5280. For details about the KU extension, see: The following table provides the guidelines that maps consistent Key Usage bits to the Extended Key Usage Extension for each purpose: Purpose / Extended Key Usages Key Usages TLS Server Authentication command id-kp-serverAuth digitalSignature, keyEncipherment, or KeyAgreement TLS Client (Mutual) Authentication id-kp-clientAuth digitalSignature, keyEncipherment, and/or KeyAgreement Code Signing id-kp-codeSigning digitalSignature Email Protection id-kp-emailProtection digitalSignature, nonRepudiation, and/or (keyEncipherment or keyAgreement) OCSP Response Signing id-kp-OCSPSigning KeyAgreement and/or nonRepudiation The following shows two examples of inconsistent EKU/KU: An enrollment profile that is intended for purpose of OCSP response signing contains Extended key usage id-kp-OCSPSigning but with keyEncipherment key usage bit: An enrollment profile that is intended for the purpose of TLS server authentication contains Extended key usage id-kp-serverAuth but with CRL signing key usage bit: For details about the KU extension, see: B.2.3 Key Usage Extension Constraint in the Administration Guide (Common Criteria Edition) . B.3.8 keyUsage in the Administration Guide (Common Criteria Edition) . For details about the EKU extension, see: B.2.3 Extended Key Usage Extension Constraint in the Administration Guide (Common Criteria Edition) . B.3.6 extKeyUsage in the Administration Guide (Common Criteria Edition) . 11.1.2.2. Configuring cross-pair profiles Cross-pair certificates are distinct CA signing certificates that establish a trust partner relationship whereby entities from these two distinct PKIs will trust each other. Both partner CAs store the other CA signing certificate in its database, so all of the certificates issued within the other PKI are trusted and recognized. Two extensions supported by the Certificate System can be used to establish such a trust partner relationship (cross-certification): The Certificate Policies Extension ( CertificatePoliciesExtension ) specifies the terms that the certificate fall under, which is often unique for each PKI. The Policy Mapping Extension ( PolicyMappingExtension ) seals the trust between two PKI's by mapping the certificate profiles of the two environments. Issuing cross-pair certificates requires the Certificate Policies Extension, explained in the B.3.4 certificatePoliciesExt annex in the Administration Guide (Common Criteria Edition) . To ensure that the issued certificate contains the CertificatePoliciesExtension, the enrollment profile needs to include an appropriate policy rule, for example: Certificates issued with the enrollment profile in this example would contain the following information: For more information on using cross-pair certificates, see 13.4 Using Cross-Pair Certificates in the Administration Guide (Common Criteria Edition) . For more information on publishing cross-pair certificates, see 7.8 Publishing Cross-Pair Certificates in the Administration Guide (Common Criteria Edition) . 11.1.3. 
Adding profile inputs directly on the file system The certificate profile configuration file in the CA's profiles/ca directory contains the input information for that particular certificate profile form. Inputs are the fields in the end-entities page enrollment forms. There is a parameter, input.list , which lists the inputs included in that profile. Other parameters define the inputs; these are identified by the format input. ID . For example, this adds a generic input to a profile: For more information on what inputs, or form fields, are available, see the A.1 Input Reference annex in the Administration Guide (Common Criteria Edition) . 11.2. Changing the default validity time of certificates In each profile on a Certificate Authority (CA), you can set how long certificates issued using a profile are valid. You can change this value for security reasons. For example, to set the validity of the generated Certificate Authority (CA) signing certificate to 825 days (approximately 27 months), open the /var/lib/pki/instance_name/ca/profiles/ca/caCACert.cfg file in an editor and set: 11.3. Setting the signing algorithm default in a profile Each profile has a Signing Algorithm Default extension defined. The default has two settings: a default algorithm and a list of allowed algorithms, if the certificate request specifies a different algorithm. If no signing algorithms are specified, then the profile uses whatever is set as the default for the CA. In the profile's .cfg file, the algorithm is set with two parameters: Note The - value for the policyset.cmcUserCertSet.8.default.params.signingAlg parameter means that the default signing algorithm will be used. 11.4. Configuring CA system certificate profiles Unlike the non-CA subsystems, the enrollment profiles for CA's own system certificates are kept in the /var/lib/pki/[instance name]/ca/conf file. Those profiles are: caAuditSigningCert.profile eccAdminCert.profile rsaAdminCert.profile caCert.profile eccServerCert.profile saServerCert.profile caOCSPCert.profile eccSubsystemCert.profile rsaSubsystemCert.profile 11.4.1. Changing the default values of certificate profiles If you wish to change the default values in the profiles above, make changes to the profiles before you perform the second step of a two-step installation (as described in Between step customization ). The following is an example that demonstrates: How to change validity to CA signing certificate. How to add extensions (e.g. Certificate policies extension). Back up the original CA certificate profile used by pkispawn . Open the CA certificate profile used by the configuration wizard. Reset the validity period in the Validity Default to your desired value. For example, to change the period to two years: Add any extensions by creating a new default entry in the profile and adding it to the list. For example, to add the Certificate Policies Extension , add the default (which, in this example, is default #9): Then, add the default number to the list of defaults to use the new default: 11.4.2. Allowing a CA certificate to be renewed past the validity period Normally, a certificate cannot be issued with a validity period that ends after the issuing CA certificate's expiration date. If a CA certificate has an expiration date of December 31, 2023, then all of the certificates it issues must expire by or before December 31, 2023. This rule applies to other CA signing certificates issued by a CA - and this makes renewing a root CA certificate almost impossible. 
Renewing a CA signing certificate means it would necessarily have to have a validity period past its own expiration date. This behavior can be altered using the CA Validity Default. This default allows a setting ( bypassCAnotafter ) which allows a CA certificate to be issued with a validity period that extends past the issuing CA's expiration ( notAfter ) date. Figure 11.1. CA validity default configuration In real deployments, what this means is that a CA certificate for a root CA can be renewed, when it might otherwise be prevented. To enable CA certificate renewals past the original CA's validity date: Stop the CA: OR if using the Nuxwdog watchdog: Open the caCACert.cfg file. The CA Validity Default should be present by default. Set the value to true to allow a CA certificate to be renewed past the issuing CA's validity period. Start the CA to apply the changes. OR if using the Nuxwdog watchdog: When an agent reviews a renewal request, there is an option in the Extensions/Fields area that allows the agent to choose to bypass the normal validity period constraint. If the agent selects false , the constraint is enforced, even if bypassCAnotafter=true is set in the profile. If the agent selects true when the bypassCAnotafter value is not enabled, then the renewal request is rejected by the CA. Figure 11.2. Bypass CA constraints option in the agent services page NOTE The CA Validity Default only applies to CA signing certificate renewals. Other certificates must still be issued and renewed within the CA's validity period. A separate configuration setting for the CA, ca.enablePastCATime , can be used to allow certificates to be renewed past the CA's validity period. However, this applies to every certificate issued by that CA. Because of the potential security issues, this setting is not recommended for production environments. 11.5. Managing smart card CA profiles Note Features in this section on TMS are not tested in the evaluation. This section is for reference only. The TPS does not generate or approve certificate requests; it sends any requests approved through the Enterprise Security Client to the configured CA to issue the certificate. This means that the CA actually contains the profiles to use for tokens and smart cards. The profiles to use can be automatically assigned, based on the card type. The profile configuration files are in the /var/lib/instance_name/ca/profiles/ca/ directory with the other CA profiles. The default profiles are listed in Table 11.2, "Default token certificate profiles" . Table 11.2. Default token certificate profiles Profile Name Configuration File Description Regular Enrollment Profiles Token Device Key Enrollment caTokenDeviceKeyEnrollment.cfg For enrolling tokens used for devices or servers. Token User Encryption Certificate Enrollment caTokenUserEncryptionKeyEnrollment.cfg For enrolling encryption certificates on the token for a user. Token User Signing Certificate Enrollment caTokenUserSigningKeyEnrollment.cfg For enrolling signing certificates on the token for a user. Token User MS Login Certificate Enrollment caTokenMSLoginEnrollment.cfg For enrolling user certificates to use for single sign-on to a Windows domain or PC. Temporary Token Profiles Temporary Device Certificate Enrollment caTempTokenDeviceKeyEnrollment.cfg For enrolling certificates for a device on a temporary token. Temporary Token User Encryption Certificate Enrollment caTempTokenUserEncryptionKeyEnrollment.cfg For enrolling an encryption certificate on a temporary token for a user. 
Temporary Token User Signing Certificate Enrollment caTempTokenUserSigningKeyEnrollment.cfg For enrolling a signing certificates on a temporary token for a user. Renewal Profiles Token User Encryption Certificate Enrollment (Renewal) caTokenUserEncryptionKeyRenewal.cfg For renewing encryption certificates on the token for a user, if renewal is allowed. Token User Signing Certificate Enrollment (Renewal) caTokenUserSigningKeyRenewal.cfg For renewing signing certificates on the token for a user, if renewal is allowed. Note Renewal profiles can only be used in conjunction with the profile that issued the original certificate. There are two settings that are beneficial: It is important the original enrollment profile name does not change. The Renew Grace Period Constraint should be set in the original enrollment profile. This defines the amount of time before and after the certificate's expiration date when the user is allowed to renew the certificate. There are only a few examples of these in the default profiles, and they are mostly not enabled by default. 11.5.1. Editing enrollment profiles for the tps Administrators have the ability to customize the default smart card enrollment profiles, used with the TPS. For instance, a profile could be edited to include the user's email address in the Subject Alternative Name extension. The email address for the user is retrieved from the authentication directory. To configure the CA for LDAP access, change the following parameters in the profile files, with the appropriate directory information: These CA profiles come with LDAP lookup disabled by default. The ldapStringAttributes parameter tells the CA which LDAP attributes to retrieve from the company directory. For example, if the directory contains uid as an LDAP attribute name, and this will be used in the subject name of the certificate, then uid must be listed in the ldapStringAttributes parameter, and request.uid listed as one of the components in the dnpattern . Editing certificate profiles is covered in 3.2 Setting up Certificate Profiles in the Administration Guide (Common Criteria Edition) . The format for the dnpattern parameter is covered in the B.2.11 Subject Name Constraint annex and B.1.27 Subject Name Default references in the Administration Guide (Common Criteria Edition) . 11.5.2. Creating custom TPS profiles Certificate profiles are created as normal in the CA, but they also have to be configured in the TPS for it to be available for token enrollments. TIP New profiles are added with new releases of Red Hat Certificate System. If you migrate an instance to Red Hat Certificate System 10, then you need to add the new profiles to the migrated instance as if they are custom profiles. Create a new token profile for the issuing CA. Setting up profiles is covered in 3.2 Setting up Certificate Profiles in the Administration Guide (Common Criteria Edition) . Copy the profile into the CA's profiles directory, /var/lib/instance_name/ca/profiles/ca/ . Edit the CA's CS.cfg file, and add the new profile references and the profile name to the CA's list of profiles. For example: Edit the TPS CS.cfg file, and add a line to point to the new CA enrollment profile. For example: Restart the instance after editing the smart card profiles: If the CA and TPS are in separate instances, restart both instances. Note Enrollment profiles for the External Registration ( externalReg ) setting are configured in the user LDAP entry. 11.5.3. 
Using the Windows smart card logon profile The TPS uses a profile to generate certificates to use for single sign-on to a Windows domain or PC; this is the Token User MS Login Certificate Enrollment profile ( caTokenMSLoginEnrollment.cfg ). However, there are some special considerations that administrators must account for when configuring Windows smart card login. Issue a certificate to the domain controller, if it is not already configured for TLS. Configure the smart card login per user, rather than as a global policy, to prevent locking out the domain administrator. Enable CRL publishing to the Active Directory server because the domain controller checks the CRL at every login. 11.6. Disabling certificate enrollment profiles This section provides instructions on how to disable selected profiles. To disable a certificate profile, edit the corresponding *.cfg file in the /var/lib/pki/instance_name/ca/profiles/ca/ directory and set the visible and enable parameters to false . For example, to disable all non-CMC profiles: List all non-CMC profiles: In each of the displayed files, set the following parameters to false : Additionally, set visible=false in all CMC profiles to make them invisible on the end entity page: List all CMC profiles: In each of the displayed files, set: For an alternative way to disable non-CMC profiles, also see Section 7.6.5, "Disable non-CMC and non-installation profiles" .
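A minimal shell sketch of the bulk edit described in Section 11.6 follows. It assumes the default profile location, an instance named instance_name, and that every profile uses the plain visible= and enable= lines shown above; review the files before and after running it, as this is an illustration rather than a supported script.

# Disable all non-CMC profiles by setting visible=false and enable=false.
for f in $(ls /var/lib/pki/instance_name/ca/profiles/ca/*.cfg | grep -v "CMC"); do
    sed -i -e 's/^visible=.*/visible=false/' -e 's/^enable=.*/enable=false/' "$f"
done
# Hide the CMC profiles from the end-entity pages without disabling them.
for f in /var/lib/pki/instance_name/ca/profiles/ca/*CMC*.cfg; do
    sed -i 's/^visible=.*/visible=false/' "$f"
done
# Restart the instance so the profile changes take effect.
systemctl restart pki-tomcatd@instance_name.service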
[ "policyset.cmcUserCertSet.6.constraint.class_id=noConstraintImpl policyset.cmcUserCertSet.6.constraint.name=No Constraint policyset.cmcUserCertSet.6.default.class_id=userExtensionDefaultImpl policyset.cmcUserCertSet.6.default.name=User Supplied Key Default policyset.cmcUserCertSet.6.default.params.userExtOID=2.5.29.15", "policyset.cmcUserCertSet.6.constraint.class_id=keyUsageExtConstraintImpl policyset.cmcUserCertSet.6.constraint.name=Key Usage Extension Constraint policyset.cmcUserCertSet.6.constraint.params.keyUsageCritical=true policyset.cmcUserCertSet.6.constraint.params.keyUsageCrlSign=false policyset.cmcUserCertSet.6.constraint.params.keyUsageDataEncipherment=false policyset.cmcUserCertSet.6.constraint.params.keyUsageDecipherOnly=false policyset.cmcUserCertSet.6.constraint.params.keyUsageDigitalSignature=true policyset.cmcUserCertSet.6.constraint.params.keyUsageEncipherOnly=false policyset.cmcUserCertSet.6.constraint.params.keyUsageKeyAgreement=false policyset.cmcUserCertSet.6.constraint.params.keyUsageKeyCertSign=false policyset.cmcUserCertSet.6.constraint.params.keyUsageKeyEncipherment=true policyset.cmcUserCertSet.6.constraint.params.keyUsageNonRepudiation=true policyset.cmcUserCertSet.6.default.class_id=keyUsageExtDefaultImpl policyset.cmcUserCertSet.6.default.name=Key Usage Default policyset.cmcUserCertSet.6.default.params.keyUsageCritical=true policyset.cmcUserCertSet.6.default.params.keyUsageCrlSign=false policyset.cmcUserCertSet.6.default.params.keyUsageDataEncipherment=false policyset.cmcUserCertSet.6.default.params.keyUsageDecipherOnly=false policyset.cmcUserCertSet.6.default.params.keyUsageDigitalSignature=true policyset.cmcUserCertSet.6.default.params.keyUsageEncipherOnly=false policyset.cmcUserCertSet.6.default.params.keyUsageKeyAgreement=false policyset.cmcUserCertSet.6.default.params.keyUsageKeyCertSign=false policyset.cmcUserCertSet.6.default.params.keyUsageKeyEncipherment=true policyset.cmcUserCertSet.6.default.params.keyUsageNonRepudiation=true", "policyset.cmcUserCertSet.6.default.class_id=userExtensionDefaultImpl policyset.cmcUserCertSet.6.default.name=User Supplied Key Default policyset.cmcUserCertSet.6.default.params.userExtOID=2.5.29.15", "policyset.ocspCertSet.6.default.class_id=keyUsageExtDefaultImpl policyset.ocspCertSet..6.default.name=Key Usage Default policyset.ocspCertSet..6.default.params.keyUsageCritical=true policyset.ocspCertSet..6.default.params.keyUsageCrlSign=false policyset.ocspCertSet..6.default.params.keyUsageDataEncipherment=false policyset.ocspCertSet..6.default.params.keyUsageDecipherOnly=false policyset.ocspCertSet..6.default.params.keyUsageDigitalSignature=true policyset.ocspCertSet..6.default.params.keyUsageEncipherOnly=false policyset.ocspCertSet..6.default.params.keyUsageKeyAgreement=false policyset.ocspCertSet..6.default.params.keyUsageKeyCertSign=false policyset.ocspCertSet..6.default.params.keyUsageKeyEncipherment=true policyset.ocspCertSet..6.default.params.keyUsageNonRepudiation=true policyset.ocspCertSet.7.constraint.params.exKeyUsageOIDs=1.3.6.1.5.5.7.3.9 policyset.ocspCertSet.7.default.class_id=extendedKeyUsageExtDefaultImpl policyset.ocspCertSet.7.default.name=Extended Key Usage Default policyset.ocspCertSet.7.default.params.exKeyUsageCritical=false policyset.ocspCertSet.7.default.params.exKeyUsageOIDs=1.3.6.1.5.5.7.3.9", "policyset.serverCertSet.6.default.name=Key Usage Default policyset.serverCertSet.6.default.params.keyUsageCritical=true policyset.serverCertSet.6.default.params.keyUsageDigitalSignature=true 
policyset.serverCertSet.6.default.params.keyUsageNonRepudiation=false policyset.serverCertSet.6.default.params.keyUsageDataEncipherment=true policyset.serverCertSet.6.default.params.keyUsageKeyEncipherment=false policyset.serverCertSet.6.default.params.keyUsageKeyAgreement=true policyset.serverCertSet.6.default.params.keyUsageKeyCertSign=false policyset.serverCertSet.6.default.params.keyUsageCrlSign=true policyset.serverCertSet.6.default.params.keyUsageEncipherOnly=false policyset.serverCertSet.6.default.params.keyUsageDecipherOnly=false policyset.cmcUserCertSet.7.default.class_id=extendedKeyUsageExtDefaultImpl policyset.cmcUserCertSet.7.default.name=Extended Key Usage Extension Default policyset.cmcUserCertSet.7.default.params.exKeyUsageCritical=false policyset.serverCertSet.7.default.params.exKeyUsageOIDs=1.3.6.1.5.5.7.3.1", "policyset.userCertSet.p7.constraint.class_id=noConstraintImpl policyset.userCertSet.p7.constraint.name=No Constraint policyset.userCertSet.p7.default.class_id=certificatePoliciesExtDefaultImpl policyset.userCertSet.p7.default.name=Certificate Policies Extension Default policyset.userCertSet.p7.default.params.Critical=false policyset.userCertSet.p7.default.params.PoliciesExt.num=1 policyset.userCertSet.p7.default.params.PoliciesExt.certPolicy0.enable=true policyset.userCertSet.p7.default.params.PoliciesExt.certPolicy0.policyId=1.1.1.1 policyset.userCertSet.p7.default.params.PoliciesExt.certPolicy0.PolicyQualifiers0.CPSURI.enable=false policyset.userCertSet.p7.default.params.PoliciesExt.certPolicy0.PolicyQualifiers0.CPSURI.value= policyset.userCertSet.p7.default.params.PoliciesExt.certPolicy0.PolicyQualifiers0.usernotice.enable=false policyset.userCertSet.p7.default.params.PoliciesExt.certPolicy0.PolicyQualifiers0.usernotice.explicitText.value= policyset.userCertSet.p7.default.params.PoliciesExt.certPolicy0.PolicyQualifiers0.usernotice.noticeReference.noticeNumbers= policyset.userCertSet.p7.default.params.PoliciesExt.certPolicy0.PolicyQualifiers0.usernotice.noticeReference.organization=", "Identifier: Certificate Policies: - 2.5.29.32 Critical: no Certificate Policies: Policy Identifier: 1.1.1.1", "input.list=i1,i2,i3,i4 input.i4.class_id=genericInputImpl input.i4.params.gi_display_name0=Name0 input.i4.params.gi_display_name1=Name1 input.i4.params.gi_display_name2=Name2 input.i4.params.gi_display_name3=Name3 input.i4.params.gi_param_enable0=true input.i4.params.gi_param_enable1=true input.i4.params.gi_param_enable2=true input.i4.params.gi_param_enable3=true input.i4.params.gi_param_name0=gname0 input.i4.params.gi_param_name1=gname1 input.i4.params.gi_param_name2=gname2 input.i4.params.gi_param_name3=gname3 input.i4.params.gi_num=4", "policyset.caCertSet.2.default.params.range=825", "policyset.cmcUserCertSet.8.constraint.class_id=signingAlgConstraintImpl policyset.cmcUserCertSet.8.constraint.name=No Constraint policyset.cmcUserCertSet.8.constraint.params.signingAlgsAllowed=SHA256withRSA,SHA512withRSA,SHA256withEC,SHA384withRSA,SHA384withEC,SHA512withEC policyset.cmcUserCertSet.8.default.class_id=signingAlgDefaultImpl policyset.cmcUserCertSet.8.default.name=Signing Alg policyset.cmcUserCertSet.8.default.params.signingAlg=-", "cp -p /usr/share/pki/ca/conf/caCert.profile /usr/share/pki/ca/conf/caCert.profile.orig", "vim /usr/share/pki/ca/conf/caCert.profile", "2.default.class=com.netscape.cms.profile.def.ValidityDefault 2.default.name=Validity Default 2.default.params.range=720", "9.default.class_id=certificatePoliciesExtDefaultImpl 9.default.name=Certificate Policies 
Extension Default 9.default.params.Critical=false 9.default.params.PoliciesExt.certPolicy0.enable=false 9.default.params.PoliciesExt.certPolicy0.policyId= 9.default.params.PoliciesExt.certPolicy0.PolicyQualifiers0.CPSURI.enable=true 9.default.params.PoliciesExt.certPolicy0.PolicyQualifiers0.CPSURI.value=CertificatePolicies.example.com 9.default.params.PoliciesExt.certPolicy0.PolicyQualifiers0.usernotice.enable=false 9.default.params.PoliciesExt.certPolicy0.PolicyQualifiers0.usernotice.explicitText.value= 9.default.params.PoliciesExt.certPolicy0.PolicyQualifiers0.usernotice.noticeReference.noticeNumbers= 9.default.params.PoliciesExt.certPolicy0.PolicyQualifiers0.usernotice.noticeReference.organization=", "list=2,4,5,6,7,8,9", "systemctl stop pki-tomcatd@instance_name.service", "systemctl stop pki-tomcatd-nuxwdog@instance_name.service", "var/lib/pki/instance_name/ca/profiles/ca/caCACert.cfg", "policyset.caCertSet.2.default.name=CA Certificate Validity Default policyset.caCertSet.2.default.params.range=2922 policyset.caCertSet.2.default.params.startTime=0 policyset.caCertSet.2.default.params.bypassCAnotafter=true", "systemctl start pki-tomcatd@instance_name.service", "systemctl start pki-tomcatd-nuxwdog@instance_name.service", "policyset.set1.p1.default.params.dnpattern=UID=USDrequest.uidUSD, O=Token Key User policyset.set1.p1.default.params.ldap.enable=true policyset.set1.p1.default.params.ldap.basedn=ou=people,dc=host,dc=example,dc=com policyset.set1.p1.default.params.ldapStringAttributes=uid,mail policyset.set1.p1.default.params.ldap.ldapconn.host=localhost.example.com policyset.set1.p1.default.params.ldap.ldapconn.port=389", "vim etc/pki/instance_name/ca/CS.cfg profile.list=caUserCert,...,caManualRenewal,tpsExampleEnrollProfile profile.caTokenMSLoginEnrollment.class_id=caUserCertEnrollImpl profile.caTokenMSLoginEnrollment.config=/var/lib/pki/instance_name/profiles/ca/tpsExampleEnrollProfile.cfg", "vim /etc/pki/instance_name/tps/CS.cfg op.enroll.userKey.keyGen.signing.ca.profileId=tpsExampleEnrollProfile", "systemctl restart pki-tomcatd-nuxwdog@instance_name.service", "ls -l /var/lib/pki/instance_name/ca/profiles/ca/ | grep -v \"CMC\"", "visible=false enable=false", "ls -l /var/lib/pki/instance_name/ca/profiles/ca/*CMC *", "visible=false" ]
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/planning_installation_and_deployment_guide_common_criteria_edition/certificate_profiles_configuration
Chapter 4. File Transfer Protocol
Chapter 4. File Transfer Protocol File Transfer Protocol (FTP) is one of the oldest and most commonly used protocols found on the Internet today. Its purpose is to reliably transfer files between computer hosts on a network without requiring the user to log directly into the remote host or have knowledge of how to use the remote system. It allows users to access files on remote systems using a standard set of simple commands. The Very Secure FTP Daemon ( vsftpd ) is designed from the ground up to be fast, stable, and, most importantly, secure. Its ability to handle large numbers of connections efficiently and securely is why vsftpd is the only stand-alone FTP distributed with Red Hat Enterprise Linux. In Red Hat Enterprise Linux, the vsftpd package provides the Very Secure FTP daemon. Run the rpm -q vsftpd command to see if vsftpd is installed: If you want an FTP server and the vsftpd package is not installed, run the following command as the root user to install it: 4.1. FTP and SELinux The vsftpd FTP daemon runs confined by default. SELinux policy defines how vsftpd interacts with files, processes, and with the system in general. For example, when an authenticated user logs in via FTP, they cannot read from or write to files in their home directories: SELinux prevents vsftpd from accessing user home directories by default. Also, by default, vsftpd does not have access to NFS or CIFS volumes, and anonymous users do not have write access, even if such write access is configured in /etc/vsftpd/vsftpd.conf . Booleans can be enabled to allow the previously mentioned access. The following example demonstrates an authenticated user logging in, and an SELinux denial when trying to view files in their home directory: Run the rpm -q ftp command to see if the ftp package is installed. If it is not, run the yum install ftp command as the root user to install it. Run the rpm -q vsftpd command to see if the vsftpd package is installed. If it is not, run the yum install vsftpd command as the root user to install it. In Red Hat Enterprise Linux, vsftpd only allows anonymous users to log in by default. To allow authenticated users to log in, edit /etc/vsftpd/vsftpd.conf as the root user. Make sure the local_enable=YES option is uncommented: Run the service vsftpd start command as the root user to start vsftpd . If the service was running before editing vsftpd.conf , run the service vsftpd restart command as the root user to apply the configuration changes: Run the ftp localhost command as the user you are currently logged in with. When prompted for your name, make sure your user name is displayed. If the correct user name is displayed, press Enter , otherwise, enter the correct user name: An SELinux denial similar to the following is logged: Access to home directories has been denied by SELinux. This can be fixed by activating the ftp_home_dir Boolean. Enable this ftp_home_dir Boolean by running the following command as the root user: Note Do not use the -P option if you do not want changes to persist across reboots. Try to log in again. Now that SELinux is allowing access to home directories via the ftp_home_dir Boolean, logging in will succeed.
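The ftp_home_dir Boolean used in this procedure is one of several FTP-related Booleans; the sketch below shows how to review them before and after making the change. The exact list returned by getsebool varies between policy versions.

# List the current FTP-related Boolean settings.
getsebool -a | grep ftp
# Allow authenticated FTP users to read and write their home directories,
# and persist the change across reboots.
setsebool -P ftp_home_dir=1
# Confirm the new value.
getsebool ftp_home_dir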
[ "~]USD rpm -q vsftpd", "~]# yum install vsftpd", "Uncomment this to allow local users to log in. local_enable=YES", "~]# service vsftpd start Starting vsftpd for vsftpd: [ OK ]", "~] ftp localhost Connected to localhost (127.0.0.1). 220 (vsFTPd 2.1.0) Name (localhost: username ): 331 Please specify the password. Password: Enter your password 500 OOPS: cannot change directory:/home/ username Login failed. ftp>", "setroubleshoot: SELinux is preventing the ftp daemon from reading users home directories ( username ). For complete SELinux messages. run sealert -l c366e889-2553-4c16-b73f-92f36a1730ce", "~]# setsebool -P ftp_home_dir=1" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/managing_confined_services/chap-managing_confined_services-file_transfer_protocol