19.4. mount Command References
19.4. mount Command References The following resources provide in-depth documentation on the subject. Manual Page Documentation man 8 mount : The manual page for the mount command, providing full documentation on its usage. man 8 umount : The manual page for the umount command, providing full documentation on its usage. man 8 findmnt : The manual page for the findmnt command, providing full documentation on its usage. man 5 fstab : The manual page providing a thorough description of the /etc/fstab file format. Useful Websites Shared subtrees - An LWN article covering the concept of shared subtrees.
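As a quick, hypothetical illustration of the commands these manual pages document (the device /dev/sdb1 and mount point /mnt/data are examples only, not part of the referenced material):
mount /dev/sdb1 /mnt/data
findmnt /mnt/data
umount /mnt/data
Consult the manual pages above for the full set of options and supported file system types.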
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/storage_administration_guide/sect-Using_the_mount_Command-Additional_Resources
Chapter 60. quota
Chapter 60. quota This chapter describes the commands under the quota command. 60.1. quota list List quotas for all projects with non-default quota values or list detailed quota information for the requested project Usage: Table 60.1. Command arguments Value Summary -h, --help Show this help message and exit --project <project> List quotas for this project <project> (name or id) --detail Show details about quotas usage --compute List compute quota --volume List volume quota --network List network quota Table 60.2. Output formatter options Value Summary -f {csv,json,table,value,yaml}, --format {csv,json,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated --sort-column SORT_COLUMN Specify the column(s) to sort the data (columns specified first have a priority, non-existing columns are ignored), can be repeated Table 60.3. CSV formatter options Value Summary --quote {all,minimal,none,nonnumeric} When to include quotes, defaults to nonnumeric Table 60.4. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 60.5. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show. 60.2. quota set Set quotas for project or class Usage: Table 60.6. Positional arguments Value Summary <project/class> Set quotas for this project or class (name/id) Table 60.7. Command arguments Value Summary -h, --help Show this help message and exit --class Set quotas for <class> --cores <cores> New value for the cores quota --fixed-ips <fixed-ips> New value for the fixed-ips quota --injected-file-size <injected-file-size> New value for the injected-file-size quota --injected-path-size <injected-path-size> New value for the injected-path-size quota --injected-files <injected-files> New value for the injected-files quota --instances <instances> New value for the instances quota --key-pairs <key-pairs> New value for the key-pairs quota --properties <properties> New value for the properties quota --ram <ram> New value for the ram quota --server-groups <server-groups> New value for the server-groups quota --server-group-members <server-group-members> New value for the server-group-members quota --backups <backups> New value for the backups quota --backup-gigabytes <backup-gigabytes> New value for the backup-gigabytes quota --gigabytes <gigabytes> New value for the gigabytes quota --per-volume-gigabytes <per-volume-gigabytes> New value for the per-volume-gigabytes quota --snapshots <snapshots> New value for the snapshots quota --volumes <volumes> New value for the volumes quota --floating-ips <floating-ips> New value for the floating-ips quota --secgroup-rules <secgroup-rules> New value for the secgroup-rules quota --secgroups <secgroups> New value for the secgroups quota --networks <networks> New value for the networks quota --subnets <subnets> New value for the subnets quota --ports <ports> New value for the ports quota --routers <routers> New value for the routers quota --rbac-policies <rbac-policies> New value for the rbac-policies quota --subnetpools <subnetpools> New value for the subnetpools quota --volume-type <volume-type> Set quotas for a specific <volume-type>
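As an illustrative, hypothetical invocation of quota set using the arguments documented above (demo stands in for a real project name, and the values are examples only):
openstack quota set --instances 20 --cores 40 --ram 51200 demo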
60.3. quota show Show quotas for project or class. Specify ``--os-compute-api-version 2.50`` or higher to see ``server-groups`` and ``server-group-members`` output for a given quota class. Usage: Table 60.8. Positional arguments Value Summary <project/class> Show quotas for this project or class (name or id) Table 60.9. Command arguments Value Summary -h, --help Show this help message and exit --class Show quotas for <class> --default Show default quotas for <project> Table 60.10. Output formatter options Value Summary -f {json,shell,table,value,yaml}, --format {json,shell,table,value,yaml} The output format, defaults to table -c COLUMN, --column COLUMN Specify the column(s) to include, can be repeated Table 60.11. JSON formatter options Value Summary --noindent Whether to disable indenting the json Table 60.12. Shell formatter options Value Summary --prefix PREFIX Add a prefix to all variable names Table 60.13. Table formatter options Value Summary --max-width <integer> Maximum display width, <1 to disable. you can also use the CLIFF_MAX_TERM_WIDTH environment variable, but the parameter takes precedence. --fit-width Fit the table to the display width. implied if --max- width greater than 0. Set the environment variable CLIFF_FIT_WIDTH=1 to always enable --print-empty Print empty table if there is no data to show.
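Illustrative, hypothetical invocations of the list and show subcommands described above, again using demo as a placeholder project name:
openstack quota list --compute --detail
openstack quota show demo
openstack quota show --default demo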
[ "openstack quota list [-h] [-f {csv,json,table,value,yaml}] [-c COLUMN] [--quote {all,minimal,none,nonnumeric}] [--noindent] [--max-width <integer>] [--fit-width] [--print-empty] [--sort-column SORT_COLUMN] [--project <project>] [--detail] (--compute | --volume | --network)", "openstack quota set [-h] [--class] [--cores <cores>] [--fixed-ips <fixed-ips>] [--injected-file-size <injected-file-size>] [--injected-path-size <injected-path-size>] [--injected-files <injected-files>] [--instances <instances>] [--key-pairs <key-pairs>] [--properties <properties>] [--ram <ram>] [--server-groups <server-groups>] [--server-group-members <server-group-members>] [--backups <backups>] [--backup-gigabytes <backup-gigabytes>] [--gigabytes <gigabytes>] [--per-volume-gigabytes <per-volume-gigabytes>] [--snapshots <snapshots>] [--volumes <volumes>] [--floating-ips <floating-ips>] [--secgroup-rules <secgroup-rules>] [--secgroups <secgroups>] [--networks <networks>] [--subnets <subnets>] [--ports <ports>] [--routers <routers>] [--rbac-policies <rbac-policies>] [--subnetpools <subnetpools>] [--volume-type <volume-type>] <project/class>", "openstack quota show [-h] [-f {json,shell,table,value,yaml}] [-c COLUMN] [--noindent] [--prefix PREFIX] [--max-width <integer>] [--fit-width] [--print-empty] [--class | --default] [<project/class>]" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.2/html/command_line_interface_reference/quota
Chapter 17. Telemetry
Chapter 17. Telemetry 17.1. Telemetry Red Hat uses telemetry to collect anonymous usage data from Migration Toolkit for Virtualization (MTV) installations to help us improve the usability and efficiency of MTV. MTV collects the following data: Migration plan status: The number of migrations. Includes those that failed, succeeded, or were canceled. Provider: The number of migrations per provider. Includes Red Hat Virtualization, vSphere, OpenStack, OVA, and OpenShift Virtualization providers. Mode: The number of migrations by mode. Includes cold and warm migrations. Target: The number of migrations by target. Includes local and remote migrations. Plan ID: The ID number of the migration plan. The number is assigned by MTV. Metrics are calculated every 10 seconds and are reported per week, per month, and per year.
null
https://docs.redhat.com/en/documentation/migration_toolkit_for_virtualization/2.7/html/installing_and_using_the_migration_toolkit_for_virtualization/telemetry_mtv
2.2. Consistent Multipath Device Names in a Cluster
2.2. Consistent Multipath Device Names in a Cluster When the user_friendly_names configuration option is set to yes , the name of the multipath device is unique to a node, but it is not guaranteed to be the same on all nodes using the multipath device. Similarly, if you set the alias option for a device in the multipaths section of the multipath.conf configuration file, the name is not automatically consistent across all nodes in the cluster. This should not cause any difficulties if you use LVM to create logical devices from the multipath device, but if you require that your multipath device names be consistent in every node it is recommended that you not set the user_friendly_names option to yes and that you not configure aliases for the devices. By default, if you do not set user_friendly_names to yes or configure an alias for a device, a device name will be the WWID for the device, which is always the same. If you want the system-defined user-friendly names to be consistent across all nodes in the cluster, however, you can follow this procedure: Set up all of the multipath devices on one machine. Disable all multipath devices on other machines by running the following commands: Copy the /etc/multipath/bindings file from the first machine to all the other machines in the cluster. Re-enable the multipathd daemon on all the other machines in the cluster by running the following command: If you add a new device, you will need to repeat this process. Similarly, if you configure an alias for a device that you would like to be consistent across the nodes in the cluster, you should ensure that the /etc/multipath.conf file is the same for each node in the cluster by following the same procedure: Configure the aliases for the multipath devices in the multipath.conf file on one machine. Disable all multipath devices on other machines by running the following commands: Copy the /etc/multipath.conf file from the first machine to all the other machines in the cluster. Re-enable the multipathd daemon on all the other machines in the cluster by running the following command: When you add a new device you will need to repeat this process.
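A minimal sketch of the first procedure, assuming two additional cluster nodes reachable as node2 and node3 (the hostnames are examples only). On each of the other machines:
systemctl stop multipathd.service
multipath -F
Then, from the first machine:
scp /etc/multipath/bindings node2:/etc/multipath/bindings
scp /etc/multipath/bindings node3:/etc/multipath/bindings
Finally, on each of the other machines:
systemctl start multipathd.service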
[ "systemctl stop multipathd.service multipath -F", "systemctl start multipathd.service", "systemctl stop multipathd.service multipath -F", "systemctl start multipathd.service" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/dm_multipath/multipath_consistent_names
Chapter 1. Introduction to Content Management
Chapter 1. Introduction to Content Management In the context of Satellite, content is defined as the software installed on systems. This includes, but is not limited to, the base operating system, middleware services, and end-user applications. With Red Hat Satellite, you can manage the various types of content for Red Hat Enterprise Linux systems at every stage of the software life cycle. Red Hat Satellite manages the following content: Subscription management This provides organizations with a method to manage their Red Hat subscription information. Content management This provides organizations with a method to store Red Hat content and organize it in various ways.
null
https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/managing_content/introduction_to_content_management_content-management
6.4.2. Moving Extents to a New Disk
6.4.2. Moving Extents to a New Disk In this example, the logical volume is distributed across three physical volumes in the volume group myvg as follows: We want to move the extents of /dev/sdb1 to a new device, /dev/sdd1 . 6.4.2.1. Creating the New Physical Volume Create a new physical volume from /dev/sdd1 .
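A minimal sketch of how the migration typically continues on this example system, assuming /dev/sdd1 is added to the volume group myvg before the data is moved off /dev/sdb1 (the volume group and device names follow the example above):
vgextend myvg /dev/sdd1
pvmove /dev/sdb1 /dev/sdd1
vgreduce myvg /dev/sdb1
The pvs -o+pv_used listing shown in the commands below can be rerun after each step to confirm where the extents reside.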
[ "pvs -o+pv_used PV VG Fmt Attr PSize PFree Used /dev/sda1 myvg lvm2 a- 17.15G 7.15G 10.00G /dev/sdb1 myvg lvm2 a- 17.15G 15.15G 2.00G /dev/sdc1 myvg lvm2 a- 17.15G 15.15G 2.00G", "pvcreate /dev/sdd1 Physical volume \"/dev/sdd1\" successfully created" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/move_new_ex4
Chapter 8. Booting hosts with the discovery image
Chapter 8. Booting hosts with the discovery image The Assisted Installer uses an initial image to run an agent that performs hardware and network validations before attempting to install OpenShift Container Platform. You can boot hosts with the discovery image using three methods: USB drive Redfish virtual media iPXE 8.1. Creating an ISO image on a USB drive You can install the Assisted Installer agent using a USB drive that contains the discovery ISO image. Starting the host with the USB drive prepares the host for the software installation. Procedure On the administration host, insert a USB drive into a USB port. Copy the ISO image to the USB drive, for example: # dd if=<path_to_iso> of=<path_to_usb> status=progress where: <path_to_iso> is the relative path to the downloaded discovery ISO file, for example, discovery.iso . <path_to_usb> is the location of the connected USB drive, for example, /dev/sdb . After the ISO is copied to the USB drive, you can use the USB drive to install the Assisted Installer agent on the cluster host. 8.2. Booting with a USB drive To register nodes with the Assisted Installer using a bootable USB drive, use the following procedure. Procedure Insert the RHCOS discovery ISO USB drive into the target host. Configure the boot drive order in the server firmware settings to boot from the attached discovery ISO, and then reboot the server. Wait for the host to boot up. For UI installations, on the administration host, return to the browser. Wait for the host to appear in the list of discovered hosts. For API installations, refresh the token, check the enabled host count, and gather the host IDs: $ source refresh-token $ curl -s -X GET "https://api.openshift.com/api/assisted-install/v2/clusters/$CLUSTER_ID" \ --header "Content-Type: application/json" \ -H "Authorization: Bearer $API_TOKEN" \ | jq '.enabled_host_count' $ curl -s -X GET "https://api.openshift.com/api/assisted-install/v2/clusters/$CLUSTER_ID" \ --header "Content-Type: application/json" \ -H "Authorization: Bearer $API_TOKEN" \ | jq '.host_networks[].host_ids' Example output [ "1062663e-7989-8b2d-7fbb-e6f4d5bb28e5" ] 8.3. Booting from an HTTP-hosted ISO image using the Redfish API You can provision hosts in your network using ISOs that you install using the Redfish Baseboard Management Controller (BMC) API. Prerequisites Download the installation Red Hat Enterprise Linux CoreOS (RHCOS) ISO. Procedure Copy the ISO file to an HTTP server accessible in your network. Boot the host from the hosted ISO file, for example: Call the Redfish API to set the hosted ISO as the VirtualMedia boot media by running the following command: $ curl -k -u <bmc_username>:<bmc_password> \ -d '{"Image":"<hosted_iso_file>", "Inserted": true}' \ -H "Content-Type: application/json" \ -X POST <host_bmc_address>/redfish/v1/Managers/iDRAC.Embedded.1/VirtualMedia/CD/Actions/VirtualMedia.InsertMedia Where: <bmc_username>:<bmc_password> Is the username and password for the target host BMC. <hosted_iso_file> Is the URL for the hosted installation ISO, for example: http://webserver.example.com/rhcos-live-minimal.iso . The ISO must be accessible from the target host machine. <host_bmc_address> Is the BMC IP address of the target host machine.
Set the host to boot from the VirtualMedia device by running the following command: $ curl -k -u <bmc_username>:<bmc_password> \ -X PATCH -H 'Content-Type: application/json' \ -d '{"Boot": {"BootSourceOverrideTarget": "Cd", "BootSourceOverrideMode": "UEFI", "BootSourceOverrideEnabled": "Once"}}' \ <host_bmc_address>/redfish/v1/Systems/System.Embedded.1 Reboot the host: $ curl -k -u <bmc_username>:<bmc_password> \ -d '{"ResetType": "ForceRestart"}' \ -H 'Content-type: application/json' \ -X POST <host_bmc_address>/redfish/v1/Systems/System.Embedded.1/Actions/ComputerSystem.Reset Optional: If the host is powered off, you can boot it using the {"ResetType": "On"} switch. Run the following command: $ curl -k -u <bmc_username>:<bmc_password> \ -d '{"ResetType": "On"}' -H 'Content-type: application/json' \ -X POST <host_bmc_address>/redfish/v1/Systems/System.Embedded.1/Actions/ComputerSystem.Reset 8.4. Booting hosts using iPXE The Assisted Installer provides an iPXE script including all the artifacts needed to boot the discovery image for an infrastructure environment. Due to the limitations of the current HTTPS implementation of iPXE, it is recommended to download the needed artifacts and expose them on an HTTP server. Currently, even though iPXE supports the HTTPS protocol, the supported algorithms are old and not recommended. The full list of supported ciphers is in https://ipxe.org/crypto . Prerequisites You have created an infrastructure environment by using the API or you have created a cluster by using the UI. You have your infrastructure environment ID exported in your shell as $INFRA_ENV_ID . You have credentials to use when accessing the API and have exported a token as $API_TOKEN in your shell. You have an HTTP server to host the images. Note When configuring via the UI, the $INFRA_ENV_ID and $API_TOKEN variables are already provided. Note IBM Power only supports PXE, which also requires: You have installed grub2 at /var/lib/tftpboot You have installed DHCP and TFTP for PXE Procedure Download the iPXE script directly from the UI, or get the iPXE script from the Assisted Installer: $ curl \ --silent \ --header "Authorization: Bearer $API_TOKEN" \ https://api.openshift.com/api/assisted-install/v2/infra-envs/$INFRA_ENV_ID/downloads/files?file_name=ipxe-script > ipxe-script Example #!ipxe initrd --name initrd http://api.openshift.com/api/assisted-images/images/<infra_env_id>/pxe-initrd?arch=x86_64&image_token=<token_string>&version=4.10 kernel http://api.openshift.com/api/assisted-images/boot-artifacts/kernel?arch=x86_64&version=4.10 initrd=initrd coreos.live.rootfs_url=http://api.openshift.com/api/assisted-images/boot-artifacts/rootfs?arch=x86_64&version=4.10 random.trust_cpu=on rd.luks.options=discard ignition.firstboot ignition.platform.id=metal console=tty1 console=ttyS1,115200n8 coreos.inst.persistent-kargs="console=tty1 console=ttyS1,115200n8" boot Download the required artifacts by extracting URLs from the ipxe-script . Download the initial RAM disk: $ awk '/^initrd /{print $NF}' ipxe-script | xargs curl -o initrd.img Download the Linux kernel: $ awk '/^kernel /{print $2}' ipxe-script | xargs curl -o kernel Download the root filesystem: $ grep ^kernel ipxe-script | xargs -n1 | grep ^coreos.live.rootfs_url | cut -d = -f 2- | xargs curl -o rootfs.img Change the URLs to the different artifacts in the ipxe-script to match your local HTTP server.
For example: #!ipxe set webserver http://192.168.0.1 initrd --name initrd $webserver/initrd.img kernel $webserver/kernel initrd=initrd coreos.live.rootfs_url=$webserver/rootfs.img random.trust_cpu=on rd.luks.options=discard ignition.firstboot ignition.platform.id=metal console=tty1 console=ttyS1,115200n8 coreos.inst.persistent-kargs="console=tty1 console=ttyS1,115200n8" boot Optional: When installing with RHEL KVM on IBM zSystems, you must boot the host by specifying additional kernel arguments random.trust_cpu=on rd.luks.options=discard ignition.firstboot ignition.platform.id=metal console=tty1 console=ttyS1,115200n8 coreos.inst.persistent-kargs="console=tty1 console=ttyS1,115200n8" Note If you install with iPXE on RHEL KVM, in some circumstances, the VMs on the VM host are not rebooted on first boot and need to be started manually. Optional: When installing on IBM Power, you must download the initramfs, kernel, and rootfs as follows: Copy initrd.img and kernel.img to the PXE directory /var/lib/tftpboot/rhcos Copy rootfs.img to the HTTPD directory /var/www/html/install Add the following entry to /var/lib/tftpboot/boot/grub2/grub.cfg : if [ ${net_default_mac} == fa:1d:67:35:13:20 ]; then default=0 fallback=1 timeout=1 menuentry "CoreOS (BIOS)" { echo "Loading kernel" linux "/rhcos/kernel.img" ip=dhcp rd.neednet=1 ignition.platform.id=metal ignition.firstboot coreos.live.rootfs_url=http://9.114.98.8:8000/install/rootfs.img echo "Loading initrd" initrd "/rhcos/initrd.img" } fi
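One simple way to expose the downloaded artifacts over HTTP, as the procedure above assumes, is a throwaway web server on the administration host (the directory and port here are examples only and are not part of the documented procedure):
mkdir -p /opt/ipxe-artifacts
cp initrd.img kernel rootfs.img /opt/ipxe-artifacts/
python3 -m http.server 8080 --directory /opt/ipxe-artifacts
The $webserver variable in the edited ipxe-script would then point to http://<admin_host_ip>:8080.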
[ "dd if=<path_to_iso> of=<path_to_usb> status=progress", "source refresh-token", "curl -s -X GET \"https://api.openshift.com/api/assisted-install/v2/clusters/USDCLUSTER_ID\" --header \"Content-Type: application/json\" -H \"Authorization: Bearer USDAPI_TOKEN\" | jq '.enabled_host_count'", "curl -s -X GET \"https://api.openshift.com/api/assisted-install/v2/clusters/USDCLUSTER_ID\" --header \"Content-Type: application/json\" -H \"Authorization: Bearer USDAPI_TOKEN\" | jq '.host_networks[].host_ids'", "[ \"1062663e-7989-8b2d-7fbb-e6f4d5bb28e5\" ]", "curl -k -u <bmc_username>:<bmc_password> -d '{\"Image\":\"<hosted_iso_file>\", \"Inserted\": true}' -H \"Content-Type: application/json\" -X POST <host_bmc_address>/redfish/v1/Managers/iDRAC.Embedded.1/VirtualMedia/CD/Actions/VirtualMedia.InsertMedia", "curl -k -u <bmc_username>:<bmc_password> -X PATCH -H 'Content-Type: application/json' -d '{\"Boot\": {\"BootSourceOverrideTarget\": \"Cd\", \"BootSourceOverrideMode\": \"UEFI\", \"BootSourceOverrideEnabled\": \"Once\"}}' <host_bmc_address>/redfish/v1/Systems/System.Embedded.1", "curl -k -u <bmc_username>:<bmc_password> -d '{\"ResetType\": \"ForceRestart\"}' -H 'Content-type: application/json' -X POST <host_bmc_address>/redfish/v1/Systems/System.Embedded.1/Actions/ComputerSystem.Reset", "curl -k -u <bmc_username>:<bmc_password> -d '{\"ResetType\": \"On\"}' -H 'Content-type: application/json' -X POST <host_bmc_address>/redfish/v1/Systems/System.Embedded.1/Actions/ComputerSystem.Reset", "curl --silent --header \"Authorization: Bearer USDAPI_TOKEN\" https://api.openshift.com/api/assisted-install/v2/infra-envs/USDINFRA_ENV_ID/downloads/files?file_name=ipxe-script > ipxe-script", "#!ipxe initrd --name initrd http://api.openshift.com/api/assisted-images/images/<infra_env_id>/pxe-initrd?arch=x86_64&image_token=<token_string>&version=4.10 kernel http://api.openshift.com/api/assisted-images/boot-artifacts/kernel?arch=x86_64&version=4.10 initrd=initrd coreos.live.rootfs_url=http://api.openshift.com/api/assisted-images/boot-artifacts/rootfs?arch=x86_64&version=4.10 random.trust_cpu=on rd.luks.options=discard ignition.firstboot ignition.platform.id=metal console=tty1 console=ttyS1,115200n8 coreos.inst.persistent-kargs=\"console=tty1 console=ttyS1,115200n8\" boot", "awk '/^initrd /{print USDNF}' ipxe-script | curl -o initrd.img", "awk '/^kernel /{print USD2}' ipxe-script | curl -o kernel", "grep ^kernel ipxe-script | xargs -n1| grep ^coreos.live.rootfs_url | cut -d = -f 2- | curl -o rootfs.img", "#!ipxe set webserver http://192.168.0.1 initrd --name initrd USDwebserver/initrd.img kernel USDwebserver/kernel initrd=initrd coreos.live.rootfs_url=USDwebserver/rootfs.img random.trust_cpu=on rd.luks.options=discard ignition.firstboot ignition.platform.id=metal console=tty1 console=ttyS1,115200n8 coreos.inst.persistent-kargs=\"console=tty1 console=ttyS1,115200n8\" boot", "random.trust_cpu=on rd.luks.options=discard ignition.firstboot ignition.platform.id=metal console=tty1 console=ttyS1,115200n8 coreos.inst.persistent-kargs=\"console=tty1 console=ttyS1,115200n8", "if [ USD{net_default_mac} == fa:1d:67:35:13:20 ]; then default=0 fallback=1 timeout=1 menuentry \"CoreOS (BIOS)\" { echo \"Loading kernel\" linux \"/rhcos/kernel.img\" ip=dhcp rd.neednet=1 ignition.platform.id=metal ignition.firstboot coreos.live.rootfs_url=http://9.114.98.8:8000/install/rootfs.img echo \"Loading initrd\" initrd \"/rhcos/initrd.img\" } fi" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/assisted_installer_for_openshift_container_platform/assembly_booting-hosts-with-the-discovery-image
Chapter 15. Upgrading the Red Hat Quay Operator Overview
Chapter 15. Upgrading the Red Hat Quay Operator Overview The Red Hat Quay Operator follows a synchronized versioning scheme, which means that each version of the Operator is tied to the version of Red Hat Quay and the components that it manages. There is no field on the QuayRegistry custom resource which sets the version of Red Hat Quay to deploy ; the Operator can only deploy a single version of all components. This scheme was chosen to ensure that all components work well together and to reduce the complexity of the Operator needing to know how to manage the lifecycles of many different versions of Red Hat Quay on Kubernetes. 15.1. Operator Lifecycle Manager The Red Hat Quay Operator should be installed and upgraded using the Operator Lifecycle Manager (OLM) . When creating a Subscription with the default approvalStrategy: Automatic , OLM will automatically upgrade the Red Hat Quay Operator whenever a new version becomes available. Warning When the Red Hat Quay Operator is installed by Operator Lifecycle Manager, it might be configured to support automatic or manual upgrades. This option is shown on the Operator Hub page for the Red Hat Quay Operator during installation. It can also be found in the Red Hat Quay Operator Subscription object by the approvalStrategy field. Choosing Automatic means that your Red Hat Quay Operator will automatically be upgraded whenever a new Operator version is released. If this is not desirable, then the Manual approval strategy should be selected. 15.2. Upgrading the Quay Operator The standard approach for upgrading installed Operators on OpenShift Container Platform is documented at Upgrading installed Operators . In general, Red Hat Quay supports upgrades from a prior (N-1) minor version only. For example, upgrading directly from Red Hat Quay 3.0.5 to the latest version of 3.5 is not supported. Instead, users would have to upgrade as follows: 3.0.5 -> 3.1.3, 3.1.3 -> 3.2.2, 3.2.2 -> 3.3.4, 3.3.4 -> 3.4.z, 3.4.z -> 3.5.z This is required to ensure that any necessary database migrations are done correctly and in the right order during the upgrade. In some cases, Red Hat Quay supports direct, single-step upgrades from prior (N-2, N-3) minor versions. This simplifies the upgrade procedure for customers on older releases. The following upgrade paths are supported: 3.3.z -> 3.6.z, 3.4.z -> 3.6.z, 3.4.z -> 3.7.z, 3.5.z -> 3.7.z, 3.7.z -> 3.8.z, 3.6.z -> 3.9.z, 3.7.z -> 3.9.z, 3.8.z -> 3.9.z For users on standalone deployments of Red Hat Quay wanting to upgrade to 3.9, see the Standalone upgrade guide. 15.2.1. Upgrading Quay To update Red Hat Quay from one minor version to the next, for example, 3.4 -> 3.5, you must change the update channel for the Red Hat Quay Operator. For z stream upgrades, for example, 3.4.2 -> 3.4.3, updates are released in the major-minor channel that the user initially selected during install. The procedure to perform a z stream upgrade depends on the approvalStrategy as outlined above. If the approval strategy is set to Automatic , the Quay Operator will upgrade automatically to the newest z stream. This results in automatic, rolling Quay updates to newer z streams with little to no downtime. Otherwise, the update must be manually approved before installation can begin. 15.2.2. Updating Red Hat Quay from 3.8 -> 3.9 Important If your Red Hat Quay deployment is upgrading from one y-stream to the next, for example, from 3.8.10 -> 3.8.11, you must not switch the upgrade channel from stable-3.8 to stable-3.9 .
Changing the upgrade channel in the middle of a y-stream upgrade will prevent Red Hat Quay from upgrading to 3.9. This is a known issue and will be fixed in a future version of Red Hat Quay. When updating Red Hat Quay from 3.8 -> 3.9, the Operator automatically upgrades the existing PostgreSQL databases for Clair and Red Hat Quay from version 10 to version 13. Important This upgrade is irreversible. It is highly recommended that you upgrade to PostgreSQL 13. PostgreSQL 10 had its final release on November 10, 2022 and is no longer supported. For more information, see the PostgreSQL Versioning Policy . By default, Red Hat Quay is configured to remove old persistent volume claims (PVCs) from PostgreSQL 10. To disable this setting and back up old PVCs, you must set POSTGRES_UPGRADE_RETAIN_BACKUP to True in your quay-operator Subscription object. Prerequisites You have installed Red Hat Quay 3.8 on OpenShift Container Platform. 100 GB of free, additional storage. During the upgrade process, additional persistent volume claims (PVCs) are provisioned to store the migrated data. This helps prevent a destructive operation on user data. The upgrade process rolls out PVCs for 50 GB for both the Red Hat Quay database upgrade, and the Clair database upgrade. Procedure Optional. Back up your old PVCs from PostgreSQL 10 by setting POSTGRES_UPGRADE_RETAIN_BACKUP to True in your quay-operator Subscription object. For example: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: quay-operator namespace: quay-enterprise spec: channel: stable-3.8 name: quay-operator source: redhat-operators sourceNamespace: openshift-marketplace config: env: - name: POSTGRES_UPGRADE_RETAIN_BACKUP value: "true" In the OpenShift Container Platform Web Console, navigate to Operators -> Installed Operators . Click on the Red Hat Quay Operator. Navigate to the Subscription tab. Under Subscription details , click Update channel . Select stable-3.9 and save the changes. Check the progress of the new installation under Upgrade status . Wait until the upgrade status changes to 1 installed before proceeding. In your OpenShift Container Platform cluster, navigate to Workloads -> Pods . Existing pods should be terminated, or in the process of being terminated. Wait for the following pods, which are responsible for upgrading the database and alembic migration of existing data, to spin up: clair-postgres-upgrade , quay-postgres-upgrade , and quay-app-upgrade . After the clair-postgres-upgrade , quay-postgres-upgrade , and quay-app-upgrade pods are marked as Completed , the remaining pods for your Red Hat Quay deployment spin up. This takes approximately ten minutes. Verify that the quay-database and clair-postgres pods now use the postgresql-13 image. After the quay-app pod is marked as Running , you can reach your Red Hat Quay registry. 15.2.3. Upgrading directly from 3.3.z or 3.4.z to 3.6 The following section provides important information when upgrading from Red Hat Quay 3.3.z or 3.4.z to 3.6. 15.2.3.1. Upgrading with edge routing enabled Previously, when running a 3.3.z version of Red Hat Quay with edge routing enabled, users were unable to upgrade to 3.4.z versions of Red Hat Quay. This has been resolved with the release of Red Hat Quay 3.6. When upgrading from 3.3.z to 3.6, if tls.termination is set to none in your Red Hat Quay 3.3.z deployment, it will change to HTTPS with TLS edge termination and use the default cluster wildcard certificate.
For example: apiVersion: redhatcop.redhat.io/v1alpha1 kind: QuayEcosystem metadata: name: quay33 spec: quay: imagePullSecretName: redhat-pull-secret enableRepoMirroring: true image: quay.io/quay/quay:v3.3.4-2 ... externalAccess: hostname: quayv33.apps.devcluster.openshift.com tls: termination: none database: ... 15.2.3.2. Upgrading with custom SSL/TLS certificate/key pairs without Subject Alternative Names There is an issue for customers using their own SSL/TLS certificate/key pairs without Subject Alternative Names (SANs) when upgrading from Red Hat Quay 3.3.4 to Red Hat Quay 3.6 directly. During the upgrade to Red Hat Quay 3.6, the deployment is blocked, with the error message from the Red Hat Quay Operator pod logs indicating that the Red Hat Quay SSL/TLS certificate must have SANs. If possible, you should regenerate your SSL/TLS certificates with the correct hostname in the SANs. A possible workaround involves defining an environment variable in the quay-app , quay-upgrade and quay-config-editor pods after upgrade to enable CommonName matching: The GODEBUG=x509ignoreCN=0 flag enables the legacy behavior of treating the CommonName field on X.509 certificates as a hostname when no SANs are present. However, this workaround is not recommended, as it will not persist across a redeployment. 15.2.3.3. Configuring Clair v4 when upgrading from 3.3.z or 3.4.z to 3.6 using the Red Hat Quay Operator To set up Clair v4 on a new Red Hat Quay deployment on OpenShift Container Platform, it is highly recommended to use the Red Hat Quay Operator. By default, the Red Hat Quay Operator will install or upgrade a Clair deployment along with your Red Hat Quay deployment and configure Clair automatically. For instructions about setting up Clair v4 in a disconnected OpenShift Container Platform cluster, see Setting Up Clair on a Red Hat Quay OpenShift deployment . 15.2.4. Swift configuration when upgrading from 3.3.z to 3.6 When upgrading from Red Hat Quay 3.3.z to 3.6.z, some users might receive the following error: Switch auth v3 requires tenant_id (string) in os_options . As a workaround, you can manually update your DISTRIBUTED_STORAGE_CONFIG to add the os_options and tenant_id parameters: DISTRIBUTED_STORAGE_CONFIG: brscale: - SwiftStorage - auth_url: http://****/v3 auth_version: "3" os_options: tenant_id: **** project_name: ocp-base user_domain_name: Default storage_path: /datastorage/registry swift_container: ocp-svc-quay-ha swift_password: ***** swift_user: ***** 15.2.5. Changing the update channel for the Red Hat Quay Operator The subscription of an installed Operator specifies an update channel, which is used to track and receive updates for the Operator. To upgrade the Red Hat Quay Operator to start tracking and receiving updates from a newer channel, change the update channel in the Subscription tab for the installed Red Hat Quay Operator. For subscriptions with an Automatic approval strategy, the upgrade begins automatically and can be monitored on the page that lists the Installed Operators. 15.2.6. Manually approving a pending Operator upgrade If an installed Operator has the approval strategy in its subscription set to Manual , when new updates are released in its current update channel, the update must be manually approved before installation can begin. If the Red Hat Quay Operator has a pending upgrade, this status will be displayed in the list of Installed Operators. 
In the Subscription tab for the Red Hat Quay Operator, you can preview the install plan and review the resources that are listed as available for upgrade. If satisfied, click Approve and return to the page that lists Installed Operators to monitor the progress of the upgrade. The following image shows the Subscription tab in the UI, including the update Channel , the Approval strategy, the Upgrade status and the InstallPlan : The list of Installed Operators provides a high-level summary of the current Quay installation: 15.3. Upgrading a QuayRegistry When the Red Hat Quay Operator starts, it immediately looks for any QuayRegistries it can find in the namespace(s) it is configured to watch. When it finds one, the following logic is used: If status.currentVersion is unset, reconcile as normal. If status.currentVersion equals the Operator version, reconcile as normal. If status.currentVersion does not equal the Operator version, check if it can be upgraded. If it can, perform upgrade tasks and set the status.currentVersion to the Operator's version once complete. If it cannot be upgraded, return an error and leave the QuayRegistry and its deployed Kubernetes objects alone. 15.4. Upgrading a QuayEcosystem Upgrades are supported from versions of the Operator which used the QuayEcosystem API for a limited set of configurations. To ensure that migrations do not happen unexpectedly, a special label needs to be applied to the QuayEcosystem for it to be migrated. A new QuayRegistry will be created for the Operator to manage, but the old QuayEcosystem will remain until manually deleted to ensure that you can roll back and still access Quay in case anything goes wrong. To migrate an existing QuayEcosystem to a new QuayRegistry , use the following procedure. Procedure Add "quay-operator/migrate": "true" to the metadata.labels of the QuayEcosystem . $ oc edit quayecosystem <quayecosystemname> metadata: labels: quay-operator/migrate: "true" Wait for a QuayRegistry to be created with the same metadata.name as your QuayEcosystem . The QuayEcosystem will be marked with the label "quay-operator/migration-complete": "true" . After the status.registryEndpoint of the new QuayRegistry is set, access Red Hat Quay and confirm that all data and settings were migrated successfully. If everything works correctly, you can delete the QuayEcosystem and Kubernetes garbage collection will clean up all old resources. 15.4.1. Reverting QuayEcosystem Upgrade If something goes wrong during the automatic upgrade from QuayEcosystem to QuayRegistry , follow these steps to revert to using the QuayEcosystem : Procedure Delete the QuayRegistry using either the UI or kubectl : $ kubectl delete -n <namespace> quayregistry <quayecosystem-name> If external access was provided using a Route , change the Route to point back to the original Service using the UI or kubectl . Note If your QuayEcosystem was managing the PostgreSQL database, the upgrade process will migrate your data to a new PostgreSQL database managed by the upgraded Operator. Your old database will not be changed or removed but Red Hat Quay will no longer use it once the migration is complete. If there are issues during the data migration, the upgrade process will exit and it is recommended that you continue with your database as an unmanaged component. 15.4.2. Supported QuayEcosystem Configurations for Upgrades The Red Hat Quay Operator reports errors in its logs and in status.conditions if migrating a QuayEcosystem component fails or is unsupported.
All unmanaged components should migrate successfully because no Kubernetes resources need to be adopted and all the necessary values are already provided in Red Hat Quay's config.yaml file. Database Ephemeral database not supported ( volumeSize field must be set). Redis Nothing special needed. External Access Only passthrough Route access is supported for automatic migration. Manual migration required for other methods. LoadBalancer without custom hostname: After the QuayEcosystem is marked with label "quay-operator/migration-complete": "true" , delete the metadata.ownerReferences field from existing Service before deleting the QuayEcosystem to prevent Kubernetes from garbage collecting the Service and removing the load balancer. A new Service will be created with metadata.name format <QuayEcosystem-name>-quay-app . Edit the spec.selector of the existing Service to match the spec.selector of the new Service so traffic to the old load balancer endpoint will now be directed to the new pods. You are now responsible for the old Service ; the Quay Operator will not manage it. LoadBalancer / NodePort / Ingress with custom hostname: A new Service of type LoadBalancer will be created with metadata.name format <QuayEcosystem-name>-quay-app . Change your DNS settings to point to the status.loadBalancer endpoint provided by the new Service . Clair Nothing special needed. Object Storage QuayEcosystem did not have a managed object storage component, so object storage will always be marked as unmanaged. Local storage is not supported. Repository Mirroring Nothing special needed. Additional resources For more details on the Red Hat Quay Operator, see the upstream quay-operator project.
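Referring back to the verification step in the 3.8 -> 3.9 upgrade procedure, one way to confirm which images the database pods are running (the quay-enterprise namespace is an example; substitute the namespace of your deployment):
$ oc get pods -n quay-enterprise -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.containers[*].image}{"\n"}{end}' | grep -E 'quay-database|clair-postgres'
The image column should reference postgresql-13 once the upgrade has completed.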
[ "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: quay-operator namespace: quay-enterprise spec: channel: stable-3.8 name: quay-operator source: redhat-operators sourceNamespace: openshift-marketplace config: env: - name: POSTGRES_UPGRADE_RETAIN_BACKUP value: \"true\"", "apiVersion: redhatcop.redhat.io/v1alpha1 kind: QuayEcosystem metadata: name: quay33 spec: quay: imagePullSecretName: redhat-pull-secret enableRepoMirroring: true image: quay.io/quay/quay:v3.3.4-2 externalAccess: hostname: quayv33.apps.devcluster.openshift.com tls: termination: none database:", "GODEBUG=x509ignoreCN=0", "DISTRIBUTED_STORAGE_CONFIG: brscale: - SwiftStorage - auth_url: http://****/v3 auth_version: \"3\" os_options: tenant_id: **** project_name: ocp-base user_domain_name: Default storage_path: /datastorage/registry swift_container: ocp-svc-quay-ha swift_password: ***** swift_user: *****", "oc edit quayecosystem <quayecosystemname>", "metadata: labels: quay-operator/migrate: \"true\"", "kubectl delete -n <namespace> quayregistry <quayecosystem-name>" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3.9/html/red_hat_quay_operator_features/operator-upgrade
Troubleshooting OpenShift Data Foundation
Troubleshooting OpenShift Data Foundation Red Hat OpenShift Data Foundation 4.18 Instructions on troubleshooting OpenShift Data Foundation Red Hat Storage Documentation Team Abstract Read this document for instructions on troubleshooting Red Hat OpenShift Data Foundation. Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message . Providing feedback on Red Hat documentation We appreciate your input on our documentation. Do let us know how we can make it better. To give feedback, create a Jira ticket: Log in to the Jira . Click Create in the top navigation bar Enter a descriptive title in the Summary field. Enter your suggestion for improvement in the Description field. Include links to the relevant parts of the documentation. Select Documentation in the Components field. Click Create at the bottom of the dialogue. Chapter 1. Overview Troubleshooting OpenShift Data Foundation is written to help administrators understand how to troubleshoot and fix their Red Hat OpenShift Data Foundation cluster. Most troubleshooting tasks focus on either a fix or a workaround. This document is divided into chapters based on the errors that an administrator may encounter: Chapter 2, Downloading log files and diagnostic information using must-gather shows you how to use the must-gather utility in OpenShift Data Foundation. Chapter 4, Commonly required logs for troubleshooting shows you how to obtain commonly required log files for OpenShift Data Foundation. Chapter 7, Troubleshooting alerts and errors in OpenShift Data Foundation shows you how to identify the encountered error and perform required actions. Warning Red Hat does not support running Ceph commands in OpenShift Data Foundation clusters (unless indicated by Red Hat support or Red Hat documentation) as it can cause data loss if you run the wrong commands. In that case, the Red Hat support team is only able to provide commercially reasonable effort and may not be able to restore all the data in case of any data loss. Chapter 2. Downloading log files and diagnostic information using must-gather If Red Hat OpenShift Data Foundation is unable to automatically resolve a problem, use the must-gather tool to collect log files and diagnostic information so that you or Red Hat support can review the problem and determine a solution. Important When Red Hat OpenShift Data Foundation is deployed in external mode, must-gather only collects logs from the OpenShift Data Foundation cluster and does not collect debug data and logs from the external Red Hat Ceph Storage cluster. To collect debug logs from the external Red Hat Ceph Storage cluster, see Red Hat Ceph Storage Troubleshooting guide and contact your Red Hat Ceph Storage Administrator. Prerequisites Optional: If OpenShift Data Foundation is deployed in a disconnected environment, ensure that you mirror the individual must-gather image to the mirror registry available from the disconnected environment. <local-registry> Is the local image mirror registry available for a disconnected OpenShift Container Platform cluster. <path-to-the-registry-config> Is the path to your registry credentials, by default it is ~/.docker/config.json . 
--insecure Add this flag only if the mirror registry is insecure. For more information, see the Red Hat Knowledgebase solutions: How to mirror images between Redhat Openshift registries Failed to mirror OpenShift image repository when private registry is insecure Procedure Run the must-gather command from the client connected to the OpenShift Data Foundation cluster: <directory-name> Is the name of the directory where you want to write the data to. Important For a disconnected environment deployment, replace the image in the --image parameter with the mirrored must-gather image. <local-registry> Is the local image mirror registry available for a disconnected OpenShift Container Platform cluster. This collects the following information in the specified directory: All Red Hat OpenShift Data Foundation cluster related Custom Resources (CRs) with their namespaces. Pod logs of all the Red Hat OpenShift Data Foundation related pods. Output of some standard Ceph commands like Status, Cluster health, and others. 2.1. Variations of must-gather commands If one or more master nodes are not in the Ready state, use --node-name to provide a master node that is Ready so that the must-gather pod can be safely scheduled. If you want to gather information from a specific time: To specify a relative time period for logs gathered, such as within 5 seconds or 2 days, add /usr/bin/gather since=<duration> : To specify a specific time to gather logs after, add /usr/bin/gather since-time=<rfc3339-timestamp> : Replace the example values in these commands as follows: <node-name> If one or more master nodes are not in the Ready state, use this parameter to provide the name of a master node that is still in the Ready state. This avoids scheduling errors by ensuring that the must-gather pod is not scheduled on a master node that is not ready. <directory-name> The directory to store information collected by must-gather . <duration> Specify the period of time to collect information from as a relative duration, for example, 5h (starting from 5 hours ago). <rfc3339-timestamp> Specify the period of time to collect information from as an RFC 3339 timestamp, for example, 2020-11-10T04:00:00+00:00 (starting from 4 am UTC on 11 Nov 2020). 2.2. Running must-gather in modular mode Red Hat OpenShift Data Foundation must-gather can take a long time to run in some environments. To avoid this, run must-gather in modular mode and collect only the resources you require using the following command: Replace < -arg> with one or more of the following arguments to specify the resources for which the must-gather logs are required. -o , --odf ODF logs (includes Ceph resources, namespaced resources, clusterscoped resources and Ceph logs) -d , --dr DR logs -n , --noobaa Noobaa logs -c , --ceph Ceph commands and pod logs -cl , --ceph-logs Ceph daemon, kernel, journal logs, and crash reports -ns , --namespaced namespaced resources -cs , --clusterscoped clusterscoped resources -pc , --provider openshift-storage-client logs from a provider/consumer cluster (includes all the logs under operator namespace, pods, deployments, secrets, configmap, and other resources) -h , --help Print help message Note If no < -arg> is included, must-gather will collect all logs. Chapter 3. Using odf-cli command The odf-cli command and its subcommands help to reduce repetitive tasks and provide a better experience. You can download the odf-cli tool from the customer portal . 3.1.
Subcommands of odf get command odf get recovery-profile Displays the recovery-profile value set for the OSD. By default, an empty value is displayed if the value is not set using the odf set recovery-profile command. After the value is set, the appropriate value is displayed. Example : odf get health Checks the health of the Ceph cluster and common configuration issues. This command checks for the following: At least three mon pods are running on different nodes Mon quorum and Ceph health details At least three OSD pods are running on different nodes The 'Running' status of all pods Placement group status At least one MGR pod is running Example : odf get dr-health In mirroring-enabled clusters, fetches the connection status of a cluster from another cluster. The cephblockpool is queried with mirroring-enabled and If not found will exit with relevant logs. Example : odf get dr-prereq Checks and fetches the status of all the prerequisites to enable Disaster Recovery on a pair of clusters. The command takes the peer cluster name as an argument and uses it to compare current cluster configuration with the peer cluster configuration. Based on the comparison results, the status of the prerequisites is shown. Example 3.2. Subcommands of odf operator command odf operator rook set Sets the provided property value in the rook-ceph-operator config configmap Example : where, ROOK_LOG_LEVEL can be DEBUG , INFO , or WARNING odf operator rook restart Restarts the Rook-Ceph operator Example : odf restore mon-quorum Restores the mon quorum when the majority of mons are not in quorum and the cluster is down. When the majority of mons are lost permanently, the quorum needs to be restored to a remaining good mon in order to bring the Ceph cluster up again. Example : odf restore deleted <crd> Restores the deleted Rook CR when there is still data left for the components, CephClusters, CephFilesystems, and CephBlockPools. Generally, when Rook CR is deleted and there is leftover data, the Rook operator does not delete the CR to ensure data is not lost and the operator does not remove the finalizer on the CR. As a result, the CR is stuck in the Deleting state and cluster health is not ensured. Upgrades are blocked too. This command helps to repair the CR without the cluster downtime. Note A warning message seeking confirmation to restore appears. After confirming, you need to enter continue to start the operator and expand to the full mon-quorum again. Example: 3.3. Configuring debug verbosity of Ceph components You can configure verbosity of Ceph components by enabling or increasing the log debugging for a specific Ceph subsystem from OpenShift Data Foundation. For information about the Ceph subsystems and the log levels that can be updated, see Ceph subsystems default logging level values . Procedure Set log level for Ceph daemons: where ceph-subsystem can be osd , mds , or mon . For example, Chapter 4. Commonly required logs for troubleshooting Some of the commonly used logs for troubleshooting OpenShift Data Foundation are listed, along with the commands to generate them. Generating logs for a specific pod: Generating logs for Ceph or OpenShift Data Foundation cluster: Important Currently, the rook-ceph-operator logs do not provide any information about the failure and this acts as a limitation in troubleshooting issues, see Enabling and disabling debug logs for rook-ceph-operator . 
Generating logs for plugin pods like cephfs or rbd to detect any problem in the PVC mount of the app-pod: To generate logs for all the containers in the CSI pod: Generating logs for cephfs or rbd provisioner pods to detect problems if PVC is not in BOUND state: To generate logs for all the containers in the CSI pod: Generating OpenShift Data Foundation logs using cluster-info command: When using Local Storage Operator, generating logs can be done using cluster-info command: Check the OpenShift Data Foundation operator logs and events. To check the operator logs : <ocs-operator> To check the operator events : Get the OpenShift Data Foundation operator version and channel. Example output : Example output : Confirm that the installplan is created. Verify the image of the components post updating OpenShift Data Foundation. Check the node on which the pod of the component you want to verify the image is running. For Example : Example output: dell-r440-12.gsslab.pnq2.redhat.com is the node-name . Check the image ID. <node-name> Is the name of the node on which the pod of the component you want to verify the image is running. For Example : Take a note of the IMAGEID and map it to the Digest ID on the Rook Ceph Operator page. Additional resources Using must-gather 4.1. Adjusting verbosity level of logs The amount of space consumed by debugging logs can become a significant issue. Red Hat OpenShift Data Foundation offers a method to adjust, and therefore control, the amount of storage to be consumed by debugging logs. In order to adjust the verbosity levels of debugging logs, you can tune the log levels of the containers responsible for container storage interface (CSI) operations. In the container's yaml file, adjust the following parameters to set the logging levels: CSI_LOG_LEVEL - defaults to 5 CSI_SIDECAR_LOG_LEVEL - defaults to 1 The supported values are 0 through 5 . Use 0 for general useful logs, and 5 for trace level verbosity. Chapter 5. Overriding the cluster-wide default node selector for OpenShift Data Foundation post deployment When a cluster-wide default node selector is used for OpenShift Data Foundation, the pods generated by container storage interface (CSI) daemonsets are able to start only on the nodes that match the selector. To be able to use OpenShift Data Foundation from nodes which do not match the selector, override the cluster-wide default node selector by performing the following steps in the command line interface : Procedure Specify a blank node selector for the openshift-storage namespace. Delete the original pods generated by the DaemonSets. Chapter 6. Encryption token is deleted or expired Use this procedure to update the token if the encryption token for your key management system gets deleted or expires. Prerequisites Ensure that you have a new token with the same policy as the deleted or expired token Procedure Log in to OpenShift Container Platform Web Console. Click Workloads -> Secrets To update the ocs-kms-token used for cluster wide encryption: Set the Project to openshift-storage . Click ocs-kms-token -> Actions -> Edit Secret . Drag and drop or upload your encryption token file in the Value field. The token can either be a file or text that can be copied and pasted. Click Save . To update the ceph-csi-kms-token for a given project or namespace with encrypted persistent volumes: Select the required Project . Click ceph-csi-kms-token -> Actions -> Edit Secret . Drag and drop or upload your encryption token file in the Value field. 
The token can either be a file or text that can be copied and pasted. Click Save . Note The token can be deleted only after all the encrypted PVCs using the ceph-csi-kms-token have been deleted. Chapter 7. Troubleshooting alerts and errors in OpenShift Data Foundation 7.1. Resolving alerts and errors Red Hat OpenShift Data Foundation can detect and automatically resolve a number of common failure scenarios. However, some problems require administrator intervention. To know the errors currently firing, check one of the following locations: Observe -> Alerting -> Firing option Home -> Overview -> Cluster tab Storage -> Data Foundation -> Storage System -> storage system link in the pop up -> Overview -> Block and File tab Storage -> Data Foundation -> Storage System -> storage system link in the pop up -> Overview -> Object tab Copy the error displayed and search it in the following section to know its severity and resolution: Name : CephMonVersionMismatch Message : There are multiple versions of storage services running. Description : There are {{ $value }} different versions of Ceph Mon components running. Severity : Warning Resolution : Fix Procedure : Inspect the user interface and log, and verify if an update is in progress. If an update is in progress, this alert is temporary. If an update is not in progress, restart the upgrade process. Name : CephOSDVersionMismatch Message : There are multiple versions of storage services running. Description : There are {{ $value }} different versions of Ceph OSD components running. Severity : Warning Resolution : Fix Procedure : Inspect the user interface and log, and verify if an update is in progress. If an update is in progress, this alert is temporary. If an update is not in progress, restart the upgrade process. Name : CephClusterCriticallyFull Message : Storage cluster is critically full and needs immediate expansion Description : Storage cluster utilization has crossed 85%. Severity : Critical Resolution : Fix Procedure : Remove unnecessary data or expand the cluster. Name : CephClusterNearFull Message : Storage cluster is nearing full. Expansion is required. Description : Storage cluster utilization has crossed 75%. Severity : Warning Resolution : Fix Procedure : Remove unnecessary data or expand the cluster. Name : NooBaaBucketErrorState Message : A NooBaa Bucket Is In Error State Description : A NooBaa bucket {{ $labels.bucket_name }} is in error state for more than 6m Severity : Warning Resolution : Workaround Procedure : Finding the error code of an unhealthy bucket Name : NooBaaNamespaceResourceErrorState Message : A NooBaa Namespace Resource Is In Error State Description : A NooBaa namespace resource {{ $labels.namespace_resource_name }} is in error state for more than 5m Severity : Warning Resolution : Fix Procedure : Finding the error code of an unhealthy namespace store resource Name : NooBaaNamespaceBucketErrorState Message : A NooBaa Namespace Bucket Is In Error State Description : A NooBaa namespace bucket {{ $labels.bucket_name }} is in error state for more than 5m Severity : Warning Resolution : Fix Procedure : Finding the error code of an unhealthy bucket Name : CephMdsMissingReplicas Message : Insufficient replicas for storage metadata service. Description : `Minimum required replicas for storage metadata service not available. Might affect the working of storage cluster.` Severity : Warning Resolution : Contact Red Hat support Procedure : Check for alerts and operator status.
If the issue cannot be identified, contact Red Hat support . Name : CephMgrIsAbsent Message : Storage metrics collector service not available anymore. Description : Ceph Manager has disappeared from Prometheus target discovery. Severity : Critical Resolution : Contact Red Hat support Procedure : Inspect the user interface and log, and verify if an update is in progress. If an update is in progress, this alert is temporary. If an update is not in progress, restart the upgrade process. Once the upgrade is complete, check for alerts and operator status. If the issue persists or cannot be identified, contact Red Hat support . Name : CephNodeDown Message : Storage node {{ USDlabels.node }} went down Description : Storage node {{ USDlabels.node }} went down. Check the node immediately. Severity : Critical Resolution : Contact Red Hat support Procedure : Check which node stopped functioning and its cause. Take appropriate actions to recover the node. If node cannot be recovered: See Replacing storage nodes for Red Hat OpenShift Data Foundation Contact Red Hat support . Name : CephClusterErrorState Message : Storage cluster is in error state Description : Storage cluster is in error state for more than 10m. Severity : Critical Resolution : Contact Red Hat support Procedure : Check for alerts and operator status. If the issue cannot be identified, download log files and diagnostic information using must-gather . Open a Support Ticket with Red Hat Support with an attachment of the output of must-gather. Name : CephClusterWarningState Message : Storage cluster is in degraded state Description : Storage cluster is in warning state for more than 10m. Severity : Warning Resolution : Contact Red Hat support Procedure : Check for alerts and operator status. If the issue cannot be identified, download log files and diagnostic information using must-gather . Open a Support Ticket with Red Hat Support with an attachment of the output of must-gather. Name : CephDataRecoveryTakingTooLong Message : Data recovery is slow Description : Data recovery has been active for too long. Severity : Warning Resolution : Contact Red Hat support Name : CephOSDDiskNotResponding Message : Disk not responding Description : Disk device {{ USDlabels.device }} not responding, on host {{ USDlabels.host }}. Severity : Critical Resolution : Contact Red Hat support Name : CephOSDDiskUnavailable Message : Disk not accessible Description : Disk device {{ USDlabels.device }} not accessible on host {{ USDlabels.host }}. Severity : Critical Resolution : Contact Red Hat support Name : CephPGRepairTakingTooLong Message : Self heal problems detected Description : Self heal operations taking too long. Severity : Warning Resolution : Contact Red Hat support Name : CephMonHighNumberOfLeaderChanges Message : Storage Cluster has seen many leader changes recently. Description : 'Ceph Monitor "{{ USDlabels.job }}": instance {{ USDlabels.instance }} has seen {{ USDvalue printf "%.2f" }} leader changes per minute recently.' Severity : Warning Resolution : Contact Red Hat support Name : CephMonQuorumAtRisk Message : Storage quorum at risk Description : Storage cluster quorum is low. Severity : Critical Resolution : Contact Red Hat support Name : ClusterObjectStoreState Message : Cluster Object Store is in an unhealthy state. Check Ceph cluster health . Description : Cluster Object Store is in an unhealthy state for more than 15s. Check Ceph cluster health . 
Severity : Critical Resolution : Contact Red Hat support Procedure : Check the CephObjectStore CR instance. Contact Red Hat support . Name : CephOSDFlapping Message : Storage daemon osd.x has restarted 5 times in the last 5 minutes. Check the pod events or Ceph status to find out the cause . Description : Storage OSD restarts more than 5 times in 5 minutes . Severity : Critical Resolution : Contact Red Hat support Name : OdfPoolMirroringImageHealth Message : Mirroring image(s) (PV) in the pool <pool-name> are in Warning state for more than a 1m. Mirroring might not work as expected. Description : Disaster recovery is failing for one or a few applications. Severity : Warning Resolution : Contact Red Hat support Name : OdfMirrorDaemonStatus Message : Mirror daemon is unhealthy . Description : Disaster recovery is failing for the entire cluster. Mirror daemon is in an unhealthy status for more than 1m. Mirroring on this cluster is not working as expected. Severity : Critical Resolution : Contact Red Hat support 7.2. Resolving cluster health issues There is a finite set of possible health messages that a Red Hat Ceph Storage cluster can raise that show in the OpenShift Data Foundation user interface. These are defined as health checks which have unique identifiers. The identifier is a terse pseudo-human-readable string that is intended to enable tools to make sense of health checks, and present them in a way that reflects their meaning. Click the health code below for more information and troubleshooting. Health code Description MON_DISK_LOW One or more Ceph Monitors are low on disk space. 7.2.1. MON_DISK_LOW This alert triggers if the available space on the file system storing the monitor database as a percentage, drops below mon_data_avail_warn (default: 15%). This may indicate that some other process or user on the system is filling up the same file system used by the monitor. It may also indicate that the monitor's database is large. Note The paths to the file system differ depending on the deployment of your mons. You can find the path to where the mon is deployed in storagecluster.yaml . Example paths: Mon deployed over PVC path: /var/lib/ceph/mon Mon deployed over hostpath: /var/lib/rook/mon In order to clear up space, view the high usage files in the file system and choose which to delete. To view the files, run: Replace <path-in-the-mon-node> with the path to the file system where mons are deployed. 7.3. Resolving cluster alerts There is a finite set of possible health alerts that a Red Hat Ceph Storage cluster can raise that show in the OpenShift Data Foundation user interface. These are defined as health alerts which have unique identifiers. The identifier is a terse pseudo-human-readable string that is intended to enable tools to make sense of health checks, and present them in a way that reflects their meaning. Click the health alert for more information and troubleshooting. Table 7.1. Types of cluster health alerts Health alert Overview CephClusterCriticallyFull Storage cluster utilization has crossed 80%. CephClusterErrorState Storage cluster is in an error state for more than 10 minutes. CephClusterNearFull Storage cluster is nearing full capacity. Data deletion or cluster expansion is required. CephClusterReadOnly Storage cluster is read-only now and needs immediate data deletion or cluster expansion. CephClusterWarningState Storage cluster is in a warning state for more than 10 mins. CephDataRecoveryTakingTooLong Data recovery has been active for too long. 
CephMdsCacheUsageHigh Ceph metadata service (MDS) cache usage for the MDS daemon has exceeded 95% of the mds_cache_memory_limit . CephMdsCpuUsageHigh Ceph MDS CPU usage for the MDS daemon has exceeded the threshold for adequate performance. CephMdsMissingReplicas Minimum required replicas for storage metadata service not available. Might affect the working of the storage cluster. CephMgrIsAbsent Ceph Manager has disappeared from Prometheus target discovery. CephMgrIsMissingReplicas Ceph manager is missing replicas. This impacts health status reporting and will cause some of the information reported by the ceph status command to be missing or stale. In addition, the Ceph manager is responsible for a manager framework aimed at expanding the existing capabilities of Ceph. CephMonHighNumberOfLeaderChanges The Ceph monitor leader is being changed an unusual number of times. CephMonQuorumAtRisk Storage cluster quorum is low. CephMonQuorumLost The number of monitor pods in the storage cluster is not enough. CephMonVersionMismatch There are different versions of Ceph Mon components running. CephNodeDown A storage node went down. Check the node immediately. The alert should contain the node name. CephOSDCriticallyFull Utilization of back-end Object Storage Device (OSD) has crossed 80%. Free up some space immediately or expand the storage cluster or contact support. CephOSDDiskNotResponding A disk device is not responding on one of the hosts. CephOSDDiskUnavailable A disk device is not accessible on one of the hosts. CephOSDFlapping Ceph storage OSD flapping. CephOSDNearFull One of the OSD storage devices is nearing full. CephOSDSlowOps OSD requests are taking too long to process. CephOSDVersionMismatch There are different versions of Ceph OSD components running. CephPGRepairTakingTooLong Self-healing operations are taking too long. CephPoolQuotaBytesCriticallyExhausted Storage pool quota usage has crossed 90%. CephPoolQuotaBytesNearExhaustion Storage pool quota usage has crossed 70%. OSDCPULoadHigh CPU usage in the OSD container on a specific pod has exceeded 80%, potentially affecting the performance of the OSD. PersistentVolumeUsageCritical Persistent Volume Claim usage has exceeded more than 85% of its capacity. PersistentVolumeUsageNearFull Persistent Volume Claim usage has exceeded more than 75% of its capacity. 7.3.1. CephClusterCriticallyFull Meaning Storage cluster utilization has crossed 80% and will become read-only at 85%. Your Ceph cluster will become read-only once utilization crosses 85%. Free up some space or expand the storage cluster immediately. It is common to see alerts related to Object Storage Device (OSD) full or near full prior to this alert. Impact High Diagnosis Scaling storage Depending on the type of cluster, you need to add storage devices, nodes, or both. For more information, see the Scaling storage guide . Mitigation Deleting information If it is not possible to scale up the cluster, you need to delete information to free up some space. 7.3.2. CephClusterErrorState Meaning This alert reflects that the storage cluster is in ERROR state for an unacceptable amount of time and this impacts the storage availability. Check for other alerts that would have triggered prior to this one and troubleshoot those alerts first.
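One quick way to confirm the reported state from the CLI is to query the CephCluster resource that Rook maintains; this is a sketch that assumes the default openshift-storage namespace and that the CephCluster CRD is present:
# The PHASE and HEALTH columns reflect the overall Ceph state reported by Rook
oc get cephcluster -n openshift-storage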
Impact Critical Diagnosis pod status: pending Check for resource issues, pending Persistent Volume Claims (PVCs), node assignment, and kubelet problems: Set MYPOD as the variable for the pod that is identified as the problem pod: <pod_name> Specify the name of the pod that is identified as the problem pod. Look for the resource limitations or pending PVCs. Otherwise, check for the node assignment: pod status: NOT pending, running, but NOT ready Check the readiness probe: pod status: NOT pending, but NOT running Check for application or image issues: Important If a node was assigned, check the kubelet on the node. If the basic health of the running pods, node affinity and resource availability on the nodes are verified, run the Ceph tools to get the status of the storage components. Mitigation Debugging log information This step is optional. Run the following command to gather the debugging information for the Ceph cluster: 7.3.3. CephClusterNearFull Meaning Storage cluster utilization has crossed 75% and will become read-only at 85%. Free up some space or expand the storage cluster. Impact Critical Diagnosis Scaling storage Depending on the type of cluster, you need to add storage devices, nodes, or both. For more information, see the Scaling storage guide . Mitigation Deleting information If it is not possible to scale up the cluster, you need to delete information in order to free up some space. 7.3.4. CephClusterReadOnly Meaning Storage cluster utilization has crossed 85% and will become read-only now. Free up some space or expand the storage cluster immediately. Impact Critical Diagnosis Scaling storage Depending on the type of cluster, you need to add storage devices, nodes, or both. For more information, see the Scaling storage guide . Mitigation Deleting information If it is not possible to scale up the cluster, you need to delete information in order to free up some space. 7.3.5. CephClusterWarningState Meaning This alert reflects that the storage cluster has been in a warning state for an unacceptable amount of time. While the storage operations will continue to function in this state, it is recommended to fix the errors so that the cluster does not get into an error state. Check for other alerts that might have triggered prior to this one and troubleshoot those alerts first. Impact High Diagnosis pod status: pending Check for resource issues, pending Persistent Volume Claims (PVCs), node assignment, and kubelet problems: Set MYPOD as the variable for the pod that is identified as the problem pod: <pod_name> Specify the name of the pod that is identified as the problem pod. Look for the resource limitations or pending PVCs. Otherwise, check for the node assignment: pod status: NOT pending, running, but NOT ready Check the readiness probe: pod status: NOT pending, but NOT running Check for application or image issues: Important If a node was assigned, check the kubelet on the node. Mitigation Debugging log information This step is optional. Run the following command to gather the debugging information for the Ceph cluster: 7.3.6. CephDataRecoveryTakingTooLong Meaning Data recovery is slow. Check whether all the Object Storage Devices (OSDs) are up and running. Impact High Diagnosis pod status: pending Check for resource issues, pending Persistent Volume Claims (PVCs), node assignment, and kubelet problems: Set MYPOD as the variable for the pod that is identified as the problem pod: <pod_name> Specify the name of the pod that is identified as the problem pod. 
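For example (a minimal sketch; the pod name is a placeholder and the openshift-storage namespace is assumed):
# List pods that are not in the Running phase and pick the problem pod
oc get pods -n openshift-storage --field-selector=status.phase!=Running
# Store the problem pod name for the following steps
MYPOD=<pod_name>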
Look for the resource limitations or pending PVCs. Otherwise, check for the node assignment: pod status: NOT pending, running, but NOT ready Check the readiness probe: pod status: NOT pending, but NOT running Check for application or image issues: Important If a node was assigned, check the kubelet on the node. Mitigation Debugging log information This step is optional. Run the following command to gather the debugging information for the Ceph cluster: 7.3.7. CephMdsCacheUsageHigh Meaning When the storage metadata service (MDS) cannot keep its cache usage under the target threshold specified by mds_health_cache_threshold , or 150% of the cache limit set by mds_cache_memory_limit , the MDS sends a health alert to the monitors indicating the cache is too large. As a result, the MDS related operations become slow. Impact High Diagnosis The MDS tries to stay under a reservation of the mds_cache_memory_limit by trimming unused metadata in its cache and recalling cached items in the client caches. It is possible for the MDS to exceed this limit due to slow recall from clients as a result of multiple clients accessing the files. Mitigation Make sure you have enough memory provisioned for MDS cache. Memory resources for the MDS pods need to be updated in the ocs-storageCluster in order to increase the mds_cache_memory_limit . Run the following command to set the memory of MDS pods, for example, 16GB: OpenShift Data Foundation automatically sets mds_cache_memory_limit to half of the MDS pod memory limit. If the memory is set to 8GB using the command, then the operator sets the MDS cache memory limit to 4GB. 7.3.8. CephMdsCpuUsageHigh Meaning The storage metadata service (MDS) serves filesystem metadata. The MDS is crucial for any file creation, rename, deletion, and update operations. MDS by default is allocated two or three CPUs. This does not cause issues as long as there are not too many metadata operations. When the metadata operation load increases enough to trigger this alert, it means the default CPU allocation is unable to cope with the load. You need to increase the CPU allocation or run multiple active MDS servers. Impact High Diagnosis Click Workloads -> Pods . Select the corresponding MDS pod and click on the Metrics tab. There you will see the allocated and used CPU. By default, the alert is fired if the used CPU is 67% of allocated CPU for 6 hours. If this is the case, follow the steps in the mitigation section. Mitigation You need to either do a vertical or a horizontal scaling of CPU. For more information, see the Description and Runbook section of the alert. Use the following command to set the number of allocated CPU for MDS, for example, 8: In order to run multiple active MDS servers, use the following command: Make sure you have enough CPU provisioned for MDS depending on the load. Important Always increase the activeMetadataServers by 1 . The scaling of activeMetadataServers works only if you have more than one PV. If there is only one PV that is causing CPU load, look at increasing the CPU resource as described above. 7.3.9. CephMdsMissingReplicas Meaning Minimum required replicas for the storage metadata service (MDS) are not available. MDS is responsible for filesystem metadata. Degradation of the MDS service can affect how the storage cluster works (related to the CephFS storage class) and should be fixed as soon as possible.
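A quick way to see how many MDS replicas are actually running is sketched below; it assumes the openshift-storage namespace and the app=rook-ceph-mds label that Rook applies to MDS pods:
oc get pods -n openshift-storage -l app=rook-ceph-mds -o wide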
Impact High Diagnosis pod status: pending Check for resource issues, pending Persistent Volume Claims (PVCs), node assignment, and kubelet problems: Set MYPOD as the variable for the pod that is identified as the problem pod: <pod_name> Specify the name of the pod that is identified as the problem pod. Look for the resource limitations or pending PVCs. Otherwise, check for the node assignment: pod status: NOT pending, running, but NOT ready Check the readiness probe: pod status: NOT pending, but NOT running Check for application or image issues: Important If a node was assigned, check the kubelet on the node. Mitigation Debugging log information This step is optional. Run the following command to gather the debugging information for the Ceph cluster: 7.3.10. CephMgrIsAbsent Meaning Not having a Ceph manager running impacts the monitoring of the cluster. Persistent Volume Claim (PVC) creation and deletion requests should be resolved as soon as possible. Impact High Diagnosis Verify that the rook-ceph-mgr pod is failing, and restart if necessary. If the Ceph mgr pod restart fails, follow the general pod troubleshooting to resolve the issue. Verify that the Ceph mgr pod is failing: Describe the Ceph mgr pod for more details: <pod_name> Specify the rook-ceph-mgr pod name from the previous step. Analyze the errors related to resource issues. Delete the pod, and wait for the pod to restart: Follow these steps for general pod troubleshooting: pod status: pending Check for resource issues, pending Persistent Volume Claims (PVCs), node assignment, and kubelet problems: Set MYPOD as the variable for the pod that is identified as the problem pod: <pod_name> Specify the name of the pod that is identified as the problem pod. Look for the resource limitations or pending PVCs. Otherwise, check for the node assignment: pod status: NOT pending, running, but NOT ready Check the readiness probe: pod status: NOT pending, but NOT running Check for application or image issues: Important If a node was assigned, check the kubelet on the node. Mitigation Debugging log information This step is optional. Run the following command to gather the debugging information for the Ceph cluster: 7.3.11. CephMgrIsMissingReplicas Meaning To resolve this alert, you need to determine the cause of the disappearance of the Ceph manager and restart if necessary. Impact High Diagnosis pod status: pending Check for resource issues, pending Persistent Volume Claims (PVCs), node assignment, and kubelet problems: Set MYPOD as the variable for the pod that is identified as the problem pod: <pod_name> Specify the name of the pod that is identified as the problem pod. Look for the resource limitations or pending PVCs. Otherwise, check for the node assignment: pod status: NOT pending, running, but NOT ready Check the readiness probe: pod status: NOT pending, but NOT running Check for application or image issues: Important If a node was assigned, check the kubelet on the node. Mitigation Debugging log information This step is optional. Run the following command to gather the debugging information for the Ceph cluster: 7.3.12. CephMonHighNumberOfLeaderChanges Meaning In a Ceph cluster there is a redundant set of monitor pods that store critical information about the storage cluster. Monitor pods synchronize periodically to obtain information about the storage cluster. The first monitor pod to get the most updated information becomes the leader, and the other monitor pods will start their synchronization process after asking the leader.
A problem in network connection or another kind of problem in one or more monitor pods produces an unusual change of the leader. This situation can negatively affect the storage cluster performance. Impact Medium Important Check for any network issues. If there is a network issue, you need to escalate to the OpenShift Data Foundation team before you proceed with any of the following troubleshooting steps. Diagnosis Print the logs of the affected monitor pod to gather more information about the issue: <rook-ceph-mon-X-yyyy> Specify the name of the affected monitor pod. Alternatively, use the Openshift Web console to open the logs of the affected monitor pod. More information about possible causes is reflected in the log. Perform the general pod troubleshooting steps: pod status: pending Check for resource issues, pending Persistent Volume Claims (PVCs), node assignment, and kubelet problems: Set MYPOD as the variable for the pod that is identified as the problem pod: <pod_name> Specify the name of the pod that is identified as the problem pod. Look for the resource limitations or pending PVCs. Otherwise, check for the node assignment: pod status: NOT pending, running, but NOT ready Check the readiness probe: pod status: NOT pending, but NOT running Check for application or image issues: Important If a node was assigned, check the kubelet on the node. Mitigation Debugging log information This step is optional. Run the following command to gather the debugging information for the Ceph cluster: 7.3.13. CephMonQuorumAtRisk Meaning Multiple MONs work together to provide redundancy. Each of the MONs keeps a copy of the metadata. The cluster is deployed with 3 MONs, and requires 2 or more MONs to be up and running for quorum and for the storage operations to run. If quorum is lost, access to data is at risk. Impact High Diagnosis Restore the Ceph MON Quorum. For more information, see Restoring ceph-monitor quorum in OpenShift Data Foundation in the Troubleshooting guide . If the restoration of the Ceph MON Quorum fails, follow the general pod troubleshooting to resolve the issue. Perform the following for general pod troubleshooting: pod status: pending Check for resource issues, pending Persistent Volume Claims (PVCs), node assignment, and kubelet problems: Set MYPOD as the variable for the pod that is identified as the problem pod: <pod_name> Specify the name of the pod that is identified as the problem pod. Look for the resource limitations or pending PVCs. Otherwise, check for the node assignment: pod status: NOT pending, running, but NOT ready Check the readiness probe: pod status: NOT pending, but NOT running Check for application or image issues: Important If a node was assigned, check the kubelet on the node. Mitigation Debugging log information This step is optional. Run the following command to gather the debugging information for the Ceph cluster: 7.3.14. CephMonQuorumLost Meaning In a Ceph cluster there is a redundant set of monitor pods that store critical information about the storage cluster. Monitor pods synchronize periodically to obtain information about the storage cluster. The first monitor pod to get the most updated information becomes the leader, and the other monitor pods will start their synchronization process after asking the leader. A problem in network connection or another kind of problem in one or more monitor pods produces an unusual change of the leader. This situation can negatively affect the storage cluster performance. 
Impact High Important Check for any network issues. If there is a network issue, you need to escalate to the OpenShift Data Foundation team before you proceed with any of the following troubleshooting steps. Diagnosis Restore the Ceph MON Quorum. For more information, see Restoring ceph-monitor quorum in OpenShift Data Foundation in the Troubleshooting guide . If the restoration of the Ceph MON Quorum fails, follow the general pod troubleshooting to resolve the issue. Alternatively, perform general pod troubleshooting: pod status: pending Check for resource issues, pending Persistent Volume Claims (PVCs), node assignment, and kubelet problems: Set MYPOD as the variable for the pod that is identified as the problem pod: <pod_name> Specify the name of the pod that is identified as the problem pod. Look for the resource limitations or pending PVCs. Otherwise, check for the node assignment: pod status: NOT pending, running, but NOT ready Check the readiness probe: pod status: NOT pending, but NOT running Check for application or image issues: Important If a node was assigned, check the kubelet on the node. Mitigation Debugging log information This step is optional. Run the following command to gather the debugging information for the Ceph cluster: 7.3.15. CephMonVersionMismatch Meaning Typically this alert triggers during an upgrade that is taking a long time. Impact Medium Diagnosis Check the ocs-operator subscription status and the operator pod health to check if an operator upgrade is in progress. Check the ocs-operator subscription health. The status condition types are CatalogSourcesUnhealthy , InstallPlanMissing , InstallPlanPending , and InstallPlanFailed . The status for each type should be False . Example output: The example output shows a False status for type CatalogSourcesUnhealthy , which means that the catalog sources are healthy. Check the OCS operator pod status to see if there is an OCS operator upgrade in progress. If you determine that the `ocs-operator` upgrade is in progress, wait for 5 minutes, and this alert should resolve itself. If you have waited or see a different error status condition, continue troubleshooting. Mitigation Debugging log information This step is optional. Run the following command to gather the debugging information for the Ceph cluster: 7.3.16. CephNodeDown Meaning A node running Ceph pods is down. While storage operations will continue to function as Ceph is designed to deal with a node failure, it is recommended to resolve the issue to minimize the risk of another node going down and affecting storage functions. Impact Medium Diagnosis List all the pods that are running and failing: Important Ensure that you meet the OpenShift Data Foundation resource requirements so that the Object Storage Device (OSD) pods are scheduled on the new node. This may take a few minutes as the Ceph cluster recovers data for the failing but now recovering OSD. To watch this recovery in action, ensure that the OSD pods are correctly placed on the new worker node. Check if the OSD pods that were previously failing are now running: If the previously failing OSD pods have not been scheduled, use the describe command and check the events for reasons the pods were not rescheduled. Describe the events for the failing OSD pod: Find the one or more failing OSD pods: In the events section look for the failure reasons, such as resource requirements not being met. In addition, you can use the rook-ceph-toolbox to watch the recovery. This step is optional, but is helpful for large Ceph clusters.
To access the toolbox, run the following command: From the rsh command prompt, run the following, and watch for "recovery" under the io section: Determine if there are failed nodes. Get the list of worker nodes, and check for the node status: Describe the node which is of the NotReady status to get more information about the failure: Mitigation Debugging log information This step is optional. Run the following command to gather the debugging information for the Ceph cluster: 7.3.17. CephOSDCriticallyFull Meaning One of the Object Storage Devices (OSDs) is critically full. Expand the cluster immediately. Impact High Diagnosis Deleting data to free up storage space You can delete data, and the cluster will resolve the alert through self healing processes. Important This is only applicable to OpenShift Data Foundation clusters that are near or full but not in read-only mode. Read-only mode prevents any changes that include deleting data, that is, deletion of Persistent Volume Claim (PVC), Persistent Volume (PV) or both. Expanding the storage capacity Current storage size is less than 1 TB You must first assess the ability to expand. For every 1 TB of storage added, the cluster needs to have 3 nodes each with a minimum available 2 vCPUs and 8 GiB memory. You can increase the storage capacity to 4 TB via the add-on and the cluster will resolve the alert through self healing processes. If the minimum vCPU and memory resource requirements are not met, you need to add 3 additional worker nodes to the cluster. Mitigation If your current storage size is equal to 4 TB, contact Red Hat support. Optional: Run the following command to gather the debugging information for the Ceph cluster: 7.3.18. CephOSDDiskNotResponding Meaning A disk device is not responding. Check whether all the Object Storage Devices (OSDs) are up and running. Impact Medium Diagnosis pod status: pending Check for resource issues, pending Persistent Volume Claims (PVCs), node assignment, and kubelet problems: Set MYPOD as the variable for the pod that is identified as the problem pod: <pod_name> Specify the name of the pod that is identified as the problem pod. Look for the resource limitations or pending PVCs. Otherwise, check for the node assignment: pod status: NOT pending, running, but NOT ready Check the readiness probe: pod status: NOT pending, but NOT running Check for application or image issues: Important If a node was assigned, check the kubelet on the node. If the basic health of the running pods, node affinity and resource availability on the nodes are verified, run the Ceph tools to get the status of the storage components. Mitigation Debugging log information This step is optional. Run the following command to gather the debugging information for the Ceph cluster: 7.3.19. CephOSDDiskUnavailable Meaning A disk device is not accessible on one of the hosts and its corresponding Object Storage Device (OSD) is marked out by the Ceph cluster. This alert is raised when a Ceph node fails to recover within 10 minutes. Impact High Diagnosis Determine the failed node Get the list of worker nodes, and check for the node status: Describe the node which is of NotReady status to get more information on the failure: 7.3.20. CephOSDFlapping Meaning A storage daemon has restarted 5 times in the last 5 minutes. Check the pod events or Ceph status to find out the cause. Impact High Diagnosis Follow the steps in the Flapping OSDs section of the Red Hat Ceph Storage Troubleshooting Guide. 
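Before working through that guide, it can help to confirm how often the OSD pods have been restarting. The following is a sketch that assumes the openshift-storage namespace and the app=rook-ceph-osd label used by Rook:
# The RESTARTS column shows how often each OSD pod has restarted
oc get pods -n openshift-storage -l app=rook-ceph-osd
# Recent pod events often name the reason for the restarts
oc get events -n openshift-storage --field-selector involvedObject.kind=Pod | grep rook-ceph-osd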
Alternatively, follow the steps for general pod troubleshooting: pod status: pending Check for resource issues, pending Persistent Volume Claims (PVCs), node assignment, and kubelet problems: Set MYPOD as the variable for the pod that is identified as the problem pod: <pod_name> Specify the name of the pod that is identified as the problem pod. Look for the resource limitations or pending PVCs. Otherwise, check for the node assignment: pod status: NOT pending, running, but NOT ready Check the readiness probe: pod status: NOT pending, but NOT running Check for application or image issues: Important If a node was assigned, check the kubelet on the node. If the basic health of the running pods, node affinity and resource availability on the nodes are verified, run the Ceph tools to get the status of the storage components. Mitigation Debugging log information This step is optional. Run the following command to gather the debugging information for the Ceph cluster: 7.3.21. CephOSDNearFull Meaning Utilization of a back-end Object Storage Device (OSD) has crossed 75% on a host. Impact High Mitigation Free up some space in the cluster, expand the storage cluster, or contact Red Hat support. For more information on scaling storage, see the Scaling storage guide . 7.3.22. CephOSDSlowOps Meaning An Object Storage Device (OSD) with slow requests is every OSD that is not able to service the I/O operations per second (IOPS) in the queue within the time defined by the osd_op_complaint_time parameter. By default, this parameter is set to 30 seconds. Impact Medium Diagnosis More information about the slow requests can be obtained using the Openshift console. Access the OSD pod terminal, and run the following commands: Note The number of the OSD is seen in the pod name. For example, in rook-ceph-osd-0-5d86d4d8d4-zlqkx , <0> is the OSD. Mitigation The main causes of the OSDs having slow requests are: Problems with the underlying hardware or infrastructure, such as disk drives, hosts, racks, or network switches. Use the Openshift monitoring console to find the alerts or errors about cluster resources. This can give you an idea about the root cause of the slow operations in the OSD. Problems with the network. These problems are usually connected with flapping OSDs. See the Flapping OSDs section of the Red Hat Ceph Storage Troubleshooting Guide. If it is a network issue, escalate to the OpenShift Data Foundation team. System load. Use the Openshift console to review the metrics of the OSD pod and the node which is running the OSD. Adding or assigning more resources can be a possible solution. 7.3.23. CephOSDVersionMismatch Meaning Typically this alert triggers during an upgrade that is taking a long time. Impact Medium Diagnosis Check the ocs-operator subscription status and the operator pod health to check if an operator upgrade is in progress. Check the ocs-operator subscription health. The status condition types are CatalogSourcesUnhealthy , InstallPlanMissing , InstallPlanPending , and InstallPlanFailed . The status for each type should be False . Example output: The example output shows a False status for type CatalogSourcesUnhealthy , which means that the catalog sources are healthy. Check the OCS operator pod status to see if there is an OCS operator upgrade in progress. If you determine that the `ocs-operator` upgrade is in progress, wait for 5 minutes, and this alert should resolve itself. If you have waited or see a different error status condition, continue troubleshooting. 7.3.24.
CephPGRepairTakingTooLong Meaning Self-healing operations are taking too long. Impact High Diagnosis Check for inconsistent Placement Groups (PGs), and repair them. For more information, see the Red Hat Knowledgebase solution Handle Inconsistent Placement Groups in Ceph . 7.3.25. CephPoolQuotaBytesCriticallyExhausted Meaning One or more pools has reached, or is very close to reaching, its quota. The threshold to trigger this error condition is controlled by the mon_pool_quota_crit_threshold configuration option. Impact High Mitigation Adjust the pool quotas. Run the following commands to fully remove or adjust the pool quotas up or down: Setting the quota value to 0 will disable the quota. 7.3.26. CephPoolQuotaBytesNearExhaustion Meaning One or more pools is approaching a configured fullness threshold. One threshold that can trigger this warning condition is the mon_pool_quota_warn_threshold configuration option. Impact High Mitigation Adjust the pool quotas. Run the following commands to fully remove or adjust the pool quotas up or down: Setting the quota value to 0 will disable the quota. 7.3.27. OSDCPULoadHigh Meaning OSD is a critical component in Ceph storage, responsible for managing data placement and recovery. High CPU usage in the OSD container suggests increased processing demands, potentially leading to degraded storage performance. Impact High Diagnosis Navigate to the Kubernetes dashboard or equivalent. Access the Workloads section and select the relevant pod associated with the OSD alert. Click the Metrics tab to view CPU metrics for the OSD container. Verify that the CPU usage exceeds 80% over a significant period (as specified in the alert configuration). Mitigation If the OSD CPU usage is consistently high, consider taking the following steps: Evaluate the overall storage cluster performance and identify the OSDs contributing to high CPU usage. Increase the number of OSDs in the cluster by adding more new storage devices in the existing nodes or adding new nodes with new storage devices. Review the Scaling storage guide for instructions to help distribute the load and improve overall system performance. 7.3.28. PersistentVolumeUsageCritical Meaning A Persistent Volume Claim (PVC) is nearing its full capacity and may lead to data loss if not attended to promptly. Impact High Mitigation Expand the PVC size to increase the capacity. Log in to the OpenShift Web Console. Click Storage -> PersistentVolumeClaim . Select openshift-storage from the Project drop-down list. On the PVC you want to expand, click Action menu (...) -> Expand PVC . Update the Total size to the desired size. Click Expand . Alternatively, you can delete unnecessary data that may be taking up space. 7.3.29. PersistentVolumeUsageNearFull Meaning A Persistent Volume Claim (PVC) is nearing its full capacity and may lead to data loss if not attended to promptly. Impact High Mitigation Expand the PVC size to increase the capacity. Log in to the OpenShift Web Console. Click Storage -> PersistentVolumeClaim . Select openshift-storage from the Project drop-down list. On the PVC you want to expand, click Action menu (...) -> Expand PVC . Update the Total size to the desired size. Click Expand . Alternatively, you can delete unnecessary data that may be taking up space. 7.4. Finding the error code of an unhealthy bucket Procedure In the OpenShift Web Console, click Storage -> Object Storage . Click the Object Bucket Claims tab. Look for the object bucket claims (OBCs) that are not in Bound state and click on it.
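If you prefer the command line, the same check can be sketched as follows before continuing with the console steps; it assumes the ObjectBucketClaim CRD is installed and exposes the obc short name:
# List OBCs in all namespaces and filter out the healthy (Bound) ones
oc get obc --all-namespaces | grep -v Bound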
Click the Events tab and do one of the following: Look for events that might hint at the current state of the bucket. Click the YAML tab and look for related errors around the status and mode sections of the YAML. If the OBC is in Pending state, the error might appear in the product logs. However, in this case, it is recommended to verify that all the variables provided are accurate. 7.5. Finding the error code of an unhealthy namespace store resource Procedure In the OpenShift Web Console, click Storage -> Object Storage . Click the Namespace Store tab. Look for the namespace store resources that are not in Bound state and click on it. Click the Events tab and do one of the following: Look for events that might hint at the current state of the resource. Click the YAML tab and look for related errors around the status and mode sections of the YAML. 7.6. Recovering pods When a first node (say NODE1 ) goes to NotReady state because of some issue, the hosted pods that are using PVC with ReadWriteOnce (RWO) access mode try to move to the second node (say NODE2 ) but get stuck due to multi-attach error. In such a case, you can recover MON, OSD, and application pods by using the following steps. Procedure Power off NODE1 (from AWS or vSphere side) and ensure that NODE1 is completely down. Force delete the pods on NODE1 by using the following command: 7.7. Recovering from EBS volume detach When an OSD or MON elastic block storage (EBS) volume where the OSD disk resides is detached from the worker Amazon EC2 instance, the volume gets reattached automatically within one or two minutes. However, the OSD pod gets into a CrashLoopBackOff state. To recover and bring back the pod to Running state, you must restart the EC2 instance. 7.8. Enabling and disabling debug logs for rook-ceph-operator Enable the debug logs for the rook-ceph-operator to obtain information about failures that help in troubleshooting issues. Procedure Enabling the debug logs Edit the configmap of the rook-ceph-operator. Add the ROOK_LOG_LEVEL: DEBUG parameter in the rook-ceph-operator-config yaml file to enable the debug logs for rook-ceph-operator. Now, the rook-ceph-operator logs consist of the debug information. Disabling the debug logs Edit the configmap of the rook-ceph-operator. Add the ROOK_LOG_LEVEL: INFO parameter in the rook-ceph-operator-config yaml file to disable the debug logs for rook-ceph-operator. 7.9. Resolving low Ceph monitor count alert The CephMonLowNumber alert is displayed in the notification panel or Alert Center of the OpenShift Web Console to indicate a low Ceph monitor count when your internal mode deployment has five or more nodes, racks, or rooms, and when there are five or more failure domains in the deployment. You can increase the Ceph monitor count to improve the availability of the cluster. Procedure In the CephMonLowNumber alert of the notification panel or Alert Center of OpenShift Web Console, click Configure . In the Configure Ceph Monitor pop up, click Update count. In the pop up, the recommended monitor count depending on the number of failure zones is shown. In the Configure CephMon pop up, update the monitor count value based on the recommended value and click Save changes . 7.10. Troubleshooting unhealthy blocklisted nodes 7.10.1. ODFRBDClientBlocked Meaning This alert indicates that a RADOS Block Device (RBD) client might be blocked by Ceph on a specific node within your Kubernetes cluster.
The blocklisting occurs when the ocs_rbd_client_blocklisted metric reports a value of 1 for the node. Additionally, there are pods in a CreateContainerError state on the same node. The blocklisting can potentially result in the filesystem for the Persistent Volume Claims (PVCs) using RBD becoming read-only. It is crucial to investigate this alert to prevent any disruption to your storage cluster. Impact High Diagnosis The blocklisting of an RBD client can occur due to several factors, such as network or cluster slowness. In certain cases, the exclusive lock contention among three contending clients (workload, mirror daemon, and manager/scheduler) can lead to the blocklist. Mitigation Taint the blocklisted node: In Kubernetes, consider tainting the node that is blocklisted to trigger the eviction of pods to another node. This approach relies on the assumption that the unmounting/unmapping process progresses gracefully. Once the pods have been successfully evicted, the blocklisted node can be untainted, allowing the blocklist to be cleared. The pods can then be moved back to the untainted node. Reboot the blocklisted node: If tainting the node and evicting the pods do not resolve the blocklisting issue, a reboot of the blocklisted node can be attempted. This step may help alleviate any underlying issues causing the blocklist and restore normal functionality. Important Investigating and resolving the blocklist issue promptly is essential to avoid any further impact on the storage cluster. Chapter 8. Checking for Local Storage Operator deployments Red Hat OpenShift Data Foundation clusters with Local Storage Operator are deployed using local storage devices. To find out if your existing cluster with OpenShift Data Foundation was deployed using local storage devices, use the following procedure: Prerequisites OpenShift Data Foundation is installed and running in the openshift-storage namespace. Procedure By checking the storage class associated with your OpenShift Data Foundation cluster's persistent volume claims (PVCs), you can tell if your cluster was deployed using local storage devices. Check the storage class associated with OpenShift Data Foundation cluster's PVCs with the following command: Check the output. For clusters with Local Storage Operators, the PVCs associated with ocs-deviceset use the storage class localblock . The output looks similar to the following: Additional Resources Deploying OpenShift Data Foundation using local storage devices on VMware Deploying OpenShift Data Foundation using local storage devices on Red Hat Virtualization Deploying OpenShift Data Foundation using local storage devices on bare metal Deploying OpenShift Data Foundation using local storage devices on IBM Power Chapter 9. Removing failed or unwanted Ceph Object Storage devices The failed or unwanted Ceph OSDs (Object Storage Devices) affects the performance of the storage infrastructure. Hence, to improve the reliability and resilience of the storage cluster, you must remove the failed or unwanted Ceph OSDs. If you have any failed or unwanted Ceph OSDs to remove: Verify the Ceph health status. For more information see: Verifying Ceph cluster is healthy . Based on the provisioning of the OSDs, remove failed or unwanted Ceph OSDs. See: Removing failed or unwanted Ceph OSDs in dynamically provisioned Red Hat OpenShift Data Foundation . Removing failed or unwanted Ceph OSDs provisioned using local storage devices . If you are using local disks, you can reuse these disks after removing the old OSDs. 9.1. 
Verifying Ceph cluster is healthy Storage health is visible on the Block and File and Object dashboards. Procedure In the OpenShift Web Console, click Storage -> Data Foundation . In the Status card of the Overview tab, click Storage System and then click the storage system link from the pop up that appears. In the Status card of the Block and File tab, verify that the Storage Cluster has a green tick. In the Details card, verify that the cluster information is displayed. 9.2. Removing failed or unwanted Ceph OSDs in dynamically provisioned Red Hat OpenShift Data Foundation Follow the steps in the procedure to remove the failed or unwanted Ceph Object Storage Devices (OSDs) in dynamically provisioned Red Hat OpenShift Data Foundation. Important Scaling down of clusters is supported only with the help of the Red Hat support team. Warning Removing an OSD when the Ceph component is not in a healthy state can result in data loss. Removing two or more OSDs at the same time results in data loss. Prerequisites Check if Ceph is healthy. For more information see Verifying Ceph cluster is healthy . Ensure no alerts are firing or any rebuilding process is in progress. Procedure Scale down the OSD deployment. Get the osd-prepare pod for the Ceph OSD to be removed. Delete the osd-prepare pod. Remove the failed OSD from the cluster. where, FAILED_OSD_ID is the integer in the pod name immediately after the rook-ceph-osd prefix. Verify that the OSD is removed successfully by checking the logs. Optional: If you get an error as cephosd:osd.0 is NOT ok to destroy from the ocs-osd-removal-job pod in OpenShift Container Platform, see Troubleshooting the error cephosd:osd.0 is NOT ok to destroy while removing failed or unwanted Ceph OSDs . Delete the OSD deployment. Verification step To check if the OSD is deleted successfully, run: This command must return the status as Completed . 9.3. Removing failed or unwanted Ceph OSDs provisioned using local storage devices You can remove failed or unwanted Ceph provisioned Object Storage Devices (OSDs) using local storage devices by following the steps in the procedure. Important Scaling down of clusters is supported only with the help of the Red Hat support team. Warning Removing an OSD when the Ceph component is not in a healthy state can result in data loss. Removing two or more OSDs at the same time results in data loss. Prerequisites Check if Ceph is healthy. For more information see Verifying Ceph cluster is healthy . Ensure no alerts are firing or any rebuilding process is in progress. Procedure Forcibly, mark the OSD down by scaling the replicas on the OSD deployment to 0. You can skip this step if the OSD is already down due to failure. Remove the failed OSD from the cluster. where, FAILED_OSD_ID is the integer in the pod name immediately after the rook-ceph-osd prefix. Verify that the OSD is removed successfully by checking the logs. Optional: If you get an error as cephosd:osd.0 is NOT ok to destroy from the ocs-osd-removal-job pod in OpenShift Container Platform, see Troubleshooting the error cephosd:osd.0 is NOT ok to destroy while removing failed or unwanted Ceph OSDs . Delete persistent volume claim (PVC) resources associated with the failed OSD. Get the PVC associated with the failed OSD. Get the persistent volume (PV) associated with the PVC. Get the failed device name. Get the prepare-pod associated with the failed OSD. Delete the osd-prepare pod before removing the associated PVC. Delete the PVC associated with the failed OSD. 
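For example (a sketch with a placeholder name; use the PVC name returned in the earlier step):
oc delete pvc <pvc-name> -n openshift-storage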
Remove failed device entry from the LocalVolume custom resource (CR). Log in to the node with the failed device. Record the /dev/disk/by-id/<id> for the failed device name. Optional: If the Local Storage Operator is used for provisioning the OSD, log in to the machine with the {osd-id} and remove the device symlink. Get the OSD symlink for the failed device name. Remove the symlink. Delete the PV associated with the OSD. Verification step To check if the OSD is deleted successfully, run: This command must return the status as Completed . 9.4. Troubleshooting the error cephosd:osd.0 is NOT ok to destroy while removing failed or unwanted Ceph OSDs If you get an error as cephosd:osd.0 is NOT ok to destroy from the ocs-osd-removal-job pod in OpenShift Container Platform, run the Object Storage Device (OSD) removal job with FORCE_OSD_REMOVAL option to move the OSD to a destroyed state. Note You must use the FORCE_OSD_REMOVAL option only if all the PGs are in active state. If not, PGs must either complete the back filling or further investigate to ensure they are active. Chapter 10. Troubleshooting and deleting remaining resources during Uninstall Occasionally some of the custom resources managed by an operator may remain in "Terminating" status waiting on the finalizer to complete, although you have performed all the required cleanup tasks. In such an event you need to force the removal of such resources. If you do not do so, the resources remain in the Terminating state even after you have performed all the uninstall steps. Check if the openshift-storage namespace is stuck in the Terminating state upon deletion. Output: Check for the NamespaceFinalizersRemaining and NamespaceContentRemaining messages in the STATUS section of the command output and perform the step for each of the listed resources. Example output : Delete all the remaining resources listed in the step. For each of the resources to be deleted, do the following: Get the object kind of the resource which needs to be removed. See the message in the above output. Example : message: Some content in the namespace has finalizers remaining: cephobjectstoreuser.ceph.rook.io Here cephobjectstoreuser.ceph.rook.io is the object kind. Get the Object name corresponding to the object kind. Example : Example output: Patch the resources. Example: Output: Verify that the openshift-storage project is deleted. Output: If the issue persists, reach out to Red Hat Support . Chapter 11. Troubleshooting CephFS PVC creation in external mode If you have updated the Red Hat Ceph Storage cluster from a version lower than 4.1.1 to the latest release and it is not a freshly deployed cluster, you must manually set the application type for the CephFS pool on the Red Hat Ceph Storage cluster to enable CephFS Persistent Volume Claim (PVC) creation in external mode. Check for CephFS PVCs stuck in Pending status. Example output : Check the output of the oc describe command to see the events for the respective PVC. Expected error message is cephfs_metadata/csi.volumes.default/csi.volume.pvc-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx: (1) Operation not permitted) Example output: Check the settings for the <cephfs metadata pool name> (here cephfs_metadata ) and <cephfs data pool name> (here cephfs_data ). For running the command, you will need jq preinstalled on the Red Hat Ceph Storage client node. Set the application type for the CephFS pool. Run the following commands on the Red Hat Ceph Storage client node : Verify if the settings are applied. Check the CephFS PVC status again.
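For example (placeholder names; adjust the PVC name and namespace to your environment):
oc get pvc <cephfs-pvc-name> -n <namespace>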
The PVC should now be in Bound state. Example output : Chapter 12. Restoring the monitor pods in OpenShift Data Foundation Restore the monitor pods if all three of them go down, and when OpenShift Data Foundation is not able to recover the monitor pods automatically. Note This is a disaster recovery procedure and must be performed under the guidance of the Red Hat support team. Contact Red Hat support team on, Red Hat support . Procedure Scale down the rook-ceph-operator and ocs operator deployments. Create a backup of all deployments in openshift-storage namespace. Patch the Object Storage Device (OSD) deployments to remove the livenessProbe parameter, and run it with the command parameter as sleep . Copy tar to the OSDs. Note While copying the tar binary to the OSD, it is important to ensure that the tar binary matches the container image OS of the pod. Copying the binary from a different OS such as, macOS, Ubuntu, and so on might lead to compatibility issues. Retrieve the monstore cluster map from all the OSDs. Create the recover_mon.sh script. Run the recover_mon.sh script. Patch the MON deployments, and run it with the command parameter as sleep . Edit the MON deployments. Patch the MON deployments to increase the initialDelaySeconds . Copy tar to the MON pods. Note While copying the tar binary to the MON, it is important to ensure that the tar binary matches the container image OS of the pod. Copying the binary from a different OS such as, macOS, Ubuntu, and so on might lead to compatibility issues. Copy the previously retrieved monstore to the mon-a pod. Navigate into the MON pod and change the ownership of the retrieved monstore . Copy the keyring template file before rebuilding the mon db . Populate the keyring of all other Ceph daemons (OSD, MGR, MDS and RGW) from their respective secrets. When getting the daemons keyring, use the following command: Get the OSDs keys with the following script: Copy the mon keyring locally, then edit it by adding all daemon keys captured in the earlier step and copy it back to one of the MON pods (mon-a): As an example, the keyring file should look like the following: Note If the caps entries are not present in the OSDs keys output, make sure to add caps to all the OSDs output as mentioned in the keyring file example. Navigate into the mon-a pod, and verify that the monstore has a monmap . Navigate into the mon-a pod. Verify that the monstore has a monmap . Optional: If the monmap is missing then create a new monmap . <mon-a-id> Is the ID of the mon-a pod. <mon-a-ip> Is the IP address of the mon-a pod. <mon-b-id> Is the ID of the mon-b pod. <mon-b-ip> Is the IP address of the mon-b pod. <mon-c-id> Is the ID of the mon-c pod. <mon-c-ip> Is the IP address of the mon-c pod. <fsid> Is the file system ID. Verify the monmap . Import the monmap . Important Use the previously created keyring file. Create a backup of the old store.db file. Copy the rebuild store.db file to the monstore directory. After rebuilding the monstore directory, copy the store.db file from local to the rest of the MON pods. <id> Is the ID of the MON pod Navigate into the rest of the MON pods and change the ownership of the copied monstore . <id> Is the ID of the MON pod Revert the patched changes. 
For MON deployments: <mon-deployment.yaml> Is the MON deployment yaml file For OSD deployments: <osd-deployment.yaml> Is the OSD deployment yaml file For MGR deployments: <mgr-deployment.yaml> Is the MGR deployment yaml file Important Ensure that the MON, MGR and OSD pods are up and running. Scale up the rook-ceph-operator and ocs-operator deployments. Verification steps Check the Ceph status to confirm that CephFS is running. Example output: Check the Multicloud Object Gateway (MCG) status. It should be active, and the backingstore and bucketclass should be in Ready state. Important If the MCG is not in the active state, and the backingstore and bucketclass not in the Ready state, you need to restart all the MCG related pods. For more information, see Section 12.1, "Restoring the Multicloud Object Gateway" . 12.1. Restoring the Multicloud Object Gateway If the Multicloud Object Gateway (MCG) is not in the active state, and the backingstore and bucketclass is not in the Ready state, you need to restart all the MCG related pods, and check the MCG status to confirm that the MCG is back up and running. Procedure Restart all the pods related to the MCG. <noobaa-operator> Is the name of the MCG operator <noobaa-core> Is the name of the MCG core pod <noobaa-endpoint> Is the name of the MCG endpoint <noobaa-db> Is the name of the MCG db pod If the RADOS Object Gateway (RGW) is configured, restart the pod. <rgw-pod> Is the name of the RGW pod Note In OpenShift Container Platform 4.11, after the recovery, RBD PVC fails to get mounted on the application pods. Hence, you need to restart the node that is hosting the application pods. To get the node name that is hosting the application pod, run the following command: Chapter 13. Restoring ceph-monitor quorum in OpenShift Data Foundation In some circumstances, the ceph-mons might lose quorum. If the mons cannot form quorum again, there is a manual procedure to get the quorum going again. The only requirement is that at least one mon must be healthy. The following steps removes the unhealthy mons from quorum and enables you to form a quorum again with a single mon , then bring the quorum back to the original size. For example, if you have three mons and lose quorum, you need to remove the two bad mons from quorum, notify the good mon that it is the only mon in quorum, and then restart the good mon . Procedure Stop the rook-ceph-operator so that the mons are not failed over when you are modifying the monmap . Inject a new monmap . Warning You must inject the monmap very carefully. If run incorrectly, your cluster could be permanently destroyed. The Ceph monmap keeps track of the mon quorum. The monmap is updated to only contain the healthy mon. In this example, the healthy mon is rook-ceph-mon-b , while the unhealthy mons are rook-ceph-mon-a and rook-ceph-mon-c . Take a backup of the current rook-ceph-mon-b Deployment: Open the YAML file and copy the command and arguments from the mon container (see containers list in the following example). This is needed for the monmap changes. Cleanup the copied command and args fields to form a pastable command as follows: Note Make sure to remove the single quotes around the --log-stderr-prefix flag and the parenthesis around the variables being passed ROOK_CEPH_MON_HOST , ROOK_CEPH_MON_INITIAL_MEMBERS and ROOK_POD_IP ). Patch the rook-ceph-mon-b Deployment to stop the working of this mon without deleting the mon pod. 
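A sketch of the kind of patch this step refers to, based on the upstream Rook disaster-recovery procedure; treat the container index and JSON paths as assumptions and verify them against your deployment:
# Remove the liveness probe, then replace the mon entrypoint with sleep so the pod stays up while the mon process is stopped
oc -n openshift-storage patch deployment rook-ceph-mon-b --type='json' -p '[{"op": "remove", "path": "/spec/template/spec/containers/0/livenessProbe"}]'
oc -n openshift-storage patch deployment rook-ceph-mon-b -p '{"spec": {"template": {"spec": {"containers": [{"name": "mon", "command": ["sleep", "infinity"], "args": []}]}}}}'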
Perform the following steps on the mon-b pod: Connect to the pod of a healthy mon and run the following commands: Set the variable. Extract the monmap to a file by pasting the ceph mon command from the good mon deployment and adding the --extract-monmap=${monmap_path} flag. Review the contents of the monmap . Remove the bad mons from the monmap . In this example, we remove mons a and c : Inject the modified monmap into the good mon by pasting the ceph mon command and adding the --inject-monmap=${monmap_path} flag as follows: Exit the shell to continue. Edit the Rook configmaps . Edit the configmap that the operator uses to track the mons . Verify that in the data element you see three mons, such as the following (or more, depending on your moncount ): Delete the bad mons from the list to end up with a single good mon . For example: Save the file and exit. Now, you need to adapt the Secret that is used for the mons and other components. Set a value for the variable good_mon_id . For example: You can use the oc patch command to patch the rook-ceph-config secret and update the two key/value pairs mon_host and mon_initial_members . Note If you are using hostNetwork: true , you need to replace the mon_host variable with the node IP that the mon is pinned to ( nodeSelector ). This is because there is no rook-ceph-mon-* service created in that "mode". Restart the mon . You need to restart the good mon pod with the original ceph-mon command to pick up the changes. Use the oc replace command on the backup of the mon deployment YAML file: Note Option --force deletes the deployment and creates a new one. Verify the status of the cluster. The status should show one mon in quorum. If the status looks good, your cluster should be healthy again. Delete the two mon deployments that are no longer expected to be in quorum. For example: In this example, the deployments to be deleted are rook-ceph-mon-a and rook-ceph-mon-c . Restart the operator. Start the rook operator again to resume monitoring the health of the cluster. Note It is safe to ignore errors stating that a number of resources already exist. The operator automatically adds more mons to increase the quorum size again, depending on the mon count. Chapter 14. Enabling the Red Hat OpenShift Data Foundation console plugin The Data Foundation console plugin is enabled by default. If this option was unchecked during OpenShift Data Foundation Operator installation, use the following instructions to enable the console plugin post-deployment either from the graphical user interface (GUI) or the command-line interface. Prerequisites You have administrative access to the OpenShift Web Console. OpenShift Data Foundation Operator is installed and running in the openshift-storage namespace. Procedure From user interface In the OpenShift Web Console, click Operators -> Installed Operators to view all the installed operators. Ensure that the Project selected is openshift-storage . Click the OpenShift Data Foundation operator. Enable the console plugin option. In the Details tab, click the pencil icon under the Console plugin . Select Enable , and click Save . From command-line interface Execute the following command to enable the console plugin option: Verification steps After the console plugin option is enabled, a pop-up with the message Web console update is available appears in the GUI. Click Refresh web console from this pop-up for the console changes to take effect. In the Web Console, navigate to Storage and verify that Data Foundation is available.
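For reference, a one-line sketch of the command-line option described in Chapter 14 above, taken from the commands included with this entry:

# enable the odf-console plugin for the OpenShift console
oc patch console.operator cluster -n openshift-storage --type json -p '[{"op": "add", "path": "/spec/plugins", "value": ["odf-console"]}]'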
Chapter 15. Changing resources for the OpenShift Data Foundation components When you install OpenShift Data Foundation, it comes with pre-defined resources that the OpenShift Data Foundation pods can consume. In some situations with a higher I/O load, you might need to increase these limits. To change the CPU and memory resources on the rook-ceph pods, see Section 15.1, "Changing the CPU and memory resources on the rook-ceph pods" . To tune the resources for the Multicloud Object Gateway (MCG), see Section 15.2, "Tuning the resources for the MCG" . 15.1. Changing the CPU and memory resources on the rook-ceph pods When you install OpenShift Data Foundation, it comes with pre-defined CPU and memory resources for the rook-ceph pods. You can manually increase these values according to your requirements. You can change the CPU and memory resources on the following pods: mgr mds rgw The following example illustrates how to change the CPU and memory resources on the rook-ceph pods. In this example, the existing MDS pod values of cpu and memory are increased from 1 and 4Gi to 2 and 8Gi, respectively. Edit the storage cluster: <storagecluster_name> Specify the name of the storage cluster. For example: Add the following lines to the storage cluster Custom Resource (CR): Save the changes and exit the editor. Alternatively, run the oc patch command to change the CPU and memory values of the mds pod: <storagecluster_name> Specify the name of the storage cluster. For example: 15.2. Tuning the resources for the MCG The default configuration for the Multicloud Object Gateway (MCG) is optimized for low resource consumption and not performance. For more information on how to tune the resources for the MCG, see the Red Hat Knowledgebase solution Performance tuning guide for Multicloud Object Gateway (NooBaa) . Chapter 16. Disabling Multicloud Object Gateway external service after deploying OpenShift Data Foundation When you deploy OpenShift Data Foundation, public IPs are created even when OpenShift is installed as a private cluster. However, you can disable the Multicloud Object Gateway (MCG) load balancer usage by using the disableLoadBalancerService variable in the storagecluster CRD. This restricts MCG from creating any public resources for private clusters and helps to disable the NooBaa service EXTERNAL-IP . Procedure Run the following command and add the disableLoadBalancerService variable in the storagecluster YAML to set the service to ClusterIP: Note To undo the changes and set the service to LoadBalancer, set the disableLoadBalancerService variable to false or remove that line completely. Chapter 17. Accessing odf-console with the ovs-multitenant plugin by manually enabling global pod networking In OpenShift Container Platform, when the ovs-multitenant plugin is used for software-defined networking (SDN), pods from different projects cannot send packets to or receive packets from pods and services of a different project. By default, pods cannot communicate between namespaces or projects because a project's pod networking is not global. To access odf-console, the OpenShift console pod in the openshift-console namespace needs to connect with the OpenShift Data Foundation odf-console in the openshift-storage namespace. This is possible only when you manually enable global pod networking.
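A hedged sketch of a patch-style equivalent for the Chapter 16 procedure above (the entry itself shows an oc edit session; the merge patch below and the ocs-storagecluster name are assumptions based on the storage cluster YAML included with this entry):

# set the NooBaa/MCG service to ClusterIP by disabling the load balancer service
oc patch storagecluster ocs-storagecluster -n openshift-storage --type merge -p '{"spec": {"multiCloudGateway": {"disableLoadBalancerService": true}}}'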
Issue When the `ovs-multitenant` plugin is used in OpenShift Container Platform, the odf-console plugin fails with the following message: Resolution Make the pod networking for the OpenShift Data Foundation project global: Chapter 18. Annotating encrypted RBD storage classes Starting with OpenShift Data Foundation 4.14, when the OpenShift console creates a RADOS block device (RBD) storage class with encryption enabled, the annotation is set automatically. However, you need to add the annotation cdi.kubevirt.io/clone-strategy=copy to any encrypted RBD storage classes that were created before updating to OpenShift Data Foundation version 4.14. This enables the Containerized Data Importer (CDI) to use host-assisted cloning instead of the default smart cloning. The keys used to access an encrypted volume are tied to the namespace where the volume was created. When cloning an encrypted volume to a new namespace, such as when provisioning a new OpenShift Virtualization virtual machine, a new volume must be created and the content of the source volume must then be copied into the new volume. This behavior is triggered automatically if the storage class is properly annotated. Chapter 19. Troubleshooting issues in provider mode 19.1. Force deletion of storage in provider clusters When a client cluster is deleted without performing the offboarding process to remove all the resources from the corresponding provider cluster, you need to perform force deletion of the corresponding storage consumer from the provider cluster. This helps to release the storage space that was claimed by the client. Caution It is recommended to use this method only in unavoidable situations, such as accidental deletion of storage client clusters. Prerequisites Access to the OpenShift Data Foundation storage cluster in provider mode. Procedure Click Storage -> Storage Clients from the OpenShift console. Click the delete icon at the far right of the listed storage client cluster. The delete icon is enabled only 5 minutes after the last heartbeat of the cluster. Click Confirm .
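A minimal sketch of the two CLI actions described above (the storage class name is a placeholder; the annotation key comes from the text and the pod-network command from the commands included with this entry):

# annotate a pre-4.14 encrypted RBD storage class so CDI uses host-assisted cloning
oc annotate storageclass <encrypted-rbd-storageclass> cdi.kubevirt.io/clone-strategy=copy

# make the openshift-storage pod network global so the console can reach odf-console with ovs-multitenant
oc adm pod-network make-projects-global openshift-storage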
[ "oc image mirror registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15 <local-registry> /odf4/odf-must-gather-rhel9:v4.15 [--registry-config= <path-to-the-registry-config> ] [--insecure=true]", "oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15 --dest-dir= <directory-name>", "oc adm must-gather --image=<local-registry>/odf4/odf-must-gather-rhel9:v4.15 --dest-dir= <directory-name>", "oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15 --dest-dir=_<directory-name>_ --node-name=_<node-name>_", "oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15 --dest-dir=_<directory-name>_ /usr/bin/gather since=<duration>", "oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15 --dest-dir=_<directory-name>_ /usr/bin/gather since-time=<rfc3339-timestamp>", "oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15 -- /usr/bin/gather <-arg>", "odf get recovery-profile high_recovery_ops", "odf get health Info: Checking if at least three mon pods are running on different nodes rook-ceph-mon-a-7fb76597dc-98pxz Running openshift-storage ip-10-0-69-145.us-west-1.compute.internal rook-ceph-mon-b-885bdc59c-4vvcm Running openshift-storage ip-10-0-64-239.us-west-1.compute.internal rook-ceph-mon-c-5f59bb5dbc-8vvlg Running openshift-storage ip-10-0-30-197.us-west-1.compute.internal Info: Checking mon quorum and ceph health details Info: HEALTH_OK [...]", "odf get dr-health Info: fetching the cephblockpools with mirroring enabled Info: found \"ocs-storagecluster-cephblockpool\" cephblockpool with mirroring enabled Info: running ceph status from peer cluster Info: cluster: id: 9a2e7e55-40e1-4a79-9bfa-c3e4750c6b0f health: HEALTH_OK [...]", "odf get dr-prereq peer-cluster-1 Info: Submariner is installed. Info: Globalnet is required. Info: Globalnet is enabled. odf get mon-endpoints Displays the mon endpoints odf get dr-prereq peer-cluster-1 Info: Submariner is installed. Info: Globalnet is required. Info: Globalnet is enabled.", "odf operator rook set ROOK_LOG_LEVEL DEBUG configmap/rook-ceph-operator-config patched", "odf operator rook restart deployment.apps/rook-ceph-operator restarted", "odf restore mon-quorum c", "odf restore deleted cephclusters Info: Detecting which resources to restore for crd \"cephclusters\" Info: Restoring CR my-cluster Warning: The resource my-cluster was found deleted. Do you want to restore it? 
yes | no [...]", "odf set ceph log-level <ceph-subsystem1> <ceph-subsystem2> <log-level>", "odf set ceph log-level osd crush 20", "odf set ceph log-level mds crush 20", "odf set ceph log-level mon crush 20", "oc logs <pod-name> -n <namespace>", "oc logs rook-ceph-operator-<ID> -n openshift-storage", "oc logs csi-cephfsplugin-<ID> -n openshift-storage -c csi-cephfsplugin", "oc logs csi-rbdplugin-<ID> -n openshift-storage -c csi-rbdplugin", "oc logs csi-cephfsplugin-<ID> -n openshift-storage --all-containers", "oc logs csi-rbdplugin-<ID> -n openshift-storage --all-containers", "oc logs csi-cephfsplugin-provisioner-<ID> -n openshift-storage -c csi-cephfsplugin", "oc logs csi-rbdplugin-provisioner-<ID> -n openshift-storage -c csi-rbdplugin", "oc logs csi-cephfsplugin-provisioner-<ID> -n openshift-storage --all-containers", "oc logs csi-rbdplugin-provisioner-<ID> -n openshift-storage --all-containers", "oc cluster-info dump -n openshift-storage --output-directory=<directory-name>", "oc cluster-info dump -n openshift-local-storage --output-directory=<directory-name>", "oc logs <ocs-operator> -n openshift-storage", "oc get pods -n openshift-storage | grep -i \"ocs-operator\" | awk '{print USD1}'", "oc get events --sort-by=metadata.creationTimestamp -n openshift-storage", "oc get csv -n openshift-storage", "NAME DISPLAY VERSION REPLACES PHASE mcg-operator.v4.15.0 NooBaa Operator 4.15.0 Succeeded ocs-operator.v4.15.0 OpenShift Container Storage 4.15.0 Succeeded odf-csi-addons-operator.v4.15.0 CSI Addons 4.15.0 Succeeded odf-operator.v4.15.0 OpenShift Data Foundation 4.15.0 Succeeded", "oc get subs -n openshift-storage", "NAME PACKAGE SOURCE CHANNEL mcg-operator-stable-4.15-redhat-operators-openshift-marketplace mcg-operator redhat-operators stable-4.15 ocs-operator-stable-4.15-redhat-operators-openshift-marketplace ocs-operator redhat-operators stable-4.15 odf-csi-addons-operator odf-csi-addons-operator redhat-operators stable-4.15 odf-operator odf-operator redhat-operators stable-4.15", "oc get installplan -n openshift-storage", "oc get pods -o wide | grep <component-name>", "oc get pods -o wide | grep rook-ceph-operator", "rook-ceph-operator-566cc677fd-bjqnb 1/1 Running 20 4h6m 10.128.2.5 rook-ceph-operator-566cc677fd-bjqnb 1/1 Running 20 4h6m 10.128.2.5 dell-r440-12.gsslab.pnq2.redhat.com <none> <none> <none> <none>", "oc debug node/<node name>", "chroot /host", "crictl images | grep <component>", "crictl images | grep rook-ceph", "oc annotate namespace openshift-storage openshift.io/node-selector=", "delete pod -l app=csi-cephfsplugin -n openshift-storage delete pod -l app=csi-rbdplugin -n openshift-storage", "du -a <path-in-the-mon-node> |sort -n -r |head -n10", "oc project openshift-storage", "oc get pod | grep rook-ceph", "Examine the output for a rook-ceph that is in the pending state, not running or not ready MYPOD= <pod_name>", "oc get pod/USD{MYPOD} -o wide", "oc describe pod/USD{MYPOD}", "oc logs pod/USD{MYPOD}", "oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15", "oc project openshift-storage", "get pod | grep {ceph-component}", "Examine the output for a {ceph-component} that is in the pending state, not running or not ready MYPOD= <pod_name>", "oc get pod/USD{MYPOD} -o wide", "oc describe pod/USD{MYPOD}", "oc logs pod/USD{MYPOD}", "oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15", "oc project openshift-storage", "get pod | grep rook-ceph-osd", "Examine the output for a {ceph-component} that is in the pending state, not 
running or not ready MYPOD= <pod_name>", "oc get pod/USD{MYPOD} -o wide", "oc describe pod/USD{MYPOD}", "oc logs pod/USD{MYPOD}", "oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15", "oc patch -n openshift-storage storagecluster ocs-storagecluster --type merge --patch '{\"spec\": {\"resources\": {\"mds\": {\"limits\": {\"memory\": \"16Gi\"},\"requests\": {\"memory\": \"16Gi\"}}}}}'", "patch -n openshift-storage storagecluster ocs-storagecluster --type merge --patch '{\"spec\": {\"resources\": {\"mds\": {\"limits\": {\"cpu\": \"8\"}, \"requests\": {\"cpu\": \"8\"}}}}}'", "patch -n openshift-storage storagecluster ocs-storagecluster --type merge --patch '{\"spec\": {\"managedResources\": {\"cephFilesystems\":{\"activeMetadataServers\": 2}}}}'", "oc project openshift-storage", "get pod | grep rook-ceph-mds", "Examine the output for a {ceph-component} that is in the pending state, not running or not ready MYPOD= <pod_name>", "oc get pod/USD{MYPOD} -o wide", "oc describe pod/USD{MYPOD}", "oc logs pod/USD{MYPOD}", "oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15", "oc get pods | grep mgr", "oc describe pods/ <pod_name>", "oc get pods | grep mgr", "oc project openshift-storage", "get pod | grep rook-ceph-mgr", "Examine the output for a {ceph-component} that is in the pending state, not running or not ready MYPOD= <pod_name>", "oc get pod/USD{MYPOD} -o wide", "oc describe pod/USD{MYPOD}", "oc logs pod/USD{MYPOD}", "oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15", "oc project openshift-storage", "get pod | grep rook-ceph-mgr", "Examine the output for a {ceph-component} that is in the pending state, not running or not ready MYPOD= <pod_name>", "oc get pod/USD{MYPOD} -o wide", "oc describe pod/USD{MYPOD}", "oc logs pod/USD{MYPOD}", "oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15", "oc logs <rook-ceph-mon-X-yyyy> -n openshift-storage", "oc project openshift-storage", "get pod | grep {ceph-component}", "Examine the output for a {ceph-component} that is in the pending state, not running or not ready MYPOD= <pod_name>", "oc get pod/USD{MYPOD} -o wide", "oc describe pod/USD{MYPOD}", "oc logs pod/USD{MYPOD}", "oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15", "oc project openshift-storage", "get pod | grep rook-ceph-mon", "Examine the output for a {ceph-component} that is in the pending state, not running or not ready MYPOD= <pod_name>", "oc get pod/USD{MYPOD} -o wide", "oc describe pod/USD{MYPOD}", "oc logs pod/USD{MYPOD}", "oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15", "oc project openshift-storage", "get pod | grep {ceph-component}", "Examine the output for a {ceph-component} that is in the pending state, not running or not ready MYPOD= <pod_name>", "oc get pod/USD{MYPOD} -o wide", "oc describe pod/USD{MYPOD}", "oc logs pod/USD{MYPOD}", "oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15", "oc get sub USD(oc get pods -n openshift-storage | grep -v ocs-operator) -n openshift-storage -o json | jq .status.conditions", "[ { \"lastTransitionTime\": \"2021-01-26T19:21:37Z\", \"message\": \"all available catalogsources are healthy\", \"reason\": \"AllCatalogSourcesHealthy\", \"status\": \"False\", \"type\": \"CatalogSourcesUnhealthy\" } ]", "oc get pod -n openshift-storage | grep ocs-operator OCSOP=USD(oc get pod -n openshift-storage -o custom-columns=POD:.metadata.name --no-headers | grep 
ocs-operator) echo USDOCSOP oc get pod/USD{OCSOP} -n openshift-storage oc describe pod/USD{OCSOP} -n openshift-storage", "oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15", "-n openshift-storage get pods", "-n openshift-storage get pods", "-n openshift-storage get pods | grep osd", "-n openshift-storage describe pods/<osd_podname_ from_the_ previous step>", "TOOLS_POD=USD(oc get pods -n openshift-storage -l app=rook-ceph-tools -o name) rsh -n openshift-storage USDTOOLS_POD", "ceph status", "get nodes --selector='node-role.kubernetes.io/worker','!node-role.kubernetes.io/infra'", "describe node <node_name>", "oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15", "oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15", "oc project openshift-storage", "oc get pod | grep rook-ceph", "Examine the output for a rook-ceph that is in the pending state, not running or not ready MYPOD= <pod_name>", "oc get pod/USD{MYPOD} -o wide", "oc describe pod/USD{MYPOD}", "oc logs pod/USD{MYPOD}", "oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15", "get nodes --selector='node-role.kubernetes.io/worker','!node-role.kubernetes.io/infra'", "describe node <node_name>", "oc project openshift-storage", "oc get pod | grep rook-ceph", "Examine the output for a rook-ceph that is in the pending state, not running or not ready MYPOD= <pod_name>", "oc get pod/USD{MYPOD} -o wide", "oc describe pod/USD{MYPOD}", "oc logs pod/USD{MYPOD}", "oc adm must-gather --image=registry.redhat.io/odf4/odf-must-gather-rhel9:v4.15", "ceph daemon osd.<id> ops", "ceph daemon osd.<id> dump_historic_ops", "oc get sub USD(oc get pods -n openshift-storage | grep -v ocs-operator) -n openshift-storage -o json | jq .status.conditions", "[ { \"lastTransitionTime\": \"2021-01-26T19:21:37Z\", \"message\": \"all available catalogsources are healthy\", \"reason\": \"AllCatalogSourcesHealthy\", \"status\": \"False\", \"type\": \"CatalogSourcesUnhealthy\" } ]", "oc get pod -n openshift-storage | grep ocs-operator OCSOP=USD(oc get pod -n openshift-storage -o custom-columns=POD:.metadata.name --no-headers | grep ocs-operator) echo USDOCSOP oc get pod/USD{OCSOP} -n openshift-storage oc describe pod/USD{OCSOP} -n openshift-storage", "ceph osd pool set-quota <pool> max_bytes <bytes>", "ceph osd pool set-quota <pool> max_objects <objects>", "ceph osd pool set-quota <pool> max_bytes <bytes>", "ceph osd pool set-quota <pool> max_objects <objects>", "oc delete pod <pod-name> --grace-period=0 --force", "oc edit configmap rook-ceph-operator-config", "... data: # The logging level for the operator: INFO | DEBUG ROOK_LOG_LEVEL: DEBUG", "oc edit configmap rook-ceph-operator-config", "... 
data: # The logging level for the operator: INFO | DEBUG ROOK_LOG_LEVEL: INFO", "oc get pvc -n openshift-storage", "NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE db-noobaa-db-0 Bound pvc-d96c747b-2ab5-47e2-b07e-1079623748d8 50Gi RWO ocs-storagecluster-ceph-rbd 114s ocs-deviceset-0-0-lzfrd Bound local-pv-7e70c77c 1769Gi RWO localblock 2m10s ocs-deviceset-1-0-7rggl Bound local-pv-b19b3d48 1769Gi RWO localblock 2m10s ocs-deviceset-2-0-znhk8 Bound local-pv-e9f22cdc 1769Gi RWO localblock 2m10s", "oc scale deployment rook-ceph-osd-<osd-id> --replicas=0", "oc get deployment rook-ceph-osd-<osd-id> -oyaml | grep ceph.rook.io/pvc", "oc delete -n openshift-storage pod rook-ceph-osd-prepare-<pvc-from-above-command>-<pod-suffix>", "failed_osd_id=<osd-id> oc process -n openshift-storage ocs-osd-removal -p FAILED_OSD_IDS=USD<failed_osd_id> | oc create -f -", "oc logs -n openshift-storage ocs-osd-removal-USD<failed_osd_id>-<pod-suffix>", "oc delete deployment rook-ceph-osd-<osd-id>", "oc get pod -n openshift-storage ocs-osd-removal-USD<failed_osd_id>-<pod-suffix>", "oc scale deployment rook-ceph-osd-<osd-id> --replicas=0", "failed_osd_id=<osd_id> oc process -n openshift-storage ocs-osd-removal -p FAILED_OSD_IDS=USD<failed_osd_id> | oc create -f -", "oc logs -n openshift-storage ocs-osd-removal-USD<failed_osd_id>-<pod-suffix>", "oc get -n openshift-storage -o yaml deployment rook-ceph-osd-<osd-id> | grep ceph.rook.io/pvc", "oc get -n openshift-storage pvc <pvc-name>", "oc get pv <pv-name-from-above-command> -oyaml | grep path", "oc describe -n openshift-storage pvc ocs-deviceset-0-0-nvs68 | grep Mounted", "oc delete -n openshift-storage pod <osd-prepare-pod-from-above-command>", "oc delete -n openshift-storage pvc <pvc-name-from-step-a>", "oc debug node/<node_with_failed_osd>", "ls -alh /mnt/local-storage/localblock/", "oc debug node/<node_with_failed_osd>", "ls -alh /mnt/local-storage/localblock", "rm /mnt/local-storage/localblock/<failed-device-name>", "oc delete pv <pv-name>", "#oc get pod -n openshift-storage ocs-osd-removal-USD<failed_osd_id>-<pod-suffix>", "oc process -n openshift-storage ocs-osd-removal -p FORCE_OSD_REMOVAL=true -p FAILED_OSD_IDS=USD<failed_osd_id> | oc create -f -", "oc get project -n <namespace>", "NAME DISPLAY NAME STATUS openshift-storage Terminating", "oc get project openshift-storage -o yaml", "status: conditions: - lastTransitionTime: \"2020-07-26T12:32:56Z\" message: All resources successfully discovered reason: ResourcesDiscovered status: \"False\" type: NamespaceDeletionDiscoveryFailure - lastTransitionTime: \"2020-07-26T12:32:56Z\" message: All legacy kube types successfully parsed reason: ParsedGroupVersions status: \"False\" type: NamespaceDeletionGroupVersionParsingFailure - lastTransitionTime: \"2020-07-26T12:32:56Z\" message: All content successfully deleted, may be waiting on finalization reason: ContentDeleted status: \"False\" type: NamespaceDeletionContentFailure - lastTransitionTime: \"2020-07-26T12:32:56Z\" message: 'Some resources are remaining: cephobjectstoreusers.ceph.rook.io has 1 resource instances' reason: SomeResourcesRemain status: \"True\" type: NamespaceContentRemaining - lastTransitionTime: \"2020-07-26T12:32:56Z\" message: 'Some content in the namespace has finalizers remaining: cephobjectstoreuser.ceph.rook.io in 1 resource instances' reason: SomeFinalizersRemain status: \"True\" type: NamespaceFinalizersRemaining", "oc get <Object-kind> -n <project-name>", "oc get cephobjectstoreusers.ceph.rook.io -n openshift-storage", "NAME AGE 
noobaa-ceph-objectstore-user 26h", "oc patch -n <project-name> <object-kind>/<object-name> --type=merge -p '{\"metadata\": {\"finalizers\":null}}'", "oc patch -n openshift-storage cephobjectstoreusers.ceph.rook.io/noobaa-ceph-objectstore-user --type=merge -p '{\"metadata\": {\"finalizers\":null}}'", "cephobjectstoreuser.ceph.rook.io/noobaa-ceph-objectstore-user patched", "oc get project openshift-storage", "Error from server (NotFound): namespaces \"openshift-storage\" not found", "oc get pvc -n <namespace>", "NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE ngx-fs-pxknkcix20-pod Pending ocs-external-storagecluster-cephfs 28h [...]", "oc describe pvc ngx-fs-pxknkcix20-pod -n nginx-file", "Name: ngx-fs-pxknkcix20-pod Namespace: nginx-file StorageClass: ocs-external-storagecluster-cephfs Status: Pending Volume: Labels: <none> Annotations: volume.beta.kubernetes.io/storage-provisioner: openshift-storage.cephfs.csi.ceph.com Finalizers: [kubernetes.io/pvc-protection] Capacity: Access Modes: VolumeMode: Filesystem Mounted By: ngx-fs-oyoe047v2bn2ka42jfgg-pod-hqhzf Events: Type Reason Age From Message ---- ------ ---- ---- ------- Warning ProvisioningFailed 107m (x245 over 22h) openshift-storage.cephfs.csi.ceph.com_csi-cephfsplugin-provisioner-5f8b66cc96-hvcqp_6b7044af-c904-4795-9ce5-bf0cf63cc4a4 (combined from similar events): failed to provision volume with StorageClass \"ocs-external-storagecluster-cephfs\": rpc error: code = Internal desc = error (an error (exit status 1) occurred while running rados args: [-m 192.168.13.212:6789,192.168.13.211:6789,192.168.13.213:6789 --id csi-cephfs-provisioner --keyfile= stripped -c /etc/ceph/ceph.conf -p cephfs_metadata getomapval csi.volumes.default csi.volume.pvc-1ac0c6e6-9428-445d-bbd6-1284d54ddb47 /tmp/omap-get-186436239 --namespace=csi]) occurred, command output streams is ( error getting omap value cephfs_metadata/csi.volumes.default/csi.volume.pvc-1ac0c6e6-9428-445d-bbd6-1284d54ddb47: (1) Operation not permitted)", "ceph osd pool ls detail --format=json | jq '.[] | select(.pool_name| startswith(\"cephfs\")) | .pool_name, .application_metadata' \"cephfs_data\" { \"cephfs\": {} } \"cephfs_metadata\" { \"cephfs\": {} }", "ceph osd pool application set <cephfs metadata pool name> cephfs metadata cephfs", "ceph osd pool application set <cephfs data pool name> cephfs data cephfs", "ceph osd pool ls detail --format=json | jq '.[] | select(.pool_name| startswith(\"cephfs\")) | .pool_name, .application_metadata' \"cephfs_data\" { \"cephfs\": { \"data\": \"cephfs\" } } \"cephfs_metadata\" { \"cephfs\": { \"metadata\": \"cephfs\" } }", "oc get pvc -n <namespace>", "NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE ngx-fs-pxknkcix20-pod Bound pvc-1ac0c6e6-9428-445d-bbd6-1284d54ddb47 1Mi RWO ocs-external-storagecluster-cephfs 29h [...]", "oc scale deployment rook-ceph-operator --replicas=0 -n openshift-storage", "oc scale deployment ocs-operator --replicas=0 -n openshift-storage", "mkdir backup", "cd backup", "oc project openshift-storage", "for d in USD(oc get deployment|awk -F' ' '{print USD1}'|grep -v NAME); do echo USDd;oc get deployment USDd -o yaml > oc_get_deployment.USD{d}.yaml; done", "for i in USD(oc get deployment -l app=rook-ceph-osd -oname);do oc patch USD{i} -n openshift-storage --type='json' -p '[{\"op\":\"remove\", \"path\":\"/spec/template/spec/containers/0/livenessProbe\"}]' ; oc patch USD{i} -n openshift-storage -p '{\"spec\": {\"template\": {\"spec\": {\"containers\": [{\"name\": \"osd\", \"command\": [\"sleep\", 
\"infinity\"], \"args\": []}]}}}}' ; done", "for i in `oc get pods -l app=rook-ceph-osd -o name | sed -e \"s/pod\\///g\"` ; do cat /usr/bin/tar | oc exec -i USD{i} -- bash -c 'cat - >/usr/bin/tar' ; oc exec -i USD{i} -- bash -c 'chmod +x /usr/bin/tar' ;done", "#!/bin/bash ms=/tmp/monstore rm -rf USDms mkdir USDms for osd_pod in USD(oc get po -l app=rook-ceph-osd -oname -n openshift-storage); do echo \"Starting with pod: USDosd_pod\" podname=USD(echo USDosd_pod|sed 's/pod\\///g') oc exec USDosd_pod -- rm -rf USDms oc exec USDosd_pod -- mkdir USDms oc cp USDms USDpodname:USDms rm -rf USDms mkdir USDms echo \"pod in loop: USDosd_pod ; done deleting local dirs\" oc exec USDosd_pod -- ceph-objectstore-tool --type bluestore --data-path /var/lib/ceph/osd/ceph-USD(oc get USDosd_pod -ojsonpath='{ .metadata.labels.ceph_daemon_id }') --op update-mon-db --no-mon-config --mon-store-path USDms echo \"Done with COT on pod: USDosd_pod\" oc cp USDpodname:USDms USDms echo \"Finished pulling COT data from pod: USDosd_pod\" done", "chmod +x recover_mon.sh", "./recover_mon.sh", "for i in USD(oc get deployment -l app=rook-ceph-mon -oname);do oc patch USD{i} -n openshift-storage -p '{\"spec\": {\"template\": {\"spec\": {\"containers\": [{\"name\": \"mon\", \"command\": [\"sleep\", \"infinity\"], \"args\": []}]}}}}'; done", "for i in a b c ; do oc get deployment rook-ceph-mon-USD{i} -o yaml | sed \"s/initialDelaySeconds: 10/initialDelaySeconds: 10000/g\" | oc replace -f - ; done", "for i in `oc get pods -l app=rook-ceph-mon -o name | sed -e \"s/pod\\///g\"` ; do cat /usr/bin/tar | oc exec -i USD{i} -- bash -c 'cat - >/usr/bin/tar' ; oc exec -i USD{i} -- bash -c 'chmod +x /usr/bin/tar' ;done", "oc cp /tmp/monstore/ USD(oc get po -l app=rook-ceph-mon,mon=a -oname |sed 's/pod\\///g'):/tmp/", "oc rsh USD(oc get po -l app=rook-ceph-mon,mon=a -oname)", "chown -R ceph:ceph /tmp/monstore", "oc rsh USD(oc get po -l app=rook-ceph-mon,mon=a -oname)", "cp /etc/ceph/keyring-store/keyring /tmp/keyring", "cat /tmp/keyring [mon.] key = AQCleqldWqm5IhAAgZQbEzoShkZV42RiQVffnA== caps mon = \"allow *\" [client.admin] key = AQCmAKld8J05KxAArOWeRAw63gAwwZO5o75ZNQ== auid = 0 caps mds = \"allow *\" caps mgr = \"allow *\" caps mon = \"allow *\" caps osd = \"allow *\"", "oc get secret rook-ceph-mds-ocs-storagecluster-cephfilesystem-a-keyring -ojson | jq .data.keyring | xargs echo | base64 -d [mds.ocs-storagecluster-cephfilesystem-a] key = AQB3r8VgAtr6OhAAVhhXpNKqRTuEVdRoxG4uRA== caps mon = \"allow profile mds\" caps osd = \"allow *\" caps mds = \"allow\"", "for i in `oc get secret | grep keyring| awk '{print USD1}'` ; do oc get secret USD{i} -ojson | jq .data.keyring | xargs echo | base64 -d ; done", "for i in `oc get pods -l app=rook-ceph-osd -o name | sed -e \"s/pod\\///g\"` ; do oc exec -i USD{i} -- bash -c 'cat /var/lib/ceph/osd/ceph-*/keyring ' ;done", "cp USD(oc get po -l app=rook-ceph-mon,mon=a -oname|sed -e \"s/pod\\///g\"):/etc/ceph/keyring-store/..data/keyring /tmp/keyring-mon-a", "vi /tmp/keyring-mon-a", "[mon.] 
key = AQCbQLRn0j9mKhAAJKWmMZ483QIpMwzx/yGSLw== caps mon = \"allow *\" [mds.ocs-storagecluster-cephfilesystem-a] key = AQBFQbRnYuB9LxAA8i1fCSAKQQsPuywZ0Jlc5Q== caps mon = \"allow profile mds\" caps osd = \"allow *\" caps mds = \"allow\" [mds.ocs-storagecluster-cephfilesystem-b] key = AQBHQbRnwHAOEBAAv+rBpYP5W8BmC7gLfLyk1w== caps mon = \"allow profile mds\" caps osd = \"allow *\" caps mds = \"allow\" [osd.0] key = AQAvQbRnjF0eEhAA3H0l9zvKGZZM9Up6fJajhQ== caps mgr = \"allow profile osd\" caps mon = \"allow profile osd\" caps osd = \"allow *\" [osd.1] key = AQA0QbRnq4cSGxAA7JpuK1+sq8gALNmMYFUMzw== caps mgr = \"allow profile osd\" caps mon = \"allow profile osd\" caps osd = \"allow *\" [osd.2] key = AQA3QbRn6JvcOBAAFKruZQhlQJKUOi9oxcN6fw== caps mgr = \"allow profile osd\" caps mon = \"allow profile osd\" caps osd = \"allow *\" [client.admin] key = AQCbQLRnSzOuLBAAK1cSgr2eIyrZV8mV28UfvQ== caps mds = \"allow *\" caps mon = \"allow *\" caps osd = \"allow *\" caps mgr = \"allow *\" [client.rgw.ocs.storagecluster.cephobjectstore.a] key = AQBTQbRny7NJLRAAPeTvK9kVg71/glbYLANGyw== caps mon = \"allow rw\" caps osd = \"allow rwx\" [mgr.a] key = AQD9QLRn8+xzDxAARqWQatoT9ruK76EpDS6iCw== caps mon = \"allow profile mgr\" caps mds = \"allow *\" caps osd = \"allow *\" [mgr.b] key = AQD9QLRnltZOIhAAexshUqdOr3G79HWYXUDGFg== caps mon = \"allow profile mgr\" caps mds = \"allow *\" caps osd = \"allow *\" [client.crash] key = AQD7QLRn6DDzCBAAEzhXRzGQUBUNTzC3nHntFQ== caps mon = \"allow profile crash\" caps mgr = \"allow rw\" [client.ceph-exporter] key = AQD7QLRntHzkGxAApQTkMVzcTiZn7jZbwK99SQ== caps mon = \"allow profile ceph-exporter\" caps mgr = \"allow r\" caps osd = \"allow r\" caps mds = \"allow r\"", "cp /tmp/keyring-mon-a USD(oc get po -l app=rook-ceph-mon,mon=a -oname|sed -e \"s/pod\\///g\"):/tmp/keyring", "oc rsh USD(oc get po -l app=rook-ceph-mon,mon=a -oname)", "ceph-monstore-tool /tmp/monstore get monmap -- --out /tmp/monmap", "monmaptool /tmp/monmap --print", "monmaptool --create --add <mon-a-id> <mon-a-ip> --add <mon-b-id> <mon-b-ip> --add <mon-c-id> <mon-c-ip> --enable-all-features --clobber /root/monmap --fsid <fsid>", "monmaptool /root/monmap --print", "ceph-monstore-tool /tmp/monstore rebuild -- --keyring /tmp/keyring --monmap /root/monmap", "chown -R ceph:ceph /tmp/monstore", "mv /var/lib/ceph/mon/ceph-a/store.db /var/lib/ceph/mon/ceph-a/store.db.corrupted", "mv /var/lib/ceph/mon/ceph-b/store.db /var/lib/ceph/mon/ceph-b/store.db.corrupted", "mv /var/lib/ceph/mon/ceph-c/store.db /var/lib/ceph/mon/ceph-c/store.db.corrupted", "mv /tmp/monstore/store.db /var/lib/ceph/mon/ceph-a/store.db", "chown -R ceph:ceph /var/lib/ceph/mon/ceph-a/store.db", "oc cp USD(oc get po -l app=rook-ceph-mon,mon=a -oname | sed 's/pod\\///g'):/var/lib/ceph/mon/ceph-a/store.db /tmp/store.db", "oc cp /tmp/store.db USD(oc get po -l app=rook-ceph-mon,mon=<id> -oname | sed 's/pod\\///g'):/var/lib/ceph/mon/ceph- <id>", "oc rsh USD(oc get po -l app=rook-ceph-mon,mon= <id> -oname)", "chown -R ceph:ceph /var/lib/ceph/mon/ceph- <id> /store.db", "oc replace --force -f <mon-deployment.yaml>", "oc replace --force -f <osd-deployment.yaml>", "oc replace --force -f <mgr-deployment.yaml>", "oc -n openshift-storage scale deployment rook-ceph-operator --replicas=1", "oc -n openshift-storage scale deployment ocs-operator --replicas=1", "ceph -s", "cluster: id: f111402f-84d1-4e06-9fdb-c27607676e55 health: HEALTH_ERR 1 filesystem is offline 1 filesystem is online with fewer MDS than max_mds 3 daemons have recently crashed services: mon: 3 daemons, 
quorum b,c,a (age 15m) mgr: a(active, since 14m) mds: ocs-storagecluster-cephfilesystem:0 osd: 3 osds: 3 up (since 15m), 3 in (since 2h) data: pools: 3 pools, 96 pgs objects: 500 objects, 1.1 GiB usage: 5.5 GiB used, 295 GiB / 300 GiB avail pgs: 96 active+clean", "noobaa status -n openshift-storage", "oc delete pods <noobaa-operator> -n openshift-storage", "oc delete pods <noobaa-core> -n openshift-storage", "oc delete pods <noobaa-endpoint> -n openshift-storage", "oc delete pods <noobaa-db> -n openshift-storage", "oc delete pods <rgw-pod> -n openshift-storage", "oc get pods <application-pod> -n <namespace> -o yaml | grep nodeName nodeName: node_name", "oc -n openshift-storage scale deployment rook-ceph-operator --replicas=0", "oc -n openshift-storage get deployment rook-ceph-mon-b -o yaml > rook-ceph-mon-b-deployment.yaml", "[...] containers: - args: - --fsid=41a537f2-f282-428e-989f-a9e07be32e47 - --keyring=/etc/ceph/keyring-store/keyring - --log-to-stderr=true - --err-to-stderr=true - --mon-cluster-log-to-stderr=true - '--log-stderr-prefix=debug ' - --default-log-to-file=false - --default-mon-cluster-log-to-file=false - --mon-host=USD(ROOK_CEPH_MON_HOST) - --mon-initial-members=USD(ROOK_CEPH_MON_INITIAL_MEMBERS) - --id=b - --setuser=ceph - --setgroup=ceph - --foreground - --public-addr=10.100.13.242 - --setuser-match-path=/var/lib/ceph/mon/ceph-b/store.db - --public-bind-addr=USD(ROOK_POD_IP) command: - ceph-mon [...]", "ceph-mon --fsid=41a537f2-f282-428e-989f-a9e07be32e47 --keyring=/etc/ceph/keyring-store/keyring --log-to-stderr=true --err-to-stderr=true --mon-cluster-log-to-stderr=true --log-stderr-prefix=debug --default-log-to-file=false --default-mon-cluster-log-to-file=false --mon-host=USDROOK_CEPH_MON_HOST --mon-initial-members=USDROOK_CEPH_MON_INITIAL_MEMBERS --id=b --setuser=ceph --setgroup=ceph --foreground --public-addr=10.100.13.242 --setuser-match-path=/var/lib/ceph/mon/ceph-b/store.db --public-bind-addr=USDROOK_POD_IP", "oc -n openshift-storage patch deployment rook-ceph-mon-b --type='json' -p '[{\"op\":\"remove\", \"path\":\"/spec/template/spec/containers/0/livenessProbe\"}]' oc -n openshift-storage patch deployment rook-ceph-mon-b -p '{\"spec\": {\"template\": {\"spec\": {\"containers\": [{\"name\": \"mon\", \"command\": [\"sleep\", \"infinity\"], \"args\": []}]}}}}'", "oc -n openshift-storage exec -it <mon-pod> bash", "monmap_path=/tmp/monmap", "ceph-mon --fsid=41a537f2-f282-428e-989f-a9e07be32e47 --keyring=/etc/ceph/keyring-store/keyring --log-to-stderr=true --err-to-stderr=true --mon-cluster-log-to-stderr=true --log-stderr-prefix=debug --default-log-to-file=false --default-mon-cluster-log-to-file=false --mon-host=USDROOK_CEPH_MON_HOST --mon-initial-members=USDROOK_CEPH_MON_INITIAL_MEMBERS --id=b --setuser=ceph --setgroup=ceph --foreground --public-addr=10.100.13.242 --setuser-match-path=/var/lib/ceph/mon/ceph-b/store.db --public-bind-addr=USDROOK_POD_IP --extract-monmap=USD{monmap_path}", "monmaptool --print /tmp/monmap", "monmaptool USD{monmap_path} --rm <bad_mon>", "monmaptool USD{monmap_path} --rm a monmaptool USD{monmap_path} --rm c", "ceph-mon --fsid=41a537f2-f282-428e-989f-a9e07be32e47 --keyring=/etc/ceph/keyring-store/keyring --log-to-stderr=true --err-to-stderr=true --mon-cluster-log-to-stderr=true --log-stderr-prefix=debug --default-log-to-file=false --default-mon-cluster-log-to-file=false --mon-host=USDROOK_CEPH_MON_HOST --mon-initial-members=USDROOK_CEPH_MON_INITIAL_MEMBERS --id=b --setuser=ceph --setgroup=ceph --foreground --public-addr=10.100.13.242 
--setuser-match-path=/var/lib/ceph/mon/ceph-b/store.db --public-bind-addr=USDROOK_POD_IP --inject-monmap=USD{monmap_path}", "oc -n openshift-storage edit configmap rook-ceph-mon-endpoints", "data: a=10.100.35.200:6789;b=10.100.13.242:6789;c=10.100.35.12:6789", "data: b=10.100.13.242:6789", "good_mon_id=b", "mon_host=USD(oc -n openshift-storage get svc rook-ceph-mon-b -o jsonpath='{.spec.clusterIP}') oc -n openshift-storage patch secret rook-ceph-config -p '{\"stringData\": {\"mon_host\": \"[v2:'\"USD{mon_host}\"':3300,v1:'\"USD{mon_host}\"':6789]\", \"mon_initial_members\": \"'\"USD{good_mon_id}\"'\"}}'", "oc replace --force -f rook-ceph-mon-b-deployment.yaml", "oc delete deploy <rook-ceph-mon-1> oc delete deploy <rook-ceph-mon-2>", "oc -n openshift-storage scale deployment rook-ceph-operator --replicas=1", "oc patch console.operator cluster -n openshift-storage --type json -p '[{\"op\": \"add\", \"path\": \"/spec/plugins\", \"value\": [\"odf-console\"]}]'", "oc edit storagecluster -n openshift-storage <storagecluster_name>", "oc edit storagecluster -n openshift-storage ocs-storagecluster", "spec: resources: mds: limits: cpu: 2 memory: 8Gi requests: cpu: 2 memory: 8Gi", "oc patch -n openshift-storage storagecluster <storagecluster_name> --type merge --patch '{\"spec\": {\"resources\": {\"mds\": {\"limits\": {\"cpu\": \"2\",\"memory\": \"8Gi\"},\"requests\": {\"cpu\": \"2\",\"memory\": \"8Gi\"}}}}}'", "oc patch -n openshift-storage storagecluster ocs-storagecluster --type merge --patch ' {\"spec\": {\"resources\": {\"mds\": {\"limits\": {\"cpu\": \"2\",\"memory\": \"8Gi\"},\"requests\": {\"cpu\": \"2\",\"memory\": \"8Gi\"}}}}} '", "oc edit storagecluster -n openshift-storage <storagecluster_name> [...] spec: arbiter: {} encryption: kms: {} externalStorage: {} managedResources: cephBlockPools: {} cephCluster: {} cephConfig: {} cephDashboard: {} cephFilesystems: {} cephNonResilientPools: {} cephObjectStoreUsers: {} cephObjectStores: {} cephRBDMirror: {} cephToolbox: {} mirroring: {} multiCloudGateway: disableLoadBalancerService: true <--------------- Add this endpoints: [...]", "GET request for \"odf-console\" plugin failed: Get \"https://odf-console-service.openshift-storage.svc.cluster.local:9001/locales/en/plugin__odf-console.json\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)", "oc adm pod-network make-projects-global openshift-storage" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.18/html-single/troubleshooting_openshift_data_foundation/troubleshooting-and-deleting-remaining-resources-during-uninstall_rhodf
1.2. SystemTap Capabilities
1.2. SystemTap Capabilities Flexibility: SystemTap's framework allows users to develop simple scripts for investigating and monitoring a wide variety of kernel functions, system calls, and other events that occur in kernel-space. With this, SystemTap is not so much a tool as it is a system that allows you to develop your own kernel-specific forensic and monitoring tools. Ease-Of-Use: as mentioned earlier, SystemTap allows users to probe kernel-space events without having to go through the lengthy process of instrumenting, recompiling, installing, and rebooting the kernel. Most of the SystemTap scripts enumerated in Chapter 4, Useful SystemTap Scripts demonstrate system forensics and monitoring capabilities not natively available with other similar tools (such as top , oprofile , or ps ). These scripts are provided to give readers extensive examples of the application of SystemTap, which in turn will educate them further on the capabilities they can employ when writing their own SystemTap scripts.
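As a rough illustration of the kind of simple script described above (a minimal sketch, not taken from this guide), the following one-liner probes a kernel-space event, reports which process triggered it, and then exits:

# print the name of the first process that performs a VFS read, then exit
stap -e 'probe vfs.read { printf("read performed by %s\n", execname()); exit() }'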
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_beginners_guide/intro-systemtap-vs-others
Chapter 3. Insights for Red Hat Enterprise Linux advisor service weekly-report email subscription
Chapter 3. Insights for Red Hat Enterprise Linux advisor service weekly-report email subscription The advisor service Weekly Report email provides a quick view of the health of your environment. The email is informative yet unobtrusive; all of the included information can be consumed at a glance. 3.1. Overview of the advisor service weekly-report email subscription The email is sent every Sunday night (United States Eastern Standard Time, UTC -05:00; Eastern Daylight Time, UTC-4:00) to the email address associated with your individual Red Hat Customer Portal user account. This enables you to begin your week well informed. You are automatically subscribed to the Weekly Report email the first time you visit the advisor service. The email provides the following information about systems registered to your account: Number of critical rule hits in your infrastructure Number of impacted systems in your infrastructure Total number of recommendations impacting your infrastructure 3.2. Unsubscribing/Subscribing to the advisor service Weekly Report email Note You are automatically subscribed to the Weekly Report email the first time you visit the advisor service. Procedure Use the following procedure to unsubscribe or subscribe to the advisor Weekly Report email: Navigate to the Notification Preferences page and log in if necessary. Locate the Red Hat Enterprise Linux heading halfway down the page on the left and click Advisor just under the heading. Under the Reports heading to the right, select or clear the Weekly Report option based on your preferences. Click Save . You will receive the email each Sunday night from the sender, Red Hat Insights [email protected] , with the subject line, Weekly Insights summary report .
null
https://docs.redhat.com/en/documentation/red_hat_insights/1-latest/html/generating_advisor_service_reports_with_fedramp/insights-report-weekly-report-email-subs
Chapter 2. Differences from upstream OpenJDK 17
Chapter 2. Differences from upstream OpenJDK 17 Red Hat build of OpenJDK in Red Hat Enterprise Linux contains a number of structural changes from the upstream distribution of OpenJDK. The Microsoft Windows version of Red Hat build of OpenJDK attempts to follow Red Hat Enterprise Linux updates as closely as possible. The following list details the most notable Red Hat build of OpenJDK 17 changes: FIPS support. Red Hat build of OpenJDK 17 automatically detects whether RHEL is in FIPS mode and automatically configures Red Hat build of OpenJDK 17 to operate in that mode. This change does not apply to Red Hat build of OpenJDK builds for Microsoft Windows. Cryptographic policy support. Red Hat build of OpenJDK 17 obtains the list of enabled cryptographic algorithms and key size constraints from the RHEL system configuration. These configuration components are used by the Transport Layer Security (TLS) encryption protocol, the certificate path validation, and any signed JARs. You can set different security profiles to balance safety and compatibility. This change does not apply to Red Hat build of OpenJDK builds for Microsoft Windows. Red Hat build of OpenJDK on RHEL dynamically links against native libraries such as zlib for archive format support and libjpeg-turbo , libpng , and giflib for image support. RHEL also dynamically links against Harfbuzz and Freetype for font rendering and management. This change does not apply to Red Hat build of OpenJDK builds for Microsoft Windows. The src.zip file includes the source for all of the JAR libraries shipped with Red Hat build of OpenJDK. Red Hat build of OpenJDK on RHEL uses system-wide timezone data files as a source for timezone information. Red Hat build of OpenJDK on RHEL uses system-wide CA certificates. Red Hat build of OpenJDK on Microsoft Windows includes the latest available timezone data from RHEL. Red Hat build of OpenJDK on Microsoft Windows uses the latest available CA certificate from RHEL. Additional resources See, Improve system FIPS detection (RHEL Planning Jira) See, Using system-wide cryptographic policies (RHEL documentation)
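The FIPS and cryptographic-policy behavior described above follows the host configuration. As a hedged sketch (standard RHEL commands, not part of Red Hat build of OpenJDK itself), you can inspect that configuration before starting the JVM:

# check whether the RHEL host is running in FIPS mode
fips-mode-setup --check

# show the active system-wide cryptographic policy that the JDK picks up
update-crypto-policies --show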
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/release_notes_for_red_hat_build_of_openjdk_17.0.5/rn-openjdk-diff-from-upstream
Chapter 3. Specifying dedicated nodes
Chapter 3. Specifying dedicated nodes A Kubernetes cluster runs on top of many virtual machines or nodes (generally anywhere between 2 and 20 nodes). Pods can be scheduled on any of these nodes. When you create or schedule a new pod, use the topology_spread_constraints setting to configure how new pods are distributed across the underlying nodes when scheduled or created. Do not schedule your pods on a single node, because if that node fails, the services that those pods provide also fail. Schedule the control plane pods to run on different nodes from the automation job pods. If the control plane pods share nodes with the job pods, the control plane can become resource starved and degrade the performance of the whole application. 3.1. Assigning pods to specific nodes You can constrain the automation controller pods created by the operator to run on a certain subset of nodes. node_selector and postgres_selector constrain the automation controller pods to run only on the nodes that match all of the specified key/value pairs. tolerations and postgres_tolerations enable the automation controller pods to be scheduled onto nodes with matching taints (a sketch of adding such a taint follows the command listing at the end of this chapter). See Taints and Tolerations in the Kubernetes documentation for further details. The following table shows the settings and fields that can be set on the automation controller's specification section of the YAML (or using the OpenShift UI form). Name Description Default postgres_image Path of the image to pull postgres postgres_image_version Image version to pull 13 node_selector AutomationController pods' nodeSelector '' topology_spread_constraints AutomationController pods' topologySpreadConstraints '' tolerations AutomationController pods' tolerations '' annotations AutomationController pods' annotations '' postgres_selector Postgres pods' nodeSelector '' postgres_tolerations Postgres pods' tolerations '' topology_spread_constraints can help optimize spreading your control plane pods across the compute nodes that match your node selector. For example, setting the maxSkew parameter of this option to 100 means the pods are spread maximally across the available nodes. So if there are three matching compute nodes and three pods, one pod is assigned to each compute node. This parameter helps prevent the control plane pods from competing for resources with each other. Example of a custom configuration for constraining controller pods to specific nodes 3.2. Specify nodes for job execution You can add a node selector to the container group pod specification to ensure that job pods run only against certain nodes. First, add a label to the nodes you want to run jobs against. The following procedure adds a label to a node. Procedure List the nodes in your cluster, along with their labels: kubectl get nodes --show-labels The output is similar to this (shown here in a table): Name Status Roles Age Version Labels worker0 Ready <none> 1d v1.13.0 ... ,kubernetes.io/hostname=worker0 worker1 Ready <none> 1d v1.13.0 ... ,kubernetes.io/hostname=worker1 worker2 Ready <none> 1d v1.13.0 ... ,kubernetes.io/hostname=worker2 Choose one of your nodes, and add a label to it by using the following command: kubectl label nodes <your-node-name> <aap_node_type>=<execution> For example: kubectl label nodes <your-node-name> disktype=ssd where <your-node-name> is the name of your chosen node.
Verify that your chosen node has a disktype=ssd label: kubectl get nodes --show-labels The output is similar to this (shown here in a table): Name Status Roles Age Version Labels worker0 Ready <none> 1d v1.13.0 ... disktype=ssd,kubernetes.io/hostname=worker0 worker1 Ready <none> 1d v1.13.0 ... ,kubernetes.io/hostname=worker1 worker2 Ready <none> 1d v1.13.0 ... ,kubernetes.io/hostname=worker2 You can see that the worker0 node now has a disktype=ssd label. In the automation controller UI, specify that label in the metadata section of your customized pod specification in the container group. apiVersion: v1 kind: Pod metadata: disktype: ssd namespace: ansible-automation-platform spec: serviceAccountName: default automountServiceAccountToken: false nodeSelector: aap_node_type: execution containers: - image: >- registry.redhat.io/ansible-automation-platform-22/ee-supported-rhel8@sha256:d134e198b179d1b21d3f067d745dd1a8e28167235c312cdc233860410ea3ec3e name: worker args: - ansible-runner - worker - '--private-data-dir=/runner' resources: requests: cpu: 250m memory: 100Mi Extra settings With extra_settings , you can pass many custom settings by using the awx-operator. The parameter extra_settings is appended to /etc/tower/settings.py and can be an alternative to the extra_volumes parameter. Name Description Default extra_settings Extra settings '' Example configuration of extra_settings parameter 3.3. Custom pod timeouts A container group job in automation controller transitions to the running state just before submitting the pod to the Kubernetes API. Automation controller then expects the pod to enter the Running state before AWX_CONTAINER_GROUP_POD_PENDING_TIMEOUT seconds have elapsed. You can set AWX_CONTAINER_GROUP_POD_PENDING_TIMEOUT to a higher value if you want automation controller to wait for longer before canceling jobs that fail to enter the Running state. AWX_CONTAINER_GROUP_POD_PENDING_TIMEOUT is how long automation controller waits from creation of a pod until the Ansible work begins in the pod. You can also extend the time if the pod cannot be scheduled because of resource constraints. You can do this by using extra_settings on the automation controller specification (see the sketch after Section 3.4). The default value is two hours. This is used if you are consistently launching many more jobs than Kubernetes can schedule, and jobs are spending periods longer than AWX_CONTAINER_GROUP_POD_PENDING_TIMEOUT in pending . Jobs are not launched until control capacity is available. If many more jobs are being launched than the container group has capacity to run, consider scaling up your Kubernetes worker nodes. 3.4. Jobs scheduled on the worker nodes Both automation controller and Kubernetes play a role in scheduling a job. When a job is launched, its dependencies are fulfilled, meaning any project updates or inventory updates are launched by automation controller as required by the job template, project, and inventory settings. If the job is not blocked by other business logic in automation controller and there is control capacity in the control plane to start the job, the job is submitted to the dispatcher. The default setting for the "cost" to control a job is 1 capacity . So, a control pod with 100 capacity is able to control up to 100 jobs at a time. Given control capacity, the job transitions from pending to waiting . The dispatcher, which is a background process in the control plane pod, starts a worker process to run the job.
This worker process communicates with the Kubernetes API using a service account associated with the container group and uses the pod specification as defined on the container group in automation controller to provision the pod. The job status in automation controller is shown as running . Kubernetes now schedules the pod. A pod can remain in the pending state for up to AWX_CONTAINER_GROUP_POD_PENDING_TIMEOUT . If the pod is denied through a ResourceQuota , the job starts over at pending . You can configure a resource quota on a namespace to limit how many resources may be consumed by pods in the namespace. For further information about ResourceQuotas, see Resource Quotas .
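Two hedged sketches related to the behavior described in Sections 3.3 and 3.4 above. The first raises the pod pending timeout through extra_settings (the format mirrors the extra_settings example included with this entry; the four-hour value is only an illustration). The second is a standard Kubernetes ResourceQuota of the kind that can cause job pods to be denied and start over at pending (the name and limits are placeholders):

spec:
  extra_settings:
    - setting: AWX_CONTAINER_GROUP_POD_PENDING_TIMEOUT
      value: "14400"   # seconds; the default described above is two hours

---
apiVersion: v1
kind: ResourceQuota
metadata:
  name: job-pod-quota
  namespace: ansible-automation-platform
spec:
  hard:
    pods: "20"             # maximum number of pods in the namespace
    requests.cpu: "10"     # total CPU the pods may request
    requests.memory: 20Gi  # total memory the pods may request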
[ "spec: node_selector: | disktype: ssd kubernetes.io/arch: amd64 kubernetes.io/os: linux topology_spread_constraints: | - maxSkew: 100 topologyKey: \"topology.kubernetes.io/zone\" whenUnsatisfiable: \"ScheduleAnyway\" labelSelector: matchLabels: app.kubernetes.io/name: \"<resourcename>\" tolerations: | - key: \"dedicated\" operator: \"Equal\" value: \"AutomationController\" effect: \"NoSchedule\" postgres_selector: | disktype: ssd kubernetes.io/arch: amd64 kubernetes.io/os: linux postgres_tolerations: | - key: \"dedicated\" operator: \"Equal\" value: \"AutomationController\" effect: \"NoSchedule\"", "get nodes --show-labels", "label nodes <your-node-name> <aap_node_type>=<execution>", "label nodes <your-node-name> disktype=ssd", "get nodes --show-labels", "apiVersion: v1 kind: Pod metadata: disktype: ssd namespace: ansible-automation-platform spec: serviceAccountName: default automountServiceAccountToken: false nodeSelector: aap_node_type: execution containers: - image: >- registry.redhat.io/ansible-automation-platform-22/ee-supported-rhel8@sha256:d134e198b179d1b21d3f067d745dd1a8e28167235c312cdc233860410ea3ec3e name: worker args: - ansible-runner - worker - '--private-data-dir=/runner' resources: requests: cpu: 250m memory: 100Mi", "spec: extra_settings: - setting: MAX_PAGE_SIZE value: \"500\" - setting: AUTH_LDAP_BIND_DN value: \"cn=admin,dc=example,dc=com\" - setting: SYSTEM_TASK_ABS_MEM value: \"500\"" ]
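The tolerations in the configuration listing above (key dedicated , value AutomationController , effect NoSchedule ) only take effect if the target nodes carry a matching taint. A minimal sketch of adding that taint (the node name is a placeholder):

# taint a node so that only pods tolerating dedicated=AutomationController are scheduled on it
kubectl taint nodes <your-node-name> dedicated=AutomationController:NoSchedule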
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/performance_considerations_for_operator_environments/assembly-specify-dedicted-nodes
Chapter 1. Red Hat Ansible Automation Platform installation overview
Chapter 1. Red Hat Ansible Automation Platform installation overview The Red Hat Ansible Automation Platform installation program offers you flexibility, allowing you to install Ansible Automation Platform by using a number of supported installation scenarios. Starting with Ansible Automation Platform 2.4, the installation scenarios include the optional deployment of Event-Driven Ansible controller, which introduces the automated resolution of IT requests. Regardless of the installation scenario you choose, installing Ansible Automation Platform involves the following steps: Editing the Red Hat Ansible Automation Platform installer inventory file The Ansible Automation Platform installer inventory file allows you to specify your installation scenario and describe host deployments to Ansible. The examples provided in this document show the parameter specifications needed to install that scenario for your deployment. Running the Red Hat Ansible Automation Platform installer setup script The setup script installs your private automation hub by using the required parameters defined in the inventory file. Verifying automation controller installation After installing Ansible Automation Platform, you can verify that the installation has been successful by logging in to the automation controller. Verifying automation hub installation After installing Ansible Automation Platform, you can verify that the installation has been successful by logging in to the automation hub. Verifying Event-Driven Ansible controller installation After installing Ansible Automation Platform, you can verify that the installation has been successful by logging in to the Event-Driven Ansible controller. Additional resources For more information about the supported installation scenarios, see the Red Hat Ansible Automation Platform Planning Guide . 1.1. Prerequisites You chose and obtained a platform installer from the Red Hat Ansible Automation Platform Product Software . You are installing on a machine that meets base system requirements. You have updated all of the packages on your RHEL nodes to the most recent versions. Warning To prevent errors, upgrade your RHEL nodes fully before installing Ansible Automation Platform. You have created a Red Hat Registry Service Account by using the instructions in Creating Registry Service Accounts . Additional resources For more information about obtaining a platform installer or system requirements, see the Red Hat Ansible Automation Platform system requirements in the Red Hat Ansible Automation Platform Planning Guide .
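A hedged sketch of the edit-and-run flow described above (the bundle file name is an assumption based on typical installer bundle naming; only the setup script itself is named in the text):

# extract the downloaded installer, edit the inventory file it contains, then run the setup script as root
tar xvzf ansible-automation-platform-setup-bundle-<version>.tar.gz
cd ansible-automation-platform-setup-bundle-<version>
# ... edit ./inventory to match your chosen installation scenario ...
sudo ./setup.sh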
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.4/html/red_hat_ansible_automation_platform_installation_guide/assembly-platform-install-overview
Chapter 1. Introduction to AMQ Broker on OpenShift Container Platform
Chapter 1. Introduction to AMQ Broker on OpenShift Container Platform Red Hat AMQ Broker 7.8 is available as a containerized image for use with OpenShift Container Platform (OCP) 3.11, 4.5, and 4.6. AMQ Broker is based on Apache ActiveMQ Artemis. It provides a message broker that is JMS-compliant. After you have set up the initial broker pod, you can quickly deploy duplicates by using OpenShift Container Platform features. 1.1. Version compatibility and support For details about OpenShift Container Platform image version compatibility, see: OpenShift and Atomic Platform 3.x Tested Integrations OpenShift Container Platform 4.x Tested Integrations 1.2. Unsupported features Master-slave-based high availability High availability (HA) achieved by configuring master and slave pairs is not supported. Instead, when pods are scaled down, HA is provided in OpenShift by using the scaledown controller, which enables message migration. External clients that connect to a cluster of brokers, either through the OpenShift proxy or by using bind ports, may need to be configured for HA accordingly. In a clustered scenario, a broker will inform certain clients of the host and port information of all brokers in the cluster. Since these are only accessible internally, certain client features either will not work or will need to be disabled. Client Configuration Core JMS Client Because external Core Protocol JMS clients do not support HA or any type of failover, the connection factories must be configured with useTopologyForLoadBalancing=false . AMQP Clients AMQP clients do not support failover lists. Durable subscriptions in a cluster When a durable subscription is created, this is represented as a durable queue on the broker to which a client has connected. When a cluster is running within OpenShift, the client does not know on which broker the durable subscription queue has been created. If the subscription is durable and the client reconnects, there is currently no method for the load balancer to reconnect it to the same node. When this happens, it is possible that the client will connect to a different broker and create a duplicate subscription queue. For this reason, using durable subscriptions with a cluster of brokers is not recommended.
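For example, an external Core Protocol JMS client would typically disable topology-based load balancing directly in its connection URL; the host name and port below are placeholders for your route or bind port:
tcp://broker.example.com:61616?useTopologyForLoadBalancing=false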
null
https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/deploying_amq_broker_on_openshift/con_br-intro-to-broker-on-ocp-broker-ocp
Chapter 1. Clair security scanner
Chapter 1. Clair security scanner Clair v4 (Clair) is an open source application that leverages static code analysis for parsing image content and reporting vulnerabilities affecting the content. Clair is packaged with Red Hat Quay and can be used in both standalone and Operator deployments. It can be run in highly scalable configurations, where components can be scaled separately as appropriate for enterprise environments. 1.1. About Clair Clair uses Common Vulnerability Scoring System (CVSS) data from the National Vulnerability Database (NVD), a United States government repository of security-related information, including known vulnerabilities and security issues in various software components and systems, to enrich vulnerability data. Using scores from the NVD provides Clair the following benefits: Data synchronization . Clair can periodically synchronize its vulnerability database with the NVD. This ensures that it has the latest vulnerability data. Matching and enrichment . Clair compares the metadata and identifiers of vulnerabilities it discovers in container images with the data from the NVD. This process involves matching the unique identifiers, such as Common Vulnerabilities and Exposures (CVE) IDs, to the entries in the NVD. When a match is found, Clair can enrich its vulnerability information with additional details from NVD, such as severity scores, descriptions, and references. Severity Scores . The NVD assigns severity scores to vulnerabilities, such as the Common Vulnerability Scoring System (CVSS) score, to indicate the potential impact and risk associated with each vulnerability. By incorporating NVD's severity scores, Clair can provide more context on the seriousness of the vulnerabilities it detects. If Clair finds vulnerabilities from NVD, a detailed and standardized assessment of the severity and potential impact of vulnerabilities detected within container images is reported to users on the UI. CVSS enrichment data provides Clair the following benefits: Vulnerability prioritization . By utilizing CVSS scores, users can prioritize vulnerabilities based on their severity, helping them address the most critical issues first. Assess Risk . CVSS scores can help Clair users understand the potential risk a vulnerability poses to their containerized applications. Communicate Severity . CVSS scores provide Clair users a standardized way to communicate the severity of vulnerabilities across teams and organizations. Inform Remediation Strategies . CVSS enrichment data can guide Quay.io users in developing appropriate remediation strategies. Compliance and Reporting . Integrating CVSS data into reports generated by Clair can help organizations demonstrate their commitment to addressing security vulnerabilities and complying with industry standards and regulations. 1.1.1. Clair releases New versions of Clair are regularly released. The source code needed to build Clair is packaged as an archive and attached to each release. Clair releases can be found at Clair releases . Release artifacts also include the clairctl command line interface tool, which obtains updater data from the internet by using an open host. Clair 4.8 Clair 4.8 was released on 2024-10-28. The following changes have been made: Clair on Red Hat Quay now requires that you update the Clair PostgreSQL database from version 13 to version 15. For more information about this procedure, see Upgrading the Clair PostgreSQL database . 
This release deprecates the updaters that rely on the Red Hat OVAL v2 security data in favor of the Red Hat VEX data. This change includes a database migration to delete all the vulnerabilities that originated from the OVAL v2 feeds. Because of this, there could be intermittent downtime in production environments before the VEX updaters complete for the first time, when no vulnerabilities exist. 1.1.1.1. Clair 4.8.0 known issues When pushing SUSE Enterprise Linux images with HIGH image vulnerabilities, Clair 4.8.0 does not report these vulnerabilities. This is a known issue and will be fixed in a future version of Red Hat Quay. Clair 4.7.4 Clair 4.7.4 was released on 2024-05-01. The following changes have been made: The default layer download location has changed. For more information, see Disk usage considerations . Clair 4.7.3 Clair 4.7.3 was released on 2024-02-26. The following changes have been made: The minimum TLS version for Clair is now 1.2. Previously, servers allowed for 1.1 connections. Clair 4.7.2 Clair 4.7.2 was released on 2023-10-09. The following changes have been made: CRDA support has been removed. Clair 4.7.1 Clair 4.7.1 was released as part of Red Hat Quay 3.9.1. The following changes have been made: With this release, you can view unpatched vulnerabilities from Red Hat Enterprise Linux (RHEL) sources. If you want to view unpatched vulnerabilities, you can set the ignore_unpatched parameter to false . For example: updaters: config: rhel: ignore_unpatched: false To disable this feature, you can set ignore_unpatched to true . Clair 4.7 Clair 4.7 was released as part of Red Hat Quay 3.9, and includes support for the following features: Native support for indexing Golang modules and RubyGems in container images. Change to OSV.dev as the vulnerability database source for any programming language package managers. This includes popular sources like GitHub Security Advisories or PyPA. This allows offline capability. Use of pyup.io for Python and CRDA for Java is suspended. Clair now supports Java, Golang, Python, and Ruby dependencies. 1.1.2. Clair vulnerability databases Clair uses the following vulnerability databases to report issues in your images: Ubuntu Oval database Debian Security Tracker Red Hat Enterprise Linux (RHEL) Oval database SUSE Oval database Oracle Oval database Alpine SecDB database VMware Photon OS database Amazon Web Services (AWS) UpdateInfo Open Source Vulnerability (OSV) Database 1.1.3. Clair supported dependencies Clair supports identifying and managing the following dependencies: Java Golang Python Ruby This means that it can analyze and report on the third-party libraries and packages that a project in these languages relies on to work correctly. When an image that contains packages from a language unsupported by Clair is pushed to your repository, a vulnerability scan cannot be performed on those packages. Users do not receive an analysis or security report for unsupported dependencies or packages. As a result, the following consequences should be considered: Security risks . Dependencies or packages that are not scanned for vulnerabilities might pose security risks to your organization. Compliance issues . If your organization has specific security or compliance requirements, unscanned, or partially scanned, container images might result in non-compliance with certain regulations. Note Scanned images are indexed, and a vulnerability report is created, but it might omit data from certain unsupported languages. 
For example, if your container image contains a Lua application, feedback from Clair is not provided because Clair does not detect it. It can detect other languages used in the container image, and shows detected CVEs for those languages. As a result, container images are fully scanned based on what is supported by Clair. 1.1.4. Clair containers Official downstream Clair containers bundled with Red Hat Quay can be found on the Red Hat Ecosystem Catalog . Official upstream containers are packaged and released under the Clair project on Quay.io . The latest tag tracks the Git development branch. Version tags are built from the corresponding release. 1.2. Clair severity mapping Clair offers a comprehensive approach to vulnerability assessment and management. One of its essential features is the normalization of security databases' severity strings. This process streamlines the assessment of vulnerability severities by mapping them to a predefined set of values. Through this mapping, clients can efficiently react to vulnerability severities without the need to decipher the intricacies of each security database's unique severity strings. These mapped severity strings align with those found within the respective security databases, ensuring consistency and accuracy in vulnerability assessment. 1.2.1. Clair severity strings Clair alerts users with the following severity strings: Unknown Negligible Low Medium High Critical These severity strings are similar to the strings found within the relevant security database. Alpine mapping Alpine SecDB database does not provide severity information. All vulnerability severities will be Unknown. Alpine Severity Clair Severity * Unknown AWS mapping AWS UpdateInfo database provides severity information. AWS Severity Clair Severity low Low medium Medium important High critical Critical Debian mapping Debian Oval database provides severity information. Debian Severity Clair Severity * Unknown Unimportant Low Low Medium Medium High High Critical Oracle mapping Oracle Oval database provides severity information. Oracle Severity Clair Severity N/A Unknown LOW Low MODERATE Medium IMPORTANT High CRITICAL Critical RHEL mapping RHEL Oval database provides severity information. RHEL Severity Clair Severity None Unknown Low Low Moderate Medium Important High Critical Critical SUSE mapping SUSE Oval database provides severity information. Severity Clair Severity None Unknown Low Low Moderate Medium Important High Critical Critical Ubuntu mapping Ubuntu Oval database provides severity information. Severity Clair Severity Untriaged Unknown Negligible Negligible Low Low Medium Medium High High Critical Critical OSV mapping Table 1.1. CVSSv3 Base Score Clair Severity 0.0 Negligible 0.1-3.9 Low 4.0-6.9 Medium 7.0-8.9 High 9.0-10.0 Critical Table 1.2. CVSSv2 Base Score Clair Severity 0.0-3.9 Low 4.0-6.9 Medium 7.0-10 High
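As a rough illustration of how the OSV CVSSv3 mapping in Table 1.1 can be applied, the following sketch (not Clair's actual code) converts a CVSSv3 base score into the normalized severity strings listed above:
// Illustrative only: mirrors Table 1.1, not Clair's implementation.
static String severityFromCvssV3(double baseScore) {
    if (baseScore == 0.0) return "Negligible";
    if (baseScore <= 3.9) return "Low";
    if (baseScore <= 6.9) return "Medium";
    if (baseScore <= 8.9) return "High";
    return "Critical";
}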
[ "updaters: config: rhel: ignore_unpatched: false" ]
https://docs.redhat.com/en/documentation/red_hat_quay/3.13/html/vulnerability_reporting_with_clair_on_red_hat_quay/clair-vulnerability-scanner
14.22.8. Listing Information about a Virtual Network
14.22.8. Listing Information about a Virtual Network Returns the list of active networks. If --all is specified, this will also include defined but inactive networks; if --inactive is specified, only the inactive ones will be listed. You may also want to filter the returned networks by --persistent to list the persistent ones, --transient to list the transient ones, --autostart to list the ones with autostart enabled, and --no-autostart to list the ones with autostart disabled. Note: When talking to older servers, this command is forced to use a series of API calls with an inherent race, where a network might not be listed or might appear more than once if it changed state between calls while the list was being collected. Newer servers do not have this problem. To list the virtual networks, run:
[ "net-list [ --inactive | --all ] [ --persistent ] [< --transient >] [--autostart] [< --no-autostart >]" ]
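For example, listing every defined network with virsh might look like the following; the output columns can vary slightly between libvirt versions:
# virsh net-list --all
 Name      State    Autostart   Persistent
--------------------------------------------
 default   active   yes         yes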
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sub-sect-virtual_networking_commands-listing_information_about_a_virtual_network
Chapter 1. Upgrading overview
Chapter 1. Upgrading overview Review prerequisites and available upgrade paths below before upgrading your current Red Hat Satellite installation to Red Hat Satellite 6.15. For interactive upgrade instructions, you can also use the Red Hat Satellite Upgrade Helper on the Red Hat Customer Portal. This application provides you with an exact guide to match your current version number. You can find instructions that are specific to your upgrade path, as well as steps to prevent known issues. For more information, see Satellite Upgrade Helper on the Red Hat Customer Portal. Note that you can upgrade Capsules separately from Satellite. For more information, see Section 1.3, "Upgrading Capsules separately from Satellite" . 1.1. Prerequisites Upgrading to Satellite 6.15 affects your entire Satellite infrastructure. Before proceeding, complete the following: Read the Red Hat Satellite 6.15 Release Notes . Plan your upgrade path. For more information, see Section 1.2, "Upgrade paths" . Plan for the required downtime. Satellite services are shut down during the upgrade. The upgrade process duration might vary depending on your hardware configuration, network speed, and the amount of data that is stored on the server. Upgrading Satellite takes approximately 1 - 2 hours. Upgrading Capsule takes approximately 10 - 30 minutes. Ensure that you have sufficient storage space on your server. For more information, see Preparing your Environment for Installation in Installing Satellite Server in a connected network environment and Preparing your Environment for Installation in Installing Capsule Server . Back up your Satellite Server and all Capsule Servers. For more information, see Backing Up Satellite Server and Capsule Server in Administering Red Hat Satellite . Plan for updating any scripts you use that contain Satellite API commands because some API commands differ between versions of Satellite. Ensure that all Satellite Servers are on the same version. Warning If you customize configuration files, manually or using a tool such as Hiera, these changes are overwritten when the maintenance script runs during upgrading or updating. You can use the --noop option with the satellite-installer to test for changes. For more information, see the Red Hat Knowledgebase solution How to use the noop option to check for changes in Satellite config files during an upgrade. 1.2. Upgrade paths You can upgrade to Red Hat Satellite 6.15 from Red Hat Satellite 6.14. Satellite Servers and Capsule Servers on earlier versions must first be upgraded to Satellite 6.14. For more information, see Upgrading Red Hat Satellite to 6.14 . High-level upgrade steps The high-level steps in upgrading Satellite to 6.15 are as follows: Upgrade Satellite Server to 6.15. For more information, see Section 2.1, "Satellite Server upgrade considerations" . Upgrade all Capsule Servers to 6.15. For more information, see Section 2.5, "Upgrading Capsule Servers" . 1.3. Upgrading Capsules separately from Satellite You can upgrade Satellite to version 6.15 and keep Capsules at version 6.14 until you have the capacity to upgrade them too. All the functionality that worked previously works on 6.14 Capsules. However, the functionality added in the 6.15 release will not work until you upgrade Capsules to 6.15. Upgrading Capsules after upgrading Satellite can be useful in the following example scenarios: If you want to have several smaller outage windows instead of one larger window. 
If Capsules in your organization are managed by several teams and are located in different locations. If you use a load-balanced configuration, you can upgrade one load-balanced Capsule and keep other load-balanced Capsules at one version lower. This allows you to upgrade all Capsules one after another without any outage. 1.4. Following the progress of the upgrade Because of the lengthy upgrade time, use a utility such as tmux to suspend and reattach a communication session. You can then check the upgrade progress without staying connected to the command shell continuously. For more information, see the tmux manual page. If you lose connection to the command shell where the upgrade command is running you can see the logs in /var/log/foreman-installer/satellite.log to check if the process completed successfully.
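For example, you might start the upgrade inside a tmux session and follow the installer log from another shell; the upgrade command shown here is only a sketch, and the exact command and target version for your path are given in the upgrade chapters:
tmux new-session -s satellite-upgrade
# inside the tmux session (command is illustrative for a 6.14 to 6.15 path):
satellite-maintain upgrade run --target-version 6.15
# detach with Ctrl+b d and reattach later:
tmux attach -t satellite-upgrade
# follow installer progress from another shell:
tail -f /var/log/foreman-installer/satellite.log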
null
https://docs.redhat.com/en/documentation/red_hat_satellite/6.15/html/upgrading_connected_red_hat_satellite_to_6.15/upgrading_overview_upgrading-connected
Red Hat Data Grid
Red Hat Data Grid Data Grid is a high-performance, distributed in-memory data store. Schemaless data structure Flexibility to store different objects as key-value pairs. Grid-based data storage Designed to distribute and replicate data across clusters. Elastic scaling Dynamically adjust the number of nodes to meet demand without service disruption. Data interoperability Store, retrieve, and query data in the grid from different endpoints.
null
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/using_the_memcached_protocol_endpoint_with_data_grid/red-hat-data-grid
Part I. New Features
Part I. New Features This part documents new features and major enhancements introduced in Red Hat Enterprise Linux 7.5.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.5_release_notes/new-features
33.3. Boot Loader Options
33.3. Boot Loader Options Figure 33.3. Boot Loader Options Please note that this screen will be disabled if you have specified a target architecture other than x86 / x86_64. GRUB is the default boot loader for Red Hat Enterprise Linux on x86 / x86_64 architectures. If you do not want to install a boot loader, select Do not install a boot loader . If you choose not to install a boot loader, make sure you create a boot diskette or have another way to boot your system, such as a third-party boot loader. You must choose where to install the boot loader (the Master Boot Record or the first sector of the /boot partition). Install the boot loader on the MBR if you plan to use it as your boot loader. To pass any special parameters to the kernel to be used when the system boots, enter them in the Kernel parameters text field. For example, if you have an IDE CD-ROM Writer, you can tell the kernel to use the SCSI emulation driver that must be loaded before using cdrecord by configuring hdd=ide-scsi as a kernel parameter (where hdd is the CD-ROM device). You can password protect the GRUB boot loader by configuring a GRUB password. Select Use GRUB password , and enter a password in the Password field. Type the same password in the Confirm Password text field. To save the password as an encrypted password in the file, select Encrypt GRUB password . If the encryption option is selected, when the file is saved, the plain text password that you typed is encrypted and written to the kickstart file. If the password you typed was already encrypted, uncheck the encryption option. Important It is highly recommended to set up a boot loader password on every machine. An unprotected boot loader can allow a potential attacker to modify the system's boot options and gain access to the system. See the chapter titled Workstation Security in the Red Hat Enterprise Linux Security Guide for more information on boot loader passwords and password security in general. If Upgrade an existing installation is selected on the Installation Method page, select Upgrade existing boot loader to upgrade the existing boot loader configuration, while preserving the old entries.
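A hedged sketch of the kickstart bootloader directive that these options generate is shown below; the kernel parameter and the encrypted password value are placeholders:
# Install GRUB to the MBR, pass a kernel parameter, and set an encrypted GRUB password.
bootloader --location=mbr --append="hdd=ide-scsi" --md5pass=<encrypted-password>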
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/s1-redhat-config-kickstart-bootloader
19.4. Near Caches in a Clustered Environment
19.4. Near Caches in a Clustered Environment Near caches are implemented using Hot Rod Remote Events, and utilize clustered listeners for receiving events from across the cluster. Clustered listeners are installed on a single node within the cluster, with the remaining nodes sending events to the node on which the listeners are installed. It is therefore possible for a node running the near cache-backing clustered listener to fail. In this situation, another node takes over the clustered listener. When the node running the clustered listener fails, a client failover event callback can be defined and invoked. For near caches, this callback and its implementation will clear the near cache, as during a failover events may be missed. Refer to Section 7.5, "Clustered Listeners" for more information.
null
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/developer_guide/near_caches_in_a_clustered_environment
Chapter 2. Support
Chapter 2. Support Only the configuration options described in this documentation are supported for logging. Do not use any other configuration options, as they are unsupported. Configuration paradigms might change across OpenShift Dedicated releases, and such cases can only be handled gracefully if all configuration possibilities are controlled. If you use configurations other than those described in this documentation, your changes will be overwritten, because Operators are designed to reconcile any differences. Note If you must perform configurations not described in the OpenShift Dedicated documentation, you must set your Red Hat OpenShift Logging Operator to Unmanaged . An unmanaged logging instance is not supported and does not receive updates until you return its status to Managed . Note Logging is provided as an installable component, with a distinct release cycle from the core OpenShift Dedicated. The Red Hat OpenShift Container Platform Life Cycle Policy outlines release compatibility. Logging for Red Hat OpenShift is an opinionated collector and normalizer of application, infrastructure, and audit logs. It is intended to be used for forwarding logs to various supported systems. Logging is not: A high scale log collection system Security Information and Event Monitoring (SIEM) compliant Historical or long term log retention or storage A guaranteed log sink Secure storage - audit logs are not stored by default 2.1. Supported API custom resource definitions LokiStack development is ongoing. Not all APIs are currently supported. Table 2.1. Loki API support states CustomResourceDefinition (CRD) ApiVersion Support state LokiStack lokistack.loki.grafana.com/v1 Supported in 5.5 RulerConfig rulerconfig.loki.grafana/v1 Supported in 5.7 AlertingRule alertingrule.loki.grafana/v1 Supported in 5.7 RecordingRule recordingrule.loki.grafana/v1 Supported in 5.7 2.2. Unsupported configurations You must set the Red Hat OpenShift Logging Operator to the Unmanaged state to modify the following components: The Elasticsearch custom resource (CR) The Kibana deployment The fluent.conf file The Fluentd daemon set You must set the OpenShift Elasticsearch Operator to the Unmanaged state to modify the Elasticsearch deployment files. Explicitly unsupported cases include: Configuring default log rotation . You cannot modify the default log rotation configuration. Configuring the collected log location . You cannot change the location of the log collector output file, which by default is /var/log/fluentd/fluentd.log . Throttling log collection . You cannot throttle down the rate at which the logs are read in by the log collector. Configuring the logging collector using environment variables . You cannot use environment variables to modify the log collector. Configuring how the log collector normalizes logs . You cannot modify default log normalization. 2.3. Support policy for unmanaged Operators The management state of an Operator determines whether an Operator is actively managing the resources for its related component in the cluster as designed. If an Operator is set to an unmanaged state, it does not respond to changes in configuration nor does it receive updates. While this can be helpful in non-production clusters or during debugging, Operators in an unmanaged state are unsupported and the cluster administrator assumes full control of the individual component configurations and upgrades. 
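For example, for the Red Hat OpenShift Logging Operator this is expressed through the managementState field of the custom resource it manages; the following is a minimal sketch, assuming the usual instance name and the openshift-logging namespace:
apiVersion: logging.openshift.io/v1
kind: ClusterLogging
metadata:
  name: instance
  namespace: openshift-logging
spec:
  managementState: Unmanaged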
An Operator can be set to an unmanaged state using the following methods: Individual Operator configuration Individual Operators have a managementState parameter in their configuration. This can be accessed in different ways, depending on the Operator. For example, the Red Hat OpenShift Logging Operator accomplishes this by modifying a custom resource (CR) that it manages, while the Cluster Samples Operator uses a cluster-wide configuration resource. Changing the managementState parameter to Unmanaged means that the Operator is not actively managing its resources and will take no action related to the related component. Some Operators might not support this management state as it might damage the cluster and require manual recovery. Warning Changing individual Operators to the Unmanaged state renders that particular component and functionality unsupported. Reported issues must be reproduced in Managed state for support to proceed. Cluster Version Operator (CVO) overrides The spec.overrides parameter can be added to the CVO's configuration to allow administrators to provide a list of overrides to the CVO's behavior for a component. Setting the spec.overrides[].unmanaged parameter to true for a component blocks cluster upgrades and alerts the administrator after a CVO override has been set: Disabling ownership via cluster version overrides prevents upgrades. Please remove overrides before continuing. Warning Setting a CVO override puts the entire cluster in an unsupported state. Reported issues must be reproduced after removing any overrides for support to proceed. 2.4. Collecting logging data for Red Hat Support When opening a support case, it is helpful to provide debugging information about your cluster to Red Hat Support. You can use the must-gather tool to collect diagnostic information for project-level resources, cluster-level resources, and each of the logging components. For prompt support, supply diagnostic information for both OpenShift Dedicated and logging. Note Do not use the hack/logging-dump.sh script. The script is no longer supported and does not collect data. 2.4.1. About the must-gather tool The oc adm must-gather CLI command collects the information from your cluster that is most likely needed for debugging issues. For your logging, must-gather collects the following information: Project-level resources, including pods, configuration maps, service accounts, roles, role bindings, and events at the project level Cluster-level resources, including nodes, roles, and role bindings at the cluster level OpenShift Logging resources in the openshift-logging and openshift-operators-redhat namespaces, including health status for the log collector, the log store, and the log visualizer When you run oc adm must-gather , a new pod is created on the cluster. The data is collected on that pod and saved in a new directory that starts with must-gather.local . This directory is created in the current working directory. 2.4.2. Collecting logging data You can use the oc adm must-gather CLI command to collect information about logging. Procedure To collect logging information with must-gather : Navigate to the directory where you want to store the must-gather information. 
Run the oc adm must-gather command against the logging image: USD oc adm must-gather --image=USD(oc -n openshift-logging get deployment.apps/cluster-logging-operator -o jsonpath='{.spec.template.spec.containers[?(@.name == "cluster-logging-operator")].image}') The must-gather tool creates a new directory that starts with must-gather.local within the current directory. For example: must-gather.local.4157245944708210408 . Create a compressed file from the must-gather directory that was just created. For example, on a computer that uses a Linux operating system, run the following command: USD tar -cvaf must-gather.tar.gz must-gather.local.4157245944708210408 Attach the compressed file to your support case on the Red Hat Customer Portal .
[ "Disabling ownership via cluster version overrides prevents upgrades. Please remove overrides before continuing.", "oc adm must-gather --image=USD(oc -n openshift-logging get deployment.apps/cluster-logging-operator -o jsonpath='{.spec.template.spec.containers[?(@.name == \"cluster-logging-operator\")].image}')", "tar -cvaf must-gather.tar.gz must-gather.local.4157245944708210408" ]
https://docs.redhat.com/en/documentation/openshift_dedicated/4/html/logging/support
Overview
Overview Red Hat Service Interconnect 1.8 Key features and supported configurations
null
https://docs.redhat.com/en/documentation/red_hat_service_interconnect/1.8/html/overview/index
Chapter 17. Configuring the Web Server (Undertow)
Chapter 17. Configuring the Web Server (Undertow) 17.1. Undertow Subsystem Overview Important In JBoss EAP 7, the undertow subsystem takes the place of the web subsystem from JBoss EAP 6. The undertow subsystem allows you to configure the web server and servlet container settings. It implements the Jakarta Servlet 4.0 Specification as well as websockets. It also supports HTTP upgrade and using high performance non-blocking handlers in servlet deployments. The undertow subsystem also has the ability to act as a high performance reverse proxy which supports mod_cluster. Within the undertow subsystem, there are five main components to configure: buffer caches server servlet container handlers filters Note While JBoss EAP does offer the ability to update the configuration for each of these components, the default configuration is suitable for most use cases and provides reasonable performance settings. Default Undertow Subsystem Configuration <subsystem xmlns="urn:jboss:domain:undertow:10.0" default-server="default-server" default-virtual-host="default-host" default-servlet-container="default" default-security-domain="other"> <buffer-cache name="default"/> <server name="default-server"> <http-listener name="default" socket-binding="http" redirect-socket="https" enable-http2="true"/> <https-listener name="https" socket-binding="https" security-realm="ApplicationRealm" enable-http2="true"/> <host name="default-host" alias="localhost"> <location name="/" handler="welcome-content"/> <http-invoker security-realm="ApplicationRealm"/> </host> </server> <servlet-container name="default"> <jsp-config/> <websockets/> </servlet-container> <handlers> <file name="welcome-content" path="USD{jboss.home.dir}/welcome-content"/> </handlers> </subsystem> Important The undertow subsystem also relies on the io subsystem to provide XNIO workers and buffer pools. The io subsystem is configured separately and provides a default configuration which should give optimal performance in most cases. Note Compared to the web subsystem in JBoss EAP 6, the undertow subsystem in JBoss EAP 7 has different default behaviors of HTTP methods . Using Elytron with Undertow Subsystem As a web application is deployed, the name of the security domain required by that application will be identified. This will either be from within the deployment or if the deployment does not have a security domain, the default-security-domain as defined on the undertow subsystem will be assumed. By default it is assumed that the security domain maps to a PicketBox defined in the legacy security subsystem. However, an application-security-domain resource can be added to the undertow subsystem which maps from the name of the security domain required by the application to the appropriate Elytron configuration. Example: Adding a Mapping. The addition of mapping is successful if the result is: <subsystem xmlns="urn:jboss:domain:undertow:10.0" ... default-security-domain="other"> ... <application-security-domains> <application-security-domain name="ApplicationDomain" security-domain="ApplicationDomain"/> </application-security-domains> ... </subsystem> Note If the deployment was already deployed at this point, the application server should be reloaded for the application security domain mapping to take effect. In current web service-Elytron integration, the name of the security domain specified to secure a web service endpoint and the Elytron security domain name must be the same. 
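The management CLI command that produces the mapping shown above looks like the following; the domain names match the example and should be adjusted for your deployment:
/subsystem=undertow/application-security-domain=ApplicationDomain:add(security-domain=ApplicationDomain)
reload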
This simple form is suitable where a deployment is using the standard HTTP mechanism as defined within the Servlet specification like BASIC , CLIENT_CERT , DIGEST , FORM . Here, the authentication will be performed against the ApplicationDomain security domain. This form is also suitable where an application is not using any authentication mechanism and instead is using programmatic authentication or is trying to obtain the SecurityDomain associated with the deployment and use it directly. Example: Advanced Form of the Mapping: The advanced mapping is successful if the result is: <subsystem xmlns="urn:jboss:domain:undertow:10.0" ... default-security-domain="other"> ... <application-security-domains> <application-security-domain name="MyAppSecurity" http-authentication-factory="application-http-authentication"/> </application-security-domains> ... </subsystem> In this form of the configuration, instead of referencing a security domain, an http-authentication-factory is referenced. This is the factory that will be used to obtain the instances of the authentication mechanisms and is in turn associated with the security domain. You should reference an http-authentication-factory attribute when using custom HTTP authentication mechanisms or where additional configuration must be defined for mechanisms such as principal transformers, credential factories, and mechanism realms. It is also better to reference an http-authentication-factory attribute when using mechanisms other than the four described in the Servlet specification. When the advanced form of mapping is used, another configuration option is available, override-deployment-config . The referenced http-authentication-factory can return a complete set of authentication mechanisms. By default, these are filtered to just match the mechanisms requested by the application. If this option is set to true , then the mechanisms offered by the factory will override the mechanisms requested by the application. The application-security-domain resource also has one additional option enable-jacc . If this is set to true , Jakarta Authorization will be enabled for any deployments matching this mapping. Runtime Information Where an application-security-domain mapping is in use, it can be useful to double check that deployments did match against it as expected. If the resource is read with include-runtime=true , the deployments that are associated with the mapping will also be shown as: In this output, the referencing-deployments attribute shows that the deployment simple-webapp.war has been deployed using the mapping. 17.2. Configuring Buffer Caches The buffer cache is used to cache static resources. JBoss EAP enables multiple caches to be configured and referenced by deployments, allowing different deployments to use different cache sizes. Buffers are allocated in regions and are a fixed size. The total amount of space used can be calculated by multiplying the buffer size by the number of buffers per region by the maximum number of regions. The default size of a buffer cache is 10MB. JBoss EAP provides a single cache by default: Default Undertow Subsystem Configuration <subsystem xmlns="urn:jboss:domain:undertow:10.0" default-server="default-server" default-virtual-host="default-host" default-servlet-container="default" default-security-domain="other"> <buffer-cache name="default"/> ... 
</subsystem> Updating an Existing Buffer Cache To update an existing buffer cache: Creating a New Buffer Cache To create a new buffer cache: Deleting a Buffer Cache To delete a buffer cache: For a full list of the attributes available for configuring buffer caches, please see the Undertow Subsystem Attributes section. 17.3. Configuring Byte Buffer Pools Undertow byte buffer pools are used to allocate pooled NIO ByteBuffer instances. All listeners have a byte buffer pool and you can use different buffer pools and workers for each listener. Byte buffer pools can be shared between different server instances. These buffers are used for IO operations, and the buffer size has a big impact on application performance. For most servers, the ideal size is usually 16k. Updating an Existing Byte Buffer Pool To update an existing byte buffer pool: Creating a Byte Buffer Pool To create a new byte buffer pool: Deleting a Byte Buffer Pool To delete a byte buffer pool: For a full list of the attributes available for configuring byte buffer pools, see the Byte Buffer Pool Attributes reference. 17.4. Configuring a Server A server represents an instance of Undertow and consists of several elements: host http-listener https-listener ajp-listener The host element provides a virtual host configuration, while the three listeners provide connections of that type to the Undertow instance. The default behavior of the server is to queue requests while the server is starting. You can change the default behavior using the queue-requests-on-start attribute on the host. If this attribute is set to true , which is the default, then requests that arrive when the server is starting will be held until the server is ready. If this attribute is set to false , then requests that arrive before the server has completely started will be rejected with the default response code. Regardless of the attribute value, request processing does not start until the server is completely started. You can configure the queue-requests-on-start attribute using the management console by navigating to Configuration Subsystems Web (Undertow) Server , selecting the server and clicking View , and selecting the Hosts tab. For a managed domain, you must specify which profile to configure. Note Multiple servers can be configured, allowing deployments and servers to be completely isolated. This can be useful in certain scenarios such as multi-tenant environments. JBoss EAP provides a server by default: Default Undertow Subsystem Configuration <subsystem xmlns="urn:jboss:domain:undertow:10.0" default-server="default-server" default-virtual-host="default-host" default-servlet-container="default" default-security-domain="other"> <buffer-cache name="default"/> <server name="default-server"> <http-listener name="default" socket-binding="http" redirect-socket="https" enable-http2="true"/> <https-listener name="https" socket-binding="https" security-realm="ApplicationRealm" enable-http2="true"/> <host name="default-host" alias="localhost"> <location name="/" handler="welcome-content"/> <http-invoker security-realm="ApplicationRealm"/> </host> </server> ... </subsystem> The following examples show how to configure a server using the management CLI. You can also configure a server using the management console by navigating to Configuration Subsystems Web (Undertow) Server . 
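The buffer cache and byte buffer pool operations referenced above follow the usual management CLI pattern; a hedged sketch with illustrative names and values:
# Update an existing buffer cache attribute:
/subsystem=undertow/buffer-cache=default:write-attribute(name=buffer-size,value=2048)
# Create and delete a buffer cache:
/subsystem=undertow/buffer-cache=new-buffer-cache:add
/subsystem=undertow/buffer-cache=new-buffer-cache:remove
# Byte buffer pools are managed the same way:
/subsystem=undertow/byte-buffer-pool=new-byte-buffer-pool:add(buffer-size=16384)
/subsystem=undertow/byte-buffer-pool=new-byte-buffer-pool:remove
reload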
Updating an Existing Server To update an existing server: Creating a New Server To create a new server: Deleting a Server To delete a server: For a full list of the attributes available for configuring servers, see the Undertow Subsystem Attributes section. 17.4.1. Access Logging You can configure access logging on each host you define. Two access logging options are available: standard access logging and console access logging. Note that the additional processing required for access logging can affect system performance. 17.4.1.1. Standard Access Logging Standard access logging writes log entries to a log file. By default, the log file is stored in the directory standalone/log/access_log.log. To enable standard access logging, add the access-log setting to the host for which you want to capture access log data. The following CLI command illustrates the configuration on the default host in the default JBoss EAP server: Note You must reload the server after enabling standard access logging. By default, the access log record includes the following data: Remote host name Remote logical user name (always -) Remote user that was authenticated The date and time of the request, in Common Log Format The first line of the request The HTTP status code of the response The number of bytes sent, excluding HTTP headers This set of data is defined as the common pattern. Another pattern, combined, is also available. In addition to the data logged in the common pattern, the combined pattern includes the referer and user agent from the incoming header. You can change the data logged using the pattern attribute. The following CLI command illustrates updating the pattern attribute to use the combined pattern: Note You must reload the server after updating the pattern attribute. Table 17.1. Available patterns Pattern Description %a Remote IP address %A Local IP address %b Bytes sent, excluding HTTP headers or - if no bytes were sent %B Bytes sent, excluding HTTP headers %h Remote host name %H Request protocol %l Remote logical username from identd (always returns - ; included for Apache access log compatibility) %m Request method %p Local port %q Query string (excluding the ? character) %r First line of the request %s HTTP status code of the response %t Date and time, in Common Log Format format %u Remote user that was authenticated %U Requested URL path %v Local server name %D Time taken to process the request, in milliseconds %T Time taken to process the request, in seconds %I Current Request thread name (can compare later with stack traces) common %h %l %u %t "%r" %s %b combined %h %l %u %t "%r" %s %b "%{i,Referer}" "%{i,User-Agent}" You can also write information from the cookie, the incoming header and response header, or the session. The syntax is modeled after the Apache syntax: %{i,xxx} for incoming headers %{o,xxx} for outgoing response headers %{c,xxx} for a specific cookie %{r,xxx} where xxx is an attribute in the ServletRequest %{s,xxx} where xxx is an attribute in the HttpSession Additional configuration options are available for this log. For more information see "access-log Attributes" in the appendix. 17.4.1.2. Console Access Logging Console access logging writes data to stdout as structured as JSON data. Each access log record is a single line of data. You can capture this data for processing by log aggregation systems. To configure console access logging, add the console-access-log setting to the host for which you want to capture access log data. 
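A hedged sketch of the management CLI commands for the standard access log described above (server and host names are the defaults; attribute values are illustrative):
# Enable the standard access log on the default host:
/subsystem=undertow/server=default-server/host=default-host/setting=access-log:add
# Switch the record format to the combined pattern:
/subsystem=undertow/server=default-server/host=default-host/setting=access-log:write-attribute(name=pattern,value="combined")
reload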
The following CLI command illustrates the configuration on the default host in the default JBoss EAP server: By default, the console access log record includes the following data: Table 17.2. Default console access log data Log data field name Description eventSource The source of the event in the request hostName The JBoss EAP host that processed the request bytesSent The number of bytes the JBoss EAP server sent in response to the request dateTime The date and time that the request was processed by the JBoss EAP server remoteHost The IP address of the machine where the request originated remoteUser The user name associated with the remote request requestLine The request submitted responseCode The HTTP response code returned by the JBoss EAP server Default properties are always included in the log output. You can use the attributes attribute to change the labels of the default log data, and in some cases to change the data configuration. You can also use the attributes attribute to add additional log data to the output. Table 17.3. Available console access log data Log data field name Description Format authentication-type The authentication type used to authenticate the user associated with the request. Default label: authenticationType Use the key option to change the label for this property. authentication-type{} authentication-type={key="authType"} bytes-sent The number of bytes returned for the request, excluding HTTP headers. Default label: bytesSent Use the key option to change the label for this property. bytes-sent={} bytes-sent={key="sent-bytes"} date-time The date and time that the request was received and processed. Default label: dateTime Use the key option to change the label for this property. Use the date-format to define the pattern used to format the date-time record. The pattern must be a Java SimpleDateFormatter pattern. Use the time-zone option to specify the time zone used to format the date and/or time data if the date-format option is defined. This value must be a valid java.util.TimeZone. date-time={key="<keyname>", date-format="<date-time format>"} date-time={key="@timestamp", date-format="yyyy-MM-dd'T'HH:mm:ssSSS"} host-and-port The host and port queried by the request. Default label: hostAndPort Use the key option to change the label for this property. host-and-port{} host-and-port={key="port-host"} local-ip The IP address of the local connection. Use the key option to change the label for this property. Default label: localIp Use the key option to change the label for this property. local-ip{} local-ip{key="localIP"} local-port The port of the local connection. Default label: localPort Use the key option to change the label for this property. local-port{} local-port{key="LocalPort"} local-server-name The name of the local server that processed the request. Default label: localServerName Use the key option to change the label for this property. local-server-name {} local-server-name {key=LocalServerName} path-parameter One or more path or URI parameters included in the request. The names property is a comma-separated list of names used to resolve the exchange values. Use the key-prefix property to make the keys unique. If the key-prefix is specified, the prefix is prepended to the name of each path parameter in the output. path-parameter{names={store,section}} path-parameter{names={store,section}, key-prefix="my-"} predicate The name of the predicate context. The names property is a comma-separated list of names used to resolve the exchange values. 
Use the key-prefix property to make the keys unique. If the key-prefix is specified, the prefix is prepended to the name of each path parameter in the output. predicate{names={store,section}} predicate{names={store,section}, key-prefix="my-"} query-parameter One or query parameters included in the request. The names property is a comma-separated list of names used to resolve the exchange values. Use the key-prefix property to make the keys unique. If the key-prefix is specified, the prefix is prepended to the name of each path parameter in the output. query-parameter{names={store,section}} query-parameter{names={store,section}, key-prefix="my-"} query-string The query string of the request. Default label: queryString Use the key option to change the label for this property. Use the include-question-mark property to specify whether the query string should include the question mark. By default, the question mark is not included. query-string{} query-string{key="QueryString", include-question-mark="true"} relative-path The relative path of the request. Default label: relativePath Use the key option to change the label for this property. relative-path{} relative-path{key="RelativePath"} remote-host The remote host name. Default label: remoteHost Use the key option to change the label for this property. remote-host{} remote-host{key="RemoteHost"} remote-ip The remote IP address. Default label: remoteIp Use the key options to change the label for this property. Use the obfuscated property to obfuscate the IP address in the output log record. The default value is false. remote-ip{} remote-ip{key="RemoteIP", obfuscated="true"} remote-user Remote user that was authenticated. Default label: remoteUser Use the key options to change the label for this property. remote-user{} remote-user{key="RemoteUser"} request-header The name of a request header. The key for the structured data is the name of the header; the value is the value of the named header. The names property is a comma-separated list of names used to resolve the exchange values. Use the key-prefix property to make the keys unique. If the key-prefix is specified, the prefix is prepended to the name of the request headers in the log output. request-header{names={store,section}} request-header{names={store,section}, key-prefix="my-"} request-line The request line. Default label: requestLine Use the key option to change the label for this property. request-line{} request-line{key="Request-Line"} request-method The request method. Default label: requestMethod Use the key option to change the label for this property. request-method{} request-method{key="RequestMethod"} request-path The relative path for the request. Default label: requestPath Use the key option to change the label for this property. request-path{} request-path{key="RequestPath"} request-protocol The protocol for the request. Default label: requestProtocol Use the key option to change the label for this property. request-protocol{} request-protocol{key="RequestProtocol"} request-scheme The URI scheme of the request. Default label: requestScheme Use the key option to change the label for this property. request-scheme{} request-scheme{key="RequestScheme"} request-url The original request URI. Includes host name, protocol, and so forth, if specified by the client. Default label: requestUrl Use the key option to change the label for this property. request-url{} request-url{key="RequestURL"} resolved-path The resolved path. 
Default Label: resolvedPath Use the key option to change the label for this property. resolved-path{} resolved-path{key="ResolvedPath"} response-code The response code. Default label: responseCode Use the key option to change the label for this property. response-code{} response-code{key="ResponseCode"} response-header The name of a response header. The key for the structured data is the name of the header; the value is the value of the named header. The names property is a comma-separated list of names used to resolve the exchange values. Use the key-prefix property to make the keys unique. If the key-prefix is specified, the prefix is prepended to the name of the request headers in the log output. response-header{names={store,section}} response-header{names={store,section}, key-prefix="my-"} response-reason-phrase The text reason for the response code. Default label: responseReasonPhrase Use the key option to change the label for this property. response-reason-phrase{} response-reason-phrase{key="ResponseReasonPhrase"} response-time The time used to process the request. Default label: responseTime Use the key option to change the label for this property. The default time unit is MILLISECONDS. Available time units include: * NANOSECONDS * MICROSECONDS * MILLISECONDS * SECONDS response-time{} response-time{key="ResponseTime", time-unit=SECONDS} secure-exchange Indicates whether the exchange was secure. Default label: secureExchange Use the key option to change the label for this property. secure-exchange{} secure-exchange{key="SecureExchange"} ssl-cipher The SSL cipher for the request. Default label: sslCipher Use the key option to change the label for this property. ssl-cipher{} ssl-cipher{key="SSLCipher"} ssl-client-cert The SSL client certificate for the request. Default label: sslClientCert Use the key option to change the label for this property. ssl-client-cert{} ssl-client-cert{key="SSLClientCert"} ssl-session-id The SSL session id of the request. Default label: sslSessionId Use the key option to change the label for this property. ssl-session-id{} stored-response The stored response to the request. Default label: storedResponse Use the key option to change the label for this property. stored-response{} stored-response{key="StoredResponse"} thread-name The thread name of the current thread. Default label: threadName Use the key option to change the label for this property. thread-name{} thread-name{key="ThreadName"} transport-protocol You can use the metadata attribute to configure additional arbitrary data to include in the access log record. The value of the metadata attribute is a set of key:value pairs that defines the data to include in the access log record. The value in a pair can be a management model expression. Management model expressions are resolved when the server is started or reloaded. Key-value pairs are comma-separated. 
The following CLI command demonstrates an example of a complex console log configuration, including additional log data, customization of log data, and additional metadata: The resulting access log record would resemble the following additional JSON data (Note: the example output below is formatted for readability; in an actual record, all data would be output as a single line): { "eventSource":"web-access", "hostName":"default-host", "@version":"1", "qualifiedHostName":"localhost.localdomain", "bytesSent":1504, "@timestamp":"2019-05-02T11:57:37123", "remoteHost":"127.0.0.1", "remoteUser":null, "requestLine":"GET / HTTP/2.0", "responseCode":200, "responseHeaderContent-Type":"text/html" } The following command illustrates updates to the log data after activating the console access log: The following command illustrates updates to the custom metadata after activating the console access log: 17.5. Configuring a Servlet Container A servlet container provides all servlet, Jakarta Server Pages and websocket-related configuration, including session-related configuration. While most servers will only need a single servlet container, it is possible to configure multiple servlet containers by adding an additional servlet-container element. Having multiple servlet containers enables behavior such as allowing multiple deployments to be deployed to the same context path on different virtual hosts. Note Much of the configuration provided in by servlet container can be individually overridden by deployed applications using their web.xml file. JBoss EAP provides a servlet container by default: Default Undertow Subsystem Configuration <subsystem xmlns="urn:jboss:domain:undertow:10.0"> <buffer-cache name="default"/> <server name="default-server"> ... </server> <servlet-container name="default"> <jsp-config/> <websockets/> </servlet-container> ... </subsystem> The following examples show how to configure a servlet container using the management CLI. You can also configure a servlet container using the management console by navigating to Configuration Subsystems Web (Undertow) Servlet Container . Updating an Existing Servlet Container To update an existing servlet container: Creating a New Servlet Container To create a new servlet container: Deleting a Servlet Container To delete a servlet container: For a full list of the attributes available for configuring servlet containers, see the Undertow Subsystem Attributes section. 17.6. Configuring a Servlet Extension Servlet extensions allow you to hook into the servlet deployment process and modify aspects of a servlet deployment. This can be useful in cases where you need to add additional authentication mechanisms to a deployment or use native Undertow handlers as part of a servlet deployment. To create a custom servlet extension, it is necessary to implement the io.undertow.servlet.ServletExtension interface and then add the name of your implementation class to the META-INF/services/io.undertow.servlet.ServletExtension file in the deployment. You also need to include the compiled class file of the ServletExtension implementation. When Undertow deploys the servlet, it loads all the services from the deployments class loader and then invokes their handleDeployment methods. An Undertow DeploymentInfo structure, which contains a complete and mutable description of the deployment, is passed to this method. You can modify this structure to change any aspect of the deployment. 
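A minimal sketch of such an extension is shown below; the class and the init parameter it adds are purely illustrative, and the class name must also be listed in META-INF/services/io.undertow.servlet.ServletExtension inside the deployment:
package com.example.extension;

import io.undertow.servlet.ServletExtension;
import io.undertow.servlet.api.DeploymentInfo;
import javax.servlet.ServletContext;

public class ExampleServletExtension implements ServletExtension {
    @Override
    public void handleDeployment(DeploymentInfo deploymentInfo, ServletContext servletContext) {
        // Mutate the deployment description before Undertow finalizes it.
        // Here we simply add a context init parameter as an illustration.
        deploymentInfo.addInitParameter("example.extension.active", "true");
    }
}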
The DeploymentInfo structure is the same structure that is used by the embedded API, so in effect a ServletExtension has the same amount of flexibility that you have when using Undertow in embedded mode. 17.7. Configuring Handlers JBoss EAP allows for two types of handlers to be configured: file handlers reverse-proxy handlers File handlers serve static files. Each file handler must be attached to a location in a virtual host. Reverse-proxy handlers allow JBoss EAP to serve as a high-performance reverse proxy. JBoss EAP provides a file handler by default: Default Undertow Subsystem Configuration <subsystem xmlns="urn:jboss:domain:undertow:10.0" default-server="default-server" default-virtual-host="default-host" default-servlet-container="default" default-security-domain="other"> <buffer-cache name="default"/> <server name="default-server"> ... </server> <servlet-container name="default"> ... </servlet-container> <handlers> <file name="welcome-content" path="USD{jboss.home.dir}/welcome-content"/> </handlers> </subsystem> Using WebDAV for Static Resources Previous versions of JBoss EAP allowed for using WebDAV with the web subsystem, by way of the WebdavServlet , to host static resources and enable additional HTTP methods for accessing and manipulating those files. In JBoss EAP 7, the undertow subsystem does provide a mechanism for serving static files using a file handler, but the undertow subsystem does not support WebDAV. If you want to use WebDAV with JBoss EAP 7, you can write a custom WebDAV servlet. Updating an Existing File Handler To update an existing file handler: Creating a New File Handler To create a new file handler: Warning If you set a file handler's path directly to a file instead of a directory, any location elements that reference that file handler must not end with a forward slash ( / ). Otherwise, the server will return a 404 - Not Found response. Deleting a File Handler To delete a file handler: For a full list of the attributes available for configuring handlers, see the Undertow Subsystem Attributes section. 17.8. Configuring Filters A filter enables some aspect of a request to be modified and can use predicates to control when the filter executes. Some common use cases for filters include setting headers or doing GZIP compression. Note A filter is functionally equivalent to a global valve used in JBoss EAP 6. The following types of filters can be defined: custom-filter error-page expression-filter gzip mod-cluster request-limit response-header rewrite The following examples show how to configure a filter using the management CLI. You can also configure a filter using the management console by navigating to Configuration Subsystems Web (Undertow) Filters . Updating an Existing Filter To update an existing filter: Creating a New Filter To create a new filter: Deleting a Filter To delete a filter: For a full list of the attributes available for configuring filters, see the Undertow Subsystem Attributes section. 17.8.1. Configuring the buffer-request Handler A request from the client or the browser consists of two parts: the header and the body. In a typical situation, the header and the body are sent to JBoss EAP without any delays in between. However, if the header is sent first and the body is sent only after a few seconds, there is a delay in sending the complete request, and a thread in JBoss EAP appears to be waiting until it can execute the complete request. The delay caused in sending the header and the body of the request can be corrected using the buffer-request handler. 
The buffer-request handler attempts to consume the request from a non-blocking IO thread before allocating it to a worker thread. When no buffer-request handler is added, the thread allocation to the worker thread happens directly. However, when the buffer-request handler is added, the handler attempts to read the amount of data that it can buffer in a non-blocking manner using the IO thread before allocating it to the worker thread. You can use the following management CLI commands to configure the buffer-request handler: There is a limit to the size of the buffer requests that can be processed. This limit is determined by a combination of the buffer size and the total number of buffers, as shown in the equation below. Total_size = num_buffers x buffer_size In the equation above: Total_size is the size of data that will be buffered before the request is dispatched to a worker thread. num_buffers is the number of buffers. The number of buffers is set by the buffers parameter on the handler. buffer_size is the size of each buffer. The buffer size is set in the io subsystem, and is 16KB by default per request. For example, with the default 16KB buffer size, a handler configured with buffers=1 buffers at most 16KB of a request before it is dispatched to a worker thread. Warning Avoid configuring very large buffer requests, or else you might run out of memory. 17.8.2. Configuring the SameSite attribute Use the SameSite attribute to define the accessibility of a cookie, that is, whether the cookie is accessible only within the same site. This attribute helps prevent cross-site request forgery attacks because browsers do not send the cookie with cross-site requests. You can configure the SameSite attribute for cookies with SameSiteCookieHandler in the undertow subsystem. With this configuration, you do not need to change your application code. The following table describes the SameSiteCookieHandler parameters: Table 17.4. SameSiteCookieHandler parameters Parameter Name Presence Description add-secure-for-none Optional This parameter adds a Secure attribute to the cookie when the SameSite attribute mode is None . The default value is true . case-sensitive Optional This parameter indicates if the cookie-pattern is case-sensitive. The default value is true . cookie-pattern Optional This parameter accepts a regex pattern for the cookie name. If this parameter is not specified, the attribute SameSite=<specified-mode> is added to all cookies. enable-client-checker Optional This parameter verifies whether client applications are incompatible with the SameSite=None attribute. The default value is true . The check only applies when the SameSite attribute mode is set to None ; for any other mode the verification is ignored. For requests coming from incompatible clients, the handler skips setting SameSite=None to prevent issues; for requests coming from compatible clients, the handler applies the SameSite attribute mode None as expected. mode Mandatory This parameter specifies the SameSite attribute mode, which can be set to Strict , Lax or None . To improve security against cross-site request forgery attacks, some browsers set the default SameSite attribute mode to Lax . For detailed information, see the Additional resources section. SameSiteCookieHandler adds the attribute SameSite= <specified-mode> to the cookies that match cookie-pattern or to all cookies when cookie-pattern is not specified. In the attribute, <specified-mode> is a placeholder that is replaced by the configured mode. The cookie-pattern is matched according to the value set in case-sensitive . 
Before configuring the SameSite attribute for any browser, consider the following points: Review the application to identify whether the cookies require the SameSite attribute, and those cookies need to be secured. Setting the SameSite attribute mode to None for all cookies makes the application more susceptible to attacks. Procedure to configure SameSiteCookieHandler by using expression-filter For configuring SameSiteCookieHandler on the server by using expression-filter , perform the following steps: Create a new expression-filter with the SameSiteCookieHandler by using the following command: Enable the expression-filter in the undertow web server by using the following command: Procedure to configure SameSiteCookieHandler by adding a configuration file For configuring SameSiteCookieHandler in your application by adding the undertow-handlers.conf file, perform the following steps: Add an undertow-handlers.conf file to your WAR's WEB-INF directory. In the undertow-handlers.conf file, add the following command with a specific SameSiteCookieHandler parameter: The valid values for the mode parameter are Strict , Lax or None . Using the above command, you can also configure other SameSiteCookieHandler parameters, such as cookie-pattern , case-sensitive , enable-client-checker , or add-secure-for-none . Additional resources Information on the chromium site Information on the chrome site Information on the mozilla site Information on the Microsoft site Information about the RFC on the IETF site 17.9. Configure the Default Welcome Web Application JBoss EAP includes a default Welcome application, which displays at the root context on port 8080 by default. There is a default server preconfigured in Undertow that serves up the welcome content. Default Undertow Subsystem Configuration <subsystem xmlns="urn:jboss:domain:undertow:10.0" default-server="default-server" default-virtual-host="default-host" default-servlet-container="default" default-security-domain="other"> ... <server name="default-server"> <http-listener name="default" socket-binding="http" redirect-socket="https" enable-http2="true"/> <https-listener name="https" socket-binding="https" security-realm="ApplicationRealm" enable-http2="true"/> <host name="default-host" alias="localhost"> <location name="/" handler="welcome-content"/> <http-invoker security-realm="ApplicationRealm"/> </host> </server> ... <handlers> <file name="welcome-content" path="USD{jboss.home.dir}/welcome-content"/> </handlers> </subsystem> The default server, default-server , has a default host, default-host , configured. The default host is configured to handle requests to the server's root, using the <location> element, with the welcome-content file handler. The welcome-content handler serves up the content in the location specified in the path property. This default Welcome application can be replaced with your own web application. This can be configured in one of two ways: Change the welcome-content file handler Change the default-web-module You can also disable the welcome content . Change the welcome-content File Handler Modify the existing welcome-content file handler's path to point to the new deployment. Note Alternatively, you could create a different file handler to be used by the server's root. Reload the server for the changes to take effect. Change the default-web-module Map a deployed web application to the server's root. Reload the server for the changes to take effect. 
Disable the Default Welcome Web Application Disable the welcome application by removing the location entry / for the default-host . Reload the server for the changes to take effect. 17.10. Configuring HTTPS For information on configuring HTTPS for web applications, see Configure One-way and Two-way SSL/TLS for Applications in How to Configure Server Security . For information on configuring HTTPS for use with the JBoss EAP management interfaces, see How to Secure the Management Interfaces in How to Configure Server Security . 17.11. Configuring HTTP Session Timeout The HTTP session timeout defines the period of inactive time needed to declare an HTTP session invalid. For example, a user accesses an application deployed to JBoss EAP, which creates an HTTP session. If that user then attempts to access that application again after the HTTP session timeout, the original HTTP session will be invalidated and the user will be forced to create a new HTTP session. This may result in the loss of unpersisted data or the user having to reauthenticate. The HTTP session timeout is configured in an application's web.xml file, but a default HTTP session timeout can be specified within JBoss EAP. The server's timeout value will apply to all deployed applications, but an application's web.xml will override the server's value. The server value is specified in the default-session-timeout property which is found in the servlet-container section of the undertow subsystem. The value of default-session-timeout is specified in minutes and the default is 30 . Configuring the Default Session Timeout To configure the default-session-timeout : 17.12. Configuring HTTP-Only Session Management Cookies Session management cookies can be accessed by both HTTP APIs and non-HTTP APIs such as JavaScript. JBoss EAP offers the ability to send the HttpOnly attribute as part of the Set-Cookie response header to the client, usually a browser. In supported browsers, enabling this attribute tells the browser that it should prevent accessing session management cookies through non-HTTP APIs. Restricting session management cookies to only HTTP APIs can help to mitigate the threat of session cookie theft via cross-site scripting attacks. To enable this behavior, the http-only attribute should be set to true . Important Using the HttpOnly attribute does not actually prevent cross-site scripting attacks by itself; it merely notifies the browser. The browser must also support HttpOnly for this behavior to take effect. Important Using the http-only attribute only applies the restriction to session management cookies and not other browser cookies. The http-only attribute is set in two places in the undertow subsystem: In the servlet container as a session cookie setting In the host section of the server as a single sign-on property Configuring http-only for the Servlet Container Session Cookie To configure the http-only property for the servlet container session cookie: Configuring http-only for the Host Single Sign-On To configure the http-only property for the host single sign-on: 
Important Most modern browsers enforce HTTP/2 over a secured TLS connection, known as h2 , and may not support HTTP/2 over plain HTTP, known as h2c . It is still possible to configure JBoss EAP to use HTTP/2 with h2c , in other words, without using HTTPS and only using plain HTTP with HTTP upgrade. In that case, you can simply enable HTTP/2 in the HTTP listener in Undertow: To configure Undertow to use HTTP/2, enable the HTTPS listener in Undertow to use HTTP/2 by setting the enable-http2 attribute to true : For more information on the HTTPS listener and configuring Undertow to use HTTPS for web applications, see Configure One-way and Two-way SSL/TLS for Applications in How to Configure Server Security . Note In order to utilize HTTP/2 with the elytron subsystem, you will need to ensure that the configured ssl-context in the https-listener of Undertow is configured as modifiable. This can be achieved by setting the wrap attribute of the appropriate server-ssl-context to false . By default, the wrap attribute is set to false . This is required so that Undertow can modify the ssl-context to set up ALPN. If the provided ssl-context is not writable, ALPN cannot be used and the connection falls back to HTTP/1.1. ALPN support when using HTTP/2 When using HTTP/2 over a secured TLS connection, a TLS stack that supports the ALPN TLS protocol extension is required. Obtaining this stack varies based on the installed JDK. When using Java 8, the ALPN implementation is introduced directly into JBoss EAP with dependencies on Java internals. Therefore, this ALPN implementation only works with Oracle and OpenJDK. It does not work with IBM Java. Red Hat strongly recommends that you utilize ALPN TLS protocol extension support from the OpenSSL provider in JBoss EAP, with OpenSSL libraries that implement ALPN capability. Using the ALPN TLS protocol extension support from the OpenSSL provider should result in better performance. As of Java 9, the JDK supports ALPN natively; however, using the ALPN TLS protocol extension support from the OpenSSL provider should also result in better performance when using Java 9 or later. Instructions for installing OpenSSL, to obtain the ALPN TLS protocol extension support, are available in Install OpenSSL from JBoss Core Services . The standard system OpenSSL is supported on Red Hat Enterprise Linux 8 and no additional JBoss Core Services OpenSSL is required. Once OpenSSL has been installed, follow the instructions in Configure JBoss EAP to Use OpenSSL . Verify HTTP/2 is Being Used To verify that Undertow is using HTTP/2, you will need to inspect the headers coming from Undertow. Navigate to your JBoss EAP instance using https, for example https://localhost:8443 , and use your browser's developer tools to inspect the headers. Some browsers, for example Google Chrome, will show HTTP/2 pseudo headers, such as :path , :authority , :method and :scheme , when using HTTP/2. Other browsers, for example Firefox and Safari, will report the protocol version of the response as HTTP/2.0 . 17.14. Configuring a RequestDumping Handler The RequestDumping handler, io.undertow.server.handlers.RequestDumpingHandler , logs the details of the request and corresponding response objects handled by Undertow within JBoss EAP. Important While this handler can be useful for debugging, it may also log sensitive information. Please keep this in mind when enabling this handler. Note The RequestDumping handler replaces the RequestDumperValve from JBoss EAP 6. 
You can configure a RequestDumping handler either at the server level directly in JBoss EAP or within an individual application. 17.14.1. Configuring a RequestDumping Handler on the Server A RequestDumping handler should be configured as an expression filter. To configure a RequestDumping handler as an expression filter, you need to do the following: Create a new Expression Filter with the RequestDumping Handler Enable the Expression Filter in the Undertow Web Server Important All requests and corresponding responses handled by the Undertow web server will be logged when enabling the RequestDumping handler as an expression filter in this manner. Configuring a RequestDumping Handler for Specific URLs In addition to logging all requests, you can also use an expression filter to only log requests and corresponding responses for specific URLs. This can be accomplished using a predicate in your expression such as path , path-prefix , or path-suffix . For example, if you want to log all requests and corresponding responses to /myApplication/test , you can use the expression "path(/myApplication/test) -> dump-request" instead of the expression "dump-request" when creating your expression filter. This will only direct requests with a path exactly matching /myApplication/test to the RequestDumping handler. 17.14.2. Configuring a RequestDumping Handler within an Application In addition to configuring a RequestDumping handler at the server, you can also configure it within individual applications. This will limit the scope of the handler to only that specific application. A RequestDumping handler should be configured in WEB-INF/undertow-handlers.conf . To configure the RequestDumping handler in WEB-INF/undertow-handlers.conf to log all requests and corresponding responses for this application, add the following expression to WEB-INF/undertow-handlers.conf : Example: WEB-INF/undertow-handlers.conf To configure the RequestDumping handler in WEB-INF/undertow-handlers.conf to only log requests and corresponding responses to specific URLs within this application, you can use a predicate in your expression such as path , path-prefix , or path-suffix . For example, to log all requests and corresponding responses to /test in your application, the following expression with the path predicate could be used: Example: WEB-INF/undertow-handlers.conf Note When using predicates such as path , path-prefix , or path-suffix in expressions defined in the application's WEB-INF/undertow-handlers.conf , the value used will be relative to the context root of the application. For example, if the application's context root is myApplication with an expression path(/test) -> dump-request configured in WEB-INF/undertow-handlers.conf , it will only log requests and corresponding responses to /myApplication/test . 17.15. Configuring Cookie Security You can use the secure-cookie handler to enhance the security of cookies that are created over a connection between a server and a client. In this case, if the connection over which the cookie is set is marked as secure, the cookie will have its secure attribute set to true . You can secure the connection by configuring a listener or by using HTTPS. You configure the secure-cookie handler by defining an expression-filter in the undertow subsystem. For more information, see Configuring Filters . When the secure-cookie handler is in use, cookies that are set over a secure connection will be implicitly set as secure and will never be sent over an unsecured connection. 17.16. 
Tuning the Undertow Subsystem For tips on optimizing performance for the undertow subsystem, see the Undertow Subsystem Tuning section of the Performance Tuning Guide .
[ "<subsystem xmlns=\"urn:jboss:domain:undertow:10.0\" default-server=\"default-server\" default-virtual-host=\"default-host\" default-servlet-container=\"default\" default-security-domain=\"other\"> <buffer-cache name=\"default\"/> <server name=\"default-server\"> <http-listener name=\"default\" socket-binding=\"http\" redirect-socket=\"https\" enable-http2=\"true\"/> <https-listener name=\"https\" socket-binding=\"https\" security-realm=\"ApplicationRealm\" enable-http2=\"true\"/> <host name=\"default-host\" alias=\"localhost\"> <location name=\"/\" handler=\"welcome-content\"/> <http-invoker security-realm=\"ApplicationRealm\"/> </host> </server> <servlet-container name=\"default\"> <jsp-config/> <websockets/> </servlet-container> <handlers> <file name=\"welcome-content\" path=\"USD{jboss.home.dir}/welcome-content\"/> </handlers> </subsystem>", "/subsystem=undertow/application-security-domain=ApplicationDomain:add(security-domain=ApplicationDomain)", "<subsystem xmlns=\"urn:jboss:domain:undertow:10.0\" ... default-security-domain=\"other\"> <application-security-domains> <application-security-domain name=\"ApplicationDomain\" security-domain=\"ApplicationDomain\"/> </application-security-domains> </subsystem>", "/subsystem=undertow/application-security-domain=MyAppSecurity:add(http-authentication-factory=application-http-authentication)", "<subsystem xmlns=\"urn:jboss:domain:undertow:10.0\" ... default-security-domain=\"other\"> <application-security-domains> <application-security-domain name=\"MyAppSecurity\" http-authentication-factory=\"application-http-authentication\"/> </application-security-domains> </subsystem>", "/subsystem=undertow/application-security-domain=MyAppSecurity:read-resource(include-runtime=true) { \"outcome\" => \"success\", \"result\" => { \"enable-jacc\" => false, \"http-authentication-factory\" => undefined, \"override-deployment-config\" => false, \"referencing-deployments\" => [\"simple-webapp.war\"], \"security-domain\" => \"ApplicationDomain\", \"setting\" => undefined } }", "<subsystem xmlns=\"urn:jboss:domain:undertow:10.0\" default-server=\"default-server\" default-virtual-host=\"default-host\" default-servlet-container=\"default\" default-security-domain=\"other\"> <buffer-cache name=\"default\"/> </subsystem>", "/subsystem=undertow/buffer-cache=default/:write-attribute(name=buffer-size,value=2048)", "reload", "/subsystem=undertow/buffer-cache=new-buffer:add", "/subsystem=undertow/buffer-cache=new-buffer:remove", "reload", "/subsystem=undertow/byte-buffer-pool=myByteBufferPool:write-attribute(name=buffer-size,value=1024)", "reload", "/subsystem=undertow/byte-buffer-pool=newByteBufferPool:add", "/subsystem=undertow/byte-buffer-pool=newByteBufferPool:remove", "reload", "<subsystem xmlns=\"urn:jboss:domain:undertow:10.0\" default-server=\"default-server\" default-virtual-host=\"default-host\" default-servlet-container=\"default\" default-security-domain=\"other\"> <buffer-cache name=\"default\"/> <server name=\"default-server\"> <http-listener name=\"default\" socket-binding=\"http\" redirect-socket=\"https\" enable-http2=\"true\"/> <https-listener name=\"https\" socket-binding=\"https\" security-realm=\"ApplicationRealm\" enable-http2=\"true\"/> <host name=\"default-host\" alias=\"localhost\"> <location name=\"/\" handler=\"welcome-content\"/> <http-invoker security-realm=\"ApplicationRealm\"/> </host> </server> </subsystem>", "/subsystem=undertow/server=default-server:write-attribute(name=default-host,value=default-host)", "reload", 
"/subsystem=undertow/server=new-server:add", "reload", "/subsystem=undertow/server=new-server:remove", "reload", "/subsystem=undertow/server=default-server/host=default-host/setting=access-log:add", "/subsystem=undertow/server=default-server/host=default-host/setting=access-log:write-attribute(name=pattern,value=\"combined\"", "/subsystem=undertow/server=default-server/host=default-host/setting=console-access-log:add", "/subsystem=undertow/server=default-server/host=default-host/setting=console-access-log:add(metadata={\"@version\"=\"1\", \"qualifiedHostName\"=USD{jboss.qualified.host.name:unknown}}, attributes={bytes-sent={}, date-time={key=\"@timestamp\", date-format=\"yyyy-MM-dd'T'HH:mm:ssSSS\"}, remote-host={}, request-line={}, response-header={key-prefix=\"responseHeader\", names=[\"Content-Type\"]}, response-code={}, remote-user={}})", "{ \"eventSource\":\"web-access\", \"hostName\":\"default-host\", \"@version\":\"1\", \"qualifiedHostName\":\"localhost.localdomain\", \"bytesSent\":1504, \"@timestamp\":\"2019-05-02T11:57:37123\", \"remoteHost\":\"127.0.0.1\", \"remoteUser\":null, \"requestLine\":\"GET / HTTP/2.0\", \"responseCode\":200, \"responseHeaderContent-Type\":\"text/html\" }", "/subsystem=undertow/server=default-server/host=default-host/setting=console-access-log:write-attribute(name=attributes,value={bytes-sent={}, date-time={key=\"@timestamp\", date-format=\"yyyy-MM-dd'T'HH:mm:ssSSS\"}, remote-host={}, request-line={}, response-header={key-prefix=\"responseHeader\", names=[\"Content-Type\"]}, response-code={}, remote-user={}})", "/subsystem=undertow/server=default-server/host=default-host/setting=console-access-log:write-attribute(name=metadata,value={\"@version\"=\"1\", \"qualifiedHostName\"=USD{jboss.qualified.host.name:unknown}})", "<subsystem xmlns=\"urn:jboss:domain:undertow:10.0\"> <buffer-cache name=\"default\"/> <server name=\"default-server\"> </server> <servlet-container name=\"default\"> <jsp-config/> <websockets/> </servlet-container> </subsystem>", "/subsystem=undertow/servlet-container=default:write-attribute(name=ignore-flush,value=true)", "reload", "/subsystem=undertow/servlet-container=new-servlet-container:add", "reload", "/subsystem=undertow/servlet-container=new-servlet-container:remove", "reload", "<subsystem xmlns=\"urn:jboss:domain:undertow:10.0\" default-server=\"default-server\" default-virtual-host=\"default-host\" default-servlet-container=\"default\" default-security-domain=\"other\"> <buffer-cache name=\"default\"/> <server name=\"default-server\"> </server> <servlet-container name=\"default\"> </servlet-container> <handlers> <file name=\"welcome-content\" path=\"USD{jboss.home.dir}/welcome-content\"/> </handlers> </subsystem>", "/subsystem=undertow/configuration=handler/file=welcome-content:write-attribute(name=case-sensitive,value=true)", "reload", "/subsystem=undertow/configuration=handler/file=new-file-handler:add(path=\"USD{jboss.home.dir}/welcome-content\")", "/subsystem=undertow/configuration=handler/file=new-file-handler:remove", "reload", "/subsystem=undertow/configuration=filter/response-header=myHeader:write-attribute(name=header-value,value=\"JBoss-EAP\")", "reload", "/subsystem=undertow/configuration=filter/response-header=new-response-header:add(header-name=new-response-header,header-value=\"My Value\")", "/subsystem=undertow/configuration=filter/response-header=new-response-header:remove", "reload", "/subsystem=undertow/configuration=filter/expression-filter=buf:add(expression=\"buffer-request(buffers=1)\") 
/subsystem=undertow/server=default-server/host=default-host/filter-ref=buf:add", "/subsystem=undertow/configuration=filter/expression-filter=addSameSiteLax:add(expression=\"path-prefix('/mypathprefix') -> samesite-cookie(Lax)\")", "/subsystem=undertow/server=default-server/host=default-host/filter-ref=addSameSiteLax:add", "samesite-cookie(mode=<mode>)", "<subsystem xmlns=\"urn:jboss:domain:undertow:10.0\" default-server=\"default-server\" default-virtual-host=\"default-host\" default-servlet-container=\"default\" default-security-domain=\"other\"> <server name=\"default-server\"> <http-listener name=\"default\" socket-binding=\"http\" redirect-socket=\"https\" enable-http2=\"true\"/> <https-listener name=\"https\" socket-binding=\"https\" security-realm=\"ApplicationRealm\" enable-http2=\"true\"/> <host name=\"default-host\" alias=\"localhost\"> <location name=\"/\" handler=\"welcome-content\"/> <http-invoker security-realm=\"ApplicationRealm\"/> </host> </server> <handlers> <file name=\"welcome-content\" path=\"USD{jboss.home.dir}/welcome-content\"/> </handlers> </subsystem>", "/subsystem=undertow/configuration=handler/file=welcome-content:write-attribute(name=path,value=\" /path/to/content \")", "/subsystem=undertow/configuration=handler/file= NEW_FILE_HANDLER :add(path=\" /path/to/content \") /subsystem=undertow/server=default-server/host=default-host/location=\\/:write-attribute(name=handler,value= NEW_FILE_HANDLER )", "reload", "/subsystem=undertow/server=default-server/host=default-host:write-attribute(name=default-web-module,value=hello.war)", "reload", "/subsystem=undertow/server=default-server/host=default-host/location=\\/:remove", "reload", "/subsystem=undertow/servlet-container=default:write-attribute(name=default-session-timeout, value=60)", "reload", "/subsystem=undertow/servlet-container=default/setting=session-cookie:add", "/subsystem=undertow/servlet-container=default/setting=session-cookie:write-attribute(name=http-only,value=true)", "reload", "/subsystem=undertow/server=default-server/host=default-host/setting=single-sign-on:add", "/subsystem=undertow/server=default-server/host=default-host/setting=single-sign-on:write-attribute(name=http-only,value=true)", "reload", "/subsystem=undertow/server=default-server/http-listener=default:write-attribute(name=enable-http2,value=true)", "/subsystem=undertow/server=default-server/https-listener=https:write-attribute(name=enable-http2,value=true)", "/subsystem=undertow/configuration=filter/expression-filter=requestDumperExpression:add(expression=\"dump-request\")", "/subsystem=undertow/server=default-server/host=default-host/filter-ref=requestDumperExpression:add", "dump-request", "path(/test) -> dump-request" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/configuration_guide/configuring_the_web_server_undertow
Chapter 12. Clustered Samba Configuration
Chapter 12. Clustered Samba Configuration As of the Red Hat Enterprise Linux 6.2 release, the Red Hat High Availability Add-On provides support for running Clustered Samba in an active/active configuration. This requires that you install and configure CTDB on all nodes in a cluster, which you use in conjunction with GFS2 clustered file systems. Note Red Hat Enterprise Linux 6 supports a maximum of four nodes running clustered Samba. This chapter describes the procedure for configuring CTDB by configuring an example system. For information on configuring GFS2 file systems, see Global File System 2 . For information on configuring logical volumes, see Logical Volume Manager Administration . Note Simultaneous access to the data in the Samba share from outside of Samba is not supported. 12.1. CTDB Overview CTDB is a cluster implementation of the TDB database used by Samba. To use CTDB, a clustered file system must be available and shared on all nodes in the cluster. CTDB provides clustered features on top of this clustered file system. As of the Red Hat Enterprise Linux 6.2 release, CTDB also runs a cluster stack in parallel to the one provided by Red Hat Enterprise Linux clustering. CTDB manages node membership, recovery/failover, IP relocation and Samba services.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/cluster_administration/ch-clustered-samba-CA
Chapter 108. KafkaTopic schema reference
Chapter 108. KafkaTopic schema reference Property Property type Description spec KafkaTopicSpec The specification of the topic. status KafkaTopicStatus The status of the topic.
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.9/html/streams_for_apache_kafka_api_reference/type-kafkatopic-reference
probe::sunrpc.sched.execute
probe::sunrpc.sched.execute Name probe::sunrpc.sched.execute - Execute the RPC `scheduler' Synopsis sunrpc.sched.execute Values tk_pid the debugging id of the task prot the IP protocol in the RPC call vers the program version in the RPC call tk_flags the flags of the task xid the transmission id in the RPC call prog the program number in the RPC call
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-sunrpc-sched-execute
Compatibility Guide
Compatibility Guide Red Hat Ceph Storage 8 Red Hat Ceph Storage and Its Compatibility With Other Products Red Hat Ceph Storage Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html-single/compatibility_guide/compatibility-matrix-for-red-hat-ceph-storage-7-0
Chapter 61. Salesforce Create Sink
Chapter 61. Salesforce Create Sink Creates an object in Salesforce. The body of the message must contain the JSON of the salesforce object. Example body: { "Phone": "555", "Name": "Antonia", "LastName": "Garcia" } 61.1. Configuration Options The following table summarizes the configuration options available for the salesforce-create-sink Kamelet: Property Name Description Type Default Example clientId * Consumer Key The Salesforce application consumer key string clientSecret * Consumer Secret The Salesforce application consumer secret string password * Password The Salesforce user password string userName * Username The Salesforce username string loginUrl Login URL The Salesforce instance login URL string "https://login.salesforce.com" sObjectName Object Name Type of the object string "Contact" Note Fields marked with an asterisk (*) are mandatory. 61.2. Dependencies At runtime, the salesforce-create-sink Kamelet relies upon the presence of the following dependencies: camel:salesforce camel:kamelet 61.3. Usage This section describes how you can use the salesforce-create-sink . 61.3.1. Knative Sink You can use the salesforce-create-sink Kamelet as a Knative sink by binding it to a Knative object. salesforce-create-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: salesforce-create-sink-binding spec: source: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: salesforce-create-sink properties: clientId: "The Consumer Key" clientSecret: "The Consumer Secret" password: "The Password" userName: "The Username" 61.3.1.1. Prerequisite Make sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 61.3.1.2. Procedure for using the cluster CLI Save the salesforce-create-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: oc apply -f salesforce-create-sink-binding.yaml 61.3.1.3. Procedure for using the Kamel CLI Configure and run the sink by using the following command: kamel bind channel:mychannel salesforce-create-sink -p "sink.clientId=The Consumer Key" -p "sink.clientSecret=The Consumer Secret" -p "sink.password=The Password" -p "sink.userName=The Username" This command creates the KameletBinding in the current namespace on the cluster. 61.3.2. Kafka Sink You can use the salesforce-create-sink Kamelet as a Kafka sink by binding it to a Kafka topic. salesforce-create-sink-binding.yaml apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: salesforce-create-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: salesforce-create-sink properties: clientId: "The Consumer Key" clientSecret: "The Consumer Secret" password: "The Password" userName: "The Username" 61.3.2.1. Prerequisites Ensure that you've installed the AMQ Streams operator in your OpenShift cluster and created a topic named my-topic in the current namespace. Make also sure you have "Red Hat Integration - Camel K" installed into the OpenShift cluster you're connected to. 61.3.2.2. Procedure for using the cluster CLI Save the salesforce-create-sink-binding.yaml file to your local drive, and then edit it as needed for your configuration. Run the sink by using the following command: oc apply -f salesforce-create-sink-binding.yaml 61.3.2.3. 
Procedure for using the Kamel CLI Configure and run the sink by using the following command: kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic salesforce-create-sink -p "sink.clientId=The Consumer Key" -p "sink.clientSecret=The Consumer Secret" -p "sink.password=The Password" -p "sink.userName=The Username" This command creates the KameletBinding in the current namespace on the cluster. 61.4. Kamelet source file https://github.com/openshift-integration/kamelet-catalog/salesforce-create-sink.kamelet.yaml
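As a complementary illustration that is not part of the Kamelet reference itself, the following sketch shows one way a plain Java client might publish a Salesforce Contact payload to the my-topic topic consumed by the Kafka binding above. It uses the standard Apache Kafka producer API; the bootstrap server address is an assumption and must be replaced with the bootstrap service of your AMQ Streams cluster.

Example: Hypothetical Java producer for the my-topic topic

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

import java.util.Properties;

public class SalesforceContactPublisher {

    public static void main(String[] args) {
        Properties props = new Properties();
        // Assumed bootstrap address; adjust to your AMQ Streams cluster.
        props.put("bootstrap.servers", "my-cluster-kafka-bootstrap:9092");
        props.put("key.serializer", StringSerializer.class.getName());
        props.put("value.serializer", StringSerializer.class.getName());

        // The message body must be the JSON of the Salesforce object.
        String body = "{ \"Phone\": \"555\", \"Name\": \"Antonia\", \"LastName\": \"Garcia\" }";

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("my-topic", body));
            producer.flush();
        }
    }
}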
[ "apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: salesforce-create-sink-binding spec: source: ref: kind: Channel apiVersion: messaging.knative.dev/v1 name: mychannel sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: salesforce-create-sink properties: clientId: \"The Consumer Key\" clientSecret: \"The Consumer Secret\" password: \"The Password\" userName: \"The Username\"", "apply -f salesforce-create-sink-binding.yaml", "kamel bind channel:mychannel salesforce-create-sink -p \"sink.clientId=The Consumer Key\" -p \"sink.clientSecret=The Consumer Secret\" -p \"sink.password=The Password\" -p \"sink.userName=The Username\"", "apiVersion: camel.apache.org/v1alpha1 kind: KameletBinding metadata: name: salesforce-create-sink-binding spec: source: ref: kind: KafkaTopic apiVersion: kafka.strimzi.io/v1beta1 name: my-topic sink: ref: kind: Kamelet apiVersion: camel.apache.org/v1alpha1 name: salesforce-create-sink properties: clientId: \"The Consumer Key\" clientSecret: \"The Consumer Secret\" password: \"The Password\" userName: \"The Username\"", "apply -f salesforce-create-sink-binding.yaml", "kamel bind kafka.strimzi.io/v1beta1:KafkaTopic:my-topic salesforce-create-sink -p \"sink.clientId=The Consumer Key\" -p \"sink.clientSecret=The Consumer Secret\" -p \"sink.password=The Password\" -p \"sink.userName=The Username\"" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_k/1.10.7/html/kamelets_reference/salesforce-sink-create
Chapter 5. Security considerations
Chapter 5. Security considerations 5.1. FIPS-140-2 The Federal Information Processing Standard Publication 140-2 (FIPS-140-2) is a standard that defines a set of security requirements for the use of cryptographic modules. This standard is mandated by law for US government agencies and contractors and is also referenced in other international and industry-specific standards. Red Hat OpenShift Data Foundation now uses FIPS validated cryptographic modules. Red Hat Enterprise Linux OS/CoreOS (RHCOS) delivers these modules. Currently, the Cryptographic Module Validation Program (CMVP) processes the cryptography modules. You can see the state of these modules at Modules in Process List . For more up-to-date information, see the Red Hat Knowledgebase solution RHEL core crypto components . Note Enable the FIPS mode on the OpenShift Container Platform before you install OpenShift Data Foundation. OpenShift Container Platform must run on the RHCOS nodes, as the feature does not support OpenShift Data Foundation deployment on Red Hat Enterprise Linux 7 (RHEL 7). For more information, see Installing a cluster in FIPS mode and Support for FIPS cryptography of the Installing guide in OpenShift Container Platform documentation. 5.2. Proxy environment A proxy environment is a production environment that denies direct access to the internet and provides an available HTTP or HTTPS proxy instead. Red Hat OpenShift Container Platform is configured to use a proxy by modifying the proxy object for existing clusters or by configuring the proxy settings in the install-config.yaml file for new clusters. Red Hat supports deployment of OpenShift Data Foundation in proxy environments when OpenShift Container Platform has been configured according to configuring the cluster-wide proxy . 5.3. Data encryption options Encryption lets you encode your data to make it impossible to read without the required encryption keys. This mechanism protects the confidentiality of your data in the event of a physical security breach that results in physical media escaping your custody. The per-PV encryption also provides access protection from other namespaces inside the same OpenShift Container Platform cluster. Data is encrypted when it is written to the disk, and decrypted when it is read from the disk. Working with encrypted data might incur a small penalty to performance. Encryption is only supported for new clusters deployed using Red Hat OpenShift Data Foundation 4.6 or higher. An existing encrypted cluster that is not using an external Key Management System (KMS) cannot be migrated to use an external KMS. Previously, HashiCorp Vault was the only supported KMS for Cluster-wide and Persistent Volume encryptions. With OpenShift Data Foundation 4.7.0 and 4.7.1, only HashiCorp Vault Key/Value (KV) secret engine API, version 1 is supported. Starting with OpenShift Data Foundation 4.7.2, HashiCorp Vault KV secret engine API, versions 1 and 2 are supported. As of OpenShift Data Foundation 4.12, Thales CipherTrust Manager has been introduced as an additional supported KMS. Important KMS is required for StorageClass encryption, and is optional for cluster-wide encryption. Storage class encryption requires a valid Red Hat OpenShift Data Foundation Advanced subscription. For more information, see the knowledgebase article on OpenShift Data Foundation subscriptions . Red Hat works with the technology partners to provide this documentation as a service to the customers. 
However, Red Hat does not provide support for the Hashicorp product. For technical assistance with this product, contact Hashicorp . 5.3.1. Cluster-wide encryption Red Hat OpenShift Data Foundation supports cluster-wide encryption (encryption-at-rest) for all the disks and Multicloud Object Gateway operations in the storage cluster. OpenShift Data Foundation uses Linux Unified Key Setup (LUKS) version 2 based encryption with a key size of 512 bits and the aes-xts-plain64 cipher where each device has a different encryption key. The keys are stored using a Kubernetes secret or an external KMS. Both methods are mutually exclusive and you cannot migrate between methods. Encryption is disabled by default for block and file storage. You can enable encryption for the cluster at the time of deployment. The MultiCloud Object Gateway supports encryption by default. See the deployment guides for more information. Cluster wide encryption is supported in OpenShift Data Foundation 4.6 without Key Management System (KMS). Starting with OpenShift Data Foundation 4.7, it is supported both with and without HashiCorp Vault KMS. Starting with OpenShift Data Foundation 4.12, it is supported with or without a KMS, and both HashiCorp Vault KMS and Thales CipherTrust Manager KMS can be used. Common security practices require periodic encryption key rotation. Red Hat OpenShift Data Foundation automatically rotates encryption keys stored in a Kubernetes secret (non-KMS) weekly. Note Requires a valid Red Hat OpenShift Data Foundation Advanced subscription. To know how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . Cluster wide encryption with HashiCorp Vault KMS provides two authentication methods: Token : This method allows authentication using vault tokens. A Kubernetes secret containing the vault token is created in the openshift-storage namespace and is used for authentication. If this authentication method is selected, then the administrator has to provide the vault token that provides access to the backend path in Vault, where the encryption keys are stored. Kubernetes : This method allows authentication with vault using service accounts. If this authentication method is selected, then the administrator has to provide the name of the role configured in Vault that provides access to the backend path, where the encryption keys are stored. The value of this role is then added to the ocs-kms-connection-details config map. This method is available from OpenShift Data Foundation 4.10. Currently, HashiCorp Vault is the only supported KMS. With OpenShift Data Foundation 4.7.0 and 4.7.1, only HashiCorp Vault KV secret engine, API version 1 is supported. Starting with OpenShift Data Foundation 4.7.2, HashiCorp Vault KV secret engine API, versions 1 and 2 are supported. Note OpenShift Data Foundation on IBM Cloud platform supports Hyper Protect Crypto Services (HPCS) Key Management Services (KMS) as the encryption solution in addition to HashiCorp Vault KMS. Important Red Hat works with the technology partners to provide this documentation as a service to the customers. However, Red Hat does not provide support for the Hashicorp product. For technical assistance with this product, contact Hashicorp . 5.3.2. Storage class encryption You can encrypt persistent volumes (block only) with storage class encryption using an external Key Management System (KMS) to store device encryption keys. Persistent volume encryption is only available for RADOS Block Device (RBD) persistent volumes. 
See how to create a storage class with persistent volume encryption . Storage class encryption is supported in OpenShift Data Foundation 4.7 or higher with HashiCorp Vault KMS. Storage class encryption is supported in OpenShift Data Foundation 4.12 or higher with both HashiCorp Vault KMS and Thales CipherTrust Manager KMS. Note Requires a valid Red Hat OpenShift Data Foundation Advanced subscription. To know how subscriptions for OpenShift Data Foundation work, see knowledgebase article on OpenShift Data Foundation subscriptions . 5.3.3. CipherTrust manager Red Hat OpenShift Data Foundation version 4.12 introduced Thales CipherTrust Manager as an additional Key Management System (KMS) provider for your deployment. Thales CipherTrust Manager provides centralized key lifecycle management. CipherTrust Manager supports Key Management Interoperability Protocol (KMIP), which enables communication between key management systems. CipherTrust Manager is enabled during deployment. 5.3.4. Data encryption in-transit via Red Hat Ceph Storage's messenger version 2 protocol (msgr2) Starting with OpenShift Data Foundation version 4.14, Red Hat Ceph Storage's messenger version 2 protocol can be used to encrypt data in-transit. This provides an important security requirement for your infrastructure. In-transit encryption can be enabled during deployment while the cluster is being created. See the deployment guide for your environment for instructions on enabling data encryption in-transit during cluster creation. The msgr2 protocol supports two connection modes: crc Provides strong initial authentication when a connection is established with cephx. Provides a crc32c integrity check to protect against bit flips. Does not provide protection against a malicious man-in-the-middle attack. Does not prevent an eavesdropper from seeing all post-authentication traffic. secure Provides strong initial authentication when a connection is established with cephx. Provides full encryption of all post-authentication traffic. Provides a cryptographic integrity check. The default mode is crc . 5.4. Encryption in Transit You need to enable IPsec so that all the network traffic between the nodes on the OVN-Kubernetes Container Network Interface (CNI) cluster network travels through an encrypted tunnel. By default, IPsec is disabled. You can enable it either during or after installing the cluster. If you need to enable IPsec after cluster installation, you must first resize your cluster MTU to account for the overhead of the IPsec ESP IP header. For more information on how to configure the IPsec encryption, see Configuring IPsec encryption of the Networking guide in OpenShift Container Platform documentation.
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/planning_your_deployment/security-considerations_rhodf
Autoscale APIs
Autoscale APIs OpenShift Container Platform 4.16 Reference guide for autoscale APIs Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html-single/autoscale_apis/index
Introduction
Introduction This book provides information about configuring and maintaining Red Hat GFS2 (Red Hat Global File System 2), which is included in the Resilient Storage Add-On. 1. Audience This book is intended primarily for Linux system administrators who are familiar with the following activities: Linux system administration procedures, including kernel configuration Installation and configuration of shared storage networks, such as Fibre Channel SANs
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/global_file_system_2/ch-intro-gfs2
21.8. Loop Device Errors
21.8. Loop Device Errors If file-based guest images are used, you may have to increase the number of configured loop devices. The default configuration allows up to eight active loop devices. If more than eight file-based guests or loop devices are needed, the number of loop devices configured can be adjusted in the /etc/modprobe.d/ directory. Add the following line: This example uses 64, but you can specify another number to set the maximum loop value. You may also have to implement loop device backed guests on your system. To use loop device backed guests for a fully virtualized system, use the phy: device or file: file commands.
[ "options loop max_loop=64" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sect-virtualization-troubleshooting-loop_device_errors
Chapter 27. Red Hat Decision Manager clusters
Chapter 27. Red Hat Decision Manager clusters By clustering two or more computers, you have the benefits of high availability, enhanced collaboration, and load balancing. High availability decreases the chance of data loss when a single computer fails. When a computer fails, another computer fills the gap by providing a copy of the data that was on the failed computer. When the failed computer comes online again, it resumes its place in the cluster. There are several ways that you can cluster Red Hat Decision Manager components. This document describes how to cluster the following scenarios: Chapter 28, Red Hat Process Automation Manager clusters in a development (authoring) environment Chapter 29, KIE Server clusters in a runtime environment
null
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/installing_and_configuring_red_hat_decision_manager/clustering-con_clustering
3.6. Tapsets
3.6. Tapsets Tapsets are scripts that form a library of pre-written probes and functions to be used in SystemTap scripts. When a user runs a SystemTap script, SystemTap checks the script's probe events and handlers against the tapset library; SystemTap then loads the corresponding probes and functions before translating the script to C (see Section 3.1, "Architecture" for information on what transpires in a SystemTap session). Like SystemTap scripts, tapsets use the file name extension .stp . The standard library of tapsets is located in the /usr/share/systemtap/tapset/ directory by default. However, unlike SystemTap scripts, tapsets are not meant for direct execution; rather, they constitute the library from which other scripts can pull definitions. The tapset library is an abstraction layer designed to make it easier for users to define events and functions. Tapsets provide useful aliases for functions that users may want to specify as an event; knowing the proper alias to use is, for the most part, easier than remembering specific kernel functions that might vary between kernel versions. Several handlers and functions in Section 3.2.1, "Event" and SystemTap Functions are defined in tapsets. For example, thread_indent() is defined in indent.stp .
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_beginners_guide/understanding-tapsets
Chapter 2. Architecture of OpenShift Data Foundation
Chapter 2. Architecture of OpenShift Data Foundation Red Hat OpenShift Data Foundation provides services for, and can run internally from Red Hat OpenShift Container Platform. Figure 2.1. Red Hat OpenShift Data Foundation architecture Red Hat OpenShift Data Foundation supports deployment into Red Hat OpenShift Container Platform clusters deployed on Installer Provisioned Infrastructure or User Provisioned Infrastructure. For details about these two approaches, see OpenShift Container Platform - Installation process . To know more about interoperability of components for the Red Hat OpenShift Data Foundation and Red Hat OpenShift Container Platform, see the Red Hat OpenShift Data Foundation Supportability and Interoperability Checker . For information about the architecture and lifecycle of OpenShift Container Platform, see OpenShift Container Platform architecture . Note For IBM Power refer OpenShift Container Platform - Installation process . 2.1. About operators Red Hat OpenShift Data Foundation comprises three main operators, which codify administrative tasks and custom resources so that task and resource characteristics can be easily automated. Administrators define the desired end state of the cluster, and the OpenShift Data Foundation operators ensure the cluster is either in that state, or approaching that state, with minimal administrator intervention. OpenShift Data Foundation operator A meta-operator that codifies and enforces the recommendations and requirements of a supported Red Hat OpenShift Data Foundation deployment by drawing on other operators in specific, tested ways. This operator provides the storage cluster resource that wraps resources provided by the Rook-Ceph and NooBaa operators. Rook-Ceph operator This operator automates the packaging, deployment, management, upgrading, and scaling of persistent storage and file, block, and object services. It creates block and file storage classes for all environments, and creates an object storage class and services object bucket claims made against it in on-premises environments. Additionally, for internal mode clusters, it provides the Ceph cluster resource, which manages the deployments and services representing the following: Object storage daemons (OSDs) Monitors (MONs) Manager (MGR) Metadata servers (MDS) Object gateways (RGW) on-premises only MCG operator This operator automates the packaging, deployment, management, upgrading, and scaling of the Multicloud Object Gateway object service. It creates an object storage class and services object bucket claims made against it. Additionally, it provides the NooBaa cluster resource, which manages the deployments and services for NooBaa core, database, and endpoint. 2.2. Storage cluster deployment approaches Flexibility is a core tenet of Red Hat OpenShift Data Foundation, as evidenced by its growing list of operating modalities. This section provides you with information that will help you to select the most appropriate approach for your environments. Red Hat OpenShift Data Foundation can be deployed either entirely within OpenShift Container Platform (Internal approach) or to make available the services from a cluster running outside of OpenShift Container Platform (External approach). 2.2.1. Internal approach Deployment of Red Hat OpenShift Data Foundation entirely within Red Hat OpenShift Container Platform has all the benefits of operator based deployment and management. 
Internal-attached device approach in the graphical user interface can be used to deploy Red Hat OpenShift Data Foundation in internal mode using the local storage operator and local storage devices. Ease of deployment and management are the highlights of running OpenShift Data Foundation services internally on OpenShift Container Platform. There are two different deployment modalities available when Red Hat OpenShift Data Foundation is running entirely within Red Hat OpenShift Container Platform: Simple Optimized Simple deployment Red Hat OpenShift Data Foundation services run co-resident with applications, managed by operators in Red Hat OpenShift Container Platform. A simple deployment is best for situations where Storage requirements are not clear Red Hat OpenShift Data Foundation services will run co-resident with applications Creating a node instance of a specific size is difficult (bare metal) In order for Red Hat OpenShift Data Foundation to run co-resident with applications, they must have local storage devices, or portable storage devices attached to them dynamically, like EBS volumes on EC2, or vSphere Virtual Volumes on VMware, or SAN volumes dynamically provisioned by PowerVC. Optimized deployment Red Hat OpenShift Data Foundation services run on dedicated infrastructure nodes managed by Red Hat OpenShift Container Platform. An optimized approach is best for situations when: Storage requirements are clear Red Hat OpenShift Data Foundation services run on dedicated infrastructure nodes Creating a node instance of a specific size is easy (Cloud, Virtualized environment, etc.) 2.2.2. External approach Red Hat OpenShift Data Foundation exposes the Red Hat Ceph Storage services running outside of the OpenShift Container Platform cluster as storage classes. The external approach is best used when: Storage requirements are significant (600+ storage devices) Multiple OpenShift Container Platform clusters need to consume storage services from a common external cluster. Another team (SRE, Storage, etc.) needs to manage the external cluster providing storage services. Possibly pre-existing. 2.3. Node types Nodes run the container runtime, as well as services, to ensure that containers are running, and maintain network communication and separation between pods. In OpenShift Data Foundation, there are three types of nodes. Table 2.1. Types of nodes Node Type Description Master These nodes run processes that expose the Kubernetes API, watch and schedule newly created pods, maintain node health and quantity, and control interaction with underlying cloud providers. Infrastructure (Infra) Infra nodes run cluster level infrastructure services such as logging, metrics, registry, and routing. These are optional in OpenShift Container Platform clusters. In order to separate OpenShift Data Foundation layer workload from applications, it is recommended to use infra nodes for OpenShift Data Foundation in virtualized and cloud environments. To create Infra nodes, you can provision new nodes labeled as infra . See How to use dedicated worker nodes for Red Hat OpenShift Data Foundation? Worker Worker nodes are also known as application nodes since they run applications. When OpenShift Data Foundation is deployed in internal mode, a minimal cluster of 3 worker nodes is required, where the nodes are recommended to be spread across three different racks, or availability zones, to ensure availability. 
In order for OpenShift Data Foundation to run on worker nodes, they must either have local storage devices, or portable storage devices attached to them dynamically. When it is deployed in external mode, it runs on multiple nodes to allow rescheduling by K8S on available nodes in case of a failure. Note Nodes that run only storage workloads require a subscription for Red Hat OpenShift Data Foundation. Nodes that run other workloads in addition to storage workloads require both Red Hat OpenShift Data Foundation and Red Hat OpenShift Container Platform subscriptions. See Chapter 6, Subscriptions for more information.
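A short, hedged sketch of how the node roles discussed in Table 2.1 might be reviewed and applied with the oc client; the exact label and machine-set workflow for infra nodes are covered in the linked knowledge-base article, not in this chapter, so the label shown here is an assumption based on standard OpenShift role labels.
# Review which nodes currently carry worker or infra roles (assumes cluster-admin access)
oc get nodes --show-labels
# Illustrative only: apply the standard infra role label to a node that should host
# OpenShift Data Foundation services; <node-name> is a placeholder
oc label node <node-name> node-role.kubernetes.io/infra=""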
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.9/html/planning_your_deployment/odf-architecture_rhodf
Chapter 2. Installing CodeReady Workspaces on OpenShift 3 using the Operator
Chapter 2. Installing CodeReady Workspaces on OpenShift 3 using the Operator Operators are a method of packaging, deploying, and managing an OpenShift application which also provides the following: Repeatability of installation and upgrade. Constant health checks of every system component. Over-the-air (OTA) updates for OpenShift components and ISV content. A place to encapsulate knowledge from field engineers and spread it to all users. This chapter describes how to install CodeReady Workspaces on OpenShift 3 using the CLI management tool and the Operator method. 2.1. Installing CodeReady Workspaces on OpenShift 3 using the Operator This section describes how to use the CLI management tool to install CodeReady Workspaces on OpenShift 3 via the Operator with SSL (HTTPS) enabled. As of 2.1.1, SSL/TLS is enabled by default as it is required by the Che-Theia IDE. Prerequisites A running instance of OpenShift 3.11. Administrator rights on this OpenShift 3 instance. The oc OpenShift 3.11 CLI management tool is installed and configured. See Installing the OpenShift 3.11 CLI . To check the version of the oc tool, use the oc version command. The crwctl CLI management tool is installed. See Installing the crwctl CLI management tool . Procedure Log in to OpenShift. See Basic Setup and Login . Run the following command to create the CodeReady Workspaces instance: Note To create the CodeReady Workspaces instance on OpenShift clusters that have not been configured with a valid certificate for the routes, run the crwctl command with the --self-signed-cert flag. Verification steps The output of the command ends with: Navigate to the CodeReady Workspaces cluster instance: https://codeready-<openshift_deployment_name>.<domain_name> . The domain uses Let's Encrypt ACME certificates.
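As a hedged sketch of the variant mentioned in the Note above, the --self-signed-cert flag would be appended to the server:start command listed for this procedure; the namespace placeholder is the same one used there.
# Sketch only: start the server on a cluster whose routes lack a valid certificate
crwctl server:start -n <openshift_namespace> --self-signed-cert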
[ "oc login", "crwctl server:start -n <openshift_namespace>", "Command server:start has completed successfully." ]
https://docs.redhat.com/en/documentation/red_hat_codeready_workspaces/2.1/html/installation_guide/installing-codeready-workspaces-on-openshift-3-using-the-operator_crw
4.4. Logical Volume Administration
4.4. Logical Volume Administration This section describes the commands that perform the various aspects of logical volume administration. 4.4.1. Creating Linear Logical Volumes To create a logical volume, use the lvcreate command. If you do not specify a name for the logical volume, the default name lvol # is used where # is the internal number of the logical volume. When you create a logical volume, the logical volume is carved from a volume group using the free extents on the physical volumes that make up the volume group. Normally logical volumes use up any space available on the underlying physical volumes on a next-free basis. Modifying the logical volume frees and reallocates space in the physical volumes. The following command creates a logical volume 10 gigabytes in size in the volume group vg1 . The default unit for logical volume size is megabytes. The following command creates a 1500 megabyte linear logical volume named testlv in the volume group testvg , creating the block device /dev/testvg/testlv . The following command creates a 50 gigabyte logical volume named gfslv from the free extents in volume group vg0 . You can use the -l argument of the lvcreate command to specify the size of the logical volume in extents. You can also use this argument to specify the percentage of the size of a related volume group, logical volume, or set of physical volumes. The suffix %VG denotes the total size of the volume group, the suffix %FREE the remaining free space in the volume group, and the suffix %PVS the free space in the specified physical volumes. For a snapshot, the size can be expressed as a percentage of the total size of the origin logical volume with the suffix %ORIGIN (100%ORIGIN provides space for the whole origin). When expressed as a percentage, the size defines an upper limit for the number of logical extents in the new logical volume. The precise number of logical extents in the new LV is not determined until the command has completed. The following command creates a logical volume called mylv that uses 60% of the total space in volume group testvg . The following command creates a logical volume called yourlv that uses all of the unallocated space in the volume group testvg . You can use the -l argument of the lvcreate command to create a logical volume that uses the entire volume group. Another way to create a logical volume that uses the entire volume group is to use the vgdisplay command to find the "Total PE" size and to use those results as input to the lvcreate command. The following commands create a logical volume called mylv that fills the volume group named testvg . The underlying physical volumes used to create a logical volume can be important if the physical volume needs to be removed, so you may need to consider this possibility when you create the logical volume. For information on removing a physical volume from a volume group, see Section 4.3.7, "Removing Physical Volumes from a Volume Group" . To create a logical volume to be allocated from a specific physical volume in the volume group, specify the physical volume or volumes at the end of the lvcreate command line. The following command creates a logical volume named testlv in volume group testvg allocated from the physical volume /dev/sdg1 . You can specify which extents of a physical volume are to be used for a logical volume.
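The lvcreate invocations described in the paragraphs above are not reproduced in this extract. Minimal sketches, using the sizes and names given in the text, might look like the following; the size in the final example and the "Total PE" value are illustrative, since the text does not restate them.
# 10 GB logical volume in volume group vg1 (default name lvolN)
lvcreate -L 10G vg1
# 1500 MB linear volume named testlv, creating /dev/testvg/testlv
lvcreate -L 1500 -n testlv testvg
# 50 GB volume named gfslv from the free extents in vg0
lvcreate -L 50G -n gfslv vg0
# 60% of the total space in testvg, and all remaining free space in testvg
lvcreate -l 60%VG -n mylv testvg
lvcreate -l 100%FREE -n yourlv testvg
# fill the volume group using the "Total PE" count reported by vgdisplay
vgdisplay testvg | grep "Total PE"
lvcreate -l <TotalPE> -n mylv testvg
# allocate testlv from a specific physical volume (size shown is illustrative)
lvcreate -L 1500 -n testlv testvg /dev/sdg1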
The following example creates a linear logical volume out of extents 0 through 24 of physical volume /dev/sda1 and extents 50 through 124 of physical volume /dev/sdb1 in volume group testvg . The following example creates a linear logical volume out of extents 0 through 25 of physical volume /dev/sda1 and then continues laying out the logical volume at extent 100. The default policy for how the extents of a logical volume are allocated is inherit , which applies the same policy as for the volume group. These policies can be changed using the lvchange command. For information on allocation policies, see Section 4.3.1, "Creating Volume Groups" . 4.4.2. Creating Striped Volumes For large sequential reads and writes, creating a striped logical volume can improve the efficiency of the data I/O. For general information about striped volumes, see Section 2.3.2, "Striped Logical Volumes" . When you create a striped logical volume, you specify the number of stripes with the -i argument of the lvcreate command. This determines over how many physical volumes the logical volume will be striped. The number of stripes cannot be greater than the number of physical volumes in the volume group (unless the --alloc anywhere argument is used). If the underlying physical devices that make up a striped logical volume are different sizes, the maximum size of the striped volume is determined by the smallest underlying device. For example, in a two-legged stripe, the maximum size is twice the size of the smaller device. In a three-legged stripe, the maximum size is three times the size of the smallest device. The following command creates a striped logical volume across 2 physical volumes with a stripe of 64 kilobytes. The logical volume is 50 gigabytes in size, is named gfslv , and is carved out of volume group vg0 . As with linear volumes, you can specify the extents of the physical volume that you are using for the stripe. The following command creates a striped volume 100 extents in size that stripes across two physical volumes, is named stripelv and is in volume group testvg . The stripe will use sectors 0-49 of /dev/sda1 and sectors 50-99 of /dev/sdb1 . 4.4.3. RAID Logical Volumes LVM supports RAID0/1/4/5/6/10. Note RAID logical volumes are not cluster-aware. While RAID logical volumes can be created and activated exclusively on one machine, they cannot be activated simultaneously on more than one machine. If you require non-exclusive mirrored volumes, you must create the volumes with a mirror segment type, as described in Section 4.4.4, "Creating Mirrored Volumes" . To create a RAID logical volume, you specify a raid type as the --type argument of the lvcreate command. Table 4.1, "RAID Segment Types" describes the possible RAID segment types. Table 4.1. RAID Segment Types Segment type Description raid1 RAID1 mirroring. This is the default value for the --type argument of the lvcreate command when you specify the -m but you do not specify striping. raid4 RAID4 dedicated parity disk raid5 Same as raid5_ls raid5_la RAID5 left asymmetric. Rotating parity 0 with data continuation raid5_ra RAID5 right asymmetric. Rotating parity N with data continuation raid5_ls RAID5 left symmetric. Rotating parity 0 with data restart raid5_rs RAID5 right symmetric. 
Rotating parity N with data restart raid6 Same as raid6_zr raid6_zr RAID6 zero restart Rotating parity zero (left-to-right) with data restart raid6_nr RAID6 N restart Rotating parity N (left-to-right) with data restart raid6_nc RAID6 N continue Rotating parity N (left-to-right) with data continuation raid10 Striped mirrors. This is the default value for the --type argument of the lvcreate command if you specify the -m and you specify a number of stripes that is greater than 1. Striping of mirror sets raid0/raid0_meta (Red Hat Enterprise Linux 7.3 and later) Striping. RAID0 spreads logical volume data across multiple data subvolumes in units of stripe size. This is used to increase performance. Logical volume data will be lost if any of the data subvolumes fail. For information on creating RAID0 volumes, see Section 4.4.3.1, "Creating RAID0 Volumes (Red Hat Enterprise Linux 7.3 and Later)" . For most users, specifying one of the five available primary types ( raid1 , raid4 , raid5 , raid6 , raid10 ) should be sufficient. When you create a RAID logical volume, LVM creates a metadata subvolume that is one extent in size for every data or parity subvolume in the array. For example, creating a 2-way RAID1 array results in two metadata subvolumes ( lv_rmeta_0 and lv_rmeta_1 ) and two data subvolumes ( lv_rimage_0 and lv_rimage_1 ). Similarly, creating a 3-way stripe (plus 1 implicit parity device) RAID4 results in 4 metadata subvolumes ( lv_rmeta_0 , lv_rmeta_1 , lv_rmeta_2 , and lv_rmeta_3 ) and 4 data subvolumes ( lv_rimage_0 , lv_rimage_1 , lv_rimage_2 , and lv_rimage_3 ). The following command creates a 2-way RAID1 array named my_lv in the volume group my_vg that is one gigabyte in size. You can create RAID1 arrays with different numbers of copies according to the value you specify for the -m argument. Similarly, you specify the number of stripes for a RAID 4/5/6 logical volume with the -i argument . You can also specify the stripe size with the -I argument. The following command creates a RAID5 array (3 stripes + 1 implicit parity drive) named my_lv in the volume group my_vg that is one gigabyte in size. Note that you specify the number of stripes just as you do for an LVM striped volume; the correct number of parity drives is added automatically. The following command creates a RAID6 array (3 stripes + 2 implicit parity drives) named my_lv in the volume group my_vg that is one gigabyte in size. After you have created a RAID logical volume with LVM, you can activate, change, remove, display, and use the volume just as you would any other LVM logical volume. When you create RAID10 logical volumes, the background I/O required to initialize the logical volumes with a sync operation can crowd out other I/O operations to LVM devices, such as updates to volume group metadata, particularly when you are creating many RAID logical volumes. This can cause the other LVM operations to slow down. You can control the rate at which a RAID logical volume is initialized by implementing recovery throttling. You control the rate at which sync operations are performed by setting the minimum and maximum I/O rate for those operations with the --minrecoveryrate and --maxrecoveryrate options of the lvcreate command. You specify these options as follows. --maxrecoveryrate Rate [bBsSkKmMgG] Sets the maximum recovery rate for a RAID logical volume so that it will not crowd out nominal I/O operations. The Rate is specified as an amount per second for each device in the array. 
If no suffix is given, then kiB/sec/device is assumed. Setting the recovery rate to 0 means it will be unbounded. --minrecoveryrate Rate [bBsSkKmMgG] Sets the minimum recovery rate for a RAID logical volume to ensure that I/O for sync operations achieves a minimum throughput, even when heavy nominal I/O is present. The Rate is specified as an amount per second for each device in the array. If no suffix is given, then kiB/sec/device is assumed. The following command creates a 2-way RAID10 array with 3 stripes that is 10 gigabytes in size with a maximum recovery rate of 128 kiB/sec/device. The array is named my_lv and is in the volume group my_vg . You can also specify minimum and maximum recovery rates for a RAID scrubbing operation. For information on RAID scrubbing, see Section 4.4.3.11, "Scrubbing a RAID Logical Volume" . Note You can generate commands to create logical volumes on RAID storage with the LVM RAID Calculator application. This application uses the information you input about your current or planned storage to generate these commands. The LVM RAID Calculator application can be found at https://access.redhat.com/labs/lvmraidcalculator/ . The following sections describe the administrative tasks you can perform on LVM RAID devices: Section 4.4.3.1, "Creating RAID0 Volumes (Red Hat Enterprise Linux 7.3 and Later)" . Section 4.4.3.2, "Converting a Linear Device to a RAID Device" Section 4.4.3.3, "Converting an LVM RAID1 Logical Volume to an LVM Linear Logical Volume" Section 4.4.3.4, "Converting a Mirrored LVM Device to a RAID1 Device" Section 4.4.3.5, "Resizing a RAID Logical Volume" Section 4.4.3.6, "Changing the Number of Images in an Existing RAID1 Device" Section 4.4.3.7, "Splitting off a RAID Image as a Separate Logical Volume" Section 4.4.3.8, "Splitting and Merging a RAID Image" Section 4.4.3.9, "Setting a RAID fault policy" Section 4.4.3.10, "Replacing a RAID device" Section 4.4.3.11, "Scrubbing a RAID Logical Volume" Section 4.4.3.12, "RAID Takeover (Red Hat Enterprise Linux 7.4 and Later)" Section 4.4.3.13, "Reshaping a RAID Logical Volume (Red Hat Enterprise Linux 7.4 and Later)" Section 4.4.3.14, "Controlling I/O Operations on a RAID1 Logical Volume" Section 4.4.3.15, "Changing the region size on a RAID Logical Volume (Red Hat Enterprise Linux 7.4 and later)" 4.4.3.1. Creating RAID0 Volumes (Red Hat Enterprise Linux 7.3 and Later) The format for the command to create a RAID0 volume is as follows. Table 4.2. RAID0 Command Creation parameters Parameter Description --type raid0[_meta] Specifying raid0 creates a RAID0 volume without metadata volumes. Specifying raid0_meta creates a RAID0 volume with metadata volumes. Because RAID0 is non-resilient, it does not have to store any mirrored data blocks as RAID1/10 or calculate and store any parity blocks as RAID4/5/6 do. Hence, it does not need metadata volumes to keep state about resynchronization progress of mirrored or parity blocks. Metadata volumes become mandatory on a conversion from RAID0 to RAID4/5/6/10, however, and specifying raid0_meta preallocates those metadata volumes to prevent a respective allocation failure. --stripes Stripes Specifies the number of devices to spread the logical volume across. --stripesize StripeSize Specifies the size of each stripe in kilobytes. This is the amount of data that is written to one device before moving to the next device. VolumeGroup Specifies the volume group to use. PhysicalVolumePath ... Specifies the devices to use.
If this is not specified, LVM will choose the number of devices specified by the Stripes option, one for each stripe. 4.4.3.2. Converting a Linear Device to a RAID Device You can convert an existing linear logical volume to a RAID device by using the --type argument of the lvconvert command. The following command converts the linear logical volume my_lv in volume group my_vg to a 2-way RAID1 array. Since RAID logical volumes are composed of metadata and data subvolume pairs, when you convert a linear device to a RAID1 array, a new metadata subvolume is created and associated with the original logical volume on (one of) the same physical volumes that the linear volume is on. The additional images are added in metadata/data subvolume pairs. For example, if the original device is as follows: After conversion to a 2-way RAID1 array the device contains the following data and metadata subvolume pairs: If the metadata image that pairs with the original logical volume cannot be placed on the same physical volume, the lvconvert will fail. 4.4.3.3. Converting an LVM RAID1 Logical Volume to an LVM Linear Logical Volume You can convert an existing RAID1 LVM logical volume to an LVM linear logical volume with the lvconvert command by specifying the -m0 argument. This removes all the RAID data subvolumes and all the RAID metadata subvolumes that make up the RAID array, leaving the top-level RAID1 image as the linear logical volume. The following example displays an existing LVM RAID1 logical volume. The following command converts the LVM RAID1 logical volume my_vg/my_lv to an LVM linear device. When you convert an LVM RAID1 logical volume to an LVM linear volume, you can specify which physical volumes to remove. The following example shows the layout of an LVM RAID1 logical volume made up of two images: /dev/sda1 and /dev/sdb1 . In this example, the lvconvert command specifies that you want to remove /dev/sda1 , leaving /dev/sdb1 as the physical volume that makes up the linear device. 4.4.3.4. Converting a Mirrored LVM Device to a RAID1 Device You can convert an existing mirrored LVM device with a segment type of mirror to a RAID1 LVM device with the lvconvert command by specifying the --type raid1 argument. This renames the mirror subvolumes ( *_mimage_* ) to RAID subvolumes ( *_rimage_* ). In addition, the mirror log is removed and metadata subvolumes ( *_rmeta_* ) are created for the data subvolumes on the same physical volumes as the corresponding data subvolumes. The following example shows the layout of a mirrored logical volume my_vg/my_lv . The following command converts the mirrored logical volume my_vg/my_lv to a RAID1 logical volume. 4.4.3.5. Resizing a RAID Logical Volume You can resize a RAID logical volume in the following ways; You can increase the size of a RAID logical volume of any type with the lvresize or lvextend command. This does not change the number of RAID images. For striped RAID logical volumes the same stripe rounding constraints apply as when you create a striped RAID logical volume. For more information on extending a RAID volume, see Section 4.4.18, "Extending a RAID Volume" . You can reduce the size of a RAID logical volume of any type with the lvresize or lvreduce command. This does not change the number of RAID images. As with the lvextend command, the same stripe rounding constraints apply as when you create a striped RAID logical volume. For an example of a command to reduce the size of a logical volume, see Section 4.4.16, "Shrinking Logical Volumes" . 
As of Red Hat Enterprise Linux 7.4, you can change the number of stripes on a striped RAID logical volume ( raid4/5/6/10 ) with the --stripes N parameter of the lvconvert command. This increases or reduces the size of the RAID logical volume by the capacity of the stripes added or removed. Note that raid10 volumes are capable only of adding stripes. This capability is part of the RAID reshaping feature that allows you to change attributes of a RAID logical volume while keeping the same RAID level. For information on RAID reshaping and examples of using the lvconvert command to reshape a RAID logical volume, see the lvmraid (7) man page. 4.4.3.6. Changing the Number of Images in an Existing RAID1 Device You can change the number of images in an existing RAID1 array just as you can change the number of images in the earlier implementation of LVM mirroring. Use the lvconvert command to specify the number of additional metadata/data subvolume pairs to add or remove. For information on changing the volume configuration in the earlier implementation of LVM mirroring, see Section 4.4.4.4, "Changing Mirrored Volume Configuration" . When you add images to a RAID1 device with the lvconvert command, you can specify the total number of images for the resulting device, or you can specify how many images to add to the device. You can also optionally specify on which physical volumes the new metadata/data image pairs will reside. Metadata subvolumes (named *_rmeta_* ) always exist on the same physical devices as their data subvolume counterparts *_rimage_* ). The metadata/data subvolume pairs will not be created on the same physical volumes as those from another metadata/data subvolume pair in the RAID array (unless you specify --alloc anywhere ). The format for the command to add images to a RAID1 volume is as follows: For example, the following command displays the LVM device my_vg/my_lv , which is a 2-way RAID1 array: The following command converts the 2-way RAID1 device my_vg/my_lv to a 3-way RAID1 device: When you add an image to a RAID1 array, you can specify which physical volumes to use for the image. The following command converts the 2-way RAID1 device my_vg/my_lv to a 3-way RAID1 device, specifying that the physical volume /dev/sdd1 be used for the array: To remove images from a RAID1 array, use the following command. When you remove images from a RAID1 device with the lvconvert command, you can specify the total number of images for the resulting device, or you can specify how many images to remove from the device. You can also optionally specify the physical volumes from which to remove the device. Additionally, when an image and its associated metadata subvolume volume are removed, any higher-numbered images will be shifted down to fill the slot. If you remove lv_rimage_1 from a 3-way RAID1 array that consists of lv_rimage_0 , lv_rimage_1 , and lv_rimage_2 , this results in a RAID1 array that consists of lv_rimage_0 and lv_rimage_1 . The subvolume lv_rimage_2 will be renamed and take over the empty slot, becoming lv_rimage_1 . The following example shows the layout of a 3-way RAID1 logical volume my_vg/my_lv . The following command converts the 3-way RAID1 logical volume into a 2-way RAID1 logical volume. The following command converts the 3-way RAID1 logical volume into a 2-way RAID1 logical volume, specifying the physical volume that contains the image to remove as /dev/sde1 . 4.4.3.7. 
Splitting off a RAID Image as a Separate Logical Volume You can split off an image of a RAID logical volume to form a new logical volume. The procedure for splitting off a RAID image is the same as the procedure for splitting off a redundant image of a mirrored logical volume, as described in Section 4.4.4.2, "Splitting Off a Redundant Image of a Mirrored Logical Volume" . The format of the command to split off a RAID image is as follows: Just as when you are removing a RAID image from an existing RAID1 logical volume (as described in Section 4.4.3.6, "Changing the Number of Images in an Existing RAID1 Device" ), when you remove a RAID data subvolume (and its associated metadata subvolume) from the middle of the device any higher numbered images will be shifted down to fill the slot. The index numbers on the logical volumes that make up a RAID array will thus be an unbroken sequence of integers. Note You cannot split off a RAID image if the RAID1 array is not yet in sync. The following example splits a 2-way RAID1 logical volume, my_lv , into two linear logical volumes, my_lv and new . The following example splits a 3-way RAID1 logical volume, my_lv , into a 2-way RAID1 logical volume, my_lv , and a linear logical volume, new 4.4.3.8. Splitting and Merging a RAID Image You can temporarily split off an image of a RAID1 array for read-only use while keeping track of any changes by using the --trackchanges argument in conjunction with the --splitmirrors argument of the lvconvert command. This allows you to merge the image back into the array at a later time while resyncing only those portions of the array that have changed since the image was split. The format for the lvconvert command to split off a RAID image is as follows. When you split off a RAID image with the --trackchanges argument, you can specify which image to split but you cannot change the name of the volume being split. In addition, the resulting volumes have the following constraints. The new volume you create is read-only. You cannot resize the new volume. You cannot rename the remaining array. You cannot resize the remaining array. You can activate the new volume and the remaining array independently. You can merge an image that was split off with the --trackchanges argument specified by executing a subsequent lvconvert command with the --merge argument. When you merge the image, only the portions of the array that have changed since the image was split are resynced. The format for the lvconvert command to merge a RAID image is as follows. The following example creates a RAID1 logical volume and then splits off an image from that volume while tracking changes to the remaining array. The following example splits off an image from a RAID1 volume while tracking changes to the remaining array, then merges the volume back into the array. Once you have split off an image from a RAID1 volume, you can make the split permanent by issuing a second lvconvert --splitmirrors command, repeating the initial lvconvert command that split the image without specifying the --trackchanges argument. This breaks the link that the --trackchanges argument created. After you have split an image with the --trackchanges argument, you cannot issue a subsequent lvconvert --splitmirrors command on that array unless your intent is to permanently split the image being tracked. The following sequence of commands splits an image and tracks the image and then permanently splits off the image being tracked. 
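The command sequences referred to here are not included in this extract. A minimal sketch of the successful split, track, and permanent-split sequence, reusing the my_vg/my_lv and new names from the surrounding examples, might look like this; the failing sequences mentioned next are intentionally not shown.
# split off one image read-only while tracking changes to the remaining array
lvconvert --splitmirrors 1 --trackchanges my_vg/my_lv
# later, make the split permanent by repeating the split without --trackchanges
lvconvert --splitmirrors 1 -n new my_vg/my_lv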
Note, however, that the following sequence of commands will fail. Similarly, the following sequence of commands will fail as well, since the split image is not the image being tracked. 4.4.3.9. Setting a RAID fault policy LVM RAID handles device failures in an automatic fashion based on the preferences defined by the raid_fault_policy field in the lvm.conf file. If the raid_fault_policy field is set to allocate , the system will attempt to replace the failed device with a spare device from the volume group. If there is no available spare device, this will be reported to the system log. If the raid_fault_policy field is set to warn , the system will produce a warning and the log will indicate that a device has failed. This allows the user to determine the course of action to take. As long as there are enough devices remaining to support usability, the RAID logical volume will continue to operate. 4.4.3.9.1. The allocate RAID Fault Policy In the following example, the raid_fault_policy field has been set to allocate in the lvm.conf file. The RAID logical volume is laid out as follows. If the /dev/sde device fails, the system log will display error messages. Since the raid_fault_policy field has been set to allocate , the failed device is replaced with a new device from the volume group. Note that even though the failed device has been replaced, the display still indicates that LVM could not find the failed device. This is because, although the failed device has been removed from the RAID logical volume, the failed device has not yet been removed from the volume group. To remove the failed device from the volume group, you can execute vgreduce --removemissing VG . If the raid_fault_policy has been set to allocate but there are no spare devices, the allocation will fail, leaving the logical volume as it is. If the allocation fails, you have the option of fixing the drive, then deactivating and activating the logical volume; this is described in Section 4.4.3.9.2, "The warn RAID Fault Policy" . Alternately, you can replace the failed device, as described in Section 4.4.3.10, "Replacing a RAID device" . 4.4.3.9.2. The warn RAID Fault Policy In the following example, the raid_fault_policy field has been set to warn in the lvm.conf file. The RAID logical volume is laid out as follows. If the /dev/sdh device fails, the system log will display error messages. In this case, however, LVM will not automatically attempt to repair the RAID device by replacing one of the images. Instead, if the device has failed you can replace the device with the --repair argument of the lvconvert command, as shown below. Note that even though the failed device has been replaced, the display still indicates that LVM could not find the failed device. This is because, although the failed device has been removed from the RAID logical volume, the failed device has not yet been removed from the volume group. To remove the failed device from the volume group, you can execute vgreduce --removemissing VG . If the device failure is a transient failure or you are able to repair the device that failed, you can initiate recovery of the failed device with the --refresh option of the lvchange command. Previously it was necessary to deactivate and then activate the logical volume. The following command refreshes a logical volume. 4.4.3.10. Replacing a RAID device RAID is not like traditional LVM mirroring. LVM mirroring required failed devices to be removed or the mirrored logical volume would hang. 
RAID arrays can keep on running with failed devices. In fact, for RAID types other than RAID1, removing a device would mean converting to a lower level RAID (for example, from RAID6 to RAID5, or from RAID4 or RAID5 to RAID0). Therefore, rather than removing a failed device unconditionally and potentially allocating a replacement, LVM allows you to replace a device in a RAID volume in a one-step solution by using the --replace argument of the lvconvert command. The format for the lvconvert --replace is as follows. The following example creates a RAID1 logical volume and then replaces a device in that volume. The following example creates a RAID1 logical volume and then replaces a device in that volume, specifying which physical volume to use for the replacement. You can replace more than one RAID device at a time by specifying multiple replace arguments, as in the following example. Note When you specify a replacement drive using the lvconvert --replace command, the replacement drives should never be allocated from extra space on drives already used in the array. For example, lv_rimage_0 and lv_rimage_1 should not be located on the same physical volume. 4.4.3.11. Scrubbing a RAID Logical Volume LVM provides scrubbing support for RAID logical volumes. RAID scrubbing is the process of reading all the data and parity blocks in an array and checking to see whether they are coherent. You initiate a RAID scrubbing operation with the --syncaction option of the lvchange command. You specify either a check or repair operation. A check operation goes over the array and records the number of discrepancies in the array but does not repair them. A repair operation corrects the discrepancies as it finds them. The format of the command to scrub a RAID logical volume is as follows: Note The lvchange --syncaction repair vg/raid_lv operation does not perform the same function as the lvconvert --repair vg/raid_lv operation. The lvchange --syncaction repair operation initiates a background synchronization operation on the array, while the lvconvert --repair operation is designed to repair/replace failed devices in a mirror or RAID logical volume. In support of the new RAID scrubbing operation, the lvs command now supports two new printable fields: raid_sync_action and raid_mismatch_count . These fields are not printed by default. To display these fields you specify them with the -o parameter of the lvs , as follows. The raid_sync_action field displays the current synchronization operation that the raid volume is performing. It can be one of the following values: idle : All sync operations complete (doing nothing) resync : Initializing an array or recovering after a machine failure recover : Replacing a device in the array check : Looking for array inconsistencies repair : Looking for and repairing inconsistencies The raid_mismatch_count field displays the number of discrepancies found during a check operation. The Cpy%Sync field of the lvs command now prints the progress of any of the raid_sync_action operations, including check and repair . The lv_attr field of the lvs command output now provides additional indicators in support of the RAID scrubbing operation. Bit 9 of this field displays the health of the logical volume, and it now supports the following indicators. ( m )ismatches indicates that there are discrepancies in a RAID logical volume. This character is shown after a scrubbing operation has detected that portions of the RAID are not coherent. 
( r )efresh indicates that a device in a RAID array has suffered a failure and the kernel regards it as failed, even though LVM can read the device label and considers the device to be operational. The logical volume should be (r)efreshed to notify the kernel that the device is now available, or the device should be (r)eplaced if it is suspected of having failed. For information on the lvs command, see Section 4.8.2, "Object Display Fields" . When you perform a RAID scrubbing operation, the background I/O required by the sync operations can crowd out other I/O operations to LVM devices, such as updates to volume group metadata. This can cause the other LVM operations to slow down. You can control the rate at which the RAID logical volume is scrubbed by implementing recovery throttling. You control the rate at which sync operations are performed by setting the minimum and maximum I/O rate for those operations with the --minrecoveryrate and --maxrecoveryrate options of the lvchange command. You specify these options as follows. --maxrecoveryrate Rate [bBsSkKmMgG] Sets the maximum recovery rate for a RAID logical volume so that it will not crowd out nominal I/O operations. The Rate is specified as an amount per second for each device in the array. If no suffix is given, then kiB/sec/device is assumed. Setting the recovery rate to 0 means it will be unbounded. --minrecoveryrate Rate [bBsSkKmMgG] Sets the minimum recovery rate for a RAID logical volume to ensure that I/O for sync operations achieves a minimum throughput, even when heavy nominal I/O is present. The Rate is specified as an amount per second for each device in the array. If no suffix is given, then kiB/sec/device is assumed. 4.4.3.12. RAID Takeover (Red Hat Enterprise Linux 7.4 and Later) LVM supports Raid takeover , which means converting a RAID logical volume from one RAID level to another (such as from RAID 5 to RAID 6). Changing the RAID level is usually done to increase or decrease resilience to device failures or to restripe logical volumes. You use the lvconvert for RAID takeover. For information on RAID takeover and for examples of using the lvconvert to convert a RAID logical volume, see the lvmraid (7) man page. 4.4.3.13. Reshaping a RAID Logical Volume (Red Hat Enterprise Linux 7.4 and Later) RAID reshaping means changing attributes of a RAID logical volume while keeping the same RAID level. Some attributes you can change include RAID layout, stripe size, and number of stripes. For information on RAID reshaping and examples of using the lvconvert command to reshape a RAID logical volume, see the lvmraid (7) man page. 4.4.3.14. Controlling I/O Operations on a RAID1 Logical Volume You can control the I/O operations for a device in a RAID1 logical volume by using the --writemostly and --writebehind parameters of the lvchange command. The format for using these parameters is as follows. --[raid]writemostly PhysicalVolume [:{t|y|n}] Marks a device in a RAID1 logical volume as write-mostly . All reads to these drives will be avoided unless necessary. Setting this parameter keeps the number of I/O operations to the drive to a minimum. By default, the write-mostly attribute is set to yes for the specified physical volume in the logical volume. It is possible to remove the write-mostly flag by appending :n to the physical volume or to toggle the value by specifying :t . 
The --writemostly argument can be specified more than one time in a single command, making it possible to toggle the write-mostly attributes for all the physical volumes in a logical volume at once. --[raid]writebehind IOCount Specifies the maximum number of outstanding writes that are allowed to devices in a RAID1 logical volume that are marked as write-mostly . Once this value is exceeded, writes become synchronous, causing all writes to the constituent devices to complete before the array signals the write has completed. Setting the value to zero clears the preference and allows the system to choose the value arbitrarily. 4.4.3.15. Changing the region size on a RAID Logical Volume (Red Hat Enterprise Linux 7.4 and later) When you create a RAID logical volume, the region size for the logical volume will be the value of the raid_region_size parameter in the /etc/lvm/lvm.conf file. You can override this default value with the -R option of the lvcreate command. After you have created a RAID logical volume, you can change the region size of the volume with the -R option of the lvconvert command. The following example changes the region size of logical volume vg/raidlv to 4096K. The RAID volume must be synced in order to change the region size. 4.4.4. Creating Mirrored Volumes For the Red Hat Enterprise Linux 7.0 release, LVM supports RAID 1/4/5/6/10, as described in Section 4.4.3, "RAID Logical Volumes" . RAID logical volumes are not cluster-aware. While RAID logical volumes can be created and activated exclusively on one machine, they cannot be activated simultaneously on more than one machine. If you require non-exclusive mirrored volumes, you must create the volumes with a mirror segment type, as described in this section. Note For information on converting an existing LVM device with a segment type of mirror to a RAID1 LVM device, see Section 4.4.3.4, "Converting a Mirrored LVM Device to a RAID1 Device" . Note Creating a mirrored LVM logical volume in a cluster requires the same commands and procedures as creating a mirrored LVM logical volume with a segment type of mirror on a single node. However, in order to create a mirrored LVM volume in a cluster, the cluster and cluster mirror infrastructure must be running, the cluster must be quorate, and the locking type in the lvm.conf file must be set correctly to enable cluster locking. For an example of creating a mirrored volume in a cluster, see Section 5.5, "Creating a Mirrored LVM Logical Volume in a Cluster" . Attempting to run multiple LVM mirror creation and conversion commands in quick succession from multiple nodes in a cluster might cause a backlog of these commands. This might cause some of the requested operations to time out and, subsequently, fail. To avoid this issue, it is recommended that cluster mirror creation commands be executed from one node of the cluster. When you create a mirrored volume, you specify the number of copies of the data to make with the -m argument of the lvcreate command. Specifying -m1 creates one mirror, which yields two copies of the file system: a linear logical volume plus one copy. Similarly, specifying -m2 creates two mirrors, yielding three copies of the file system. The following command creates a mirrored logical volume with a single mirror. The volume is 50 gigabytes in size, is named mirrorlv , and is carved out of volume group vg0 : An LVM mirror divides the device being copied into regions that, by default, are 512KB in size. 
You can use the -R argument of the lvcreate command to specify the region size in megabytes. You can also change the default region size by editing the mirror_region_size setting in the lvm.conf file. Note Due to limitations in the cluster infrastructure, cluster mirrors greater than 1.5TB cannot be created with the default region size of 512KB. Users that require larger mirrors should increase the region size from its default to something larger. Failure to increase the region size will cause LVM creation to hang and may hang other LVM commands as well. As a general guideline for specifying the region size for mirrors that are larger than 1.5TB, you could take your mirror size in terabytes and round up that number to the power of 2, using that number as the -R argument to the lvcreate command. For example, if your mirror size is 1.5TB, you could specify -R 2 . If your mirror size is 3TB, you could specify -R 4 . For a mirror size of 5TB, you could specify -R 8 . The following command creates a mirrored logical volume with a region size of 2MB: When a mirror is created, the mirror regions are synchronized. For large mirror components, the sync process may take a long time. When you are creating a new mirror that does not need to be revived, you can specify the --nosync argument to indicate that an initial synchronization from the first device is not required. LVM maintains a small log which it uses to keep track of which regions are in sync with the mirror or mirrors. By default, this log is kept on disk, which keeps it persistent across reboots and ensures that the mirror does not need to be re-synced every time a machine reboots or crashes. You can specify instead that this log be kept in memory with the --mirrorlog core argument; this eliminates the need for an extra log device, but it requires that the entire mirror be resynchronized at every reboot. The following command creates a mirrored logical volume from the volume group bigvg . The logical volume is named ondiskmirvol and has a single mirror. The volume is 12MB in size and keeps the mirror log in memory. The mirror log is created on a separate device from the devices on which any of the mirror legs are created. It is possible, however, to create the mirror log on the same device as one of the mirror legs by using the --alloc anywhere argument of the vgcreate command. This may degrade performance, but it allows you to create a mirror even if you have only two underlying devices. The following command creates a mirrored logical volume with a single mirror for which the mirror log is on the same device as one of the mirror legs. In this example, the volume group vg0 consists of only two devices. This command creates a 500 MB volume named mirrorlv in the vg0 volume group. Note With clustered mirrors, the mirror log management is completely the responsibility of the cluster node with the currently lowest cluster ID. Therefore, when the device holding the cluster mirror log becomes unavailable on a subset of the cluster, the clustered mirror can continue operating without any impact, as long as the cluster node with lowest ID retains access to the mirror log. Since the mirror is undisturbed, no automatic corrective action (repair) is issued, either. When the lowest-ID cluster node loses access to the mirror log, however, automatic action will kick in (regardless of accessibility of the log from other nodes). To create a mirror log that is itself mirrored, you can specify the --mirrorlog mirrored argument. 
The following command creates a mirrored logical volume from the volume group bigvg . The logical volume is named twologvol and has a single mirror. The volume is 12MB in size and the mirror log is mirrored, with each log kept on a separate device. Just as with a standard mirror log, it is possible to create the redundant mirror logs on the same device as the mirror legs by using the --alloc anywhere argument of the vgcreate command. This may degrade performance, but it allows you to create a redundant mirror log even if you do not have sufficient underlying devices for each log to be kept on a separate device from the mirror legs. When a mirror is created, the mirror regions are synchronized. For large mirror components, the sync process may take a long time. When you are creating a new mirror that does not need to be revived, you can specify the --nosync argument to indicate that an initial synchronization from the first device is not required. You can specify which devices to use for the mirror legs and log, and which extents of the devices to use. To force the log onto a particular disk, specify exactly one extent on the disk on which it will be placed. LVM does not necessarily respect the order in which devices are listed in the command line. If any physical volumes are listed, that is the only space on which allocation will take place. Any physical extents included in the list that are already allocated will get ignored. The following command creates a mirrored logical volume with a single mirror and a single log that is not mirrored. The volume is 500 MB in size, it is named mirrorlv , and it is carved out of volume group vg0 . The first leg of the mirror is on device /dev/sda1 , the second leg of the mirror is on device /dev/sdb1 , and the mirror log is on /dev/sdc1 . The following command creates a mirrored logical volume with a single mirror. The volume is 500 MB in size, it is named mirrorlv , and it is carved out of volume group vg0 . The first leg of the mirror is on extents 0 through 499 of device /dev/sda1 , the second leg of the mirror is on extents 0 through 499 of device /dev/sdb1 , and the mirror log starts on extent 0 of device /dev/sdc1 . These are 1MB extents. If any of the specified extents have already been allocated, they will be ignored. Note You can combine striping and mirroring in a single logical volume. Creating a logical volume while simultaneously specifying the number of mirrors ( --mirrors X ) and the number of stripes ( --stripes Y ) results in a mirror device whose constituent devices are striped.
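The mirror-creation commands described in this section are not reproduced in this extract. Hedged sketches using the sizes, names, and devices given above might look like the following; the --type mirror segment type is assumed throughout, since this section covers the non-RAID mirror segment type.
# 50 GB single-mirror volume mirrorlv in vg0
lvcreate --type mirror -L 50G -m 1 -n mirrorlv vg0
# same, with a 2 MB mirror region size
lvcreate --type mirror -L 50G -m 1 -R 2 -n mirrorlv vg0
# 500 MB mirror with explicit leg and log devices
lvcreate --type mirror -L 500M -m 1 -n mirrorlv vg0 /dev/sda1 /dev/sdb1 /dev/sdc1
# same, pinning the legs and the log to specific extents
lvcreate --type mirror -L 500M -m 1 -n mirrorlv vg0 /dev/sda1:0-499 /dev/sdb1:0-499 /dev/sdc1:0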
If you set this policy to remove , when a log device fails the mirror converts to using an in-memory log; in this instance, the mirror will not remember its sync status across crashes and reboots and the entire mirror will be re-synced. By default, the mirror_image_fault_policy parameter is set to remove . With this policy, if a mirror image fails the mirror will convert to a non-mirrored device if there is only one remaining good copy. Setting this policy to allocate for a mirror device requires the mirror to resynchronize the devices; this is a slow process, but it preserves the mirror characteristic of the device. Note When an LVM mirror suffers a device failure, a two-stage recovery takes place. The first stage involves removing the failed devices. This can result in the mirror being reduced to a linear device. The second stage, if the mirror_log_fault_policy parameter is set to allocate , is to attempt to replace any of the failed devices. Note, however, that there is no guarantee that the second stage will choose devices previously in-use by the mirror that had not been part of the failure if others are available. For information on manually recovering from an LVM mirror failure, see Section 6.2, "Recovering from LVM Mirror Failure" . 4.4.4.2. Splitting Off a Redundant Image of a Mirrored Logical Volume You can split off a redundant image of a mirrored logical volume to form a new logical volume. To split off an image, use the --splitmirrors argument of the lvconvert command, specifying the number of redundant images to split off. You must use the --name argument of the command to specify a name for the newly-split-off logical volume. The following command splits off a new logical volume named copy from the mirrored logical volume vg/lv . The new logical volume contains two mirror legs. In this example, LVM selects which devices to split off. You can specify which devices to split off. The following command splits off a new logical volume named copy from the mirrored logical volume vg/lv . The new logical volume contains two mirror legs consisting of devices /dev/sdc1 and /dev/sde1 . 4.4.4.3. Repairing a Mirrored Logical Device You can use the lvconvert --repair command to repair a mirror after a disk failure. This brings the mirror back into a consistent state. The lvconvert --repair command is an interactive command that prompts you to indicate whether you want the system to attempt to replace any failed devices. To skip the prompts and replace all of the failed devices, specify the -y option on the command line. To skip the prompts and replace none of the failed devices, specify the -f option on the command line. To skip the prompts and still indicate different replacement policies for the mirror image and the mirror log, you can specify the --use-policies argument to use the device replacement policies specified by the mirror_log_fault_policy and mirror_device_fault_policy parameters in the lvm.conf file. 4.4.4.4. Changing Mirrored Volume Configuration You can increase or decrease the number of mirrors that a logical volume contains by using the lvconvert command. This allows you to convert a logical volume from a mirrored volume to a linear volume or from a linear volume to a mirrored volume. You can also use this command to reconfigure other mirror parameters of an existing logical volume, such as corelog . When you convert a linear volume to a mirrored volume, you are creating mirror legs for an existing volume. 
This means that your volume group must contain the devices and space for the mirror legs and for the mirror log. If you lose a leg of a mirror, LVM converts the volume to a linear volume so that you still have access to the volume, without the mirror redundancy. After you replace the leg, use the lvconvert command to restore the mirror. This procedure is provided in Section 6.2, "Recovering from LVM Mirror Failure" . The following command converts the linear logical volume vg00/lvol1 to a mirrored logical volume. The following command converts the mirrored logical volume vg00/lvol1 to a linear logical volume, removing the mirror leg. The following example adds an additional mirror leg to the existing logical volume vg00/lvol1 . This example shows the configuration of the volume before and after the lvconvert command changed the volume to a volume with two mirror legs. 4.4.5. Creating Thinly-Provisioned Logical Volumes Logical volumes can be thinly provisioned. This allows you to create logical volumes that are larger than the available extents. Using thin provisioning, you can manage a storage pool of free space, known as a thin pool, which can be allocated to an arbitrary number of devices when needed by applications. You can then create devices that can be bound to the thin pool for later allocation when an application actually writes to the logical volume. The thin pool can be expanded dynamically when needed for cost-effective allocation of storage space. Note This section provides an overview of the basic commands you use to create and grow thinly-provisioned logical volumes. For detailed information on LVM thin provisioning as well as information on using the LVM commands and utilities with thinly-provisioned logical volumes, see the lvmthin (7) man page. Note Thin volumes are not supported across the nodes in a cluster. The thin pool and all its thin volumes must be exclusively activated on only one cluster node. To create a thin volume, perform the following tasks: Create a volume group with the vgcreate command. Create a thin pool with the lvcreate command. Create a thin volume in the thin pool with the lvcreate command. You can use the -T (or --thin ) option of the lvcreate command to create either a thin pool or a thin volume. You can also use -T option of the lvcreate command to create both a thin pool and a thin volume in that pool at the same time with a single command. The following command uses the -T option of the lvcreate command to create a thin pool named mythinpool in the volume group vg001 and that is 100M in size. Note that since you are creating a pool of physical space, you must specify the size of the pool. The -T option of the lvcreate command does not take an argument; it deduces what type of device is to be created from the other options the command specifies. The following command uses the -T option of the lvcreate command to create a thin volume named thinvolume in the thin pool vg001/mythinpool . Note that in this case you are specifying virtual size, and that you are specifying a virtual size for the volume that is greater than the pool that contains it. The following command uses the -T option of the lvcreate command to create a thin pool and a thin volume in that pool by specifying both a size and a virtual size argument for the lvcreate command. This command creates a thin pool named mythinpool in the volume group vg001 and it also creates a thin volume named thinvolume in that pool. 
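Minimal sketches of the -T invocations described above might look like the following; the 1G virtual size is illustrative, since the text only states that the virtual size exceeds the 100M pool.
# 100 MB thin pool mythinpool in volume group vg001
lvcreate -L 100M -T vg001/mythinpool
# thin volume thinvolume in that pool, with a virtual size larger than the pool
lvcreate -V 1G -T vg001/mythinpool -n thinvolume
# create the pool and the thin volume in a single command
lvcreate -L 100M -T vg001/mythinpool -V 1G -n thinvolume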
You can also create a thin pool by specifying the --thinpool parameter of the lvcreate command. Unlike the -T option, the --thinpool parameter requires an argument, which is the name of the thin pool logical volume that you are creating. The following example specifies the --thinpool parameter of the lvcreate command to create a thin pool named mythinpool in the volume group vg001 and that is 100M in size: Use the following criteria for using chunk size: Smaller chunk size requires more metadata and hinders the performance, but it provides better space utilization with snapshots. Huge chunk size requires less metadata manipulation but makes the snapshot less efficient. LVM2 calculates chunk size in the following manner: By default, LVM starts with a 64KiB chunk size and increases its value when the resulting size of the thin pool metadata device grows above 128MiB, so the metadata size remains compact. This may result in some big chunk size values, which is less efficient for snapshot usage. In this case, the smaller chunk size and bigger metadata size is a better option. If the volume data size is in the range of TiB, use ~15.8GiB metadata size, which is the maximum supported size, and use the chunk size as per your requirement. But it is not possible to increase the metadata size if you need to extend this volume data size and have a small chunk size. Warning Red Hat recommends to use at least the default chunk size. If the chunk size is too small and your volume runs out of space for metadata, the volume is unable to create data. Monitor your logical volumes to ensure that they are expanded or more storage created before metadata volumes become completely full. Ensure that you set up your thin pool with a large enough chunk size so that they do not run out of room for metadata. Striping is supported for pool creation. The following command creates a 100M thin pool named pool in volume group vg001 with two 64 kB stripes and a chunk size of 256 kB. It also creates a 1T thin volume, vg00/thin_lv . You can extend the size of a thin volume with the lvextend command. You cannot, however, reduce the size of a thin pool. The following command resizes an existing thin pool that is 100M in size by extending it another 100M. As with other types of logical volumes, you can rename the volume with the lvrename , you can remove the volume with the lvremove , and you can display information about the volume with the lvs and lvdisplay commands. By default, the lvcreate command sets the size of the thin pool's metadata logical volume according to the formula (Pool_LV_size / Pool_LV_chunk_size * 64). If you will have large numbers of snapshots or if you have small chunk sizes for your thin pool and thus expect significant growth of the size of the thin pool at a later time, you may need to increase the default value of the thin pool's metadata volume with the --poolmetadatasize parameter of the lvcreate command. The supported value for the thin pool's metadata logical volume is in the range between 2MiB and 16GiB. You can use the --thinpool parameter of the lvconvert command to convert an existing logical volume to a thin pool volume. When you convert an existing logical volume to a thin pool volume, you must use the --poolmetadata parameter in conjunction with the --thinpool parameter of the lvconvert to convert an existing logical volume to the thin pool volume's metadata volume. 
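For example, a conversion of this kind, sketched here with the example volumes lv1 and lv2 in volume group vg001, would look like the following:
# Use lv1 as the thin pool data volume and lv2 as its metadata volume
lvconvert --thinpool vg001/lv1 --poolmetadata vg001/lv2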
Note Converting a logical volume to a thin pool volume or a thin pool metadata volume destroys the content of the logical volume, since in this case the lvconvert does not preserve the content of the devices but instead overwrites the content. The following example converts the existing logical volume lv1 in volume group vg001 to a thin pool volume and converts the existing logical volume lv2 in volume group vg001 to the metadata volume for that thin pool volume. 4.4.6. Creating Snapshot Volumes Note LVM supports thinly-provisioned snapshots. For information on creating thinly-provisioned snapshot volumes, see Section 4.4.7, "Creating Thinly-Provisioned Snapshot Volumes" . Use the -s argument of the lvcreate command to create a snapshot volume. A snapshot volume is writable. Note LVM snapshots are not supported across the nodes in a cluster. You cannot create a snapshot volume in a clustered volume group. However, if you need to create a consistent backup of data on a clustered logical volume you can activate the volume exclusively and then create the snapshot. For information on activating logical volumes exclusively on one node, see Section 4.7, "Activating Logical Volumes on Individual Nodes in a Cluster" . Note LVM snapshots are supported for mirrored logical volumes. Snapshots are supported for RAID logical volumes. For information on creating RAID logical volumes, see Section 4.4.3, "RAID Logical Volumes" . LVM does not allow you to create a snapshot volume that is larger than the size of the origin volume plus needed metadata for the volume. If you specify a snapshot volume that is larger than this, the system will create a snapshot volume that is only as large as will be needed for the size of the origin. By default, a snapshot volume is skipped during normal activation commands. For information on controlling the activation of a snapshot volume, see Section 4.4.20, "Controlling Logical Volume Activation" . The following command creates a snapshot logical volume that is 100 MB in size named /dev/vg00/snap . This creates a snapshot of the origin logical volume named /dev/vg00/lvol1 . If the original logical volume contains a file system, you can mount the snapshot logical volume on an arbitrary directory in order to access the contents of the file system to run a backup while the original file system continues to get updated. After you create a snapshot logical volume, specifying the origin volume on the lvdisplay command yields output that includes a list of all snapshot logical volumes and their status (active or inactive). The following example shows the status of the logical volume /dev/new_vg/lvol0 , for which a snapshot volume /dev/new_vg/newvgsnap has been created. The lvs command, by default, displays the origin volume and the current percentage of the snapshot volume being used. The following example shows the default output for the lvs command for a system that includes the logical volume /dev/new_vg/lvol0 , for which a snapshot volume /dev/new_vg/newvgsnap has been created. Warning Because the snapshot increases in size as the origin volume changes, it is important to monitor the percentage of the snapshot volume regularly with the lvs command to be sure it does not fill. A snapshot that is 100% full is lost completely, as a write to unchanged parts of the origin would be unable to succeed without corrupting the snapshot. 
In addition to the snapshot itself being invalidated when full, any mounted file systems on that snapshot device are forcibly unmounted, avoiding the inevitable file system errors upon access to the mount point. In addition, you can specify the snapshot_autoextend_threshold option in the lvm.conf file. This option allows automatic extension of a snapshot whenever the remaining snapshot space drops below the threshold you set. This feature requires that there be unallocated space in the volume group. LVM does not allow you to create a snapshot volume that is larger than the size of the origin volume plus needed metadata for the volume. Similarly, automatic extension of a snapshot will not increase the size of a snapshot volume beyond the maximum calculated size that is necessary for the snapshot. Once a snapshot has grown large enough to cover the origin, it is no longer monitored for automatic extension. Information on setting snapshot_autoextend_threshold and snapshot_autoextend_percent is provided in the lvm.conf file itself. For information about the lvm.conf file, see Appendix B, The LVM Configuration Files . 4.4.7. Creating Thinly-Provisioned Snapshot Volumes Red Hat Enterprise Linux provides support for thinly-provisioned snapshot volumes. For information on the benefits and limitations of thin snapshot volumes, see Section 2.3.6, "Thinly-Provisioned Snapshot Volumes" . Note This section provides an overview of the basic commands you use to create and grow thinly-provisioned snapshot volumes. For detailed information on LVM thin provisioning as well as information on using the LVM commands and utilities with thinly-provisioned logical volumes, see the lvmthin (7) man page. Important When creating a thin snapshot volume, you do not specify the size of the volume. If you specify a size parameter, the snapshot that will be created will not be a thin snapshot volume and will not use the thin pool for storing data. For example, the command lvcreate -s vg/thinvolume -L10M will not create a thin snapshot, even though the origin volume is a thin volume. Thin snapshots can be created for thinly-provisioned origin volumes, or for origin volumes that are not thinly-provisioned. You can specify a name for the snapshot volume with the --name option of the lvcreate command. The following command creates a thinly-provisioned snapshot volume of the thinly-provisioned logical volume vg001/thinvolume that is named mysnapshot1 . Note When using thin provisioning, it is important that the storage administrator monitor the storage pool and add more capacity if it starts to become full. For information on extending the size of a thin volume, see Section 4.4.5, "Creating Thinly-Provisioned Logical Volumes" A thin snapshot volume has the same characteristics as any other thin volume. You can independently activate the volume, extend the volume, rename the volume, remove the volume, and even snapshot the volume. By default, a snapshot volume is skipped during normal activation commands. For information on controlling the activation of a snapshot volume, see Section 4.4.20, "Controlling Logical Volume Activation" . You can also create a thinly-provisioned snapshot of a non-thinly-provisioned logical volume. Since the non-thinly-provisioned logical volume is not contained within a thin pool, it is referred to as an external origin . External origin volumes can be used and shared by many thinly-provisioned snapshot volumes, even from different thin pools. 
The external origin must be inactive and read-only at the time the thinly-provisioned snapshot is created. To create a thinly-provisioned snapshot of an external origin, you must specify the --thinpool option. The following command creates a thin snapshot volume of the read-only inactive volume origin_volume . The thin snapshot volume is named mythinsnap . The logical volume origin_volume then becomes the thin external origin for the thin snapshot volume mythinsnap in volume group vg001 that will use the existing thin pool vg001/pool . Because the origin volume must be in the same volume group as the snapshot volume, you do not need to specify the volume group when specifying the origin logical volume. You can create a second thinly-provisioned snapshot volume of the first snapshot volume, as in the following command. As of Red Hat Enterprise Linux 7.2, you can display a list of all ancestors and descendants of a thin snapshot logical volume by specifying the lv_ancestors and lv_descendants reporting fields of the lvs command. In the following example: stack1 is an origin volume in volume group vg001 . stack2 is a snapshot of stack1 stack3 is a snapshot of stack2 stack4 is a snapshot of stack3 Additionally: stack5 is also a snapshot of stack2 stack6 is a snapshot of stack5 Note The lv_ancestors and lv_descendants fields display existing dependencies but do not track removed entries which can break a dependency chain if the entry was removed from the middle of the chain. For example, if you remove the logical volume stack3 from this sample configuration, the display is as follows. As of Red Hat Enterprise Linux 7.3, however, you can configure your system to track and display logical volumes that have been removed, and you can display the full dependency chain that includes those volumes by specifying the lv_ancestors_full and lv_descendants_full fields. For information on tracking, displaying, and removing historical logical volumes, see Section 4.4.21, "Tracking and Displaying Historical Logical Volumes (Red Hat Enterprise Linux 7.3 and Later)" . 4.4.8. Creating LVM Cache Logical Volumes As of the Red Hat Enterprise Linux 7.1 release, LVM provides full support for LVM cache logical volumes. A cache logical volume uses a small logical volume consisting of fast block devices (such as SSD drives) to improve the performance of a larger and slower logical volume by storing the frequently used blocks on the smaller, faster logical volume. LVM caching uses the following LVM logical volume types. All of these associated logical volumes must be in the same volume group. Origin logical volume - the large, slow logical volume Cache pool logical volume - the small, fast logical volume, which is composed of two devices: the cache data logical volume, and the cache metadata logical volume Cache data logical volume - the logical volume containing the data blocks for the cache pool logical volume Cache metadata logical volume - the logical volume containing the metadata for the cache pool logical volume, which holds the accounting information that specifies where data blocks are stored (for example, on the origin logical volume or the cache data logical volume). Cache logical volume - the logical volume containing the origin logical volume and the cache pool logical volume. This is the resultant usable device which encapsulates the various cache volume components. The following procedure creates an LVM cache logical volume. 
Create a volume group that contains a slow physical volume and a fast physical volume. In this example, /dev/sde1 is a slow device and /dev/sdf1 is a fast device and both devices are contained in volume group VG . Create the origin volume. This example creates an origin volume named lv that is ten gigabytes in size and that consists of /dev/sde1 , the slow physical volume. Create the cache pool logical volume. This example creates the cache pool logical volume named cpool on the fast device /dev/sdf1 , which is part of the volume group VG . The cache pool logical volume this command creates consists of the hidden cache data logical volume cpool_cdata and the hidden cache metadata logical volume cpool_cmeta . For more complicated configurations you may need to create the cache data and the cache metadata logical volumes individually and then combine the volumes into a cache pool logical volume. For information on this procedure, see the lvmcache (7) man page. Create the cache logical volume by linking the cache pool logical volume to the origin logical volume. The resulting user-accessible cache logical volume takes the name of the origin logical volume. The origin logical volume becomes a hidden logical volume with _corig appended to the original name. Note that this conversion can be done live, although you must ensure you have performed a backup first. Optionally, as of Red Hat Enterprise Linux release 7.2, you can convert the cached logical volume to a thin pool logical volume. Note that any thin logical volumes created from the pool will share the cache. The following command uses the fast device, /dev/sdf1 , for allocating the thin pool metadata ( lv_tmeta ). This is the same device that is used by the cache pool volume, which means that the thin pool metadata volume shares that device with both the cache data logical volume cpool_cdata and the cache metadata logical volume cpool_cmeta . For further information on LVM cache volumes, including additional administrative examples, see the lvmcache (7) man page. For information on creating thinly-provisioned logical volumes, see Section 4.4.5, "Creating Thinly-Provisioned Logical Volumes" . 4.4.9. Merging Snapshot Volumes You can use the --merge option of the lvconvert command to merge a snapshot into its origin volume. If both the origin and snapshot volume are not open, the merge will start immediately. Otherwise, the merge will start the first time either the origin or snapshot are activated and both are closed. Merging a snapshot into an origin that cannot be closed, for example a root file system, is deferred until the next time the origin volume is activated. When merging starts, the resulting logical volume will have the origin's name, minor number and UUID. While the merge is in progress, reads or writes to the origin appear as if they were directed to the snapshot being merged. When the merge finishes, the merged snapshot is removed. The following command merges snapshot volume vg00/lvol1_snap into its origin. You can specify multiple snapshots on the command line, or you can use LVM object tags to specify that multiple snapshots be merged to their respective origins. In the following example, logical volumes vg00/lvol1 , vg00/lvol2 , and vg00/lvol3 are all tagged with the tag @some_tag . The following command merges the snapshot logical volumes for all three volumes serially: vg00/lvol1 , then vg00/lvol2 , then vg00/lvol3 . If the --background option were used, all snapshot logical volume merges would start in parallel.
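A minimal sketch of both forms of the merge command, using the example volume and tag names from this section:
# Merge a single snapshot back into its origin volume
lvconvert --merge vg00/lvol1_snap
# Merge the snapshots of all logical volumes tagged with @some_tag, one after another
lvconvert --merge @some_tag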
For information on tagging LVM objects, see Appendix D, LVM Object Tags . For further information on the lvconvert --merge command, see the lvconvert (8) man page. 4.4.10. Persistent Device Numbers Major and minor device numbers are allocated dynamically at module load. Some applications work best if the block device is always activated with the same device (major and minor) number. You can specify these with the lvcreate and the lvchange commands by using the following arguments: Use a large minor number to be sure that it has not already been allocated to another device dynamically. If you are exporting a file system using NFS, specifying the fsid parameter in the exports file may avoid the need to set a persistent device number within LVM. 4.4.11. Changing the Parameters of a Logical Volume Group To change the parameters of a logical volume, use the lvchange command. For a listing of the parameters you can change, see the lvchange (8) man page. You can use the lvchange command to activate and deactivate logical volumes. To activate and deactivate all the logical volumes in a volume group at the same time, use the vgchange command, as described in Section 4.3.9, "Changing the Parameters of a Volume Group" . The following command changes the permission on volume lvol1 in volume group vg00 to be read-only. 4.4.12. Renaming Logical Volumes To rename an existing logical volume, use the lvrename command. Either of the following commands renames logical volume lvold in volume group vg02 to lvnew . Renaming the root logical volume requires additional reconfiguration. For information on renaming a root volume, see How to rename root volume group or logical volume in Red Hat Enterprise Linux . For more information on activating logical volumes on individual nodes in a cluster, see Section 4.7, "Activating Logical Volumes on Individual Nodes in a Cluster" . 4.4.13. Removing Logical Volumes To remove an inactive logical volume, use the lvremove command. If the logical volume is currently mounted, unmount the volume before removing it. In addition, in a clustered environment you must deactivate a logical volume before it can be removed. The following command removes the logical volume /dev/testvg/testlv from the volume group testvg . Note that in this case the logical volume has not been deactivated. You could explicitly deactivate the logical volume before removing it with the lvchange -an command, in which case you would not see the prompt verifying whether you want to remove an active logical volume. 4.4.14. Displaying Logical Volumes There are three commands you can use to display properties of LVM logical volumes: lvs , lvdisplay , and lvscan . The lvs command provides logical volume information in a configurable form, displaying one line per logical volume. The lvs command provides a great deal of format control, and is useful for scripting. For information on using the lvs command to customize your output, see Section 4.8, "Customized Reporting for LVM" . The lvdisplay command displays logical volume properties (such as size, layout, and mapping) in a fixed format. The following command shows the attributes of lvol2 in vg00 . If snapshot logical volumes have been created for this original logical volume, this command shows a list of all snapshot logical volumes and their status (active or inactive) as well. The lvscan command scans for all logical volumes in the system and lists them, as in the following example. 4.4.15. 
Growing Logical Volumes To increase the size of a logical volume, use the lvextend command. When you extend the logical volume, you can indicate how much you want to extend the volume, or how large you want it to be after you extend it. The following command extends the logical volume /dev/myvg/homevol to 12 gigabytes. The following command adds another gigabyte to the logical volume /dev/myvg/homevol . As with the lvcreate command, you can use the -l argument of the lvextend command to specify the number of extents by which to increase the size of the logical volume. You can also use this argument to specify a percentage of the volume group, or a percentage of the remaining free space in the volume group. The following command extends the logical volume called testlv to fill all of the unallocated space in the volume group myvg . After you have extended the logical volume it is necessary to increase the file system size to match. By default, most file system resizing tools will increase the size of the file system to be the size of the underlying logical volume so you do not need to worry about specifying the same size for each of the two commands. 4.4.16. Shrinking Logical Volumes You can reduce the size of a logical volume with the lvreduce command. Note Shrinking is not supported on a GFS2 or XFS file system, so you cannot reduce the size of a logical volume that contains a GFS2 or XFS file system. If the logical volume you are reducing contains a file system, to prevent data loss you must ensure that the file system is not using the space in the logical volume that is being reduced. For this reason, it is recommended that you use the --resizefs option of the lvreduce command when the logical volume contains a file system. When you use this option, the lvreduce command attempts to reduce the file system before shrinking the logical volume. If shrinking the file system fails, as can occur if the file system is full or the file system does not support shrinking, then the lvreduce command will fail and not attempt to shrink the logical volume. Warning In most cases, the lvreduce command warns about possible data loss and asks for a confirmation. However, you should not rely on these confirmation prompts to prevent data loss because in some cases you will not see these prompts, such as when the logical volume is inactive or the --resizefs option is not used. Note that using the --test option of the lvreduce command does not indicate whether the operation is safe, as this option does not check the file system or test the file system resize. The following command shrinks the logical volume lvol1 in volume group vg00 to be 64 megabytes. In this example, lvol1 contains a file system, which this command resizes together with the logical volume. This example shows the output of the command. Specifying the - sign before the resize value indicates that the value will be subtracted from the logical volume's actual size. The following example shows the command you would use if, instead of shrinking a logical volume to an absolute size of 64 megabytes, you wanted to shrink the volume by a value of 64 megabytes. 4.4.17. Extending a Striped Volume In order to increase the size of a striped logical volume, there must be enough free space on the underlying physical volumes that make up the volume group to support the stripe. For example, if you have a two-way stripe that uses up an entire volume group, adding a single physical volume to the volume group will not enable you to extend the stripe.
Instead, you must add at least two physical volumes to the volume group. For example, consider a volume group vg that consists of two underlying physical volumes, as displayed with the following vgs command. You can create a stripe using the entire amount of space in the volume group. Note that the volume group now has no more free space. The following command adds another physical volume to the volume group, which then has 135 gigabytes of additional space. At this point you cannot extend the striped logical volume to the full size of the volume group, because two underlying devices are needed in order to stripe the data. To extend the striped logical volume, add another physical volume and then extend the logical volume. In this example, having added two physical volumes to the volume group, we can extend the logical volume to the full size of the volume group. If you do not have enough underlying physical devices to extend the striped logical volume, it is possible to extend the volume anyway if it does not matter that the extension is not striped, which may result in uneven performance. When adding space to the logical volume, the default operation is to use the same striping parameters of the last segment of the existing logical volume, but you can override those parameters. The following example extends the existing striped logical volume to use the remaining free space after the initial lvextend command fails. 4.4.18. Extending a RAID Volume You can grow RAID logical volumes with the lvextend command without performing a synchronization of the new RAID regions. If you specify the --nosync option when you create a RAID logical volume with the lvcreate command, the RAID regions are not synchronized when the logical volume is created. If you later extend a RAID logical volume that you have created with the --nosync option, the RAID extensions are not synchronized at that time, either. You can determine whether an existing logical volume was created with the --nosync option by using the lvs command to display the volume's attributes. A logical volume will show "R" as the first character in the attribute field if it is a RAID volume that was created without an initial synchronization, and it will show "r" if it was created with initial synchronization. The following command displays the attributes of a RAID logical volume named lv that was created without initial synchronization, showing "R" as the first character in the attribute field. The seventh character in the attribute field is "r", indicating a target type of RAID. For information on the meaning of the attribute field, see Table 4.5, "lvs Display Fields" . If you grow this logical volume with the lvextend command, the RAID extension will not be resynchronized. If you created a RAID logical volume without specifying the --nosync option of the lvcreate command, you can grow the logical volume without resynchronizing the mirror by specifying the --nosync option of the lvextend command. The following example extends a RAID logical volume that was created without the --nosync option, indicating that the RAID volume was synchronized when it was created. This example, however, specifies that the volume not be synchronized when the volume is extended. Note that the volume has an attribute of "r", but after executing the lvextend command with the --nosync option the volume has an attribute of "R".
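The extension step itself, sketched here with the example volume vg/lv used in this section, is a single command:
# Grow the RAID logical volume by 5G without resynchronizing the newly added space
lvextend -L +5G vg/lv --nosync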
If a RAID volume is inactive, it will not automatically skip synchronization when you extend the volume, even if you create the volume with the --nosync option specified. Instead, you will be prompted whether to do a full resync of the extended portion of the logical volume. Note If a RAID volume is performing recovery, you cannot extend the logical volume if you created or extended the volume with the --nosync option specified. If you did not specify the --nosync option, however, you can extend the RAID volume while it is recovering. 4.4.19. Extending a Logical Volume with the cling Allocation Policy When extending an LVM volume, you can use the --alloc cling option of the lvextend command to specify the cling allocation policy. This policy will choose space on the same physical volumes as the last segment of the existing logical volume. If there is insufficient space on the physical volumes and a list of tags is defined in the lvm.conf file, LVM will check whether any of the tags are attached to the physical volumes and seek to match those physical volume tags between existing extents and new extents. For example, if you have logical volumes that are mirrored between two sites within a single volume group, you can tag the physical volumes according to where they are situated by tagging the physical volumes with @site1 and @site2 tags. You can then specify the following line in the lvm.conf file: For information on tagging physical volumes, see Appendix D, LVM Object Tags . In the following example, the lvm.conf file has been modified to contain the following line: Also in this example, a volume group taft has been created that consists of the physical volumes /dev/sdb1 , /dev/sdc1 , /dev/sdd1 , /dev/sde1 , /dev/sdf1 , /dev/sdg1 , and /dev/sdh1 . These physical volumes have been tagged with tags A , B , and C . The example does not use the C tag, but this will show that LVM uses the tags to select which physical volumes to use for the mirror legs. The following command creates a 10 gigabyte mirrored volume from the volume group taft . The following command shows which devices are used for the mirror legs and RAID metadata subvolumes. The following command extends the size of the mirrored volume, using the cling allocation policy to indicate that the mirror legs should be extended using physical volumes with the same tag. The following display command shows that the mirror legs have been extended using physical volumes with the same tag as the leg. Note that the physical volumes with a tag of C were ignored. 4.4.20. Controlling Logical Volume Activation You can flag a logical volume to be skipped during normal activation commands with the -k or --setactivationskip {y|n} option of the lvcreate or lvchange command. This flag is not applied during deactivation. You can determine whether this flag is set for a logical volume with the lvs command, which displays the k attribute as in the following example. By default, thin snapshot volumes are flagged for activation skip. You can activate a logical volume with the k attribute set by using the -K or --ignoreactivationskip option in addition to the standard -ay or --activate y option. The following command activates a thin snapshot logical volume. The persistent "activation skip" flag can be turned off when the logical volume is created by specifying the -kn or --setactivationskip n option of the lvcreate command. 
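For instance, the following commands, sketched with the example names VG, ThinPoolLV, ThinLV, and SnapLV, create a thin snapshot without the activation skip flag and activate a snapshot that has the flag set:
# Create a thin snapshot without setting the activation skip flag
lvcreate --type thin -n SnapLV -kn -s ThinLV --thinpool VG/ThinPoolLV
# Activate a snapshot even though its activation skip flag is set
lvchange -ay -K VG/SnapLV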
You can turn the flag off for an existing logical volume by specifying the -kn or --setactivationskip n option of the lvchange command. You can turn the flag on again with the -ky or --setactivationskip y option. The following command creates a snapshot logical volume without the activation skip flag. The following command removes the activation skip flag from a snapshot logical volume. You can control the default activation skip setting with the auto_set_activation_skip setting in the /etc/lvm/lvm.conf file. 4.4.21. Tracking and Displaying Historical Logical Volumes (Red Hat Enterprise Linux 7.3 and Later) As of Red Hat Enterprise Linux 7.3, you can configure your system to track thin snapshot and thin logical volumes that have been removed by enabling the record_lvs_history metadata option in the lvm.conf configuration file. This allows you to display a full thin snapshot dependency chain that includes logical volumes that have been removed from the original dependency chain and have become historical logical volumes. You can configure your system to retain historical volumes for a defined period of time by specifying the retention time, in seconds, with the lvs_history_retention_time metadata option in the lvm.conf configuration file. A historical logical volume retains a simplified representation of the logical volume that has been removed, including the following reporting fields for the volume: lv_time_removed : the removal time of the logical volume lv_time : the creation time of the logical volume lv_name : the name of the logical volume lv_uuid : the UUID of the logical volume vg_name : the volume group that contains the logical volume. When a volume is removed, the historical logical volume name acquires a hyphen as a prefix. For example, when you remove the logical volume lvol1 , the name of the historical volume is -lvol1 . A historical logical volume cannot be reactivated. Even when the record_lvs_history metadata option is enabled, you can prevent the retention of historical logical volumes on an individual basis when you remove a logical volume by specifying the --nohistory option of the lvremove command. To include historical logical volumes in volume display, you specify the -H|--history option of an LVM display command. You can display a full thin snapshot dependency chain that includes historical volumes by specifying the lv_full_ancestors and lv_full_descendants reporting fields along with the -H option. The following series of commands provides examples of how you can display and manage historical logical volumes. Ensure that historical logical volumes are retained by setting record_lvs_history=1 in the lvm.conf file. This metadata option is not enabled by default. Enter the following command to display a thin provisioned snapshot chain. In this example: lvol1 is an origin volume, the first volume in the chain. lvol2 is a snapshot of lvol1 . lvol3 is a snapshot of lvol2 . lvol4 is a snapshot of lvol3 . lvol5 is also a snapshot of lvol3 . Note that even though the example lvs display command includes the -H option, no thin snapshot volume has yet been removed and there are no historical logical volumes to display. Remove logical volume lvol3 from the snapshot chain, then run the following lvs command again to see how historical logical volumes are displayed, along with their ancestors and descendants. You can use the lv_time_removed reporting field to display the time a historical volume was removed.
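A sketch of the display and removal steps just described, using the example chain lvol1 through lvol5 in volume group vg:
# Display the snapshot chain, including any historical volumes
lvs -H -o name,full_ancestors,full_descendants
# Remove a volume from the middle of the chain; it is retained as the historical volume -lvol3
lvremove -f vg/lvol3
# Display the chain again, including the removal time of the historical volume
lvs -H -o name,full_ancestors,full_descendants,time_removed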
You can reference historical logical volumes individually in a display command by specifying the vgname/lvname format, as in the following example. Note that the fifth bit in the lv_attr field is set to h to indicate the volume is a historical volume. LVM does not keep historical logical volumes if the volume has no live descendant. This means that if you remove a logical volume at the end of a snapshot chain, the logical volume is not retained as a historical logical volume. Run the following commands to remove the volumes lvol1 and lvol2 and to see how the lvs command displays the volumes once they have been removed. To remove a historical logical volume completely, you can run the lvremove command again, specifying the name of the historical volume that now includes the hyphen, as in the following example. A historical logical volume is retained as long as there is a chain that includes live volumes in its descendants. This means that removing a historical logical volume also removes all of the logical volumes in the chain if no existing descendant is linked to them, as shown in the following example.
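A brief sketch of the removal commands described above:
# Remove a historical logical volume explicitly, using its hyphenated name
lvremove -f vg/-lvol3
# Removing the last live volume in a chain also removes any historical volumes with no remaining live descendants
lvremove -f vg/lvol4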
[ "lvcreate -L 10G vg1", "lvcreate -L 1500 -n testlv testvg", "lvcreate -L 50G -n gfslv vg0", "lvcreate -l 60%VG -n mylv testvg", "lvcreate -l 100%FREE -n yourlv testvg", "vgdisplay testvg | grep \"Total PE\" Total PE 10230 lvcreate -l 10230 -n mylv testvg", "lvcreate -L 1500 -n testlv testvg /dev/sdg1", "lvcreate -l 100 -n testlv testvg /dev/sda1:0-24 /dev/sdb1:50-124", "lvcreate -l 100 -n testlv testvg /dev/sda1:0-25:100-", "lvcreate -L 50G -i 2 -I 64 -n gfslv vg0", "lvcreate -l 100 -i 2 -n stripelv testvg /dev/sda1:0-49 /dev/sdb1:50-99 Using default stripesize 64.00 KB Logical volume \"stripelv\" created", "lvcreate --type raid1 -m 1 -L 1G -n my_lv my_vg", "lvcreate --type raid5 -i 3 -L 1G -n my_lv my_vg", "lvcreate --type raid6 -i 3 -L 1G -n my_lv my_vg", "lvcreate --type raid10 -i 2 -m 1 -L 10G --maxrecoveryrate 128 -n my_lv my_vg", "lvcreate --type raid0[_meta] --stripes Stripes --stripesize StripeSize VolumeGroup [ PhysicalVolumePath ...]", "lvconvert --type raid1 -m 1 my_vg/my_lv", "lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv /dev/sde1(0)", "lvconvert --type raid1 -m 1 my_vg/my_lv lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 6.25 my_lv_rimage_0(0),my_lv_rimage_1(0) [my_lv_rimage_0] /dev/sde1(0) [my_lv_rimage_1] /dev/sdf1(1) [my_lv_rmeta_0] /dev/sde1(256) [my_lv_rmeta_1] /dev/sdf1(0)", "lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0) [my_lv_rimage_0] /dev/sde1(1) [my_lv_rimage_1] /dev/sdf1(1) [my_lv_rmeta_0] /dev/sde1(0) [my_lv_rmeta_1] /dev/sdf1(0)", "lvconvert -m0 my_vg/my_lv lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv /dev/sde1(1)", "lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0) [my_lv_rimage_0] /dev/sda1(1) [my_lv_rimage_1] /dev/sdb1(1) [my_lv_rmeta_0] /dev/sda1(0) [my_lv_rmeta_1] /dev/sdb1(0) lvconvert -m0 my_vg/my_lv /dev/sda1 lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv /dev/sdb1(1)", "lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 15.20 my_lv_mimage_0(0),my_lv_mimage_1(0) [my_lv_mimage_0] /dev/sde1(0) [my_lv_mimage_1] /dev/sdf1(0) [my_lv_mlog] /dev/sdd1(0)", "lvconvert --type raid1 my_vg/my_lv lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0) [my_lv_rimage_0] /dev/sde1(0) [my_lv_rimage_1] /dev/sdf1(0) [my_lv_rmeta_0] /dev/sde1(125) [my_lv_rmeta_1] /dev/sdf1(125)", "lvconvert -m new_absolute_count vg/lv [ removable_PVs ] lvconvert -m + num_additional_images vg/lv [ removable_PVs ]", "lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 6.25 my_lv_rimage_0(0),my_lv_rimage_1(0) [my_lv_rimage_0] /dev/sde1(0) [my_lv_rimage_1] /dev/sdf1(1) [my_lv_rmeta_0] /dev/sde1(256) [my_lv_rmeta_1] /dev/sdf1(0)", "lvconvert -m 2 my_vg/my_lv lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 6.25 my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0) [my_lv_rimage_0] /dev/sde1(0) [my_lv_rimage_1] /dev/sdf1(1) [my_lv_rimage_2] /dev/sdg1(1) [my_lv_rmeta_0] /dev/sde1(256) [my_lv_rmeta_1] /dev/sdf1(0) [my_lv_rmeta_2] /dev/sdg1(0)", "lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 56.00 my_lv_rimage_0(0),my_lv_rimage_1(0) [my_lv_rimage_0] /dev/sda1(1) [my_lv_rimage_1] /dev/sdb1(1) [my_lv_rmeta_0] /dev/sda1(0) [my_lv_rmeta_1] /dev/sdb1(0) lvconvert -m 2 my_vg/my_lv /dev/sdd1 lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 28.00 
my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0) [my_lv_rimage_0] /dev/sda1(1) [my_lv_rimage_1] /dev/sdb1(1) [my_lv_rimage_2] /dev/sdd1(1) [my_lv_rmeta_0] /dev/sda1(0) [my_lv_rmeta_1] /dev/sdb1(0) [my_lv_rmeta_2] /dev/sdd1(0)", "lvconvert -m new_absolute_count vg/lv [ removable_PVs ] lvconvert -m - num_fewer_images vg/lv [ removable_PVs ]", "lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0) [my_lv_rimage_0] /dev/sde1(1) [my_lv_rimage_1] /dev/sdf1(1) [my_lv_rimage_2] /dev/sdg1(1) [my_lv_rmeta_0] /dev/sde1(0) [my_lv_rmeta_1] /dev/sdf1(0) [my_lv_rmeta_2] /dev/sdg1(0)", "lvconvert -m1 my_vg/my_lv lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0) [my_lv_rimage_0] /dev/sde1(1) [my_lv_rimage_1] /dev/sdf1(1) [my_lv_rmeta_0] /dev/sde1(0) [my_lv_rmeta_1] /dev/sdf1(0)", "lvconvert -m1 my_vg/my_lv /dev/sde1 lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0) [my_lv_rimage_0] /dev/sdf1(1) [my_lv_rimage_1] /dev/sdg1(1) [my_lv_rmeta_0] /dev/sdf1(0) [my_lv_rmeta_1] /dev/sdg1(0)", "lvconvert --splitmirrors count -n splitname vg/lv [ removable_PVs ]", "lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 12.00 my_lv_rimage_0(0),my_lv_rimage_1(0) [my_lv_rimage_0] /dev/sde1(1) [my_lv_rimage_1] /dev/sdf1(1) [my_lv_rmeta_0] /dev/sde1(0) [my_lv_rmeta_1] /dev/sdf1(0) lvconvert --splitmirror 1 -n new my_vg/my_lv lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv /dev/sde1(1) new /dev/sdf1(1)", "lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0) [my_lv_rimage_0] /dev/sde1(1) [my_lv_rimage_1] /dev/sdf1(1) [my_lv_rimage_2] /dev/sdg1(1) [my_lv_rmeta_0] /dev/sde1(0) [my_lv_rmeta_1] /dev/sdf1(0) [my_lv_rmeta_2] /dev/sdg1(0) lvconvert --splitmirror 1 -n new my_vg/my_lv lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0) [my_lv_rimage_0] /dev/sde1(1) [my_lv_rimage_1] /dev/sdf1(1) [my_lv_rmeta_0] /dev/sde1(0) [my_lv_rmeta_1] /dev/sdf1(0) new /dev/sdg1(1)", "lvconvert --splitmirrors count --trackchanges vg/lv [ removable_PVs ]", "lvconvert --merge raid_image", "lvcreate --type raid1 -m 2 -L 1G -n my_lv .vg Logical volume \"my_lv\" created lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0) [my_lv_rimage_0] /dev/sdb1(1) [my_lv_rimage_1] /dev/sdc1(1) [my_lv_rimage_2] /dev/sdd1(1) [my_lv_rmeta_0] /dev/sdb1(0) [my_lv_rmeta_1] /dev/sdc1(0) [my_lv_rmeta_2] /dev/sdd1(0) lvconvert --splitmirrors 1 --trackchanges my_vg/my_lv my_lv_rimage_2 split from my_lv for read-only purposes. Use 'lvconvert --merge my_vg/my_lv_rimage_2' to merge back into my_lv lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0) [my_lv_rimage_0] /dev/sdb1(1) [my_lv_rimage_1] /dev/sdc1(1) my_lv_rimage_2 /dev/sdd1(1) [my_lv_rmeta_0] /dev/sdb1(0) [my_lv_rmeta_1] /dev/sdc1(0) [my_lv_rmeta_2] /dev/sdd1(0)", "lvconvert --splitmirrors 1 --trackchanges my_vg/my_lv lv_rimage_1 split from my_lv for read-only purposes. 
Use 'lvconvert --merge my_vg/my_lv_rimage_1' to merge back into my_lv lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0) [my_lv_rimage_0] /dev/sdc1(1) my_lv_rimage_1 /dev/sdd1(1) [my_lv_rmeta_0] /dev/sdc1(0) [my_lv_rmeta_1] /dev/sdd1(0) lvconvert --merge my_vg/my_lv_rimage_1 my_vg/my_lv_rimage_1 successfully merged back into my_vg/my_lv lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0) [my_lv_rimage_0] /dev/sdc1(1) [my_lv_rimage_1] /dev/sdd1(1) [my_lv_rmeta_0] /dev/sdc1(0) [my_lv_rmeta_1] /dev/sdd1(0)", "lvconvert --splitmirrors 1 --trackchanges my_vg/my_lv my_lv_rimage_1 split from my_lv for read-only purposes. Use 'lvconvert --merge my_vg/my_lv_rimage_1' to merge back into my_lv lvconvert --splitmirrors 1 -n new my_vg/my_lv lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv /dev/sdc1(1) new /dev/sdd1(1)", "lvconvert --splitmirrors 1 --trackchanges my_vg/my_lv my_lv_rimage_1 split from my_lv for read-only purposes. Use 'lvconvert --merge my_vg/my_lv_rimage_1' to merge back into my_lv lvconvert --splitmirrors 1 --trackchanges my_vg/my_lv Cannot track more than one split image at a time", "lvconvert --splitmirrors 1 --trackchanges my_vg/my_lv my_lv_rimage_1 split from my_lv for read-only purposes. Use 'lvconvert --merge my_vg/my_lv_rimage_1' to merge back into my_lv lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0) [my_lv_rimage_0] /dev/sdc1(1) my_lv_rimage_1 /dev/sdd1(1) [my_lv_rmeta_0] /dev/sdc1(0) [my_lv_rmeta_1] /dev/sdd1(0) lvconvert --splitmirrors 1 -n new my_vg/my_lv /dev/sdc1 Unable to split additional image from my_lv while tracking changes for my_lv_rimage_1", "lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0) [my_lv_rimage_0] /dev/sde1(1) [my_lv_rimage_1] /dev/sdf1(1) [my_lv_rimage_2] /dev/sdg1(1) [my_lv_rmeta_0] /dev/sde1(0) [my_lv_rmeta_1] /dev/sdf1(0) [my_lv_rmeta_2] /dev/sdg1(0)", "grep lvm /var/log/messages Jan 17 15:57:18 bp-01 lvm[8599]: Device #0 of raid1 array, my_vg-my_lv, has failed. Jan 17 15:57:18 bp-01 lvm[8599]: /dev/sde1: read failed after 0 of 2048 at 250994294784: Input/output error Jan 17 15:57:18 bp-01 lvm[8599]: /dev/sde1: read failed after 0 of 2048 at 250994376704: Input/output error Jan 17 15:57:18 bp-01 lvm[8599]: /dev/sde1: read failed after 0 of 2048 at 0: Input/output error Jan 17 15:57:18 bp-01 lvm[8599]: /dev/sde1: read failed after 0 of 2048 at 4096: Input/output error Jan 17 15:57:19 bp-01 lvm[8599]: Couldn't find device with uuid 3lugiV-3eSP-AFAR-sdrP-H20O-wM2M-qdMANy. Jan 17 15:57:27 bp-01 lvm[8599]: raid1 array, my_vg-my_lv, is not in-sync. Jan 17 15:57:36 bp-01 lvm[8599]: raid1 array, my_vg-my_lv, is now in-sync.", "lvs -a -o name,copy_percent,devices vg Couldn't find device with uuid 3lugiV-3eSP-AFAR-sdrP-H20O-wM2M-qdMANy. 
LV Copy% Devices lv 100.00 lv_rimage_0(0),lv_rimage_1(0),lv_rimage_2(0) [lv_rimage_0] /dev/sdh1(1) [lv_rimage_1] /dev/sdf1(1) [lv_rimage_2] /dev/sdg1(1) [lv_rmeta_0] /dev/sdh1(0) [lv_rmeta_1] /dev/sdf1(0) [lv_rmeta_2] /dev/sdg1(0)", "lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0) [my_lv_rimage_0] /dev/sdh1(1) [my_lv_rimage_1] /dev/sdf1(1) [my_lv_rimage_2] /dev/sdg1(1) [my_lv_rmeta_0] /dev/sdh1(0) [my_lv_rmeta_1] /dev/sdf1(0) [my_lv_rmeta_2] /dev/sdg1(0)", "lvconvert --repair my_vg/my_lv /dev/sdh1: read failed after 0 of 2048 at 250994294784: Input/output error /dev/sdh1: read failed after 0 of 2048 at 250994376704: Input/output error /dev/sdh1: read failed after 0 of 2048 at 0: Input/output error /dev/sdh1: read failed after 0 of 2048 at 4096: Input/output error Couldn't find device with uuid fbI0YO-GX7x-firU-Vy5o-vzwx-vAKZ-feRxfF. Attempt to replace failed RAID images (requires full device resync)? [y/n]: y lvs -a -o name,copy_percent,devices my_vg Couldn't find device with uuid fbI0YO-GX7x-firU-Vy5o-vzwx-vAKZ-feRxfF. LV Copy% Devices my_lv 64.00 my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0) [my_lv_rimage_0] /dev/sde1(1) [my_lv_rimage_1] /dev/sdf1(1) [my_lv_rimage_2] /dev/sdg1(1) [my_lv_rmeta_0] /dev/sde1(0) [my_lv_rmeta_1] /dev/sdf1(0) [my_lv_rmeta_2] /dev/sdg1(0)", "lvchange --refresh my_vg/my_lv", "lvconvert --replace dev_to_remove vg/lv [ possible_replacements ]", "lvcreate --type raid1 -m 2 -L 1G -n my_lv my_vg Logical volume \"my_lv\" created lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0) [my_lv_rimage_0] /dev/sdb1(1) [my_lv_rimage_1] /dev/sdb2(1) [my_lv_rimage_2] /dev/sdc1(1) [my_lv_rmeta_0] /dev/sdb1(0) [my_lv_rmeta_1] /dev/sdb2(0) [my_lv_rmeta_2] /dev/sdc1(0) lvconvert --replace /dev/sdb2 my_vg/my_lv lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 37.50 my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0) [my_lv_rimage_0] /dev/sdb1(1) [my_lv_rimage_1] /dev/sdc2(1) [my_lv_rimage_2] /dev/sdc1(1) [my_lv_rmeta_0] /dev/sdb1(0) [my_lv_rmeta_1] /dev/sdc2(0) [my_lv_rmeta_2] /dev/sdc1(0)", "lvcreate --type raid1 -m 1 -L 100 -n my_lv my_vg Logical volume \"my_lv\" created lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0) [my_lv_rimage_0] /dev/sda1(1) [my_lv_rimage_1] /dev/sdb1(1) [my_lv_rmeta_0] /dev/sda1(0) [my_lv_rmeta_1] /dev/sdb1(0) pvs PV VG Fmt Attr PSize PFree /dev/sda1 my_vg lvm2 a-- 1020.00m 916.00m /dev/sdb1 my_vg lvm2 a-- 1020.00m 916.00m /dev/sdc1 my_vg lvm2 a-- 1020.00m 1020.00m /dev/sdd1 my_vg lvm2 a-- 1020.00m 1020.00m lvconvert --replace /dev/sdb1 my_vg/my_lv /dev/sdd1 lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 28.00 my_lv_rimage_0(0),my_lv_rimage_1(0) [my_lv_rimage_0] /dev/sda1(1) [my_lv_rimage_1] /dev/sdd1(1) [my_lv_rmeta_0] /dev/sda1(0) [my_lv_rmeta_1] /dev/sdd1(0)", "lvcreate --type raid1 -m 2 -L 100 -n my_lv my_vg Logical volume \"my_lv\" created lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 100.00 my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0) [my_lv_rimage_0] /dev/sda1(1) [my_lv_rimage_1] /dev/sdb1(1) [my_lv_rimage_2] /dev/sdc1(1) [my_lv_rmeta_0] /dev/sda1(0) [my_lv_rmeta_1] /dev/sdb1(0) [my_lv_rmeta_2] /dev/sdc1(0) lvconvert --replace /dev/sdb1 --replace /dev/sdc1 my_vg/my_lv lvs -a -o name,copy_percent,devices my_vg LV Copy% Devices my_lv 60.00 
my_lv_rimage_0(0),my_lv_rimage_1(0),my_lv_rimage_2(0) [my_lv_rimage_0] /dev/sda1(1) [my_lv_rimage_1] /dev/sdd1(1) [my_lv_rimage_2] /dev/sde1(1) [my_lv_rmeta_0] /dev/sda1(0) [my_lv_rmeta_1] /dev/sdd1(0) [my_lv_rmeta_2] /dev/sde1(0)", "lvchange --syncaction {check|repair} vg/raid_lv", "lvs -o +raid_sync_action,raid_mismatch_count vg/lv", "lvconvert -R 4096K vg/raid1 Do you really want to change the region_size 512.00 KiB of LV vg/raid1 to 4.00 MiB? [y/n]: y Changed region size on RAID LV vg/raid1 to 4.00 MiB.", "lvcreate --type mirror -L 50G -m 1 -n mirrorlv vg0", "lvcreate --type mirror -m 1 -L 2T -R 2 -n mirror vol_group", "lvcreate --type mirror -L 12MB -m 1 --mirrorlog core -n ondiskmirvol bigvg Logical volume \"ondiskmirvol\" created", "lvcreate --type mirror -L 500M -m 1 -n mirrorlv -alloc anywhere vg0", "lvcreate --type mirror -L 12MB -m 1 --mirrorlog mirrored -n twologvol bigvg Logical volume \"twologvol\" created", "lvcreate --type mirror -L 500M -m 1 -n mirrorlv vg0 /dev/sda1 /dev/sdb1 /dev/sdc1", "lvcreate --type mirror -L 500M -m 1 -n mirrorlv vg0 /dev/sda1:0-499 /dev/sdb1:0-499 /dev/sdc1:0", "lvconvert --splitmirrors 2 --name copy vg/lv", "lvconvert --splitmirrors 2 --name copy vg/lv /dev/sd[ce]1", "lvconvert -m1 vg00/lvol1", "lvconvert -m0 vg00/lvol1", "lvs -a -o name,copy_percent,devices vg00 LV Copy% Devices lvol1 100.00 lvol1_mimage_0(0),lvol1_mimage_1(0) [lvol1_mimage_0] /dev/sda1(0) [lvol1_mimage_1] /dev/sdb1(0) [lvol1_mlog] /dev/sdd1(0) lvconvert -m 2 vg00/lvol1 vg00/lvol1: Converted: 13.0% vg00/lvol1: Converted: 100.0% Logical volume lvol1 converted. lvs -a -o name,copy_percent,devices vg00 LV Copy% Devices lvol1 100.00 lvol1_mimage_0(0),lvol1_mimage_1(0),lvol1_mimage_2(0) [lvol1_mimage_0] /dev/sda1(0) [lvol1_mimage_1] /dev/sdb1(0) [lvol1_mimage_2] /dev/sdc1(0) [lvol1_mlog] /dev/sdd1(0)", "lvcreate -L 100M -T vg001/mythinpool Rounding up size to full physical extent 4.00 MiB Logical volume \"mythinpool\" created lvs LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert my mythinpool vg001 twi-a-tz 100.00m 0.00", "lvcreate -V 1G -T vg001/mythinpool -n thinvolume Logical volume \"thinvolume\" created lvs LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert mythinpool vg001 twi-a-tz 100.00m 0.00 thinvolume vg001 Vwi-a-tz 1.00g mythinpool 0.00", "lvcreate -L 100M -T vg001/mythinpool -V 1G -n thinvolume Rounding up size to full physical extent 4.00 MiB Logical volume \"thinvolume\" created lvs LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert mythinpool vg001 twi-a-tz 100.00m 0.00 thinvolume vg001 Vwi-a-tz 1.00g mythinpool 0.00", "lvcreate -L 100M --thinpool mythinpool vg001 Rounding up size to full physical extent 4.00 MiB Logical volume \"mythinpool\" created lvs LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert mythinpool vg001 twi-a-tz 100.00m 0.00", "lvcreate -i 2 -I 64 -c 256 -L 100M -T vg00/pool -V 1T --name thin_lv", "lvextend -L+100M vg001/mythinpool Extending logical volume mythinpool to 200.00 MiB Logical volume mythinpool successfully resized lvs LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert mythinpool vg001 twi-a-tz 200.00m 0.00 thinvolume vg001 Vwi-a-tz 1.00g mythinpool 0.00", "lvconvert --thinpool vg001/lv1 --poolmetadata vg001/lv2 Converted vg001/lv1 to thin pool.", "lvcreate --size 100M --snapshot --name snap /dev/vg00/lvol1", "lvdisplay /dev/new_vg/lvol0 --- Logical volume --- LV Name /dev/new_vg/lvol0 VG Name new_vg LV UUID LBy1Tz-sr23-OjsI-LT03-nHLC-y8XW-EhCl78 LV Write Access read/write LV snapshot status source of 
/dev/new_vg/newvgsnap1 [active] LV Status available # open 0 LV Size 52.00 MB Current LE 13 Segments 1 Allocation inherit Read ahead sectors 0 Block device 253:2", "lvs LV VG Attr LSize Origin Snap% Move Log Copy% lvol0 new_vg owi-a- 52.00M newvgsnap1 new_vg swi-a- 8.00M lvol0 0.20", "lvcreate -s --name mysnapshot1 vg001/thinvolume Logical volume \"mysnapshot1\" created lvs LV VG Attr LSize Pool Origin Data% Move Log Copy% Convert mysnapshot1 vg001 Vwi-a-tz 1.00g mythinpool thinvolume 0.00 mythinpool vg001 twi-a-tz 100.00m 0.00 thinvolume vg001 Vwi-a-tz 1.00g mythinpool 0.00", "lvcreate -s --thinpool vg001/pool origin_volume --name mythinsnap", "lvcreate -s vg001/mythinsnap --name my2ndthinsnap", "lvs -o name,lv_ancestors,lv_descendants vg001 LV Ancestors Descendants stack1 stack2,stack3,stack4,stack5,stack6 stack2 stack1 stack3,stack4,stack5,stack6 stack3 stack2,stack1 stack4 stack4 stack3,stack2,stack1 stack5 stack2,stack1 stack6 stack6 stack5,stack2,stack1 pool", "lvs -o name,lv_ancestors,lv_descendants vg001 LV Ancestors Descendants stack1 stack2,stack5,stack6 stack2 stack1 stack5,stack6 stack4 stack5 stack2,stack1 stack6 stack6 stack5,stack2,stack1 pool", "pvcreate /dev/sde1 pvcreate /dev/sdf1 vgcreate VG /dev/sde1 /dev/sdf1", "lvcreate -L 10G -n lv VG /dev/sde1", "lvcreate --type cache-pool -L 5G -n cpool VG /dev/sdf1 Using default stripesize 64.00 KiB. Logical volume \"cpool\" created. lvs -a -o name,size,attr,devices VG LV LSize Attr Devices [cpool] 5.00g Cwi---C--- cpool_cdata(0) [cpool_cdata] 5.00g Cwi-ao---- /dev/sdf1(4) [cpool_cmeta] 8.00m ewi-ao---- /dev/sdf1(2)", "lvconvert --type cache --cachepool cpool VG/lv Logical volume cpool is now cached. lvs -a -o name,size,attr,devices vg LV LSize Attr Devices [cpool] 5.00g Cwi---C--- cpool_cdata(0) [cpool_cdata] 5.00g Cwi-ao---- /dev/sdf1(4) [cpool_cmeta] 8.00m ewi-ao---- /dev/sdf1(2) lv 10.00g Cwi-a-C--- lv_corig(0) [lv_corig] 10.00g owi-aoC--- /dev/sde1(0) [lvol0_pmspare] 8.00m ewi------- /dev/sdf1(0)", "lvconvert --type thin-pool VG/lv /dev/sdf1 WARNING: Converting logical volume VG/lv to thin pool's data volume with metadata wiping. THIS WILL DESTROY CONTENT OF LOGICAL VOLUME (filesystem etc.) Do you really want to convert VG/lv? [y/n]: y Converted VG/lv to thin pool. lvs -a -o name,size,attr,devices vg LV LSize Attr Devices [cpool] 5.00g Cwi---C--- cpool_cdata(0) [cpool_cdata] 5.00g Cwi-ao---- /dev/sdf1(4) [cpool_cmeta] 8.00m ewi-ao---- /dev/sdf1(2) lv 10.00g twi-a-tz-- lv_tdata(0) [lv_tdata] 10.00g Cwi-aoC--- lv_tdata_corig(0) [lv_tdata_corig] 10.00g owi-aoC--- /dev/sde1(0) [lv_tmeta] 12.00m ewi-ao---- /dev/sdf1(1284) [lvol0_pmspare] 12.00m ewi------- /dev/sdf1(0) [lvol0_pmspare] 12.00m ewi------- /dev/sdf1(1287)", "lvconvert --merge vg00/lvol1_snap", "lvconvert --merge @some_tag", "--persistent y --major major --minor minor", "lvchange -pr vg00/lvol1", "lvrename /dev/vg02/lvold /dev/vg02/lvnew", "lvrename vg02 lvold lvnew", "lvremove /dev/testvg/testlv Do you really want to remove active logical volume \"testlv\"? 
[y/n]: y Logical volume \"testlv\" successfully removed", "lvdisplay -v /dev/vg00/lvol2", "lvscan ACTIVE '/dev/vg0/gfslv' [1.46 GB] inherit", "lvextend -L12G /dev/myvg/homevol lvextend -- extending logical volume \"/dev/myvg/homevol\" to 12 GB lvextend -- doing automatic backup of volume group \"myvg\" lvextend -- logical volume \"/dev/myvg/homevol\" successfully extended", "lvextend -L+1G /dev/myvg/homevol lvextend -- extending logical volume \"/dev/myvg/homevol\" to 13 GB lvextend -- doing automatic backup of volume group \"myvg\" lvextend -- logical volume \"/dev/myvg/homevol\" successfully extended", "lvextend -l +100%FREE /dev/myvg/testlv Extending logical volume testlv to 68.59 GB Logical volume testlv successfully resized", "lvreduce --resizefs -L 64M vg00/lvol1 fsck from util-linux 2.23.2 /dev/mapper/vg00-lvol1: clean, 11/25688 files, 8896/102400 blocks resize2fs 1.42.9 (28-Dec-2013) Resizing the filesystem on /dev/mapper/vg00-lvol1 to 65536 (1k) blocks. The filesystem on /dev/mapper/vg00-lvol1 is now 65536 blocks long. Size of logical volume vg00/lvol1 changed from 100.00 MiB (25 extents) to 64.00 MiB (16 extents). Logical volume vg00/lvol1 successfully resized.", "lvreduce --resizefs -L -64M vg00/lvol1", "vgs VG #PV #LV #SN Attr VSize VFree vg 2 0 0 wz--n- 271.31G 271.31G", "lvcreate -n stripe1 -L 271.31G -i 2 vg Using default stripesize 64.00 KB Rounding up size to full physical extent 271.31 GB Logical volume \"stripe1\" created lvs -a -o +devices LV VG Attr LSize Origin Snap% Move Log Copy% Devices stripe1 vg -wi-a- 271.31G /dev/sda1(0),/dev/sdb1(0)", "vgs VG #PV #LV #SN Attr VSize VFree vg 2 1 0 wz--n- 271.31G 0", "vgextend vg /dev/sdc1 Volume group \"vg\" successfully extended vgs VG #PV #LV #SN Attr VSize VFree vg 3 1 0 wz--n- 406.97G 135.66G", "lvextend vg/stripe1 -L 406G Using stripesize of last segment 64.00 KB Extending logical volume stripe1 to 406.00 GB Insufficient suitable allocatable extents for logical volume stripe1: 34480 more required", "vgextend vg /dev/sdd1 Volume group \"vg\" successfully extended vgs VG #PV #LV #SN Attr VSize VFree vg 4 1 0 wz--n- 542.62G 271.31G lvextend vg/stripe1 -L 542G Using stripesize of last segment 64.00 KB Extending logical volume stripe1 to 542.00 GB Logical volume stripe1 successfully resized", "lvextend vg/stripe1 -L 406G Using stripesize of last segment 64.00 KB Extending logical volume stripe1 to 406.00 GB Insufficient suitable allocatable extents for logical volume stripe1: 34480 more required lvextend -i1 -l+100%FREE vg/stripe1", "lvs vg LV VG Attr LSize Pool Origin Snap% Move Log Cpy%Sync Convert lv vg Rwi-a-r- 5.00g 100.00", "lvs vg LV VG Attr LSize Pool Origin Snap% Move Log Cpy%Sync Convert lv vg rwi-a-r- 20.00m 100.00 lvextend -L +5G vg/lv --nosync Extending 2 mirror images. Extending logical volume lv to 5.02 GiB Logical volume lv successfully resized lvs vg LV VG Attr LSize Pool Origin Snap% Move Log Cpy%Sync Convert lv vg Rwi-a-r- 5.02g 100.00", "cling_tag_list = [ \"@site1\", \"@site2\" ]", "cling_tag_list = [ \"@A\", \"@B\" ]", "pvs -a -o +pv_tags /dev/sd[bcdefgh] PV VG Fmt Attr PSize PFree PV Tags /dev/sdb1 taft lvm2 a-- 15.00g 15.00g A /dev/sdc1 taft lvm2 a-- 15.00g 15.00g B /dev/sdd1 taft lvm2 a-- 15.00g 15.00g B /dev/sde1 taft lvm2 a-- 15.00g 15.00g C /dev/sdf1 taft lvm2 a-- 15.00g 15.00g C /dev/sdg1 taft lvm2 a-- 15.00g 15.00g A /dev/sdh1 taft lvm2 a-- 15.00g 15.00g A", "lvcreate --type raid1 -m 1 -n mirror --nosync -L 10G taft WARNING: New raid1 won't be synchronised. Don't read what you didn't write! 
Logical volume \"mirror\" created", "lvs -a -o +devices LV VG Attr LSize Log Cpy%Sync Devices mirror taft Rwi-a-r--- 10.00g 100.00 mirror_rimage_0(0),mirror_rimage_1(0) [mirror_rimage_0] taft iwi-aor--- 10.00g /dev/sdb1(1) [mirror_rimage_1] taft iwi-aor--- 10.00g /dev/sdc1(1) [mirror_rmeta_0] taft ewi-aor--- 4.00m /dev/sdb1(0) [mirror_rmeta_1] taft ewi-aor--- 4.00m /dev/sdc1(0)", "lvextend --alloc cling -L +10G taft/mirror Extending 2 mirror images. Extending logical volume mirror to 20.00 GiB Logical volume mirror successfully resized", "lvs -a -o +devices LV VG Attr LSize Log Cpy%Sync Devices mirror taft Rwi-a-r--- 20.00g 100.00 mirror_rimage_0(0),mirror_rimage_1(0) [mirror_rimage_0] taft iwi-aor--- 20.00g /dev/sdb1(1) [mirror_rimage_0] taft iwi-aor--- 20.00g /dev/sdg1(0) [mirror_rimage_1] taft iwi-aor--- 20.00g /dev/sdc1(1) [mirror_rimage_1] taft iwi-aor--- 20.00g /dev/sdd1(0) [mirror_rmeta_0] taft ewi-aor--- 4.00m /dev/sdb1(0) [mirror_rmeta_1] taft ewi-aor--- 4.00m /dev/sdc1(0)", "lvs vg/thin1s1 LV VG Attr LSize Pool Origin thin1s1 vg Vwi---tz-k 1.00t pool0 thin1", "lvchange -ay -K VG/SnapLV", "lvcreate --type thin -n SnapLV -kn -s ThinLV --thinpool VG/ThinPoolLV", "lvchange -kn VG/SnapLV", "lvs -H -o name,full_ancestors,full_descendants LV FAncestors FDescendants lvol1 lvol2,lvol3,lvol4,lvol5 lvol2 lvol1 lvol3,lvol4,lvol5 lvol3 lvol2,lvol1 lvol4,lvol5 lvol4 lvol3,lvol2,lvol1 lvol5 lvol3,lvol2,lvol1 pool", "lvremove -f vg/lvol3 Logical volume \"lvol3\" successfully removed lvs -H -o name,full_ancestors,full_descendants LV FAncestors FDescendants lvol1 lvol2,-lvol3,lvol4,lvol5 lvol2 lvol1 -lvol3,lvol4,lvol5 -lvol3 lvol2,lvol1 lvol4,lvol5 lvol4 -lvol3,lvol2,lvol1 lvol5 -lvol3,lvol2,lvol1 pool", "lvs -H -o name,full_ancestors,full_descendants,time_removed LV FAncestors FDescendants RTime lvol1 lvol2,-lvol3,lvol4,lvol5 lvol2 lvol1 -lvol3,lvol4,lvol5 -lvol3 lvol2,lvol1 lvol4,lvol5 2016-03-14 14:14:32 +0100 lvol4 -lvol3,lvol2,lvol1 lvol5 -lvol3,lvol2,lvol1 pool", "lvs -H vg/-lvol3 LV VG Attr LSize -lvol3 vg ----h----- 0", "lvremove -f vg/lvol5 Automatically removing historical logical volume vg/-lvol5. Logical volume \"lvol5\" successfully removed lvs -H -o name,full_ancestors,full_descendants LV FAncestors FDescendants lvol1 lvol2,-lvol3,lvol4 lvol2 lvol1 -lvol3,lvol4 -lvol3 lvol2,lvol1 lvol4 lvol4 -lvol3,lvol2,lvol1 pool", "lvremove -f vg/lvol1 vg/lvol2 Logical volume \"lvol1\" successfully removed Logical volume \"lvol2\" successfully removed lvs -H -o name,full_ancestors,full_descendants LV FAncestors FDescendants -lvol1 -lvol2,-lvol3,lvol4 -lvol2 -lvol1 -lvol3,lvol4 -lvol3 -lvol2,-lvol1 lvol4 lvol4 -lvol3,-lvol2,-lvol1 pool", "lvremove -f vg/-lvol3 Historical logical volume \"lvol3\" successfully removed lvs -H -o name,full_ancestors,full_descendants LV FAncestors FDescendants -lvol1 -lvol2,lvol4 -lvol2 -lvol1 lvol4 lvol4 -lvol2,-lvol1 pool", "lvremove -f vg/lvol4 Automatically removing historical logical volume vg/-lvol1. Automatically removing historical logical volume vg/-lvol2. Automatically removing historical logical volume vg/-lvol4. Logical volume \"lvol4\" successfully removed" ]
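For quick reference, the resize operations in the transcripts above reduce to a short sequence. This is a minimal sketch that reuses the volume group and logical volume names from the document's own examples (myvg/homevol, myvg/testlv, vg00/lvol1); substitute your own names and sizes.
# Grow a logical volume by 1 GiB, then grow another into all remaining free space
lvextend -L+1G /dev/myvg/homevol
lvextend -l +100%FREE /dev/myvg/testlv
# Shrink a logical volume and its file system together to 64 MiB
lvreduce --resizefs -L 64M vg00/lvol1
# Inspect the resulting layout
lvs -a -o +devices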
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/logical_volume_manager_administration/lv
Chapter 24. Setting up Stratis file systems
Chapter 24. Setting up Stratis file systems Stratis runs as a service to manage pools of physical storage devices, simplifying local storage management with ease of use while helping you set up and manage complex storage configurations. Important Stratis is a Technology Preview feature only. Technology Preview features are not supported with Red Hat production service level agreements (SLAs) and might not be functionally complete. Red Hat does not recommend using them in production. These features provide early access to upcoming product features, enabling customers to test functionality and provide feedback during the development process. For more information about the support scope of Red Hat Technology Preview features, see https://access.redhat.com/support/offerings/techpreview . 24.1. What is Stratis Stratis is a local storage-management solution for Linux. It is focused on simplicity and ease of use, and gives you access to advanced storage features. Stratis makes the following activities easier: Initial configuration of storage Making changes later Using advanced storage features Stratis is a local storage management system that supports advanced storage features. The central concept of Stratis is a storage pool . This pool is created from one or more local disks or partitions, and file systems are created from the pool. The pool enables many useful features, such as: File system snapshots Thin provisioning Tiering Encryption Additional resources Stratis website 24.2. Components of a Stratis volume Learn about the components that comprise a Stratis volume. Externally, Stratis presents the following volume components on the command line and the API: blockdev Block devices, such as a disk or a disk partition. pool Composed of one or more block devices. A pool has a fixed total size, equal to the size of the block devices. The pool contains most Stratis layers, such as the non-volatile data cache using the dm-cache target. Stratis creates a /dev/stratis/ my-pool / directory for each pool. This directory contains links to devices that represent Stratis file systems in the pool. filesystem Each pool can contain one or more file systems, which store files. File systems are thinly provisioned and do not have a fixed total size. The actual size of a file system grows with the data stored on it. If the size of the data approaches the virtual size of the file system, Stratis grows the thin volume and the file system automatically. The file systems are formatted with XFS. Important Stratis tracks information about file systems created using Stratis that XFS is not aware of, and changes made using XFS do not automatically create updates in Stratis. Users must not reformat or reconfigure XFS file systems that are managed by Stratis. Stratis creates links to file systems at the /dev/stratis/ my-pool / my-fs path. Note Stratis uses many Device Mapper devices, which show up in dmsetup listings and the /proc/partitions file. Similarly, the lsblk command output reflects the internal workings and layers of Stratis. 24.3. Block devices usable with Stratis Storage devices that can be used with Stratis. Supported devices Stratis pools have been tested to work on these types of block devices: LUKS LVM logical volumes MD RAID DM Multipath iSCSI HDDs and SSDs NVMe devices Unsupported devices Because Stratis contains a thin-provisioning layer, Red Hat does not recommend placing a Stratis pool on block devices that are already thinly-provisioned. 24.4. 
Installing Stratis Install the required packages for Stratis. Procedure Install packages that provide the Stratis service and command-line utilities: Verify that the stratisd service is enabled: 24.5. Creating an unencrypted Stratis pool You can create an unencrypted Stratis pool from one or more block devices. Prerequisites Stratis is installed. For more information, see Installing Stratis . The stratisd service is running. The block devices on which you are creating a Stratis pool are not in use and are not mounted. Each block device on which you are creating a Stratis pool is at least 1 GB. On the IBM Z architecture, the /dev/dasd* block devices must be partitioned. Use the partition device for creating the Stratis pool. For information about partitioning DASD devices, see Configuring a Linux instance on IBM Z . Note You cannot encrypt an unencrypted Stratis pool. Procedure Erase any file system, partition table, or RAID signatures that exist on each block device that you want to use in the Stratis pool: where block-device is the path to the block device; for example, /dev/sdb . Create the new unencrypted Stratis pool on the selected block device: where block-device is the path to an empty or wiped block device. You can also specify multiple block devices on a single line by using the following command: Verification Verify that the new Stratis pool was created: 24.6. Creating an unencrypted Stratis pool by using the web console You can use the web console to create an unencrypted Stratis pool from one or more block devices. Prerequisites You have installed the RHEL 8 web console. You have enabled the cockpit service. Your user account is allowed to log in to the web console. For instructions, see Installing and enabling the web console . The stratisd service is running. The block devices on which you are creating a Stratis pool are not in use and are not mounted. Each block device on which you are creating a Stratis pool is at least 1 GB. Note You cannot encrypt an unencrypted Stratis pool after it is created. Procedure Log in to the RHEL 8 web console. For details, see Logging in to the web console . Click Storage . In the Storage table, click the menu button and select Create Stratis pool . In the Name field, enter a name for the Stratis pool. Select the Block devices from which you want to create the Stratis pool. Optional: If you want to specify the maximum size for each file system that is created in pool, select Manage filesystem sizes . Click Create . Verification Go to the Storage section and verify that you can see the new Stratis pool in the Devices table. 24.7. Creating an encrypted Stratis pool To secure your data, you can create an encrypted Stratis pool from one or more block devices. When you create an encrypted Stratis pool, the kernel keyring is used as the primary encryption mechanism. After subsequent system reboots this kernel keyring is used to unlock the encrypted Stratis pool. When creating an encrypted Stratis pool from one or more block devices, note the following: Each block device is encrypted using the cryptsetup library and implements the LUKS2 format. Each Stratis pool can either have a unique key or share the same key with other pools. These keys are stored in the kernel keyring. The block devices that comprise a Stratis pool must be either all encrypted or all unencrypted. It is not possible to have both encrypted and unencrypted block devices in the same Stratis pool. 
Block devices added to the data tier of an encrypted Stratis pool are automatically encrypted. Prerequisites Stratis v2.1.0 or later is installed. For more information, see Installing Stratis . The stratisd service is running. The block devices on which you are creating a Stratis pool are not in use and are not mounted. The block devices on which you are creating a Stratis pool are at least 1GB in size each. On the IBM Z architecture, the /dev/dasd* block devices must be partitioned. Use the partition in the Stratis pool. For information about partitioning DASD devices, see Configuring a Linux instance on IBM Z . Procedure Erase any file system, partition table, or RAID signatures that exist on each block device that you want to use in the Stratis pool: where block-device is the path to the block device; for example, /dev/sdb . If you have not created a key set already, run the following command and follow the prompts to create a key set to use for the encryption. where key-description is a reference to the key that gets created in the kernel keyring. Create the encrypted Stratis pool and specify the key description to use for the encryption. You can also specify the key path using the --keyfile-path option instead of using the key-description option. where key-description References the key that exists in the kernel keyring, which you created in the step. my-pool Specifies the name of the new Stratis pool. block-device Specifies the path to an empty or wiped block device. You can also specify multiple block devices on a single line by using the following command: Verification Verify that the new Stratis pool was created: 24.8. Creating an encrypted Stratis pool by using the web console To secure your data, you can use the web console to create an encrypted Stratis pool from one or more block devices. When creating an encrypted Stratis pool from one or more block devices, note the following: Each block device is encrypted using the cryptsetup library and implements the LUKS2 format. Each Stratis pool can either have a unique key or share the same key with other pools. These keys are stored in the kernel keyring. The block devices that comprise a Stratis pool must be either all encrypted or all unencrypted. It is not possible to have both encrypted and unencrypted block devices in the same Stratis pool. Block devices added to the data tier of an encrypted Stratis pool are automatically encrypted. Prerequisites You have installed the RHEL 8 web console. You have enabled the cockpit service. Your user account is allowed to log in to the web console. For instructions, see Installing and enabling the web console . Stratis v2.1.0 or later is installed. The stratisd service is running. The block devices on which you are creating a Stratis pool are not in use and are not mounted. Each block device on which you are creating a Stratis pool is at least 1 GB. Procedure Log in to the RHEL 8 web console. For details, see Logging in to the web console . Click Storage . In the Storage table, click the menu button and select Create Stratis pool . In the Name field, enter a name for the Stratis pool. Select the Block devices from which you want to create the Stratis pool. Select the type of encryption, you can use a passphrase, a Tang keyserver, or both: Passphrase: Enter a passphrase. Confirm the passphrase. Tang keyserver: Enter the keyserver address. For more information, see Deploying a Tang server with SELinux in enforcing mode . 
Optional: If you want to specify the maximum size for each file system that is created in pool, select Manage filesystem sizes . Click Create . Verification Go to the Storage section and verify that you can see the new Stratis pool in the Devices table. 24.9. Renaming a Stratis pool by using the web console You can use the web console to rename an existing Stratis pool. Prerequisites You have installed the RHEL 8 web console. You have enabled the cockpit service. Your user account is allowed to log in to the web console. For instructions, see Installing and enabling the web console . Stratis is installed. The web console detects and installs Stratis by default. However, for manually installing Stratis, see Installing Stratis . The stratisd service is running. A Stratis pool is created. Procedure Log in to the RHEL 8 web console. For details, see Logging in to the web console . Click Storage . In the Storage table, click the Stratis pool you want to rename. On the Stratis pool page, click edit to the Name field. In the Rename Stratis pool dialog box, enter a new name. Click Rename . 24.10. Setting overprovisioning mode in Stratis file system A storage stack can reach a state of overprovision. If the file system size becomes bigger than the pool backing it, the pool becomes full. To prevent this, disable overprovisioning, which ensures that the size of all file systems on the pool does not exceed the available physical storage provided by the pool. If you use Stratis for critical applications or the root file system, this mode prevents certain failure cases. If you enable overprovisioning, an API signal notifies you when your storage has been fully allocated. The notification serves as a warning to the user to inform them that when all the remaining pool space fills up, Stratis has no space left to extend to. Prerequisites Stratis is installed. For more information, see Installing Stratis . Procedure To set up the pool correctly, you have two possibilities: Create a pool from one or more block devices: Set overprovisioning mode in the existing pool: If set to "yes", you enable overprovisioning to the pool. This means that the sum of the logical sizes of the Stratis file systems, supported by the pool, can exceed the amount of available data space. Verification Run the following to view the full list of Stratis pools: Check if there is an indication of the pool overprovisioning mode flag in the stratis pool list output. The " ~ " is a math symbol for "NOT", so ~Op means no-overprovisioning. Optional: Run the following to check overprovisioning on a specific pool: 24.11. Binding a Stratis pool to NBDE Binding an encrypted Stratis pool to Network Bound Disk Encryption (NBDE) requires a Tang server. When a system containing the Stratis pool reboots, it connects with the Tang server to automatically unlock the encrypted pool without you having to provide the kernel keyring description. Note Binding a Stratis pool to a supplementary Clevis encryption mechanism does not remove the primary kernel keyring encryption. Prerequisites Stratis v2.3.0 or later is installed. For more information, see Installing Stratis . The stratisd service is running. You have created an encrypted Stratis pool, and you have the key description of the key that was used for the encryption. For more information, see Creating an encrypted Stratis pool . You can connect to the Tang server. For more information, see Deploying a Tang server with SELinux in enforcing mode . 
Procedure Bind an encrypted Stratis pool to NBDE: where my-pool Specifies the name of the encrypted Stratis pool. tang-server Specifies the IP address or URL of the Tang server. Additional resources Configuring automated unlocking of encrypted volumes using policy-based decryption 24.12. Binding a Stratis pool to TPM When you bind an encrypted Stratis pool to the Trusted Platform Module (TPM) 2.0 and the system containing the pool reboots, the pool is automatically unlocked without you having to provide the kernel keyring description. Prerequisites Stratis v2.3.0 or later is installed. For more information, see Installing Stratis . The stratisd service is running. You have created an encrypted Stratis pool. For more information, see Creating an encrypted Stratis pool . Procedure Bind an encrypted Stratis pool to TPM: where my-pool Specifies the name of the encrypted Stratis pool. key-description References the key that exists in the kernel keyring, which was generated when you created the encrypted Stratis pool. 24.13. Unlocking an encrypted Stratis pool with kernel keyring After a system reboot, your encrypted Stratis pool or the block devices that comprise it might not be visible. You can unlock the pool using the kernel keyring that was used to encrypt the pool. Prerequisites Stratis v2.1.0 is installed. For more information, see Installing Stratis . The stratisd service is running. You have created an encrypted Stratis pool. For more information, see Creating an encrypted Stratis pool . Procedure Re-create the key set using the same key description that was used previously: where key-description references the key that exists in the kernel keyring, which was generated when you created the encrypted Stratis pool. Verify that the Stratis pool is visible: 24.14. Unbinding a Stratis pool from supplementary encryption When you unbind an encrypted Stratis pool from a supported supplementary encryption mechanism, the primary kernel keyring encryption remains in place. This is not true for pools that are created with Clevis encryption from the start. Prerequisites Stratis v2.3.0 or later is installed on your system. For more information, see Installing Stratis . You have created an encrypted Stratis pool. For more information, see Creating an encrypted Stratis pool . The encrypted Stratis pool is bound to a supported supplementary encryption mechanism. Procedure Unbind an encrypted Stratis pool from a supplementary encryption mechanism: where my-pool specifies the name of the Stratis pool you want to unbind. Additional resources Binding an encrypted Stratis pool to NBDE Binding an encrypted Stratis pool to TPM 24.15. Starting and stopping Stratis pool You can start and stop Stratis pools. This gives you the option to disassemble or bring down all the objects that were used to construct the pool, such as file systems, cache devices, thin pool, and encrypted devices. Note that if the pool actively uses any device or file system, it might issue a warning and not be able to stop. The stopped state is recorded in the pool's metadata. These pools do not start on the next boot until the pool receives a start command. Prerequisites Stratis is installed. For more information, see Installing Stratis . The stratisd service is running. You have created either an unencrypted or an encrypted Stratis pool. See Creating an unencrypted Stratis pool or Creating an encrypted Stratis pool . Procedure Use the following command to start the Stratis pool.
The --unlock-method option specifies the method of unlocking the pool if it is encrypted: Alternatively, use the following command to stop the Stratis pool. This tears down the storage stack but leaves all metadata intact: Verification Use the following command to list all pools on the system: Use the following command to list all not previously started pools. If the UUID is specified, the command prints detailed information about the pool corresponding to the UUID: 24.16. Creating a Stratis file system Create a Stratis file system on an existing Stratis pool. Prerequisites Stratis is installed. For more information, see Installing Stratis . The stratisd service is running. You have created a Stratis pool. See Creating an unencrypted Stratis pool or Creating an encrypted Stratis pool . Procedure Create a Stratis file system on a pool: where number-and-unit Specifies the size of a file system. The specification format must follow the standard size specification format for input, that is B, KiB, MiB, GiB, TiB or PiB. my-pool Specifies the name of the Stratis pool. my-fs Specifies an arbitrary name for the file system. For example: Example 24.1. Creating a Stratis file system Verification List file systems within the pool to check if the Stratis file system is created: Additional resources Mounting a Stratis file system 24.17. Creating a file system on a Stratis pool by using the web console You can use the web console to create a file system on an existing Stratis pool. Prerequisites You have installed the RHEL 8 web console. You have enabled the cockpit service. Your user account is allowed to log in to the web console. For instructions, see Installing and enabling the web console . The stratisd service is running. A Stratis pool is created. Procedure Log in to the RHEL 8 web console. For details, see Logging in to the web console . Click Storage . Click the Stratis pool on which you want to create a file system. On the Stratis pool page, scroll to the Stratis filesystems section and click Create new filesystem . Enter a name for the file system. Enter a mount point for the file system. Select the mount option. In the At boot drop-down menu, select when you want to mount your file system. Create the file system: If you want to create and mount the file system, click Create and mount . If you want to only create the file system, click Create only . Verification The new file system is visible on the Stratis pool page under the Stratis filesystems tab. 24.18. Mounting a Stratis file system Mount an existing Stratis file system to access the content. Prerequisites Stratis is installed. For more information, see Installing Stratis . The stratisd service is running. You have created a Stratis file system. For more information, see Creating a Stratis file system . Procedure To mount the file system, use the entries that Stratis maintains in the /dev/stratis/ directory: The file system is now mounted on the mount-point directory and ready to use. 24.19. Setting up non-root Stratis file systems in /etc/fstab using a systemd service You can manage setting up non-root file systems in /etc/fstab using a systemd service. Prerequisites Stratis is installed. See Installing Stratis . The stratisd service is running. You have created a Stratis file system. See Creating a Stratis file system . Procedure As root, edit the /etc/fstab file and add a line to set up non-root file systems: Additional resources Persistently mounting file systems
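Taken together, the procedures in this chapter reduce to a short command sequence. The following is a minimal sketch, assuming a spare disk at /dev/sdb, a pool named my-pool, a file system named my-fs, and a mount point of /mnt/my-fs (all hypothetical values); each individual command is the one shown in this chapter.
# Wipe the device, create an unencrypted pool, and add a thinly provisioned file system
wipefs --all /dev/sdb
stratis pool create my-pool /dev/sdb
stratis filesystem create --size 10GiB my-pool my-fs
# Mount the file system through the link that Stratis maintains under /dev/stratis/
mkdir -p /mnt/my-fs
mount /dev/stratis/my-pool/my-fs /mnt/my-fs
# Verify the pool and file system
stratis pool list
stratis fs list my-pool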
[ "yum install stratisd stratis-cli", "systemctl enable --now stratisd", "wipefs --all block-device", "stratis pool create my-pool block-device", "stratis pool create my-pool block-device-1 block-device-2", "stratis pool list", "wipefs --all block-device", "stratis key set --capture-key key-description", "stratis pool create --key-desc key-description my-pool block-device", "stratis pool create --key-desc key-description my-pool block-device-1 block-device-2", "stratis pool list", "stratis pool create pool-name /dev/sdb", "stratis pool overprovision pool-name <yes|no>", "stratis pool list Name Total Physical Properties UUID Alerts pool-name 1.42 TiB / 23.96 MiB / 1.42 TiB ~Ca,~Cr,~Op cb7cb4d8-9322-4ac4-a6fd-eb7ae9e1e540", "stratis pool overprovision pool-name yes stratis pool list Name Total Physical Properties UUID Alerts pool-name 1.42 TiB / 23.96 MiB / 1.42 TiB ~Ca,~Cr,~Op cb7cb4d8-9322-4ac4-a6fd-eb7ae9e1e540", "stratis pool bind nbde --trust-url my-pool tang-server", "stratis pool bind tpm my-pool key-description", "stratis key set --capture-key key-description", "stratis pool list", "stratis pool unbind clevis my-pool", "stratis pool start pool-uuid --unlock-method <keyring|clevis>", "stratis pool stop pool-name", "stratis pool list", "stratis pool list --stopped --uuid UUID", "stratis filesystem create --size number-and-unit my-pool my-fs", "stratis filesystem create --size 10GiB pool1 filesystem1", "stratis fs list my-pool", "mount /dev/stratis/ my-pool / my-fs mount-point", "/dev/stratis/ my-pool/my-fs mount-point xfs defaults,x-systemd.requires=stratis-fstab-setup@ pool-uuid .service,x-systemd.after=stratis-fstab-setup@ pool-uuid .service dump-value fsck_value" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/managing_storage_devices/setting-up-stratis-file-systems
Chapter 5. Using build strategies
Chapter 5. Using build strategies The following sections define the primary supported build strategies, and how to use them. 5.1. Docker build Red Hat OpenShift Service on AWS uses Buildah to build a container image from a Dockerfile. For more information on building container images with Dockerfiles, see the Dockerfile reference documentation . Tip If you set Docker build arguments by using the buildArgs array, see Understand how ARG and FROM interact in the Dockerfile reference documentation. 5.1.1. Replacing the Dockerfile FROM image You can replace the FROM instruction of the Dockerfile with the from parameters of the BuildConfig object. If the Dockerfile uses multi-stage builds, the image in the last FROM instruction will be replaced. Procedure To replace the FROM instruction of the Dockerfile with the from parameters of the BuildConfig object, add the following settings to the BuildConfig object: strategy: dockerStrategy: from: kind: "ImageStreamTag" name: "debian:latest" 5.1.2. Using Dockerfile path By default, docker builds use a Dockerfile located at the root of the context specified in the BuildConfig.spec.source.contextDir field. The dockerfilePath field allows the build to use a different path to locate your Dockerfile, relative to the BuildConfig.spec.source.contextDir field. It can be a different file name than the default Dockerfile, such as MyDockerfile , or a path to a Dockerfile in a subdirectory, such as dockerfiles/app1/Dockerfile . Procedure Set the dockerfilePath field for the build to use a different path to locate your Dockerfile: strategy: dockerStrategy: dockerfilePath: dockerfiles/app1/Dockerfile 5.1.3. Using docker environment variables To make environment variables available to the docker build process and resulting image, you can add environment variables to the dockerStrategy definition of the build configuration. The environment variables defined there are inserted as a single ENV Dockerfile instruction right after the FROM instruction, so that it can be referenced later on within the Dockerfile. The variables are defined during build and stay in the output image, therefore they will be present in any container that runs that image as well. For example, defining a custom HTTP proxy to be used during build and runtime: dockerStrategy: ... env: - name: "HTTP_PROXY" value: "http://myproxy.net:5187/" You can also manage environment variables defined in the build configuration with the oc set env command. 5.1.4. Adding Docker build arguments You can set Docker build arguments using the buildArgs array. The build arguments are passed to Docker when a build is started. Tip See Understand how ARG and FROM interact in the Dockerfile reference documentation. Procedure To set Docker build arguments, add entries to the buildArgs array, which is located in the dockerStrategy definition of the BuildConfig object. For example: dockerStrategy: ... buildArgs: - name: "version" value: "latest" Note Only the name and value fields are supported. Any settings on the valueFrom field are ignored. 5.1.5. Squashing layers with docker builds Docker builds normally create a layer representing each instruction in a Dockerfile. Setting the imageOptimizationPolicy to SkipLayers merges all instructions into a single layer on top of the base image. Procedure Set the imageOptimizationPolicy to SkipLayers : strategy: dockerStrategy: imageOptimizationPolicy: SkipLayers 5.1.6. 
Using build volumes You can mount build volumes to give running builds access to information that you do not want to persist in the output container image. Build volumes provide sensitive information, such as repository credentials, that the build environment or configuration only needs at build time. Build volumes are different from build inputs, whose data can persist in the output container image. The mount points of build volumes, from which the running build reads data, are functionally similar to pod volume mounts . Prerequisites You have added an input secret, config map, or both to a BuildConfig object. Procedure In the dockerStrategy definition of the BuildConfig object, add any build volumes to the volumes array. For example: spec: dockerStrategy: volumes: - name: secret-mvn 1 mounts: - destinationPath: /opt/app-root/src/.ssh 2 source: type: Secret 3 secret: secretName: my-secret 4 - name: settings-mvn 5 mounts: - destinationPath: /opt/app-root/src/.m2 6 source: type: ConfigMap 7 configMap: name: my-config 8 1 5 Required. A unique name. 2 6 Required. The absolute path of the mount point. It must not contain .. or : and does not collide with the destination path generated by the builder. The /opt/app-root/src is the default home directory for many Red Hat S2I-enabled images. 3 7 Required. The type of source, ConfigMap , Secret , or CSI . 4 8 Required. The name of the source. Additional resources Build inputs Input secrets and config maps 5.2. Source-to-image build Source-to-image (S2I) is a tool for building reproducible container images. It produces ready-to-run images by injecting application source into a container image and assembling a new image. The new image incorporates the base image, the builder, and built source and is ready to use with the buildah run command. S2I supports incremental builds, which re-use previously downloaded dependencies, previously built artifacts, and so on. 5.2.1. Performing source-to-image incremental builds Source-to-image (S2I) can perform incremental builds, which means it reuses artifacts from previously-built images. Procedure To create an incremental build, create a build configuration with the following modification to the strategy definition: strategy: sourceStrategy: from: kind: "ImageStreamTag" name: "incremental-image:latest" 1 incremental: true 2 1 Specify an image that supports incremental builds. Consult the documentation of the builder image to determine if it supports this behavior. 2 This flag controls whether an incremental build is attempted. If the builder image does not support incremental builds, the build will still succeed, but you will get a log message stating the incremental build was not successful because of a missing save-artifacts script. Additional resources See S2I Requirements for information on how to create a builder image supporting incremental builds. 5.2.2. Overriding source-to-image builder image scripts You can override the assemble , run , and save-artifacts source-to-image (S2I) scripts provided by the builder image. Procedure To override the assemble , run , and save-artifacts S2I scripts provided by the builder image, complete one of the following actions: Provide an assemble , run , or save-artifacts script in the .s2i/bin directory of your application source repository. Provide a URL of a directory containing the scripts as part of the strategy definition in the BuildConfig object.
For example: strategy: sourceStrategy: from: kind: "ImageStreamTag" name: "builder-image:latest" scripts: "http://somehost.com/scripts_directory" 1 1 The build process appends run , assemble , and save-artifacts to the path. If any or all scripts with these names exist, the build process uses these scripts in place of scripts with the same name that are provided in the image. Note Files located at the scripts URL take precedence over files located in .s2i/bin of the source repository. 5.2.3. Source-to-image environment variables There are two ways to make environment variables available to the source build process and resulting image: environment files and BuildConfig environment values. The variables that you provide using either method will be present during the build process and in the output image. 5.2.3.1. Using source-to-image environment files Source build enables you to set environment values, one per line, inside your application, by specifying them in a .s2i/environment file in the source repository. The environment variables specified in this file are present during the build process and in the output image. If you provide a .s2i/environment file in your source repository, source-to-image (S2I) reads this file during the build. This allows customization of the build behavior as the assemble script may use these variables. Procedure For example, to disable assets compilation for your Rails application during the build: Add DISABLE_ASSET_COMPILATION=true in the .s2i/environment file. In addition to builds, the specified environment variables are also available in the running application itself. For example, to cause the Rails application to start in development mode instead of production : Add RAILS_ENV=development to the .s2i/environment file. The complete list of supported environment variables is available in the using images section for each image. 5.2.3.2. Using source-to-image build configuration environment You can add environment variables to the sourceStrategy definition of the build configuration. The environment variables defined there are visible during the assemble script execution and will be defined in the output image, making them also available to the run script and application code. Procedure For example, to disable assets compilation for your Rails application: sourceStrategy: ... env: - name: "DISABLE_ASSET_COMPILATION" value: "true" Additional resources The build environment section provides more advanced instructions. You can also manage environment variables defined in the build configuration with the oc set env command. 5.2.4. Ignoring source-to-image source files Source-to-image (S2I) supports a .s2iignore file, which contains a list of file patterns that should be ignored. Files in the build working directory, as provided by the various input sources, that match a pattern found in the .s2iignore file will not be made available to the assemble script. 5.2.5. Creating images from source code with source-to-image Source-to-image (S2I) is a framework that makes it easy to write images that take application source code as an input and produce a new image that runs the assembled application as output. The main advantage of using S2I for building reproducible container images is the ease of use for developers. As a builder image author, you must understand two basic concepts in order for your images to provide the best S2I performance, the build process and S2I scripts. 5.2.5.1. 
Understanding the source-to-image build process The build process consists of the following three fundamental elements, which are combined into a final container image: Sources Source-to-image (S2I) scripts Builder image S2I generates a Dockerfile with the builder image as the first FROM instruction. The Dockerfile generated by S2I is then passed to Buildah. 5.2.5.2. How to write source-to-image scripts You can write source-to-image (S2I) scripts in any programming language, as long as the scripts are executable inside the builder image. S2I supports multiple options providing assemble / run / save-artifacts scripts. All of these locations are checked on each build in the following order: A script specified in the build configuration. A script found in the application source .s2i/bin directory. A script found at the default image URL with the io.openshift.s2i.scripts-url label. Both the io.openshift.s2i.scripts-url label specified in the image and the script specified in a build configuration can take one of the following forms: image:///path_to_scripts_dir : absolute path inside the image to a directory where the S2I scripts are located. file:///path_to_scripts_dir : relative or absolute path to a directory on the host where the S2I scripts are located. http(s)://path_to_scripts_dir : URL to a directory where the S2I scripts are located. Table 5.1. S2I scripts Script Description assemble The assemble script builds the application artifacts from a source and places them into appropriate directories inside the image. This script is required. The workflow for this script is: Optional: Restore build artifacts. If you want to support incremental builds, make sure to define save-artifacts as well. Place the application source in the desired location. Build the application artifacts. Install the artifacts into locations appropriate for them to run. run The run script executes your application. This script is required. save-artifacts The save-artifacts script gathers all dependencies that can speed up the build processes that follow. This script is optional. For example: For Ruby, gems installed by Bundler. For Java, .m2 contents. These dependencies are gathered into a tar file and streamed to the standard output. usage The usage script allows you to inform the user how to properly use your image. This script is optional. test/run The test/run script allows you to create a process to check if the image is working correctly. This script is optional. The proposed flow of that process is: Build the image. Run the image to verify the usage script. Run s2i build to verify the assemble script. Optional: Run s2i build again to verify the save-artifacts and assemble scripts save and restore artifacts functionality. Run the image to verify the test application is working. Note The suggested location to put the test application built by your test/run script is the test/test-app directory in your image repository. Example S2I scripts The following example S2I scripts are written in Bash. Each example assumes its tar contents are unpacked into the /tmp/s2i directory. assemble script: #!/bin/bash # restore build artifacts if [ "USD(ls /tmp/s2i/artifacts/ 2>/dev/null)" ]; then mv /tmp/s2i/artifacts/* USDHOME/. 
fi # move the application source mv /tmp/s2i/src USDHOME/src # build application artifacts pushd USD{HOME} make all # install the artifacts make install popd run script: #!/bin/bash # run the application /opt/application/run.sh save-artifacts script: #!/bin/bash pushd USD{HOME} if [ -d deps ]; then # all deps contents to tar stream tar cf - deps fi popd usage script: #!/bin/bash # inform the user how to use the image cat <<EOF This is a S2I sample builder image, to use it, install https://github.com/openshift/source-to-image EOF Additional resources S2I Image Creation Tutorial 5.2.6. Using build volumes You can mount build volumes to give running builds access to information that you do not want to persist in the output container image. Build volumes provide sensitive information, such as repository credentials, that the build environment or configuration only needs at build time. Build volumes are different from build inputs, whose data can persist in the output container image. The mount points of build volumes, from which the running build reads data, are functionally similar to pod volume mounts . Prerequisites You have added an input secret, config map, or both to a BuildConfig object. Procedure In the sourceStrategy definition of the BuildConfig object, add any build volumes to the volumes array. For example: spec: sourceStrategy: volumes: - name: secret-mvn 1 mounts: - destinationPath: /opt/app-root/src/.ssh 2 source: type: Secret 3 secret: secretName: my-secret 4 - name: settings-mvn 5 mounts: - destinationPath: /opt/app-root/src/.m2 6 source: type: ConfigMap 7 configMap: name: my-config 8 1 5 Required. A unique name. 2 6 Required. The absolute path of the mount point. It must not contain .. or : and does not collide with the destination path generated by the builder. The /opt/app-root/src is the default home directory for many Red Hat S2I-enabled images. 3 7 Required. The type of source, ConfigMap , Secret , or CSI . 4 8 Required. The name of the source. Additional resources Build inputs Input secrets and config maps 5.3. Pipeline build Important The Pipeline build strategy is deprecated in Red Hat OpenShift Service on AWS 4. Equivalent and improved functionality is present in the Red Hat OpenShift Service on AWS Pipelines based on Tekton. Jenkins images on Red Hat OpenShift Service on AWS are fully supported and users should follow Jenkins user documentation for defining their jenkinsfile in a job or store it in a Source Control Management system. The Pipeline build strategy allows developers to define a Jenkins pipeline for use by the Jenkins pipeline plugin. The build can be started, monitored, and managed by Red Hat OpenShift Service on AWS in the same way as any other build type. Pipeline workflows are defined in a jenkinsfile , either embedded directly in the build configuration, or supplied in a Git repository and referenced by the build configuration. 5.3.1. Understanding Red Hat OpenShift Service on AWS pipelines Important The Pipeline build strategy is deprecated in Red Hat OpenShift Service on AWS 4. Equivalent and improved functionality is present in the Red Hat OpenShift Service on AWS Pipelines based on Tekton. Jenkins images on Red Hat OpenShift Service on AWS are fully supported and users should follow Jenkins user documentation for defining their jenkinsfile in a job or store it in a Source Control Management system. Pipelines give you control over building, deploying, and promoting your applications on Red Hat OpenShift Service on AWS. 
Using a combination of the Jenkins Pipeline build strategy, jenkinsfiles , and the Red Hat OpenShift Service on AWS Domain Specific Language (DSL) provided by the Jenkins Client Plugin, you can create advanced build, test, deploy, and promote pipelines for any scenario. Red Hat OpenShift Service on AWS Jenkins Sync Plugin The Red Hat OpenShift Service on AWS Jenkins Sync Plugin keeps the build configuration and build objects in sync with Jenkins jobs and builds, and provides the following: Dynamic job and run creation in Jenkins. Dynamic creation of agent pod templates from image streams, image stream tags, or config maps. Injection of environment variables. Pipeline visualization in the Red Hat OpenShift Service on AWS web console. Integration with the Jenkins Git plugin, which passes commit information from Red Hat OpenShift Service on AWS builds to the Jenkins Git plugin. Synchronization of secrets into Jenkins credential entries. Red Hat OpenShift Service on AWS Jenkins Client Plugin The Red Hat OpenShift Service on AWS Jenkins Client Plugin is a Jenkins plugin which aims to provide a readable, concise, comprehensive, and fluent Jenkins Pipeline syntax for rich interactions with a Red Hat OpenShift Service on AWS API Server. The plugin uses the Red Hat OpenShift Service on AWS command line tool, oc , which must be available on the nodes executing the script. The Jenkins Client Plugin must be installed on your Jenkins master so the Red Hat OpenShift Service on AWS DSL will be available to use within the jenkinsfile for your application. This plugin is installed and enabled by default when using the Red Hat OpenShift Service on AWS Jenkins image. For Red Hat OpenShift Service on AWS Pipelines within your project, you must use the Jenkins Pipeline Build Strategy. This strategy defaults to using a jenkinsfile at the root of your source repository, but also provides the following configuration options: An inline jenkinsfile field within your build configuration. A jenkinsfilePath field within your build configuration that references the location of the jenkinsfile to use relative to the source contextDir . Note The optional jenkinsfilePath field specifies the name of the file to use, relative to the source contextDir . If contextDir is omitted, it defaults to the root of the repository. If jenkinsfilePath is omitted, it defaults to jenkinsfile . 5.3.2. Providing the Jenkins file for pipeline builds Important The Pipeline build strategy is deprecated in Red Hat OpenShift Service on AWS 4. Equivalent and improved functionality is present in the Red Hat OpenShift Service on AWS Pipelines based on Tekton. Jenkins images on Red Hat OpenShift Service on AWS are fully supported and users should follow Jenkins user documentation for defining their jenkinsfile in a job or store it in a Source Control Management system. The jenkinsfile uses the standard Groovy language syntax to allow fine-grained control over the configuration, build, and deployment of your application. You can supply the jenkinsfile in one of the following ways: A file located within your source code repository. Embedded as part of your build configuration using the jenkinsfile field. When using the first option, the jenkinsfile must be included in your application's source code repository at one of the following locations: A file named jenkinsfile at the root of your repository. A file named jenkinsfile at the root of the source contextDir of your repository.
A file name specified via the jenkinsfilePath field of the JenkinsPipelineStrategy section of your BuildConfig, which is relative to the source contextDir if supplied, otherwise it defaults to the root of the repository. The jenkinsfile is run on the Jenkins agent pod, which must have the Red Hat OpenShift Service on AWS client binaries available if you intend to use the Red Hat OpenShift Service on AWS DSL. Procedure To provide the Jenkins file, you can either: Embed the Jenkins file in the build configuration. Include in the build configuration a reference to the Git repository that contains the Jenkins file. Embedded Definition kind: "BuildConfig" apiVersion: "v1" metadata: name: "sample-pipeline" spec: strategy: jenkinsPipelineStrategy: jenkinsfile: |- node('agent') { stage 'build' openshiftBuild(buildConfig: 'ruby-sample-build', showBuildLogs: 'true') stage 'deploy' openshiftDeploy(deploymentConfig: 'frontend') } Reference to Git Repository kind: "BuildConfig" apiVersion: "v1" metadata: name: "sample-pipeline" spec: source: git: uri: "https://github.com/openshift/ruby-hello-world" strategy: jenkinsPipelineStrategy: jenkinsfilePath: some/repo/dir/filename 1 1 The optional jenkinsfilePath field specifies the name of the file to use, relative to the source contextDir . If contextDir is omitted, it defaults to the root of the repository. If jenkinsfilePath is omitted, it defaults to jenkinsfile . 5.3.3. Using environment variables for pipeline builds Important The Pipeline build strategy is deprecated in Red Hat OpenShift Service on AWS 4. Equivalent and improved functionality is present in the Red Hat OpenShift Service on AWS Pipelines based on Tekton. Jenkins images on Red Hat OpenShift Service on AWS are fully supported and users should follow Jenkins user documentation for defining their jenkinsfile in a job or store it in a Source Control Management system. To make environment variables available to the Pipeline build process, you can add environment variables to the jenkinsPipelineStrategy definition of the build configuration. Once defined, the environment variables will be set as parameters for any Jenkins job associated with the build configuration. Procedure To define environment variables to be used during build, edit the YAML file: jenkinsPipelineStrategy: ... env: - name: "FOO" value: "BAR" You can also manage environment variables defined in the build configuration with the oc set env command. 5.3.3.1. Mapping between BuildConfig environment variables and Jenkins job parameters When a Jenkins job is created or updated based on changes to a Pipeline strategy build configuration, any environment variables in the build configuration are mapped to Jenkins job parameters definitions, where the default values for the Jenkins job parameters definitions are the current values of the associated environment variables. After the Jenkins job's initial creation, you can still add additional parameters to the job from the Jenkins console. The parameter names differ from the names of the environment variables in the build configuration. The parameters are honored when builds are started for those Jenkins jobs. How you start builds for the Jenkins job dictates how the parameters are set. If you start with oc start-build , the values of the environment variables in the build configuration are the parameters set for the corresponding job instance. Any changes you make to the parameters' default values from the Jenkins console are ignored. The build configuration values take precedence. 
If you start with oc start-build -e , the values for the environment variables specified in the -e option take precedence. If you specify an environment variable not listed in the build configuration, it is added as a Jenkins job parameter definition. Any changes you make from the Jenkins console to the parameters corresponding to the environment variables are ignored. The build configuration and what you specify with oc start-build -e take precedence. If you start the Jenkins job with the Jenkins console, then you can control the setting of the parameters with the Jenkins console as part of starting a build for the job. Note It is recommended that you specify in the build configuration all possible environment variables to be associated with job parameters. Doing so reduces disk I/O and improves performance during Jenkins processing. 5.3.4. Pipeline build tutorial Important The Pipeline build strategy is deprecated in Red Hat OpenShift Service on AWS 4. Equivalent and improved functionality is present in the Red Hat OpenShift Service on AWS Pipelines based on Tekton. Jenkins images on Red Hat OpenShift Service on AWS are fully supported and users should follow Jenkins user documentation for defining their jenkinsfile in a job or store it in a Source Control Management system. This example demonstrates how to create a Red Hat OpenShift Service on AWS Pipeline that will build, deploy, and verify a Node.js/MongoDB application using the nodejs-mongodb.json template. Procedure Create the Jenkins master: USD oc project <project_name> Select the project that you want to use or create a new project with oc new-project <project_name> . USD oc new-app jenkins-ephemeral 1 If you want to use persistent storage, use jenkins-persistent instead. Create a file named nodejs-sample-pipeline.yaml with the following content: Note This creates a BuildConfig object that employs the Jenkins pipeline strategy to build, deploy, and scale the Node.js/MongoDB example application. kind: "BuildConfig" apiVersion: "v1" metadata: name: "nodejs-sample-pipeline" spec: strategy: jenkinsPipelineStrategy: jenkinsfile: <pipeline content from below> type: JenkinsPipeline After you create a BuildConfig object with a jenkinsPipelineStrategy , tell the pipeline what to do by using an inline jenkinsfile : Note This example does not set up a Git repository for the application. The following jenkinsfile content is written in Groovy using the Red Hat OpenShift Service on AWS DSL. For this example, include inline content in the BuildConfig object using the YAML Literal Style, though including a jenkinsfile in your source repository is the preferred method.
def templatePath = 'https://raw.githubusercontent.com/openshift/nodejs-ex/master/openshift/templates/nodejs-mongodb.json' 1 def templateName = 'nodejs-mongodb-example' 2 pipeline { agent { node { label 'nodejs' 3 } } options { timeout(time: 20, unit: 'MINUTES') 4 } stages { stage('preamble') { steps { script { openshift.withCluster() { openshift.withProject() { echo "Using project: USD{openshift.project()}" } } } } } stage('cleanup') { steps { script { openshift.withCluster() { openshift.withProject() { openshift.selector("all", [ template : templateName ]).delete() 5 if (openshift.selector("secrets", templateName).exists()) { 6 openshift.selector("secrets", templateName).delete() } } } } } } stage('create') { steps { script { openshift.withCluster() { openshift.withProject() { openshift.newApp(templatePath) 7 } } } } } stage('build') { steps { script { openshift.withCluster() { openshift.withProject() { def builds = openshift.selector("bc", templateName).related('builds') timeout(5) { 8 builds.untilEach(1) { return (it.object().status.phase == "Complete") } } } } } } } stage('deploy') { steps { script { openshift.withCluster() { openshift.withProject() { def rm = openshift.selector("dc", templateName).rollout() timeout(5) { 9 openshift.selector("dc", templateName).related('pods').untilEach(1) { return (it.object().status.phase == "Running") } } } } } } } stage('tag') { steps { script { openshift.withCluster() { openshift.withProject() { openshift.tag("USD{templateName}:latest", "USD{templateName}-staging:latest") 10 } } } } } } } 1 Path of the template to use. 1 2 Name of the template that will be created. 3 Spin up a node.js agent pod on which to run this build. 4 Set a timeout of 20 minutes for this pipeline. 5 Delete everything with this template label. 6 Delete any secrets with this template label. 7 Create a new application from the templatePath . 8 Wait up to five minutes for the build to complete. 9 Wait up to five minutes for the deployment to complete. 10 If everything else succeeded, tag the USD {templateName}:latest image as USD {templateName}-staging:latest . A pipeline build configuration for the staging environment can watch for the USD {templateName}-staging:latest image to change and then deploy it to the staging environment. Note The example was written using the declarative pipeline style, but the older scripted pipeline style is also supported. Create the Pipeline BuildConfig in your Red Hat OpenShift Service on AWS cluster: USD oc create -f nodejs-sample-pipeline.yaml If you do not want to create your own file, you can use the sample from the Origin repository by running: USD oc create -f https://raw.githubusercontent.com/openshift/origin/master/examples/jenkins/pipeline/nodejs-sample-pipeline.yaml Start the Pipeline: USD oc start-build nodejs-sample-pipeline Note Alternatively, you can start your pipeline with the Red Hat OpenShift Service on AWS web console by navigating to the Builds Pipeline section and clicking Start Pipeline , or by visiting the Jenkins Console, navigating to the Pipeline that you created, and clicking Build Now . Once the pipeline is started, you should see the following actions performed within your project: A job instance is created on the Jenkins server. An agent pod is launched, if your pipeline requires one. The pipeline runs on the agent pod, or the master if no agent is required. Any previously created resources with the template=nodejs-mongodb-example label will be deleted. 
A new application, and all of its associated resources, will be created from the nodejs-mongodb-example template. A build will be started using the nodejs-mongodb-example BuildConfig . The pipeline will wait until the build has completed to trigger the next stage. A deployment will be started using the nodejs-mongodb-example deployment configuration. The pipeline will wait until the deployment has completed to trigger the next stage. If the build and deploy are successful, the nodejs-mongodb-example:latest image will be tagged as nodejs-mongodb-example-staging:latest . The agent pod is deleted, if one was required for the pipeline. Note The best way to visualize the pipeline execution is by viewing it in the Red Hat OpenShift Service on AWS web console. You can view your pipelines by logging in to the web console and navigating to Builds Pipelines. 5.4. Adding secrets with web console You can add a secret to your build configuration so that it can access a private repository. Procedure To add a secret to your build configuration so that it can access a private repository from the Red Hat OpenShift Service on AWS web console: Create a new Red Hat OpenShift Service on AWS project. Create a secret that contains credentials for accessing a private source code repository. Create a build configuration. On the build configuration editor page or in the create app from builder image page of the web console, set the Source Secret . Click Save . 5.5. Enabling pulling and pushing You can enable pulling to a private registry by setting the pull secret and pushing by setting the push secret in the build configuration. Procedure To enable pulling to a private registry: Set the pull secret in the build configuration. To enable pushing: Set the push secret in the build configuration.
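As a small illustration of the CLI workflow referenced throughout this chapter, the following sketch assumes a BuildConfig named myapp (a hypothetical name) and uses only commands the chapter already mentions: oc set env to manage strategy environment variables and oc start-build to trigger a build.
# Set or update an environment variable on the build configuration
oc set env bc/myapp HTTP_PROXY=http://myproxy.net:5187/
# Trigger a new build that picks up the updated configuration
oc start-build myapp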
[ "strategy: dockerStrategy: from: kind: \"ImageStreamTag\" name: \"debian:latest\"", "strategy: dockerStrategy: dockerfilePath: dockerfiles/app1/Dockerfile", "dockerStrategy: env: - name: \"HTTP_PROXY\" value: \"http://myproxy.net:5187/\"", "dockerStrategy: buildArgs: - name: \"version\" value: \"latest\"", "strategy: dockerStrategy: imageOptimizationPolicy: SkipLayers", "spec: dockerStrategy: volumes: - name: secret-mvn 1 mounts: - destinationPath: /opt/app-root/src/.ssh 2 source: type: Secret 3 secret: secretName: my-secret 4 - name: settings-mvn 5 mounts: - destinationPath: /opt/app-root/src/.m2 6 source: type: ConfigMap 7 configMap: name: my-config 8", "strategy: sourceStrategy: from: kind: \"ImageStreamTag\" name: \"incremental-image:latest\" 1 incremental: true 2", "strategy: sourceStrategy: from: kind: \"ImageStreamTag\" name: \"builder-image:latest\" scripts: \"http://somehost.com/scripts_directory\" 1", "sourceStrategy: env: - name: \"DISABLE_ASSET_COMPILATION\" value: \"true\"", "#!/bin/bash restore build artifacts if [ \"USD(ls /tmp/s2i/artifacts/ 2>/dev/null)\" ]; then mv /tmp/s2i/artifacts/* USDHOME/. fi move the application source mv /tmp/s2i/src USDHOME/src build application artifacts pushd USD{HOME} make all install the artifacts make install popd", "#!/bin/bash run the application /opt/application/run.sh", "#!/bin/bash pushd USD{HOME} if [ -d deps ]; then # all deps contents to tar stream tar cf - deps fi popd", "#!/bin/bash inform the user how to use the image cat <<EOF This is a S2I sample builder image, to use it, install https://github.com/openshift/source-to-image EOF", "spec: sourceStrategy: volumes: - name: secret-mvn 1 mounts: - destinationPath: /opt/app-root/src/.ssh 2 source: type: Secret 3 secret: secretName: my-secret 4 - name: settings-mvn 5 mounts: - destinationPath: /opt/app-root/src/.m2 6 source: type: ConfigMap 7 configMap: name: my-config 8", "kind: \"BuildConfig\" apiVersion: \"v1\" metadata: name: \"sample-pipeline\" spec: strategy: jenkinsPipelineStrategy: jenkinsfile: |- node('agent') { stage 'build' openshiftBuild(buildConfig: 'ruby-sample-build', showBuildLogs: 'true') stage 'deploy' openshiftDeploy(deploymentConfig: 'frontend') }", "kind: \"BuildConfig\" apiVersion: \"v1\" metadata: name: \"sample-pipeline\" spec: source: git: uri: \"https://github.com/openshift/ruby-hello-world\" strategy: jenkinsPipelineStrategy: jenkinsfilePath: some/repo/dir/filename 1", "jenkinsPipelineStrategy: env: - name: \"FOO\" value: \"BAR\"", "oc project <project_name>", "oc new-app jenkins-ephemeral 1", "kind: \"BuildConfig\" apiVersion: \"v1\" metadata: name: \"nodejs-sample-pipeline\" spec: strategy: jenkinsPipelineStrategy: jenkinsfile: <pipeline content from below> type: JenkinsPipeline", "def templatePath = 'https://raw.githubusercontent.com/openshift/nodejs-ex/master/openshift/templates/nodejs-mongodb.json' 1 def templateName = 'nodejs-mongodb-example' 2 pipeline { agent { node { label 'nodejs' 3 } } options { timeout(time: 20, unit: 'MINUTES') 4 } stages { stage('preamble') { steps { script { openshift.withCluster() { openshift.withProject() { echo \"Using project: USD{openshift.project()}\" } } } } } stage('cleanup') { steps { script { openshift.withCluster() { openshift.withProject() { openshift.selector(\"all\", [ template : templateName ]).delete() 5 if (openshift.selector(\"secrets\", templateName).exists()) { 6 openshift.selector(\"secrets\", templateName).delete() } } } } } } stage('create') { steps { script { openshift.withCluster() { 
openshift.withProject() { openshift.newApp(templatePath) 7 } } } } } stage('build') { steps { script { openshift.withCluster() { openshift.withProject() { def builds = openshift.selector(\"bc\", templateName).related('builds') timeout(5) { 8 builds.untilEach(1) { return (it.object().status.phase == \"Complete\") } } } } } } } stage('deploy') { steps { script { openshift.withCluster() { openshift.withProject() { def rm = openshift.selector(\"dc\", templateName).rollout() timeout(5) { 9 openshift.selector(\"dc\", templateName).related('pods').untilEach(1) { return (it.object().status.phase == \"Running\") } } } } } } } stage('tag') { steps { script { openshift.withCluster() { openshift.withProject() { openshift.tag(\"USD{templateName}:latest\", \"USD{templateName}-staging:latest\") 10 } } } } } } }", "oc create -f nodejs-sample-pipeline.yaml", "oc create -f https://raw.githubusercontent.com/openshift/origin/master/examples/jenkins/pipeline/nodejs-sample-pipeline.yaml", "oc start-build nodejs-sample-pipeline" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/builds_using_buildconfig/build-strategies
Chapter 38. Executing rules
Chapter 38. Executing rules After you identify example rules or create your own rules in Business Central, you can build and deploy the associated project and execute rules locally or on KIE Server to test the rules. Prerequisites Business Central and KIE Server are installed and running. For installation options, see Planning a Red Hat Decision Manager installation . Procedure In Business Central, go to Menu Design Projects and click the project name. In the upper-right corner of the project Assets page, click Deploy to build the project and deploy it to KIE Server. If the build fails, address any problems described in the Alerts panel at the bottom of the screen. For more information about project deployment options, see Packaging and deploying an Red Hat Decision Manager project . Note If the rule assets in your project are not built from an executable rule model by default, verify that the following dependency is in the pom.xml file of your project and rebuild the project: <dependency> <groupId>org.drools</groupId> <artifactId>drools-model-compiler</artifactId> <version>USD{rhpam.version}</version> </dependency> This dependency is required for rule assets in Red Hat Decision Manager to be built from executable rule models by default. This dependency is included as part of the Red Hat Decision Manager core packaging, but depending on your Red Hat Decision Manager upgrade history, you may need to manually add this dependency to enable the executable rule model behavior. For more information about executable rule models, see Packaging and deploying an Red Hat Decision Manager project . Create a Maven or Java project outside of Business Central, if not created already, that you can use for executing rules locally or that you can use as a client application for executing rules on KIE Server. The project must contain a pom.xml file and any other required components for executing the project resources. For example test projects, see "Other methods for creating and executing DRL rules" . Open the pom.xml file of your test project or client application and add the following dependencies, if not added already: kie-ci : Enables your client application to load Business Central project data locally using ReleaseId kie-server-client : Enables your client application to interact remotely with assets on KIE Server slf4j : (Optional) Enables your client application to use Simple Logging Facade for Java (SLF4J) to return debug logging information after you interact with KIE Server Example dependencies for Red Hat Decision Manager 7.13 in a client application pom.xml file: <!-- For local execution --> <dependency> <groupId>org.kie</groupId> <artifactId>kie-ci</artifactId> <version>7.67.0.Final-redhat-00024</version> </dependency> <!-- For remote execution on KIE Server --> <dependency> <groupId>org.kie.server</groupId> <artifactId>kie-server-client</artifactId> <version>7.67.0.Final-redhat-00024</version> </dependency> <!-- For debug logging (optional) --> <dependency> <groupId>org.slf4j</groupId> <artifactId>slf4j-simple</artifactId> <version>1.7.25</version> </dependency> For available versions of these artifacts, search the group ID and artifact ID in the Nexus Repository Manager online. Note Instead of specifying a Red Hat Decision Manager <version> for individual dependencies, consider adding the Red Hat Business Automation bill of materials (BOM) dependency to your project pom.xml file. 
The Red Hat Business Automation BOM applies to both Red Hat Decision Manager and Red Hat Process Automation Manager. When you add the BOM files, the correct versions of transitive dependencies from the provided Maven repositories are included in the project. Example BOM dependency: <dependency> <groupId>com.redhat.ba</groupId> <artifactId>ba-platform-bom</artifactId> <version>7.13.5.redhat-00002</version> <scope>import</scope> <type>pom</type> </dependency> For more information about the Red Hat Business Automation BOM, see What is the mapping between Red Hat Process Automation Manager and the Maven library version? . Ensure that the dependencies for artifacts containing model classes are defined in the client application pom.xml file exactly as they appear in the pom.xml file of the deployed project. If dependencies for model classes differ between the client application and your projects, execution errors can occur. To access the project pom.xml file in Business Central, select any existing asset in the project and then in the Project Explorer menu on the left side of the screen, click the Customize View gear icon and select Repository View pom.xml . For example, the following Person class dependency appears in both the client and deployed project pom.xml files: <dependency> <groupId>com.sample</groupId> <artifactId>Person</artifactId> <version>1.0.0</version> </dependency> If you added the slf4j dependency to the client application pom.xml file for debug logging, create a simplelogger.properties file on the relevant classpath (for example, in src/main/resources/META-INF in Maven) with the following content: org.slf4j.simpleLogger.defaultLogLevel=debug In your client application, create a .java main class containing the necessary imports and a main() method to load the KIE base, insert facts, and execute the rules. For example, a Person object in a project contains getter and setter methods to set and retrieve the first name, last name, hourly rate, and the wage of a person. 
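A minimal sketch of such a Person class; the field names follow the getters and setters described above, and the numeric types are assumptions based on the values used in the example:

package com.sample;

public class Person {

    private String firstName;
    private String lastName;
    private int hourlyRate;
    private int wage;

    public String getFirstName() { return firstName; }
    public void setFirstName(String firstName) { this.firstName = firstName; }

    public String getLastName() { return lastName; }
    public void setLastName(String lastName) { this.lastName = lastName; }

    public int getHourlyRate() { return hourlyRate; }
    public void setHourlyRate(int hourlyRate) { this.hourlyRate = hourlyRate; }

    public int getWage() { return wage; }
    public void setWage(int wage) { this.wage = wage; }
}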
The following Wage rule in a project calculates the wage and hourly rate values and displays a message based on the result: package com.sample; import com.sample.Person; dialect "java" rule "Wage" when Person(hourlyRate * wage > 100) Person(name : firstName, surname : lastName) then System.out.println("Hello" + " " + name + " " + surname + "!"); System.out.println("You are rich!"); end To test this rule locally outside of KIE Server (if needed), configure the .java class to import KIE services, a KIE container, and a KIE session, and then use the main() method to fire all rules against a defined fact model: Executing rules locally import org.kie.api.KieServices; import org.kie.api.builder.ReleaseId; import org.kie.api.runtime.KieContainer; import org.kie.api.runtime.KieSession; import org.drools.compiler.kproject.ReleaseIdImpl; public class RulesTest { public static final void main(String[] args) { try { // Identify the project in the local repository: ReleaseId rid = new ReleaseIdImpl("com.myspace", "MyProject", "1.0.0"); // Load the KIE base: KieServices ks = KieServices.Factory.get(); KieContainer kContainer = ks.newKieContainer(rid); KieSession kSession = kContainer.newKieSession(); // Set up the fact model: Person p = new Person(); p.setWage(12); p.setFirstName("Tom"); p.setLastName("Summers"); p.setHourlyRate(10); // Insert the person into the session: kSession.insert(p); // Fire all rules: kSession.fireAllRules(); kSession.dispose(); } catch (Throwable t) { t.printStackTrace(); } } } To test this rule on KIE Server, configure the .java class with the imports and rule execution information similarly to the local example, and additionally specify KIE services configuration and KIE services client details: Executing rules on KIE Server package com.sample; import java.util.ArrayList; import java.util.HashSet; import java.util.List; import java.util.Set; import org.kie.api.command.BatchExecutionCommand; import org.kie.api.command.Command; import org.kie.api.KieServices; import org.kie.api.runtime.ExecutionResults; import org.kie.api.runtime.KieContainer; import org.kie.api.runtime.KieSession; import org.kie.server.api.marshalling.MarshallingFormat; import org.kie.server.api.model.ServiceResponse; import org.kie.server.client.KieServicesClient; import org.kie.server.client.KieServicesConfiguration; import org.kie.server.client.KieServicesFactory; import org.kie.server.client.RuleServicesClient; import com.sample.Person; public class RulesTest { private static final String containerName = "testProject"; private static final String sessionName = "myStatelessSession"; public static final void main(String[] args) { try { // Define KIE services configuration and client: Set<Class<?>> allClasses = new HashSet<Class<?>>(); allClasses.add(Person.class); String serverUrl = "http://USDHOST:USDPORT/kie-server/services/rest/server"; String username = "USDUSERNAME"; String password = "USDPASSWORD"; KieServicesConfiguration config = KieServicesFactory.newRestConfiguration(serverUrl, username, password); config.setMarshallingFormat(MarshallingFormat.JAXB); config.addExtraClasses(allClasses); KieServicesClient kieServicesClient = KieServicesFactory.newKieServicesClient(config); // Set up the fact model: Person p = new Person(); p.setWage(12); p.setFirstName("Tom"); p.setLastName("Summers"); p.setHourlyRate(10); // Insert Person into the session: KieCommands kieCommands = KieServices.Factory.get().getCommands(); List<Command> commandList = new ArrayList<Command>(); commandList.add(kieCommands.newInsert(p, 
"personReturnId")); // Fire all rules: commandList.add(kieCommands.newFireAllRules("numberOfFiredRules")); BatchExecutionCommand batch = kieCommands.newBatchExecution(commandList, sessionName); // Use rule services client to send request: RuleServicesClient ruleClient = kieServicesClient.getServicesClient(RuleServicesClient.class); ServiceResponse<ExecutionResults> executeResponse = ruleClient.executeCommandsWithResults(containerName, batch); System.out.println("number of fired rules:" + executeResponse.getResult().getValue("numberOfFiredRules")); } catch (Throwable t) { t.printStackTrace(); } } } Run the configured .java class from your project directory. You can run the file in your development platform (such as Red Hat CodeReady Studio) or in the command line. Example Maven execution (within project directory): Example Java execution (within project directory) Review the rule execution status in the command line and in the server log. If any rules do not execute as expected, review the configured rules in the project and the main class configuration to validate the data provided.
[ "<dependency> <groupId>org.drools</groupId> <artifactId>drools-model-compiler</artifactId> <version>USD{rhpam.version}</version> </dependency>", "<!-- For local execution --> <dependency> <groupId>org.kie</groupId> <artifactId>kie-ci</artifactId> <version>7.67.0.Final-redhat-00024</version> </dependency> <!-- For remote execution on KIE Server --> <dependency> <groupId>org.kie.server</groupId> <artifactId>kie-server-client</artifactId> <version>7.67.0.Final-redhat-00024</version> </dependency> <!-- For debug logging (optional) --> <dependency> <groupId>org.slf4j</groupId> <artifactId>slf4j-simple</artifactId> <version>1.7.25</version> </dependency>", "<dependency> <groupId>com.redhat.ba</groupId> <artifactId>ba-platform-bom</artifactId> <version>7.13.5.redhat-00002</version> <scope>import</scope> <type>pom</type> </dependency>", "<dependency> <groupId>com.sample</groupId> <artifactId>Person</artifactId> <version>1.0.0</version> </dependency>", "org.slf4j.simpleLogger.defaultLogLevel=debug", "package com.sample; import com.sample.Person; dialect \"java\" rule \"Wage\" when Person(hourlyRate * wage > 100) Person(name : firstName, surname : lastName) then System.out.println(\"Hello\" + \" \" + name + \" \" + surname + \"!\"); System.out.println(\"You are rich!\"); end", "import org.kie.api.KieServices; import org.kie.api.builder.ReleaseId; import org.kie.api.runtime.KieContainer; import org.kie.api.runtime.KieSession; import org.drools.compiler.kproject.ReleaseIdImpl; public class RulesTest { public static final void main(String[] args) { try { // Identify the project in the local repository: ReleaseId rid = new ReleaseIdImpl(\"com.myspace\", \"MyProject\", \"1.0.0\"); // Load the KIE base: KieServices ks = KieServices.Factory.get(); KieContainer kContainer = ks.newKieContainer(rid); KieSession kSession = kContainer.newKieSession(); // Set up the fact model: Person p = new Person(); p.setWage(12); p.setFirstName(\"Tom\"); p.setLastName(\"Summers\"); p.setHourlyRate(10); // Insert the person into the session: kSession.insert(p); // Fire all rules: kSession.fireAllRules(); kSession.dispose(); } catch (Throwable t) { t.printStackTrace(); } } }", "package com.sample; import java.util.ArrayList; import java.util.HashSet; import java.util.List; import java.util.Set; import org.kie.api.command.BatchExecutionCommand; import org.kie.api.command.Command; import org.kie.api.KieServices; import org.kie.api.runtime.ExecutionResults; import org.kie.api.runtime.KieContainer; import org.kie.api.runtime.KieSession; import org.kie.server.api.marshalling.MarshallingFormat; import org.kie.server.api.model.ServiceResponse; import org.kie.server.client.KieServicesClient; import org.kie.server.client.KieServicesConfiguration; import org.kie.server.client.KieServicesFactory; import org.kie.server.client.RuleServicesClient; import com.sample.Person; public class RulesTest { private static final String containerName = \"testProject\"; private static final String sessionName = \"myStatelessSession\"; public static final void main(String[] args) { try { // Define KIE services configuration and client: Set<Class<?>> allClasses = new HashSet<Class<?>>(); allClasses.add(Person.class); String serverUrl = \"http://USDHOST:USDPORT/kie-server/services/rest/server\"; String username = \"USDUSERNAME\"; String password = \"USDPASSWORD\"; KieServicesConfiguration config = KieServicesFactory.newRestConfiguration(serverUrl, username, password); config.setMarshallingFormat(MarshallingFormat.JAXB); 
config.addExtraClasses(allClasses); KieServicesClient kieServicesClient = KieServicesFactory.newKieServicesClient(config); // Set up the fact model: Person p = new Person(); p.setWage(12); p.setFirstName(\"Tom\"); p.setLastName(\"Summers\"); p.setHourlyRate(10); // Insert Person into the session: KieCommands kieCommands = KieServices.Factory.get().getCommands(); List<Command> commandList = new ArrayList<Command>(); commandList.add(kieCommands.newInsert(p, \"personReturnId\")); // Fire all rules: commandList.add(kieCommands.newFireAllRules(\"numberOfFiredRules\")); BatchExecutionCommand batch = kieCommands.newBatchExecution(commandList, sessionName); // Use rule services client to send request: RuleServicesClient ruleClient = kieServicesClient.getServicesClient(RuleServicesClient.class); ServiceResponse<ExecutionResults> executeResponse = ruleClient.executeCommandsWithResults(containerName, batch); System.out.println(\"number of fired rules:\" + executeResponse.getResult().getValue(\"numberOfFiredRules\")); } catch (Throwable t) { t.printStackTrace(); } } }", "mvn clean install exec:java -Dexec.mainClass=\"com.sample.app.RulesTest\"", "javac -classpath \"./USDDEPENDENCIES/*:.\" RulesTest.java java -classpath \"./USDDEPENDENCIES/*:.\" RulesTest" ]
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/developing_decision_services_in_red_hat_decision_manager/assets-executing-proc_guided-decision-tables
Chapter 4. Project deployment options with Red Hat Process Automation Manager
Chapter 4. Project deployment options with Red Hat Process Automation Manager After you develop, test, and build your Red Hat Process Automation Manager project, you can deploy the project to begin using the business assets you have created. You can deploy a Red Hat Process Automation Manager project to a configured KIE Server, to an embedded Java application, or into a Red Hat OpenShift Container Platform environment for an enhanced containerized implementation. The following options are the main methods for Red Hat Process Automation Manager project deployment: Table 4.1. Project deployment options Deployment option Description Documentation Deployment to an OpenShift environment Red Hat OpenShift Container Platform combines Docker and Kubernetes and enables you to create and manage containers. You can install both Business Central and KIE Server on OpenShift. Red Hat Process Automation Manager provides templates that you can use to deploy a Red Hat Process Automation Manager authoring environment, managed server environment, immutable server environment, or trial environment on OpenShift. With OpenShift, components of Red Hat Process Automation Manager are deployed as separate OpenShift pods. You can scale each of the pods up and down individually, providing as few or as many containers as necessary for a particular component. You can use standard OpenShift methods to manage the pods and balance the load. Deploying an Red Hat Process Automation Manager environment on Red Hat OpenShift Container Platform 4 using Operators Deploying an Red Hat Process Automation Manager environment on Red Hat OpenShift Container Platform 3 using templates Deployment to KIE Server KIE Server is the server provided with Red Hat Process Automation Manager that runs the decision services, process applications, and other deployable assets from a packaged and deployed Red Hat Process Automation Manager project (KJAR file). These services are consumed at run time through an instantiated KIE container, or deployment unit . You can deploy and maintain deployment units in KIE Server using Business Central or using a headless Process Automation Manager controller with its associated REST API (considered a managed KIE Server instance). You can also deploy and maintain deployment units using the KIE Server REST API or Java client API from a standalone Maven project, an embedded Java application, or other custom environment (considered an unmanaged KIE Server instance). Packaging and deploying an Red Hat Process Automation Manager project Interacting with Red Hat Process Automation Manager using KIE APIs Managing and monitoring KIE Server Deployment to an embedded Java application If you want to deploy Red Hat Process Automation Manager projects to your own Java virtual machine (JVM) environment, microservice, or application server, you can bundle the application resources in the project WAR files to create a deployment unit similar to a KIE container. You can also use the core KIE APIs (not KIE Server APIs) to configure a KIE scanner to periodically update KIE containers. KIE Public API
null
https://docs.redhat.com/en/documentation/red_hat_process_automation_manager/7.13/html/designing_your_decision_management_architecture_for_red_hat_process_automation_manager/project-deployment-options-ref_decision-management-architecture
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.17/html/managing_and_allocating_storage_resources/making-open-source-more-inclusive
Deploying into Spring Boot
Deploying into Spring Boot Red Hat Fuse 7.13 Build and run Spring Boot applications in standalone mode Red Hat Fuse Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/deploying_into_spring_boot/index
Chapter 5. Installing a three-node cluster on Nutanix
Chapter 5. Installing a three-node cluster on Nutanix In OpenShift Container Platform version 4.17, you can install a three-node cluster on Nutanix. A three-node cluster consists of three control plane machines, which also act as compute machines. This type of cluster provides a smaller, more resource-efficient cluster for cluster administrators and developers to use for testing, development, and production. 5.1. Configuring a three-node cluster You configure a three-node cluster by setting the number of worker nodes to 0 in the install-config.yaml file before deploying the cluster. Setting the number of worker nodes to 0 ensures that the control plane machines are schedulable. This allows application workloads to be scheduled to run from the control plane nodes. Note Because application workloads run from control plane nodes, additional subscriptions are required, as the control plane nodes are considered to be compute nodes. Prerequisites You have an existing install-config.yaml file. Procedure Set the number of compute replicas to 0 in your install-config.yaml file, as shown in the following compute stanza: Example install-config.yaml file for a three-node cluster apiVersion: v1 baseDomain: example.com compute: - name: worker platform: {} replicas: 0 # ... 5.2. Next steps Installing a cluster on Nutanix
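After the installation referenced in the next steps completes, one way to confirm the three-node topology is to list the nodes and check their roles; a minimal sketch, assuming you are logged in to the cluster with oc :

$ oc get nodes
# Expect exactly three nodes, each listing both a control plane role and the worker role.
$ oc get nodes --selector node-role.kubernetes.io/worker
# With no dedicated compute machines, this selector should return the same three control plane nodes.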
[ "apiVersion: v1 baseDomain: example.com compute: - name: worker platform: {} replicas: 0" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/installing_on_nutanix/installing-nutanix-three-node
Chapter 93. OpenTelemetry
Chapter 93. OpenTelemetry Since Camel 3.5 The OpenTelemetry component is used for tracing and timing the incoming and outgoing Camel messages using OpenTelemetry . Events (spans) are captured for incoming and outgoing messages that are sent to/from Camel. 93.1. Dependencies Add the following dependency to your pom.xml for this component: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-opentelemetry-starter</artifactId> </dependency> 93.2. Configuration The configuration properties for the OpenTelemetry tracer are: Option Default Description excludePatterns Sets exclude pattern(s) that will disable tracing for Camel messages that matches the pattern. The content is a Set<String> where the key is a pattern. The pattern uses the rules from Intercept. encoding false Sets whether the header keys need to be encoded (connector specific) or not. The value is a boolean. Dashes need for instances to be encoded for JMS property keys. 93.2.1. Configuration Add the camel-opentelemetry component in your POM, in addition to any specific dependencies associated with the chosen OpenTelemetry compliant Tracer. To explicitly configure OpenTelemetry support, instantiate the OpenTelemetryTracer and initialize the camel context. You can optionally specify a Tracer , or alternatively it can be implicitly discovered using the Registry OpenTelemetryTracer otelTracer = new OpenTelemetryTracer(); // By default it uses the DefaultTracer, but you can override it with a specific OpenTelemetry Tracer implementation. otelTracer.setTracer(...); // And then initialize the context otelTracer.init(camelContext); 93.3. Spring Boot Add the camel-opentelemetry-starter dependency, and then turn on the OpenTracing by annotating the main class with @CamelOpenTelemetry . The OpenTelemetryTracer is implicitly obtained from the camel context's Registry , unless a OpenTelemetryTracer bean has been defined by the application. 93.4. Java Agent Download the latest version of Java agent . This package includes the instrumentation agent as well as instrumentations for all supported libraries and all available data exporters. The package provides a completely automatic, out-of-the-box experience. Enable the instrumentation agent using the -javaagent flag to the JVM. java -javaagent:path/to/opentelemetry-javaagent.jar \ -jar myapp.jar By default, the OpenTelemetry Java agent uses OTLP exporter configured to send data to OpenTelemetry collector at http://localhost:4317 . Configuration parameters are passed as Java system properties ( -D flags) or as environment variables. See Configuring the agent and OpenTelemetry auto-configuration for the full list of configuration items. For example: java -javaagent:path/to/opentelemetry-javaagent.jar \ -Dotel.service.name=your-service-name \ -Dotel.traces.exporter=jaeger \ -jar myapp.jar 93.5. Spring Boot Auto-Configuration The component supports 2 options, which are listed below. Name Description Default Type camel.opentelemetry.encoding Activate or deactivate the dash encoding in headers (required by JMS) for messaging. Boolean camel.opentelemetry.exclude-patterns Sets exclude pattern(s) that will disable the tracing for the Camel messages that matches the pattern. Set 93.6. MDC Logging When MDC Logging is enabled for the active Camel context, the Trace ID and Span ID are added and removed from the MDC for each route, where the keys are trace_id and span_id , respectively.
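Referring back to the Spring Boot setup above, the following is a minimal main class sketch; the import package shown for the annotation is an assumption and may differ between camel-opentelemetry-starter versions:

// Package of the @CamelOpenTelemetry annotation is assumed; verify it against your starter version.
import org.apache.camel.opentelemetry.starter.CamelOpenTelemetry;
import org.springframework.boot.SpringApplication;
import org.springframework.boot.autoconfigure.SpringBootApplication;

@CamelOpenTelemetry   // enables OpenTelemetry tracing for the Camel context
@SpringBootApplication
public class SampleCamelApplication {
    public static void main(String[] args) {
        SpringApplication.run(SampleCamelApplication.class, args);
    }
}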
[ "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-opentelemetry-starter</artifactId> </dependency>", "OpenTelemetryTracer otelTracer = new OpenTelemetryTracer(); // By default it uses the DefaultTracer, but you can override it with a specific OpenTelemetry Tracer implementation. otelTracer.setTracer(...); // And then initialize the context otelTracer.init(camelContext);", "java -javaagent:path/to/opentelemetry-javaagent.jar -jar myapp.jar", "java -javaagent:path/to/opentelemetry-javaagent.jar -Dotel.service.name=your-service-name -Dotel.traces.exporter=jaeger -jar myapp.jar" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.0/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-opentelemetry-component-starter
Chapter 2. Configuring external PostgreSQL databases
Chapter 2. Configuring external PostgreSQL databases As an administrator, you can configure and use external PostgreSQL databases in Red Hat Developer Hub. You can use a PostgreSQL certificate file to configure an external PostgreSQL instance using the Operator or Helm Chart. Note Developer Hub supports the configuration of external PostgreSQL databases. You can perform maintenance activities, such as backing up your data or configuring high availability (HA) for the external PostgreSQL databases. By default, the Red Hat Developer Hub operator or Helm Chart creates a local PostgreSQL database. However, this configuration is not suitable for the production environments. For production deployments, disable the creation of local database and configure Developer Hub to connect to an external PostgreSQL instance instead. 2.1. Configuring an external PostgreSQL instance using the Operator You can configure an external PostgreSQL instance using the Red Hat Developer Hub Operator. By default, the Operator creates and manages a local instance of PostgreSQL in the same namespace where you have deployed the RHDH instance. However, you can change this default setting to configure an external PostgreSQL database server, for example, Amazon Web Services (AWS) Relational Database Service (RDS) or Azure database. Prerequisites You are using a supported version of PostgreSQL. For more information, see the Product life cycle page . You have the following details: db-host : Denotes your PostgreSQL instance Domain Name System (DNS) or IP address db-port : Denotes your PostgreSQL instance port number, such as 5432 username : Denotes the user name to connect to your PostgreSQL instance password : Denotes the password to connect to your PostgreSQL instance You have installed the Red Hat Developer Hub Operator. Optional: You have a CA certificate, Transport Layer Security (TLS) private key, and TLS certificate so that you can secure your database connection by using the TLS protocol. For more information, refer to your PostgreSQL vendor documentation. Note By default, Developer Hub uses a database for each plugin and automatically creates it if none is found. You might need the Create Database privilege in addition to PSQL Database privileges for configuring an external PostgreSQL instance. Procedure Optional: Create a certificate secret to configure your PostgreSQL instance with a TLS connection: cat <<EOF | oc -n my-rhdh-project create -f - apiVersion: v1 kind: Secret metadata: name: my-rhdh-database-certificates-secrets 1 type: Opaque stringData: postgres-ca.pem: |- -----BEGIN CERTIFICATE----- <ca-certificate-key> 2 postgres-key.key: |- -----BEGIN CERTIFICATE----- <tls-private-key> 3 postgres-crt.pem: |- -----BEGIN CERTIFICATE----- <tls-certificate-key> 4 # ... EOF 1 Provide the name of the certificate secret. 2 Provide the CA certificate key. 3 Optional: Provide the TLS private key. 4 Optional: Provide the TLS certificate key. Create a credential secret to connect with the PostgreSQL instance: cat <<EOF | oc -n my-rhdh-project create -f - apiVersion: v1 kind: Secret metadata: name: my-rhdh-database-secrets 1 type: Opaque stringData: 2 POSTGRES_PASSWORD: <password> POSTGRES_PORT: "<db-port>" POSTGRES_USER: <username> POSTGRES_HOST: <db-host> PGSSLMODE: <ssl-mode> # for TLS connection 3 NODE_EXTRA_CA_CERTS: <abs-path-to-pem-file> # for TLS connection, e.g. /opt/app-root/src/postgres-crt.pem 4 EOF 1 Provide the name of the credential secret. 2 Provide credential data to connect with your PostgreSQL instance. 
3 Optional: Provide the value based on the required Secure Sockets Layer (SSL) mode . 4 Optional: Provide the value only if you need a TLS connection for your PostgreSQL instance. Create your Backstage custom resource (CR): cat <<EOF | oc -n my-rhdh-project create -f - apiVersion: rhdh.redhat.com/v1alpha3 kind: Backstage metadata: name: <backstage-instance-name> spec: database: enableLocalDb: false 1 application: extraFiles: mountPath: <path> # e g /opt/app-root/src secrets: - name: my-rhdh-database-certificates-secrets 2 key: postgres-crt.pem, postgres-ca.pem, postgres-key.key # key name as in my-rhdh-database-certificates-secrets Secret extraEnvs: secrets: - name: my-rhdh-database-secrets 3 # ... 1 Set the value of the enableLocalDb parameter to false to disable creating local PostgreSQL instances. 2 Provide the name of the certificate secret if you have configured a TLS connection. 3 Provide the name of the credential secret that you created. Note The environment variables listed in the Backstage CR work with the Operator default configuration. If you have changed the Operator default configuration, you must reconfigure the Backstage CR accordingly. Apply the Backstage CR to the namespace where you have deployed the Developer Hub instance. 2.2. Configuring an external PostgreSQL instance using the Helm Chart You can configure an external PostgreSQL instance by using the Helm Chart. By default, the Helm Chart creates and manages a local instance of PostgreSQL in the same namespace where you have deployed the RHDH instance. However, you can change this default setting to configure an external PostgreSQL database server, for example, Amazon Web Services (AWS) Relational Database Service (RDS) or Azure database. Prerequisites You are using a supported version of PostgreSQL. For more information, see the Product life cycle page . You have the following details: db-host : Denotes your PostgreSQL instance Domain Name System (DNS) or IP address db-port : Denotes your PostgreSQL instance port number, such as 5432 username : Denotes the user name to connect to your PostgreSQL instance password : Denotes the password to connect to your PostgreSQL instance You have installed the RHDH application by using the Helm Chart. Optional: You have a CA certificate, Transport Layer Security (TLS) private key, and TLS certificate so that you can secure your database connection by using the TLS protocol. For more information, refer to your PostgreSQL vendor documentation. Note By default, Developer Hub uses a database for each plugin and automatically creates it if none is found. You might need the Create Database privilege in addition to PSQL Database privileges for configuring an external PostgreSQL instance. Procedure Optional: Create a certificate secret to configure your PostgreSQL instance with a TLS connection: cat <<EOF | oc -n <your-namespace> create -f - apiVersion: v1 kind: Secret metadata: name: my-rhdh-database-certificates-secrets 1 type: Opaque stringData: postgres-ca.pem: |- -----BEGIN CERTIFICATE----- <ca-certificate-key> 2 postgres-key.key: |- -----BEGIN CERTIFICATE----- <tls-private-key> 3 postgres-crt.pem: |- -----BEGIN CERTIFICATE----- <tls-certificate-key> 4 # ... EOF 1 Provide the name of the certificate secret. 2 Provide the CA certificate key. 3 Optional: Provide the TLS private key. 4 Optional: Provide the TLS certificate key. 
Create a credential secret to connect with the PostgreSQL instance: cat <<EOF | oc -n <your-namespace> create -f - apiVersion: v1 kind: Secret metadata: name: my-rhdh-database-secrets 1 type: Opaque stringData: 2 POSTGRES_PASSWORD: <password> POSTGRES_PORT: "<db-port>" POSTGRES_USER: <username> POSTGRES_HOST: <db-host> PGSSLMODE: <ssl-mode> # for TLS connection 3 NODE_EXTRA_CA_CERTS: <abs-path-to-pem-file> # for TLS connection, e.g. /opt/app-root/src/postgres-crt.pem 4 EOF 1 Provide the name of the credential secret. 2 Provide credential data to connect with your PostgreSQL instance. 3 Optional: Provide the value based on the required Secure Sockets Layer (SSL) mode . 4 Optional: Provide the value only if you need a TLS connection for your PostgreSQL instance. Configure your PostgreSQL instance in the Helm configuration file named values.yaml : # ... upstream: postgresql: enabled: false # disable PostgreSQL instance creation 1 auth: existingSecret: my-rhdh-database-secrets # inject credentials secret to Backstage 2 backstage: appConfig: backend: database: connection: # configure Backstage DB connection parameters host: USD{POSTGRES_HOST} port: USD{POSTGRES_PORT} user: USD{POSTGRES_USER} password: USD{POSTGRES_PASSWORD} ssl: rejectUnauthorized: true, ca: USDfile: /opt/app-root/src/postgres-ca.pem key: USDfile: /opt/app-root/src/postgres-key.key cert: USDfile: /opt/app-root/src/postgres-crt.pem extraEnvVarsSecrets: - my-rhdh-database-secrets # inject credentials secret to Backstage 3 extraEnvVars: - name: BACKEND_SECRET valueFrom: secretKeyRef: key: backend-secret name: '{{ include "janus-idp.backend-secret-name" USD }}' extraVolumeMounts: - mountPath: /opt/app-root/src/dynamic-plugins-root name: dynamic-plugins-root - mountPath: /opt/app-root/src/postgres-crt.pem name: postgres-crt # inject TLS certificate to Backstage cont. 4 subPath: postgres-crt.pem - mountPath: /opt/app-root/src/postgres-ca.pem name: postgres-ca # inject CA certificate to Backstage cont. 5 subPath: postgres-ca.pem - mountPath: /opt/app-root/src/postgres-key.key name: postgres-key # inject TLS private key to Backstage cont. 6 subPath: postgres-key.key extraVolumes: - ephemeral: volumeClaimTemplate: spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi name: dynamic-plugins-root - configMap: defaultMode: 420 name: dynamic-plugins optional: true name: dynamic-plugins - name: dynamic-plugins-npmrc secret: defaultMode: 420 optional: true secretName: '{{ printf "%s-dynamic-plugins-npmrc" .Release.Name }}' - name: postgres-crt secret: secretName: my-rhdh-database-certificates-secrets 7 # ... 1 Set the value of the upstream.postgresql.enabled parameter to false to disable creating local PostgreSQL instances. 2 Provide the name of the credential secret. 3 Provide the name of the credential secret. 4 Optional: Provide the name of the TLS certificate only for a TLS connection. 5 Optional: Provide the name of the CA certificate only for a TLS connection. 6 Optional: Provide the name of the TLS private key only if your TLS connection requires a private key. 7 Provide the name of the certificate secret if you have configured a TLS connection. Apply the configuration changes in your Helm configuration file named values.yaml : helm upgrade -n <your-namespace> <your-deploy-name> openshift-helm-charts/redhat-developer-hub -f values.yaml --version 1.4.2 2.3. Migrating local databases to an external database server using the Operator By default, Red Hat Developer Hub hosts the data for each plugin in a PostgreSQL database. 
When you fetch the list of databases, you might see multiple databases based on the number of plugins configured in Developer Hub. You can migrate the data from an RHDH instance hosted on a local PostgreSQL server to an external PostgreSQL service, such as AWS RDS, Azure database, or Crunchy database. To migrate the data from each RHDH instance, you can use PostgreSQL utilities, such as pg_dump with psql or pgAdmin . Note The following procedure uses a database copy script to do a quick migration. Prerequisites You have installed the pg_dump and psql utilities on your local machine. For data export, you have the PGSQL user privileges to make a full dump of local databases. For data import, you have the PGSQL admin privileges to create an external database and populate it with database dumps. Procedure Configure port forwarding for the local PostgreSQL database pod by running the following command on a terminal: oc port-forward -n <your-namespace> <pgsql-pod-name> <forward-to-port>:<forward-from-port> Where: The <pgsql-pod-name> variable denotes the name of a PostgreSQL pod with the format backstage-psql-<deployment-name>-<_index> . The <forward-to-port> variable denotes the port of your choice to forward PostgreSQL data to. The <forward-from-port> variable denotes the local PostgreSQL instance port, such as 5432 . Example: Configuring port forwarding oc port-forward -n developer-hub backstage-psql-developer-hub-0 15432:5432 Make a copy of the following db_copy.sh script and edit the details based on your configuration: #!/bin/bash to_host=<db-service-host> 1 to_port=5432 2 to_user=postgres 3 from_host=127.0.0.1 4 from_port=15432 5 from_user=postgres 6 allDB=("backstage_plugin_app" "backstage_plugin_auth" "backstage_plugin_catalog" "backstage_plugin_permission" "backstage_plugin_scaffolder" "backstage_plugin_search") 7 for db in USD{!allDB[@]}; do db=USD{allDB[USDdb]} echo Copying database: USDdb PGPASSWORD=USDTO_PSW psql -h USDto_host -p USDto_port -U USDto_user -c "create database USDdb;" pg_dump -h USDfrom_host -p USDfrom_port -U USDfrom_user -d USDdb | PGPASSWORD=USDTO_PSW psql -h USDto_host -p USDto_port -U USDto_user -d USDdb done 1 The destination host name, for example, <db-instance-name>.rds.amazonaws.com . 2 The destination port, such as 5432 . 3 The destination server username, for example, postgres . 4 The source host name, such as 127.0.0.1 . 5 The source port number, such as the <forward-to-port> variable. 6 The source server username, for example, postgres . 7 The name of databases to import in double quotes separated by spaces, for example, ("backstage_plugin_app" "backstage_plugin_auth" "backstage_plugin_catalog" "backstage_plugin_permission" "backstage_plugin_scaffolder" "backstage_plugin_search") . Create a destination database for copying the data: /bin/bash TO_PSW=<destination-db-password> /path/to/db_copy.sh 1 1 The <destination-db-password> variable denotes the password to connect to the destination database. Note You can stop port forwarding when the copying of the data is complete. For more information about handling large databases and using the compression tools, see the Handling Large Databases section on the PostgreSQL website. Reconfigure your Backstage custom resource (CR). For more information, see Configuring an external PostgreSQL instance using the Operator . Check that the following code is present at the end of your Backstage CR after reconfiguration: # ... spec: database: enableLocalDb: false application: # ... 
extraFiles: secrets: - name: my-rhdh-database-certificates-secrets key: postgres-crt.pem # key name as in my-rhdh-database-certificates-secrets Secret extraEnvs: secrets: - name: my-rhdh-database-secrets # ... Note Reconfiguring the Backstage CR deletes the corresponding StatefulSet and Pod objects, but does not delete the PersistenceVolumeClaim object. Use the following command to delete the local PersistenceVolumeClaim object: oc -n developer-hub delete pvc <local-psql-pvc-name> where, the <local-psql-pvc-name> variable is in the data-<psql-pod-name> format. Apply the configuration changes. Verification Verify that your RHDH instance is running with the migrated data and does not contain the local PostgreSQL database by running the following command: oc get pods -n <your-namespace> Check the output for the following details: The backstage-developer-hub-xxx pod is in running state. The backstage-psql-developer-hub-0 pod is not available. You can also verify these details using the Topology view in the OpenShift Container Platform web console.
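Whether you use the Operator or the Helm Chart, it can help to confirm that the external instance is reachable with the same values you place in the credential secret before deploying; a minimal sketch using the psql client, where the sslmode value depends on your TLS setup:

$ PGPASSWORD=<password> psql "host=<db-host> port=<db-port> user=<username> dbname=postgres sslmode=require" -c "SELECT version();"
# A successful connection prints the PostgreSQL server version; adjust sslmode (for example, verify-full) to match your PGSSLMODE setting.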
[ "cat <<EOF | oc -n my-rhdh-project create -f - apiVersion: v1 kind: Secret metadata: name: my-rhdh-database-certificates-secrets 1 type: Opaque stringData: postgres-ca.pem: |- -----BEGIN CERTIFICATE----- <ca-certificate-key> 2 postgres-key.key: |- -----BEGIN CERTIFICATE----- <tls-private-key> 3 postgres-crt.pem: |- -----BEGIN CERTIFICATE----- <tls-certificate-key> 4 # EOF", "cat <<EOF | oc -n my-rhdh-project create -f - apiVersion: v1 kind: Secret metadata: name: my-rhdh-database-secrets 1 type: Opaque stringData: 2 POSTGRES_PASSWORD: <password> POSTGRES_PORT: \"<db-port>\" POSTGRES_USER: <username> POSTGRES_HOST: <db-host> PGSSLMODE: <ssl-mode> # for TLS connection 3 NODE_EXTRA_CA_CERTS: <abs-path-to-pem-file> # for TLS connection, e.g. /opt/app-root/src/postgres-crt.pem 4 EOF", "cat <<EOF | oc -n my-rhdh-project create -f - apiVersion: rhdh.redhat.com/v1alpha3 kind: Backstage metadata: name: <backstage-instance-name> spec: database: enableLocalDb: false 1 application: extraFiles: mountPath: <path> # e g /opt/app-root/src secrets: - name: my-rhdh-database-certificates-secrets 2 key: postgres-crt.pem, postgres-ca.pem, postgres-key.key # key name as in my-rhdh-database-certificates-secrets Secret extraEnvs: secrets: - name: my-rhdh-database-secrets 3 #", "cat <<EOF | oc -n <your-namespace> create -f - apiVersion: v1 kind: Secret metadata: name: my-rhdh-database-certificates-secrets 1 type: Opaque stringData: postgres-ca.pem: |- -----BEGIN CERTIFICATE----- <ca-certificate-key> 2 postgres-key.key: |- -----BEGIN CERTIFICATE----- <tls-private-key> 3 postgres-crt.pem: |- -----BEGIN CERTIFICATE----- <tls-certificate-key> 4 # EOF", "cat <<EOF | oc -n <your-namespace> create -f - apiVersion: v1 kind: Secret metadata: name: my-rhdh-database-secrets 1 type: Opaque stringData: 2 POSTGRES_PASSWORD: <password> POSTGRES_PORT: \"<db-port>\" POSTGRES_USER: <username> POSTGRES_HOST: <db-host> PGSSLMODE: <ssl-mode> # for TLS connection 3 NODE_EXTRA_CA_CERTS: <abs-path-to-pem-file> # for TLS connection, e.g. /opt/app-root/src/postgres-crt.pem 4 EOF", "upstream: postgresql: enabled: false # disable PostgreSQL instance creation 1 auth: existingSecret: my-rhdh-database-secrets # inject credentials secret to Backstage 2 backstage: appConfig: backend: database: connection: # configure Backstage DB connection parameters host: USD{POSTGRES_HOST} port: USD{POSTGRES_PORT} user: USD{POSTGRES_USER} password: USD{POSTGRES_PASSWORD} ssl: rejectUnauthorized: true, ca: USDfile: /opt/app-root/src/postgres-ca.pem key: USDfile: /opt/app-root/src/postgres-key.key cert: USDfile: /opt/app-root/src/postgres-crt.pem extraEnvVarsSecrets: - my-rhdh-database-secrets # inject credentials secret to Backstage 3 extraEnvVars: - name: BACKEND_SECRET valueFrom: secretKeyRef: key: backend-secret name: '{{ include \"janus-idp.backend-secret-name\" USD }}' extraVolumeMounts: - mountPath: /opt/app-root/src/dynamic-plugins-root name: dynamic-plugins-root - mountPath: /opt/app-root/src/postgres-crt.pem name: postgres-crt # inject TLS certificate to Backstage cont. 4 subPath: postgres-crt.pem - mountPath: /opt/app-root/src/postgres-ca.pem name: postgres-ca # inject CA certificate to Backstage cont. 5 subPath: postgres-ca.pem - mountPath: /opt/app-root/src/postgres-key.key name: postgres-key # inject TLS private key to Backstage cont. 
6 subPath: postgres-key.key extraVolumes: - ephemeral: volumeClaimTemplate: spec: accessModes: - ReadWriteOnce resources: requests: storage: 1Gi name: dynamic-plugins-root - configMap: defaultMode: 420 name: dynamic-plugins optional: true name: dynamic-plugins - name: dynamic-plugins-npmrc secret: defaultMode: 420 optional: true secretName: '{{ printf \"%s-dynamic-plugins-npmrc\" .Release.Name }}' - name: postgres-crt secret: secretName: my-rhdh-database-certificates-secrets 7 #", "helm upgrade -n <your-namespace> <your-deploy-name> openshift-helm-charts/redhat-developer-hub -f values.yaml --version 1.4.2", "port-forward -n <your-namespace> <pgsql-pod-name> <forward-to-port>:<forward-from-port>", "port-forward -n developer-hub backstage-psql-developer-hub-0 15432:5432", "#!/bin/bash to_host=<db-service-host> 1 to_port=5432 2 to_user=postgres 3 from_host=127.0.0.1 4 from_port=15432 5 from_user=postgres 6 allDB=(\"backstage_plugin_app\" \"backstage_plugin_auth\" \"backstage_plugin_catalog\" \"backstage_plugin_permission\" \"backstage_plugin_scaffolder\" \"backstage_plugin_search\") 7 for db in USD{!allDB[@]}; do db=USD{allDB[USDdb]} echo Copying database: USDdb PGPASSWORD=USDTO_PSW psql -h USDto_host -p USDto_port -U USDto_user -c \"create database USDdb;\" pg_dump -h USDfrom_host -p USDfrom_port -U USDfrom_user -d USDdb | PGPASSWORD=USDTO_PSW psql -h USDto_host -p USDto_port -U USDto_user -d USDdb done", "/bin/bash TO_PSW=<destination-db-password> /path/to/db_copy.sh 1", "spec: database: enableLocalDb: false application: # extraFiles: secrets: - name: my-rhdh-database-certificates-secrets key: postgres-crt.pem # key name as in my-rhdh-database-certificates-secrets Secret extraEnvs: secrets: - name: my-rhdh-database-secrets", "-n developer-hub delete pvc <local-psql-pvc-name>", "get pods -n <your-namespace>" ]
https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.4/html/configuring/configuring-external-postgresql-databases
Preface
Preface Open Java Development Kit (OpenJDK) is a free and open source implementation of the Java Platform, Standard Edition (Java SE). Eclipse Temurin is available in four LTS versions: OpenJDK 8u, OpenJDK 11u, OpenJDK 17u, and OpenJDK 21u. Binary files for Eclipse Temurin are available for macOS, Microsoft Windows, and multiple Linux x86 Operating Systems including Red Hat Enterprise Linux and Ubuntu.
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/17/html/release_notes_for_eclipse_temurin_17.0.14/pr01
Chapter 16. Supportability and Maintenance
Chapter 16. Supportability and Maintenance ABRT Authorized Micro-Reporting In Red Hat Enterprise Linux 7.1, the Automatic Bug Reporting Tool ( ABRT ) receives tighter integration with the Red Hat Customer Portal and is capable of directly sending micro-reports to the Portal. ABRT provides a utility, abrt-auto-reporting , to easily configure user's Portal credentials necessary to authorize micro-reports. The integrated authorization allows ABRT to reply to a micro-report with a rich text which may include possible steps to fix the cause of the micro-report. For example, ABRT can suggest which packages are supposed to be upgraded or offer Knowledge base articles related to the issue. See the Customer Portal for more information on this feature .
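A minimal sketch of turning the feature on from a shell; supplying Portal credentials takes additional options that vary by release, so check the abrt-auto-reporting(1) man page on your system:

# Run as root to enable automatic micro-reporting:
abrt-auto-reporting enabled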
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.1_release_notes/chap-red_hat_enterprise_linux-7.1_release_notes-supportability_and_maintenance
Chapter 3. Configuring IAM for IBM Cloud
Chapter 3. Configuring IAM for IBM Cloud In environments where the cloud identity and access management (IAM) APIs are not reachable, you must put the Cloud Credential Operator (CCO) into manual mode before you install the cluster. 3.1. Alternatives to storing administrator-level secrets in the kube-system project The Cloud Credential Operator (CCO) manages cloud provider credentials as Kubernetes custom resource definitions (CRDs). You can configure the CCO to suit the security requirements of your organization by setting different values for the credentialsMode parameter in the install-config.yaml file. Storing an administrator-level credential secret in the cluster kube-system project is not supported for IBM Cloud(R); therefore, you must set the credentialsMode parameter for the CCO to Manual when installing OpenShift Container Platform and manage your cloud credentials manually. Using manual mode allows each cluster component to have only the permissions it requires, without storing an administrator-level credential in the cluster. You can also use this mode if your environment does not have connectivity to the cloud provider public IAM endpoint. However, you must manually reconcile permissions with new release images for every upgrade. You must also manually supply credentials for every component that requests them. Additional resources About the Cloud Credential Operator 3.2. Configuring the Cloud Credential Operator utility To create and manage cloud credentials from outside of the cluster when the Cloud Credential Operator (CCO) is operating in manual mode, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). Procedure Set a variable for the OpenShift Container Platform release image by running the following command: $ RELEASE_IMAGE=$(./openshift-install version | awk '/release image/ {print $3}') Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: $ CCO_IMAGE=$(oc adm release info --image-for='cloud-credential-operator' $RELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the $RELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: $ oc image extract $CCO_IMAGE \ --file="/usr/bin/ccoctl.<rhel_version>" \ 1 -a ~/.pull-secret 1 For <rhel_version> , specify the value that corresponds to the version of Red Hat Enterprise Linux (RHEL) that the host uses. If no value is specified, ccoctl.rhel8 is used by default. The following values are valid: rhel8 : Specify this value for hosts that use RHEL 8. rhel9 : Specify this value for hosts that use RHEL 9. Change the permissions to make ccoctl executable by running the following command: $ chmod 775 ccoctl.<rhel_version> Verification To verify that ccoctl is ready to use, display the help file.
Use a relative file name when you run the command, for example: $ ./ccoctl.rhel9 Example output OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. Additional resources Rotating API keys for IBM Cloud(R) 3.3. Next steps Installing a cluster on IBM Cloud(R) with customizations 3.4. Additional resources Preparing to update a cluster with manually maintained credentials
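With the binary extracted, the IBM Cloud subcommands can be inspected the same way; a sketch, where create-service-id is shown only as an assumed example of a subcommand and should be confirmed against the built-in help:

$ ./ccoctl.rhel9 ibmcloud --help
$ ./ccoctl.rhel9 ibmcloud create-service-id --help   # assumed subcommand name; confirm it in the help output above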
[ "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)", "oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl.<rhel_version>\" \\ 1 -a ~/.pull-secret", "chmod 775 ccoctl.<rhel_version>", "./ccoctl.rhel9", "OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for IBM Cloud nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command." ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/installing_on_ibm_cloud/configuring-iam-ibm-cloud
Chapter 38. Authentication and Interoperability
Chapter 38. Authentication and Interoperability Use of AD and LDAP sudo providers The Active Directory (AD) provider is a back end used to connect to an AD server. Starting with Red Hat Enterprise Linux 7.2, using the AD sudo provider together with the LDAP provider is available as a Technology Preview. To enable the AD sudo provider, add the sudo_provider=ad setting in the [domain] section of the sssd.conf file. (BZ# 1068725 ) DNSSEC available as Technology Preview in IdM Identity Management (IdM) servers with integrated DNS now support DNS Security Extensions (DNSSEC), a set of extensions to DNS that enhance security of the DNS protocol. DNS zones hosted on IdM servers can be automatically signed using DNSSEC. The cryptographic keys are automatically generated and rotated. Users who decide to secure their DNS zones with DNSSEC are advised to read and follow these documents: DNSSEC Operational Practices, Version 2: http://tools.ietf.org/html/rfc6781#section-2 Secure Domain Name System (DNS) Deployment Guide: http://dx.doi.org/10.6028/NIST.SP.800-81-2 DNSSEC Key Rollover Timing Considerations: http://tools.ietf.org/html/rfc7583 Note that IdM servers with integrated DNS use DNSSEC to validate DNS answers obtained from other DNS servers. This might affect the availability of DNS zones that are not configured in accordance with recommended naming practices described in the Red Hat Enterprise Linux Networking Guide: https://access.redhat.com/documentation/en-US/Red_Hat_Enterprise_Linux/7/html/Networking_Guide/ch-Configure_Host_Names.html#sec-Recommended_Naming_Practices . (BZ# 1115294 ) Identity Management JSON-RPC API available as Technology Preview An API is available for Identity Management (IdM). To view the API, IdM also provides an API browser as Technology Preview. In Red Hat Enterprise Linux 7.3, the IdM API was enhanced to enable multiple versions of API commands. Previously, enhancements could change the behavior of a command in an incompatible way. Users are now able to continue using existing tools and scripts even if the IdM API changes. This enables: Administrators to use or later versions of IdM on the server than on the managing client. Developers to use a specific version of an IdM call, even if the IdM version changes on the server. In all cases, the communication with the server is possible, regardless if one side uses, for example, a newer version that introduces new options for a feature. For details on using the API, see https://access.redhat.com/articles/2728021 (BZ# 1298286 ) Containerized Identity Management server available as Technology Preview The rhel7/ipa-server container image is available as a Technology Preview feature. Note that the rhel7/sssd container image is now fully supported. For details, see https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html-single/using_containerized_identity_management_services . (BZ# 1405325 , BZ#1405326) The Custodia secrets service provider is available as a Technology Preview As a Technology Preview, you can use Custodia, a secrets service provider. Custodia stores or serves as a proxy for secrets, such as keys or passwords. For details, see the upstream documentation at http://custodia.readthedocs.io . Note that since Red Hat Enterprise Linux 7.6, Custodia has been deprecated. (BZ# 1403214 )
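For the AD sudo provider Technology Preview described at the start of this chapter, the following is a minimal sketch of enabling the setting from a shell. The domain name example.com is a placeholder for your own [domain] section in sssd.conf, and editing the file by hand is equally valid; verify the section name before running this.
# Sketch only: "example.com" is a hypothetical SSSD domain section name.
sed -i '/^\[domain\/example.com\]/a sudo_provider = ad' /etc/sssd/sssd.conf
systemctl restart sssd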
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.6_release_notes/technology_previews_authentication_and_interoperability
12.2. Creating a Virtual Machine Pool
12.2. Creating a Virtual Machine Pool You can create a virtual machine pool containing multiple virtual machines based on a common template. See Templates in the Virtual Machine Management Guide for information about sealing a virtual machine and creating a template. Sysprep File Configuration Options for Windows Virtual Machines Several sysprep file configuration options are available, depending on your requirements. If your pool does not need to join a domain, you can use the default sysprep file, located in /usr/share/ovirt-engine/conf/sysprep/ . If your pool needs to join a domain, you can create a custom sysprep for each Windows operating system: Copy the relevant sections for each operating system from /usr/share/ovirt-engine/conf/osinfo-defaults.properties to a new file and save as 99-defaults.properties . In 99-defaults.properties , specify the Windows product activation key and the path of your new custom sysprep file: Create a new sysprep file, specifying the domain, domain password, and domain administrator: If you need to configure different sysprep settings for different pools of Windows virtual machines, you can create a custom sysprep file in the Administration Portal (see Creating a Virtual Machine Pool below). See Using Sysprep to Automate the Configuration of Virtual Machines in the Virtual Machine Guide for more information. Creating a Virtual Machine Pool Click Compute Pools . Click New . Select a Cluster from the drop-down list. Select a Template and version from the drop-down menu. A template provides standard settings for all the virtual machines in the pool. Select an Operating System from the drop-down list. Use the Optimized for drop-down list to optimize virtual machines for Desktop or Server . Note High Performance optimization is not recommended for pools because a high performance virtual machine is pinned to a single host and concrete resources. A pool containing multiple virtual machines with such a configuration would not run well. Enter a Name and, optionally, a Description and Comment . The Name of the pool is applied to each virtual machine in the pool, with a numeric suffix. You can customize the numbering of the virtual machines with ? as a placeholder. Example 12.1. Pool Name and Virtual Machine Numbering Examples Pool: MyPool Virtual machines: MyPool-1 , MyPool-2 , ... MyPool-10 Pool: MyPool-??? Virtual machines: MyPool-001 , MyPool-002 , ... MyPool-010 Enter the Number of VMs for the pool. Enter the number of virtual machines to be prestarted in the Prestarted field. Select the Maximum number of VMs per user that a single user is allowed to run in a session. The minimum is 1 . Select the Delete Protection check box to enable delete protection. If you are creating a pool of non-Windows virtual machines or if you are using the default sysprep , skip this step. If you are creating a custom sysprep file for a pool of Windows virtual machines: Click the Show Advanced Options button. Click the Initial Run tab and select the Use Cloud-Init/Sysprep check box. Click the Authentication arrow and enter the User Name and Password or select Use already configured password . Note This User Name is the name of the local administrator. You can change its value from its default value ( user ) here in the Authentication section or in a custom sysprep file. Click the Custom Script arrow and paste the contents of the default sysprep file, located in /usr/share/ovirt-engine/conf/sysprep/ , into the text box. You can modify the following values of the sysprep file: Key . 
If you do not want to use the pre-defined Windows activation product key, replace <![CDATA[USDProductKeyUSD]]> with a valid product key: Example 12.2. Windows Product Key Example Domain that the Windows virtual machines will join, the domain's Password , and the domain administrator's Username : <Credentials> <Domain> AD_Domain </Domain> <Password> Domain_Password </Password> <Username> Domain_Administrator </Username> </Credentials> Example 12.3. Domain Credentials Example Note The Domain , Password , and Username are required to join the domain. The Key is for activation. You do not necessarily need both. The domain and credentials cannot be modified in the Initial Run tab. FullName of the local administrator: <UserData> ... <FullName> Local_Administrator </FullName> ... </UserData> DisplayName and Name of the local administrator: <LocalAccounts> <LocalAccount wcm:action="add"> <Password> <Value><![CDATA[USDAdminPasswordUSD]]></Value> <PlainText>true</PlainText> </Password> <DisplayName> Local_Administrator </DisplayName> <Group>administrators</Group> <Name> Local_Administrator </Name> </LocalAccount> </LocalAccounts> The remaining variables in the sysprep file can be filled in on the Initial Run tab. Optional. Set a Pool Type : Click the Type tab and select a Pool Type : Manual - The administrator is responsible for explicitly returning the virtual machine to the pool. Automatic - The virtual machine is automatically returned to the virtual machine pool. Select the Stateful Pool check box to ensure that virtual machines are started in a stateful mode. This ensures that changes made by a user will persist on a virtual machine. Click OK . Optional. Override the SPICE proxy: In the Console tab, select the Override SPICE Proxy check box. In the Overridden SPICE proxy address text field, specify the address of a SPICE proxy to override the global SPICE proxy. Click OK . For a pool of Windows virtual machines, click Compute Virtual Machines , select each virtual machine from the pool, and click Run Run Once . Note If the virtual machine does not start and Info [windeploy.exe] Found no unattend file appears in %WINDIR%\panther\UnattendGC\setupact.log , add the UnattendFile key to the registry of the Windows virtual machine that was used to create the template for the pool: Check that the Windows virtual machine has an attached floppy device with the unattend file, for example, A:\Unattend.xml . Click Start , click Run , type regedit in the Open text box, and click OK . In the left pane, go to HKEY_LOCAL_MACHINE SYSTEM Setup . Right-click the right pane and select New String Value . Enter UnattendFile as the key name. Double-click the new key and enter the unattend file name and path, for example, A:\Unattend.xml , as the key's value. Save the registry, seal the Windows virtual machine, and create a new template. See Templates in the Virtual Machine Management Guide for details. You have created and configured a virtual machine pool with the specified number of identical virtual machines. You can view these virtual machines in Compute Virtual Machines , or by clicking the name of a pool to open its details view; a virtual machine in a pool is distinguished from independent virtual machines by its icon.
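Returning to the sysprep configuration described at the start of this section, the sketch below shows one way to create the 99-defaults.properties override from a shell. The osinfo.conf.d directory, the windows_10x64 operating system identifier, and the key and path values are assumptions for illustration only; confirm them against your Manager version and the osinfo-defaults.properties file before applying anything.
# Sketch: override the product key and sysprep path for one Windows OS entry.
# The directory, OS identifier, and values are placeholders -- verify before use.
cat > /etc/ovirt-engine/osinfo.conf.d/99-defaults.properties <<'EOF'
os.windows_10x64.productKey.value = XXXXX-XXXXX-XXXXX-XXXXX-XXXXX
os.windows_10x64.sysprepPath.value = ${ENGINE_USR}/conf/sysprep/sysprep.w10x64.custom
EOF
systemctl restart ovirt-engine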
[ "os. operating_system .productKey.value= Windows_product_activation_key os. operating_system .sysprepPath.value = USD{ENGINE_USR}/conf/sysprep/sysprep. operating_system", "<Credentials> <Domain> AD_Domain </Domain> <Password> Domain_Password </Password> <Username> Domain_Administrator </Username> </Credentials>", "<ProductKey> <Key><![CDATA[USDProductKeyUSD]]></Key> </ProductKey>", "<ProductKey> <Key>0000-000-000-000</Key> </ProductKey>", "<Credentials> <Domain> AD_Domain </Domain> <Password> Domain_Password </Password> <Username> Domain_Administrator </Username> </Credentials>", "<Credentials> <Domain>addomain.local</Domain> <Password>12345678</Password> <Username>Sarah_Smith</Username> </Credentials>", "<UserData> ... <FullName> Local_Administrator </FullName> ... </UserData>", "<LocalAccounts> <LocalAccount wcm:action=\"add\"> <Password> <Value><![CDATA[USDAdminPasswordUSD]]></Value> <PlainText>true</PlainText> </Password> <DisplayName> Local_Administrator </DisplayName> <Group>administrators</Group> <Name> Local_Administrator </Name> </LocalAccount> </LocalAccounts>" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/administration_guide/creating_a_vm_pool
22.16.15. Configuring the Time-to-Live for NTP Packets
22.16.15. Configuring the Time-to-Live for NTP Packets To specify that a particular time-to-live (TTL) value should be used in place of the default, add the following option to the end of a server or peer command: ttl value Use this option to specify the time-to-live value for packets sent by broadcast servers and multicast NTP servers, or the maximum time-to-live value used for the "expanding ring search" by a manycast client. The default value is 127.
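As an illustrative sketch only, the following appends ttl settings to ntp.conf; the broadcast and manycast client addresses and the TTL values are placeholders rather than recommendations, and the restart command assumes the ntpd service used on Red Hat Enterprise Linux 6.
# Sketch: placeholder addresses and TTL values, appended to the NTP configuration.
cat >> /etc/ntp.conf <<'EOF'
broadcast 192.0.2.255 ttl 4
manycastclient 239.192.1.1 ttl 7
EOF
service ntpd restart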
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s2_configuring_the_time-to-live_for_ntp_packets
Chapter 5. Deploying Red Hat Quay on public cloud
Chapter 5. Deploying Red Hat Quay on public cloud Red Hat Quay can run on public clouds, either in standalone mode or where OpenShift Container Platform itself has been deployed on public cloud. A full list of tested and supported configurations can be found in the Red Hat Quay Tested Integrations Matrix at https://access.redhat.com/articles/4067991 . Recommendation: If Red Hat Quay is running on public cloud, then you should use the public cloud services for Red Hat Quay backend services to ensure proper high availability and scalability. 5.1. Running Red Hat Quay on Amazon Web Services If Red Hat Quay is running on Amazon Web Services (AWS), you can use the following features: AWS Elastic Load Balancer AWS S3 (hot) blob storage AWS RDS database AWS ElastiCache Redis EC2 virtual machine recommendation: M3.Large or M4.XLarge The following image provides a high level overview of Red Hat Quay running on AWS: Red Hat Quay on AWS 5.2. Running Red Hat Quay on Microsoft Azure If Red Hat Quay is running on Microsoft Azure, you can use the following features: Azure managed services such as highly available PostgreSQL Azure Blob Storage must be hot storage Azure cool storage is not available for Red Hat Quay Azure Cache for Redis The following image provides a high level overview of Red Hat Quay running on Microsoft Azure: Red Hat Quay on Microsoft Azure
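As one hedged example of pointing Red Hat Quay at the AWS services listed above, the snippet below writes an S3 storage block into config.yaml. The bucket name, region host, and credentials are placeholders, and the field names should be checked against the Red Hat Quay configuration guide for your release before relying on them.
# Sketch: add an S3 storage backend to Quay's config.yaml (placeholder values).
cat >> config.yaml <<'EOF'
DISTRIBUTED_STORAGE_CONFIG:
  s3Storage:
    - S3Storage
    - host: s3.us-east-1.amazonaws.com
      s3_bucket: example-quay-bucket
      storage_path: /datastorage/registry
      s3_access_key: EXAMPLEACCESSKEY
      s3_secret_key: EXAMPLESECRETKEY
DISTRIBUTED_STORAGE_PREFERENCE:
  - s3Storage
EOF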
null
https://docs.redhat.com/en/documentation/red_hat_quay/3.9/html/red_hat_quay_architecture/arch-deploy-quay-public-cloud
function::ipmib_tcp_local_port
function::ipmib_tcp_local_port Name function::ipmib_tcp_local_port - Get the local TCP port Synopsis Arguments skb pointer to a struct sk_buff SourceIsLocal flag indicating whether the local host is the source of the operation Description Returns the local TCP port from skb .
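A minimal usage sketch follows. The probe point (tcp_v4_rcv) and the SourceIsLocal value of 0 (a received packet, so the local side is the destination) are assumptions, and kernel debuginfo must be installed for the $skb context variable to resolve.
# Sketch: print the local TCP port for each received IPv4 segment (requires kernel debuginfo).
stap -e 'probe kernel.function("tcp_v4_rcv") {
  printf("local tcp port: %d\n", ipmib_tcp_local_port($skb, 0))
}'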
[ "ipmib_tcp_local_port:long(skb:long,SourceIsLocal:long)" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/systemtap_tapset_reference/api-ipmib-tcp-local-port
26.2. Configure JGroups (Library Mode)
26.2. Configure JGroups (Library Mode) Red Hat JBoss Data Grid must have an appropriate JGroups configuration in order to operate in clustered mode. Example 26.5. JGroups Programmatic Configuration Example 26.6. JGroups XML Configuration In either programmatic or XML configuration methods, JBoss Data Grid searches for jgroups.xml in the classpath before searching for an absolute path name if it is not found in the classpath. 26.2.1. JGroups Transport Protocols A transport protocol is the protocol at the bottom of a protocol stack. Transport protocols are responsible for sending and receiving messages from the network. Red Hat JBoss Data Grid ships with both UDP and TCP transport protocols. 26.2.1.1. The UDP Transport Protocol UDP is a transport protocol that uses: IP multicasting to send messages to all members of a cluster. UDP datagrams for unicast messages, which are sent to a single member. When the UDP transport is started, it opens a unicast socket and a multicast socket. The unicast socket is used to send and receive unicast messages, and the multicast socket sends and receives multicast messages. The physical address of the channel will be the same as the address and port number of the unicast socket. 26.2.1.2. The TCP Transport Protocol TCP/IP is a replacement transport for UDP in situations where IP multicast cannot be used, such as operations over a WAN where routers may discard IP multicast packets. TCP is a transport protocol used to send unicast and multicast messages. When sending multicast messages, TCP sends multiple unicast messages: each message to all cluster members is sent as one unicast message per member. As IP multicasting cannot be used to discover initial members, another mechanism must be used to find initial membership. Red Hat JBoss Data Grid's Hot Rod is a custom TCP client/server protocol. 26.2.1.3. Using the TCPPing Protocol Some networks only allow TCP to be used. The pre-configured default-configs/default-jgroups-tcp.xml includes the MPING protocol, which uses UDP multicast for discovery. When UDP multicast is not available, the MPING protocol has to be replaced by a different mechanism. The recommended alternative is the TCPPING protocol. The TCPPING configuration contains a static list of IP addresses which are contacted for node discovery. Example 26.7. Configure the JGroups Subsystem to Use TCPPING 26.2.2. Pre-Configured JGroups Files Red Hat JBoss Data Grid ships with a number of pre-configured JGroups files packaged in infinispan-embedded.jar , which are available on the classpath by default. In order to use one of these files, specify one of these file names instead of using jgroups.xml . The JGroups configuration files shipped with JBoss Data Grid are intended to be used as a starting point for a working project. JGroups will usually require fine-tuning for optimal network performance. The available configurations are: default-configs/default-jgroups-udp.xml default-configs/default-jgroups-tcp.xml default-configs/default-jgroups-ec2.xml 26.2.2.1. default-jgroups-udp.xml The default-configs/default-jgroups-udp.xml file is a pre-configured JGroups configuration in Red Hat JBoss Data Grid. The default-jgroups-udp.xml configuration uses UDP as a transport and UDP multicast for discovery. It is suitable for large clusters (over 8 nodes) and for use with Invalidation or Replication modes.
The behavior of some of these settings can be altered by adding certain system properties to the JVM at startup. The settings that can be configured are shown in the following table. Table 26.1. default-jgroups-udp.xml System Properties System Property Description Default Required? jgroups.udp.mcast_addr IP address to use for multicast (both for communications and discovery). Must be a valid Class D IP address, suitable for IP multicast. 228.6.7.8 No jgroups.udp.mcast_port Port to use for multicast socket 46655 No jgroups.udp.ip_ttl Specifies the time-to-live (TTL) for IP multicast packets. The value here refers to the number of network hops a packet is allowed to make before it is dropped 2 No 26.2.2.2. default-jgroups-tcp.xml The default-configs/default-jgroups-tcp.xml file is a pre-configured JGroups configuration in Red Hat JBoss Data Grid. The default-jgroups-tcp.xml configuration uses TCP as a transport and UDP multicast for discovery. It is generally only used where multicast UDP is not an option. TCP does not perform as well as UDP for clusters of eight or more nodes. Clusters of four nodes or fewer result in roughly the same level of performance for both UDP and TCP. As with other pre-configured JGroups files, the behavior of some of these settings can be altered by adding certain system properties to the JVM at startup. The settings that can be configured are shown in the following table. Table 26.2. default-jgroups-tcp.xml System Properties System Property Description Default Required? jgroups.tcp.address IP address to use for the TCP transport. 127.0.0.1 No jgroups.tcp.port Port to use for TCP socket 7800 No jgroups.udp.mcast_addr IP address to use for multicast (for discovery). Must be a valid Class D IP address, suitable for IP multicast. 228.6.7.8 No jgroups.udp.mcast_port Port to use for multicast socket 46655 No jgroups.udp.ip_ttl Specifies the time-to-live (TTL) for IP multicast packets. The value here refers to the number of network hops a packet is allowed to make before it is dropped 2 No 26.2.2.3. default-jgroups-ec2.xml The default-configs/default-jgroups-ec2.xml file is a pre-configured JGroups configuration in Red Hat JBoss Data Grid. The default-jgroups-ec2.xml configuration uses TCP as a transport and S3_PING for discovery. It is suitable on Amazon EC2 nodes where UDP multicast isn't available. As with other pre-configured JGroups files, the behavior of some of these settings can be altered by adding certain system properties to the JVM at startup. The settings that can be configured are shown in the following table. Table 26.3. default-jgroups-ec2.xml System Properties System Property Description Default Required? jgroups.tcp.address IP address to use for the TCP transport. 127.0.0.1 No jgroups.tcp.port Port to use for TCP socket 7800 No jgroups.s3.access_key The Amazon S3 access key used to access an S3 bucket Yes jgroups.s3.secret_access_key The Amazon S3 secret key used to access an S3 bucket Yes jgroups.s3.bucket Name of the Amazon S3 bucket to use. Must be unique and must already exist Yes jgroups.s3.pre_signed_delete_url The pre-signed URL to be used for the DELETE operation. Yes jgroups.s3.pre_signed_put_url The pre-signed URL to be used for the PUT operation. Yes jgroups.s3.prefix If set, S3_PING searches for a bucket with a name that starts with the prefix value. No
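A hedged example of overriding the UDP properties from Table 26.1 at JVM startup is shown below; the multicast address, TTL, and application jar name are placeholders, and the property names are the ones listed in the table.
# Sketch: override the default-jgroups-udp.xml defaults at startup (placeholder values).
java -Djgroups.udp.mcast_addr=239.255.10.10 \
     -Djgroups.udp.mcast_port=46655 \
     -Djgroups.udp.ip_ttl=3 \
     -jar my-datagrid-app.jar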
[ "GlobalConfiguration gc = new GlobalConfigurationBuilder() .transport() .defaultTransport() .addProperty(\"configurationFile\",\"jgroups.xml\") .build();", "<infinispan> <global> <transport> <properties> <property name=\"configurationFile\" value=\"jgroups.xml\" /> </properties> </transport> </global> <!-- Additional configuration elements here --> </infinispan>", "<TCP bind_port=\"7800\" /> <TCPPING initial_hosts=\"USD{jgroups.tcpping.initial_hosts:HostA[7800],HostB[7801]}\" port_range=\"1\" />" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/sect-Configure_JGroups_Library_Mode
Upgrade Red Hat Quay
Upgrade Red Hat Quay Red Hat Quay 3.9 Upgrade Red Hat Quay Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/red_hat_quay/3.9/html/upgrade_red_hat_quay/index
Chapter 1. About specialized hardware and driver enablement
Chapter 1. About specialized hardware and driver enablement Many applications require specialized hardware or software that depends on kernel modules or drivers. You can use driver containers to load out-of-tree kernel modules on Red Hat Enterprise Linux CoreOS (RHCOS) nodes. To deploy out-of-tree drivers during cluster installation, use the kmods-via-containers framework. To load drivers or kernel modules on an existing OpenShift Container Platform cluster, OpenShift Container Platform offers several tools: The Driver Toolkit is a container image that is a part of every OpenShift Container Platform release. It contains the kernel packages and other common dependencies that are needed to build a driver or kernel module. The Driver Toolkit can be used as a base image for driver container image builds on OpenShift Container Platform. The Special Resource Operator (SRO) orchestrates the building and management of driver containers to load kernel modules and drivers on an existing OpenShift or Kubernetes cluster. The Node Feature Discovery (NFD) Operator adds node labels for CPU capabilities, kernel version, PCIe device vendor IDs, and more.
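As a hedged illustration of locating the Driver Toolkit image mentioned above, the command below queries a release payload for it; the release pullspec and the pull secret path are placeholders to adapt to your environment.
# Sketch: find the Driver Toolkit image for a given release (placeholder pullspec).
oc adm release info quay.io/openshift-release-dev/ocp-release:4.10.0-x86_64 \
  --image-for=driver-toolkit -a ~/.pull-secret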
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.10/html/specialized_hardware_and_driver_enablement/about-hardware-enablement
Chapter 2. Using Embedded Caches
Chapter 2. Using Embedded Caches Embed Data Grid caches directly in your project for in-memory data storage. 2.1. Adding the EmbeddedCacheManager Bean Configure your application to use embedded caches. Procedure Add infinispan-spring-boot3-starter-embedded to your project's classpath to enable Embedded mode. Use the Spring @Autowired annotation to include an EmbeddedCacheManager bean in your Java configuration classes, as in the following example: private final EmbeddedCacheManager cacheManager; @Autowired public YourClassName(EmbeddedCacheManager cacheManager) { this.cacheManager = cacheManager; } You are now ready to use Data Grid caches directly within your application, as in the following example: cacheManager.getCache("testCache").put("testKey", "testValue"); System.out.println("Received value from cache: " + cacheManager.getCache("testCache").get("testKey")); 2.2. Cache Manager Configuration Beans You can customize the Cache Manager with the following configuration beans: InfinispanGlobalConfigurer InfinispanCacheConfigurer Configuration InfinispanConfigurationCustomizer InfinispanGlobalConfigurationCustomizer Note You can create one InfinispanGlobalConfigurer bean only. However you can create multiple configurations with the other beans. InfinispanCacheConfigurer Bean @Bean public InfinispanCacheConfigurer cacheConfigurer() { return manager -> { final Configuration ispnConfig = new ConfigurationBuilder() .clustering() .cacheMode(CacheMode.LOCAL) .build(); manager.defineConfiguration("local-sync-config", ispnConfig); }; } Configuration Bean Link the bean name to the cache that it configures, as follows: @Bean(name = "small-cache") public org.infinispan.configuration.cache.Configuration smallCache() { return new ConfigurationBuilder() .read(baseCache) .memory().size(1000L) .memory().evictionType(EvictionType.COUNT) .build(); } @Bean(name = "large-cache") public org.infinispan.configuration.cache.Configuration largeCache() { return new ConfigurationBuilder() .read(baseCache) .memory().size(2000L) .build(); } Customizer Beans @Bean public InfinispanGlobalConfigurationCustomizer globalCustomizer() { return builder -> builder.transport().clusterName(CLUSTER_NAME); } @Bean public InfinispanConfigurationCustomizer configurationCustomizer() { return builder -> builder.memory().evictionType(EvictionType.COUNT); } 2.3. Enabling Spring Cache Support With both embedded and remote caches, Data Grid provides an implementation of Spring Cache that you can enable. Procedure Add the @EnableCaching annotation to your application. If the Data Grid starter detects the: EmbeddedCacheManager bean, it instantiates a new SpringEmbeddedCacheManager . RemoteCacheManager bean, it instantiates a new SpringRemoteCacheManager . Reference Spring Cache Reference
[ "private final EmbeddedCacheManager cacheManager; @Autowired public YourClassName(EmbeddedCacheManager cacheManager) { this.cacheManager = cacheManager; }", "cacheManager.getCache(\"testCache\").put(\"testKey\", \"testValue\"); System.out.println(\"Received value from cache: \" + cacheManager.getCache(\"testCache\").get(\"testKey\"));", "@Bean public InfinispanCacheConfigurer cacheConfigurer() { return manager -> { final Configuration ispnConfig = new ConfigurationBuilder() .clustering() .cacheMode(CacheMode.LOCAL) .build(); manager.defineConfiguration(\"local-sync-config\", ispnConfig); }; }", "@Bean(name = \"small-cache\") public org.infinispan.configuration.cache.Configuration smallCache() { return new ConfigurationBuilder() .read(baseCache) .memory().size(1000L) .memory().evictionType(EvictionType.COUNT) .build(); } @Bean(name = \"large-cache\") public org.infinispan.configuration.cache.Configuration largeCache() { return new ConfigurationBuilder() .read(baseCache) .memory().size(2000L) .build(); }", "@Bean public InfinispanGlobalConfigurationCustomizer globalCustomizer() { return builder -> builder.transport().clusterName(CLUSTER_NAME); } @Bean public InfinispanConfigurationCustomizer configurationCustomizer() { return builder -> builder.memory().evictionType(EvictionType.COUNT); }" ]
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/data_grid_spring_boot_starter/starter-embedded-caches
5.8. A Word About Backups...
5.8. A Word About Backups... One of the most important factors when considering disk storage is that of backups. We have not covered this subject here, because an in-depth section ( Section 8.2, "Backups" ) has been dedicated to backups.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/introduction_to_system_administration/s1-storage-backups
Chapter 3. Setting up Maven locally
Chapter 3. Setting up Maven locally Typical Red Hat build of Apache Camel application development uses Maven to build and manage projects. The following topics describe how to set up Maven locally: Section 3.1, "Preparing to set up Maven" Section 3.2, "Adding Red Hat repositories to Maven" Section 3.3, "Using local Maven repositories" Section 3.4, "Setting Maven mirror using environmental variables or system properties" Section 3.5, "About Maven artifacts and coordinates" 3.1. Preparing to set up Maven Maven is a free, open source, build tool from Apache. Typically, you use Maven to build Fuse applications. Procedure Download Maven 3.8.6 or later from the Maven download page . Tip To verify that you have the correct Maven and JDK version installed, open a command terminal and enter the following command: Check the output to verify that Maven is version 3.8.6 or newer, and is using OpenJDK 17. Ensure that your system is connected to the Internet. While building a project, the default behavior is that Maven searches external repositories and downloads the required artifacts. Maven looks for repositories that are accessible over the Internet. You can change this behavior so that Maven searches only repositories that are on a local network. That is, Maven can run in an offline mode. In offline mode, Maven looks for artifacts in its local repository. See Section 3.3, "Using local Maven repositories" . 3.2. Adding Red Hat repositories to Maven To access artifacts that are in Red Hat Maven repositories, you need to add those repositories to Maven's settings.xml file. Maven looks for the settings.xml file in the .m2 directory of the user's home directory. If there is not a user specified settings.xml file, Maven uses the system-level settings.xml file at M2_HOME/conf/settings.xml . Prerequisite You know the location of the settings.xml file in which you want to add the Red Hat repositories. Procedure In the settings.xml file, add repository elements for the Red Hat repositories as shown in this example: Note If you are using the camel-jira component, also add the atlassian repository. <?xml version="1.0"?> <settings> <profiles> <profile> <id>extra-repos</id> <activation> <activeByDefault>true</activeByDefault> </activation> <repositories> <repository> <id>redhat-ga-repository</id> <url>https://maven.repository.redhat.com/ga</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository> <repository> <id>atlassian</id> <url>https://packages.atlassian.com/maven-external/</url> <name>atlassian external repo</name> <snapshots> <enabled>false</enabled> </snapshots> <releases> <enabled>true</enabled> </releases> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>redhat-ga-repository</id> <url>https://maven.repository.redhat.com/ga</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> </pluginRepositories> </profile> </profiles> <activeProfiles> <activeProfile>extra-repos</activeProfile> </activeProfiles> </settings> 3.3. Using local Maven repositories If you are running a container without an Internet connection, and you need to deploy an application that has dependencies that are not available offline, you can use the Maven dependency plug-in to download the application's dependencies into a Maven offline repository. You can then distribute this customized Maven offline repository to machines that do not have an Internet connection. 
Procedure In the project directory that contains the pom.xml file, download a repository for a Maven project by running a command such as the following: In this example, Maven dependencies and plug-ins that are required to build the project are downloaded to the /tmp/my-project directory. Distribute this customized Maven offline repository internally to any machines that do not have an Internet connection. 3.4. Setting Maven mirror using environmental variables or system properties When running the applications you need access to the artifacts that are in the Red Hat Maven repositories. These repositories are added to Maven's settings.xml file. Maven checks the following locations for settings.xml file: looks for the specified url if not found looks for USD{user.home}/.m2/settings.xml if not found looks for USD{maven.home}/conf/settings.xml if not found looks for USD{M2_HOME}/conf/settings.xml if no location is found, empty org.apache.maven.settings.Settings instance is created. 3.4.1. About Maven mirror Maven uses a set of remote repositories to access the artifacts, which are currently not available in local repository. The list of repositories almost always contains Maven Central repository, but for Red Hat Fuse, it also contains Maven Red Hat repositories. In some cases where it is not possible or allowed to access different remote repositories, you can use a mechanism of Maven mirrors. A mirror replaces a particular repository URL with a different one, so all HTTP traffic when remote artifacts are being searched for can be directed to a single URL. 3.4.2. Adding Maven mirror to settings.xml To set the Maven mirror, add the following section to Maven's settings.xml : No mirror is used if the above section is not found in the settings.xml file. To specify a global mirror without providing the XML configuration, you can use either system property or environmental variables. 3.4.3. Setting Maven mirror using environmental variable or system property To set the Maven mirror using either environmental variable or system property, you can add: Environmental variable called MAVEN_MIRROR_URL to bin/setenv file System property called mavenMirrorUrl to etc/system.properties file 3.4.4. Using Maven options to specify Maven mirror url To use an alternate Maven mirror url, other than the one specified by environmental variables or system property, use the following maven options when running the application: -DmavenMirrorUrl=mirrorId::mirrorUrl for example, -DmavenMirrorUrl=my-mirror::http://mirror.net/repository -DmavenMirrorUrl=mirrorUrl for example, -DmavenMirrorUrl=http://mirror.net/repository . In this example, the <id> of the <mirror> is just a mirror. 3.5. About Maven artifacts and coordinates In the Maven build system, the basic building block is an artifact . After a build, the output of an artifact is typically an archive, such as a JAR or WAR file. A key aspect of Maven is the ability to locate artifacts and manage the dependencies between them. A Maven coordinate is a set of values that identifies the location of a particular artifact. A basic coordinate has three values in the following form: groupId:artifactId:version Sometimes Maven augments a basic coordinate with a packaging value or with both a packaging value and a classifier value. A Maven coordinate can have any one of the following forms: Here are descriptions of the values: groupdId Defines a scope for the name of the artifact. You would typically use all or part of a package name as a group ID. 
For example, org.fusesource.example . artifactId Defines the artifact name relative to the group ID. version Specifies the artifact's version. A version number can have up to four parts: n.n.n.n , where the last part of the version number can contain non-numeric characters. For example, the last part of 1.0-SNAPSHOT is the alphanumeric substring, 0-SNAPSHOT . packaging Defines the packaged entity that is produced when you build the project. For OSGi projects, the packaging is bundle . The default value is jar . classifier Enables you to distinguish between artifacts that were built from the same POM, but have different content. Elements in an artifact's POM file define the artifact's group ID, artifact ID, packaging, and version, as shown here: <project ... > ... <groupId>org.fusesource.example</groupId> <artifactId>bundle-demo</artifactId> <packaging>bundle</packaging> <version>1.0-SNAPSHOT</version> ... </project> To define a dependency on the preceding artifact, you would add the following dependency element to a POM file: <project ... > ... <dependencies> <dependency> <groupId>org.fusesource.example</groupId> <artifactId>bundle-demo</artifactId> <version>1.0-SNAPSHOT</version> </dependency> </dependencies> ... </project> Note It is not necessary to specify the bundle package type in the preceding dependency, because a bundle is just a particular kind of JAR file and jar is the default Maven package type. If you do need to specify the packaging type explicitly in a dependency, however, you can use the type element.
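To see a coordinate in action, the following hedged example asks Maven to resolve the artifact discussed above into the local repository. The dependency:get goal is standard Maven, but the artifact itself is only the illustrative one from this section and is not expected to exist in public repositories.
# Sketch: resolve a groupId:artifactId:version coordinate into the local repository.
mvn dependency:get -Dartifact=org.fusesource.example:bundle-demo:1.0-SNAPSHOT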
[ "mvn --version", "<?xml version=\"1.0\"?> <settings> <profiles> <profile> <id>extra-repos</id> <activation> <activeByDefault>true</activeByDefault> </activation> <repositories> <repository> <id>redhat-ga-repository</id> <url>https://maven.repository.redhat.com/ga</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </repository> <repository> <id>atlassian</id> <url>https://packages.atlassian.com/maven-external/</url> <name>atlassian external repo</name> <snapshots> <enabled>false</enabled> </snapshots> <releases> <enabled>true</enabled> </releases> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>redhat-ga-repository</id> <url>https://maven.repository.redhat.com/ga</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> </pluginRepositories> </profile> </profiles> <activeProfiles> <activeProfile>extra-repos</activeProfile> </activeProfiles> </settings>", "mvn org.apache.maven.plugins:maven-dependency-plugin:3.1.0:go-offline -Dmaven.repo.local=/tmp/my-project", "<mirror> <id>all</id> <mirrorOf>*</mirrorOf> <url>http://host:port/path</url> </mirror>", "groupId:artifactId:version groupId:artifactId:packaging:version groupId:artifactId:packaging:classifier:version", "<project ... > <groupId>org.fusesource.example</groupId> <artifactId>bundle-demo</artifactId> <packaging>bundle</packaging> <version>1.0-SNAPSHOT</version> </project>", "<project ... > <dependencies> <dependency> <groupId>org.fusesource.example</groupId> <artifactId>bundle-demo</artifactId> <version>1.0-SNAPSHOT</version> </dependency> </dependencies> </project>" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/getting_started_with_red_hat_build_of_apache_camel_for_quarkus/set-up-maven-locally
Chapter 10. Scoping tokens
Chapter 10. Scoping tokens 10.1. About scoping tokens You can create scoped tokens to delegate some of your permissions to another user or service account. For example, a project administrator might want to delegate the power to create pods. A scoped token is a token that identifies as a given user but is limited to certain actions by its scope. Only a user with the dedicated-admin role can create scoped tokens. Scopes are evaluated by converting the set of scopes for a token into a set of PolicyRules . Then, the request is matched against those rules. The request attributes must match at least one of the scope rules to be passed to the "normal" authorizer for further authorization checks. 10.1.1. User scopes User scopes are focused on getting information about a given user. They are intent-based, so the rules are automatically created for you: user:full - Allows full read/write access to the API with all of the user's permissions. user:info - Allows read-only access to information about the user, such as name and groups. user:check-access - Allows access to self-localsubjectaccessreviews and self-subjectaccessreviews . These are the variables where you pass an empty user and groups in your request object. user:list-projects - Allows read-only access to list the projects the user has access to. 10.1.2. Role scope The role scope allows you to have the same level of access as a given role filtered by namespace. role:<cluster-role name>:<namespace or * for all> - Limits the scope to the rules specified by the cluster-role, but only in the specified namespace . Note Caveat: This prevents escalating access. Even if the role allows access to resources like secrets, rolebindings, and roles, this scope will deny access to those resources. This helps prevent unexpected escalations. Many people do not think of a role like edit as being an escalating role, but with access to a secret it is. role:<cluster-role name>:<namespace or * for all>:! - This is similar to the example above, except that including the bang causes this scope to allow escalating access. 10.2. Adding unauthenticated groups to cluster roles As a cluster administrator, you can add unauthenticated users to the following cluster roles in Red Hat OpenShift Service on AWS by creating a cluster role binding. Unauthenticated users do not have access to non-public cluster roles. This should only be done in specific use cases when necessary. You can add unauthenticated users to the following cluster roles: system:scope-impersonation system:webhook system:oauth-token-deleter self-access-reviewer Important Always verify compliance with your organization's security standards when modifying unauthenticated access. Prerequisites You have access to the cluster as a user with the cluster-admin role. You have installed the OpenShift CLI ( oc ). Procedure Create a YAML file named add-<cluster_role>-unauth.yaml and add the following content: apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: annotations: rbac.authorization.kubernetes.io/autoupdate: "true" name: <cluster_role>access-unauthenticated roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: <cluster_role> subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: system:unauthenticated Apply the configuration by running the following command: USD oc apply -f add-<cluster_role>.yaml
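After applying the binding above, a quick verification looks like the following; substitute the same <cluster_role> value that you used in the YAML file.
# Verify the binding exists and that system:unauthenticated is listed as a subject.
oc get clusterrolebinding <cluster_role>access-unauthenticated -o yaml
oc describe clusterrolebinding <cluster_role>access-unauthenticated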
[ "apiVersion: rbac.authorization.k8s.io/v1 kind: ClusterRoleBinding metadata: annotations: rbac.authorization.kubernetes.io/autoupdate: \"true\" name: <cluster_role>access-unauthenticated roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: <cluster_role> subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: system:unauthenticated", "oc apply -f add-<cluster_role>.yaml" ]
https://docs.redhat.com/en/documentation/red_hat_openshift_service_on_aws/4/html/authentication_and_authorization/tokens-scoping
11.5. Preparing and Adding Block Storage
11.5. Preparing and Adding Block Storage 11.5.1. Preparing iSCSI Storage Red Hat Virtualization supports iSCSI storage, which is a storage domain created from a volume group made up of LUNs. Volume groups and LUNs cannot be attached to more than one storage domain at a time. For information on setting up and configuring iSCSI storage, see Online Storage Management in the Red Hat Enterprise Linux 7 Storage Administration Guide . Important If you are using block storage and you intend to deploy virtual machines on raw devices or direct LUNs and to manage them with the Logical Volume Manager, you must create a filter to hide the guest logical volumes. This will prevent guest logical volumes from being activated when the host is booted, a situation that could lead to stale logical volumes and cause data corruption. See https://access.redhat.com/solutions/2662261 for details. Important Red Hat Virtualization currently does not support block storage with a block size of 4K. You must configure block storage in legacy (512b block) mode. Important If your host is booting from SAN storage and loses connectivity to the storage, the storage file systems become read-only and remain in this state after connectivity is restored. To prevent this situation, Red Hat recommends adding a drop-in multipath configuration file on the root file system of the SAN for the boot LUN to ensure that it is queued when there is a connection: 11.5.2. Adding iSCSI Storage This procedure shows you how to attach existing iSCSI storage to your Red Hat Virtualization environment as a data domain. Procedure Click Storage Domains . Click New Domain . Enter the Name of the new storage domain. Select a Data Center from the drop-down list. Select Data as the Domain Function and iSCSI as the Storage Type . Select an active host as the Host . Important Communication to the storage domain is from the selected host and not directly from the Manager. Therefore, all hosts must have access to the storage device before the storage domain can be configured. The Manager can map iSCSI targets to LUNs or LUNs to iSCSI targets. The New Domain window automatically displays known targets with unused LUNs when the iSCSI storage type is selected. If the target that you are using to add storage does not appear, you can use target discovery to find it; otherwise proceed to the step. Click Discover Targets to enable target discovery options. When targets have been discovered and logged in to, the New Domain window automatically displays targets with LUNs unused by the environment. Note LUNs used externally to the environment are also displayed. You can use the Discover Targets options to add LUNs on many targets or multiple paths to the same LUNs. Enter the FQDN or IP address of the iSCSI host in the Address field. Enter the port with which to connect to the host when browsing for targets in the Port field. The default is 3260 . If CHAP is used to secure the storage, select the User Authentication check box. Enter the CHAP user name and CHAP password . Note You can define credentials for an iSCSI target for a specific host with the REST API. See StorageServerConnectionExtensions: add in the REST API Guide for more information. Click Discover . Select one or more targets from the discovery results and click Login for one target or Login All for multiple targets. Important If more than one path access is required, you must discover and log in to the target through all the required paths. 
Modifying a storage domain to add additional paths is currently not supported. Click the + button to the desired target. This expands the entry and displays all unused LUNs attached to the target. Select the check box for each LUN that you are using to create the storage domain. Optionally, you can configure the advanced parameters: Click Advanced Parameters . Enter a percentage value into the Warning Low Space Indicator field. If the free space available on the storage domain is below this percentage, warning messages are displayed to the user and logged. Enter a GB value into the Critical Space Action Blocker field. If the free space available on the storage domain is below this value, error messages are displayed to the user and logged, and any new action that consumes space, even temporarily, will be blocked. Select the Wipe After Delete check box to enable the wipe after delete option. This option can be edited after the domain is created, but doing so will not change the wipe after delete property of disks that already exist. Select the Discard After Delete check box to enable the discard after delete option. This option can be edited after the domain is created. This option is only available to block storage domains. Click OK . If you have configured multiple storage connection paths to the same target, follow the procedure in Configuring iSCSI Multipathing to complete iSCSI bonding. If you want to migrate your current storage network to an iSCSI bond, see Migrating a Logical Network to an iSCSI Bond . 11.5.3. Configuring iSCSI Multipathing iSCSI multipathing enables you to create and manage groups of logical networks and iSCSI storage connections. Multiple network paths between the hosts and iSCSI storage prevent host downtime caused by network path failure. The Manager connects each host in the data center to each target, using the NICs or VLANs that are assigned to the logical networks in the iSCSI bond. You can create an iSCSI bond with multiple targets and logical networks for redundancy. Prerequisites One or more iSCSI targets One or more logical networks that meet the following requirements: Not defined as Required or VM Network Assigned to a host interface Assigned a static IP address in the same VLAN and subnet as the other logical networks in the iSCSI bond Procedure Click Compute Data Centers . Click the data center name to open the details view. In the iSCSI Multipathing tab, click Add . In the Add iSCSI Bond window, enter a Name and a Description . Select a logical network from Logical Networks and a storage domain from Storage Targets . You must select all the paths to the same target. Click OK . The hosts in the data center are connected to the iSCSI targets through the logical networks in the iSCSI bond. 11.5.4. Migrating a Logical Network to an iSCSI Bond If you have a logical network that you created for iSCSI traffic and configured on top of an existing network bond , you can migrate it to an iSCSI bond on the same subnet without disruption or downtime. Procedure Modify the current logical network so that it is not Required : Click Compute Clusters . Click the cluster name to open the details view. In the Logical Networks tab, select the current logical network ( net-1 ) and click Manage Networks . Clear the Require check box and click OK . Create a new logical network that is not Required and not VM network : Click Add Network to open the New Logical Network window. In the General tab, enter the Name ( net-2 ) and clear the VM network check box. 
In the Cluster tab, clear the Require check box and click OK . Remove the current network bond and reassign the logical networks: Click Compute Hosts . Click the host name to open the details view. In the Network Interfaces tab, click Setup Host Networks . Drag net-1 to the right to unassign it. Drag the current bond to the right to remove it. Drag net-1 and net-2 to the left to assign them to physical interfaces. Click the pencil icon of net-2 to open the Edit Network window. In the IPV4 tab, select Static . Enter the IP and Netmask/Routing Prefix of the subnet and click OK . Create the iSCSI bond: Click Compute Data Centers . Click the data center name to open the details view. In the iSCSI Multipathing tab, click Add . In the Add iSCSI Bond window, enter a Name , select the networks, net-1 and net-2 , and click OK . Your data center has an iSCSI bond containing the old and new logical networks. 11.5.5. Preparing FCP Storage Red Hat Virtualization supports SAN storage by creating a storage domain from a volume group made of pre-existing LUNs. Neither volume groups nor LUNs can be attached to more than one storage domain at a time. Red Hat Virtualization system administrators need a working knowledge of Storage Area Networks (SAN) concepts. SAN usually uses Fibre Channel Protocol (FCP) for traffic between hosts and shared external storage. For this reason, SAN may occasionally be referred to as FCP storage. For information on setting up and configuring FCP or multipathing on Red Hat Enterprise Linux, see the Storage Administration Guide and DM Multipath Guide . Important If you are using block storage and you intend to deploy virtual machines on raw devices or direct LUNs and to manage them with the Logical Volume Manager, you must create a filter to hide the guest logical volumes. This will prevent guest logical volumes from being activated when the host is booted, a situation that could lead to stale logical volumes and cause data corruption. See https://access.redhat.com/solutions/2662261 for details. Important Red Hat Virtualization currently does not support block storage with a block size of 4K. You must configure block storage in legacy (512b block) mode. Important If your host is booting from SAN storage and loses connectivity to the storage, the storage file systems become read-only and remain in this state after connectivity is restored. To prevent this situation, Red Hat recommends adding a drop-in multipath configuration file on the root file system of the SAN for the boot LUN to ensure that it is queued when there is a connection: 11.5.6. Adding FCP Storage This procedure shows you how to attach existing FCP storage to your Red Hat Virtualization environment as a data domain. Procedure Click Storage Domains . Click New Domain . Enter the Name of the storage domain. Select an FCP Data Center from the drop-down list. If you do not yet have an appropriate FCP data center, select (none) . Select the Domain Function and the Storage Type from the drop-down lists. The storage domain types that are not compatible with the chosen data center are not available. Select an active host in the Host field. If this is not the first data domain in a data center, you must select the data center's SPM host. Important All communication to the storage domain is through the selected host and not directly from the Red Hat Virtualization Manager. At least one active host must exist in the system and be attached to the chosen data center. 
All hosts must have access to the storage device before the storage domain can be configured. The New Domain window automatically displays known targets with unused LUNs when Fibre Channel is selected as the storage type. Select the LUN ID check box to select all of the available LUNs. Optionally, you can configure the advanced parameters. Click Advanced Parameters . Enter a percentage value into the Warning Low Space Indicator field. If the free space available on the storage domain is below this percentage, warning messages are displayed to the user and logged. Enter a GB value into the Critical Space Action Blocker field. If the free space available on the storage domain is below this value, error messages are displayed to the user and logged, and any new action that consumes space, even temporarily, will be blocked. Select the Wipe After Delete check box to enable the wipe after delete option. This option can be edited after the domain is created, but doing so will not change the wipe after delete property of disks that already exist. Select the Discard After Delete check box to enable the discard after delete option. This option can be edited after the domain is created. This option is only available to block storage domains. Click OK . The new FCP data domain remains in a Locked status while it is being prepared for use. When ready, it is automatically attached to the data center. 11.5.7. Increasing iSCSI or FCP Storage There are several ways to increase iSCSI or FCP storage size: Add an existing LUN to the current storage domain. Create a new storage domain with new LUNs and add it to an existing data center. See Section 11.5.2, "Adding iSCSI Storage" . Expand the storage domain by resizing the underlying LUNs. For information about creating, configuring, or resizing iSCSI storage on Red Hat Enterprise Linux 7 systems, see the Red Hat Enterprise Linux 7 Storage Administration Guide . The following procedure explains how to expand storage area network (SAN) storage by adding a new LUN to an existing storage domain. Prerequisites The storage domain's status must be UP . The LUN must be accessible to all the hosts whose status is UP , or else the operation will fail and the LUN will not be added to the domain. The hosts themselves, however, will not be affected. If a newly added host, or a host that is coming out of maintenance or a Non Operational state, cannot access the LUN, the host's state will be Non Operational . Increasing an Existing iSCSI or FCP Storage Domain Click Storage Domains and select an iSCSI or FCP domain. Click Manage Domain . Click Targets > LUNs and click the Discover Targets expansion button. Enter the connection information for the storage server and click Discover to initiate the connection. Click LUNs > Targets and select the check box of the newly available LUN. Click OK to add the LUN to the selected storage domain. This will increase the storage domain by the size of the added LUN. When expanding the storage domain by resizing the underlying LUNs, the LUNs must also be refreshed in the Administration Portal. Refreshing the LUN Size Click Storage Domains and select an iSCSI or FCP domain. Click Manage Domain . Click on LUNs > Targets . In the Additional Size column, click the Add Additional_Storage_Size button of the LUN to refresh. Click OK to refresh the LUN to indicate the new storage size. 11.5.8. Reusing LUNs LUNs cannot be reused, as is, to create a storage domain or virtual disk. 
If you try to reuse the LUNs, the Administration Portal displays the following error message: A self-hosted engine shows the following error during installation: Before the LUN can be reused, the old partitioning table must be cleared. Clearing the Partition Table from a LUN Important You must run this procedure on the correct LUN so that you do not inadvertently destroy data. Run the dd command with the ID of the LUN that you want to reuse, the maximum number of bytes to read and write at a time, and the number of input blocks to copy:
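The generic form of the command is repeated here for convenience, followed by a concrete hedged instance; the WWID in the second command is a placeholder, and because dd destroys whatever is on the device you should confirm the LUN ID with multipath -ll first.
# Generic form (LUN_ID is the multipath device of the LUN being reused):
dd if=/dev/zero of=/dev/mapper/LUN_ID bs=1M count=200 oflag=direct
# Concrete instance with a placeholder WWID -- confirm the device with 'multipath -ll'
# before running, because this overwrites the start of the LUN:
multipath -ll
dd if=/dev/zero of=/dev/mapper/360000000000000000000000000000001 bs=1M count=200 oflag=direct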
[ "cat /etc/multipath/conf.d/host.conf multipaths { multipath { wwid boot_LUN_wwid no_path_retry queue }", "cat /etc/multipath/conf.d/host.conf multipaths { multipath { wwid boot_LUN_wwid no_path_retry queue }", "Physical device initialization failed. Please check that the device is empty and accessible by the host.", "[ ERROR ] Error creating Volume Group: Failed to initialize physical device: (\"[u'/dev/mapper/000000000000000000000000000000000']\",) [ ERROR ] Failed to execute stage 'Misc configuration': Failed to initialize physical device: (\"[u'/dev/mapper/000000000000000000000000000000000']\",)", "dd if=/dev/zero of=/dev/mapper/LUN_ID bs=1M count=200 oflag=direct" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/administration_guide/sect-Preparing_and_Adding_Block_Storage
Chapter 16. Troubleshooting Network Observability
Chapter 16. Troubleshooting Network Observability To assist in troubleshooting Network Observability issues, you can perform some troubleshooting actions. 16.1. Using the must-gather tool You can use the must-gather tool to collect information about the Network Observability Operator resources and cluster-wide resources, such as pod logs, FlowCollector , and webhook configurations. Procedure Navigate to the directory where you want to store the must-gather data. Run the following command to collect cluster-wide must-gather resources: USD oc adm must-gather --image-stream=openshift/must-gather \ --image=quay.io/netobserv/must-gather 16.2. Configuring network traffic menu entry in the OpenShift Container Platform console Manually configure the network traffic menu entry in the OpenShift Container Platform console when the network traffic menu entry is not listed in Observe menu in the OpenShift Container Platform console. Prerequisites You have installed OpenShift Container Platform version 4.10 or newer. Procedure Check if the spec.consolePlugin.register field is set to true by running the following command: USD oc -n netobserv get flowcollector cluster -o yaml Example output Optional: Add the netobserv-plugin plugin by manually editing the Console Operator config: USD oc edit console.operator.openshift.io cluster Example output Optional: Set the spec.consolePlugin.register field to true by running the following command: USD oc -n netobserv edit flowcollector cluster -o yaml Example output Ensure the status of console pods is running by running the following command: USD oc get pods -n openshift-console -l app=console Restart the console pods by running the following command: USD oc delete pods -n openshift-console -l app=console Clear your browser cache and history. Check the status of Network Observability plugin pods by running the following command: USD oc get pods -n netobserv -l app=netobserv-plugin Example output Check the logs of the Network Observability plugin pods by running the following command: USD oc logs -n netobserv -l app=netobserv-plugin Example output time="2022-12-13T12:06:49Z" level=info msg="Starting netobserv-console-plugin [build version: , build date: 2022-10-21 15:15] at log level info" module=main time="2022-12-13T12:06:49Z" level=info msg="listening on https://:9001" module=server 16.3. Flowlogs-Pipeline does not consume network flows after installing Kafka If you deployed the flow collector first with deploymentModel: KAFKA and then deployed Kafka, the flow collector might not connect correctly to Kafka. Manually restart the flow-pipeline pods where Flowlogs-pipeline does not consume network flows from Kafka. Procedure Delete the flow-pipeline pods to restart them by running the following command: USD oc delete pods -n netobserv -l app=flowlogs-pipeline-transformer 16.4. Failing to see network flows from both br-int and br-ex interfaces br-ex` and br-int are virtual bridge devices operated at OSI layer 2. The eBPF agent works at the IP and TCP levels, layers 3 and 4 respectively. You can expect that the eBPF agent captures the network traffic passing through br-ex and br-int , when the network traffic is processed by other interfaces such as physical host or virtual pod interfaces. If you restrict the eBPF agent network interfaces to attach only to br-ex and br-int , you do not see any network flow. Manually remove the part in the interfaces or excludeInterfaces that restricts the network interfaces to br-int and br-ex . 
Procedure Remove the interfaces: [ 'br-int', 'br-ex' ] field. This allows the agent to fetch information from all the interfaces. Alternatively, you can specify the Layer-3 interface for example, eth0 . Run the following command: USD oc edit -n netobserv flowcollector.yaml -o yaml Example output 1 Specifies the network interfaces. 16.5. Network Observability controller manager pod runs out of memory You can increase memory limits for the Network Observability operator by editing the spec.config.resources.limits.memory specification in the Subscription object. Procedure In the web console, navigate to Operators Installed Operators Click Network Observability and then select Subscription . From the Actions menu, click Edit Subscription . Alternatively, you can use the CLI to open the YAML configuration for the Subscription object by running the following command: USD oc edit subscription netobserv-operator -n openshift-netobserv-operator Edit the Subscription object to add the config.resources.limits.memory specification and set the value to account for your memory requirements. See the Additional resources for more information about resource considerations: apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: netobserv-operator namespace: openshift-netobserv-operator spec: channel: stable config: resources: limits: memory: 800Mi 1 requests: cpu: 100m memory: 100Mi installPlanApproval: Automatic name: netobserv-operator source: redhat-operators sourceNamespace: openshift-marketplace startingCSV: <network_observability_operator_latest_version> 2 1 For example, you can increase the memory limit to 800Mi . 2 This value should not be edited, but note that it changes depending on the most current release of the Operator. 16.6. Running custom queries to Loki For troubleshooting, can run custom queries to Loki. There are two examples of ways to do this, which you can adapt according to your needs by replacing the <api_token> with your own. Note These examples use the netobserv namespace for the Network Observability Operator and Loki deployments. Additionally, the examples assume that the LokiStack is named loki . You can optionally use a different namespace and naming by adapting the examples, specifically the -n netobserv or the loki-gateway URL. Prerequisites Installed Loki Operator for use with Network Observability Operator Procedure To get all available labels, run the following: USD oc exec deployment/netobserv-plugin -n netobserv -- curl -G -s -H 'X-Scope-OrgID:network' -H 'Authorization: Bearer <api_token>' -k https://loki-gateway-http.netobserv.svc:8080/api/logs/v1/network/loki/api/v1/labels | jq To get all flows from the source namespace, my-namespace , run the following: USD oc exec deployment/netobserv-plugin -n netobserv -- curl -G -s -H 'X-Scope-OrgID:network' -H 'Authorization: Bearer <api_token>' -k https://loki-gateway-http.netobserv.svc:8080/api/logs/v1/network/loki/api/v1/query --data-urlencode 'query={SrcK8S_Namespace="my-namespace"}' | jq Additional resources Resource considerations 16.7. Troubleshooting Loki ResourceExhausted error Loki may return a ResourceExhausted error when network flow data sent by Network Observability exceeds the configured maximum message size. If you are using the Red Hat Loki Operator, this maximum message size is configured to 100 MiB. Procedure Navigate to Operators Installed Operators , viewing All projects from the Project drop-down menu. In the Provided APIs list, select the Network Observability Operator. 
Click the Flow Collector then the YAML view tab. If you are using the Loki Operator, check that the spec.loki.batchSize value does not exceed 98 MiB. If you are using a Loki installation method that is different from the Red Hat Loki Operator, such as Grafana Loki, verify that the grpc_server_max_recv_msg_size Grafana Loki server setting is higher than the FlowCollector resource spec.loki.batchSize value. If it is not, you must either increase the grpc_server_max_recv_msg_size value, or decrease the spec.loki.batchSize value so that it is lower than the limit. Click Save if you edited the FlowCollector . 16.8. Loki empty ring error The Loki "empty ring" error results in flows not being stored in Loki and not showing up in the web console. This error might happen in various situations. A single workaround to address them all does not exist. There are some actions you can take to investigate the logs in your Loki pods, and verify that the LokiStack is healthy and ready. Some of the situations where this error is observed are as follows: After a LokiStack is uninstalled and reinstalled in the same namespace, old PVCs are not removed, which can cause this error. Action : You can try removing the LokiStack again, removing the PVC, then reinstalling the LokiStack . After a certificate rotation, this error can prevent communication with the flowlogs-pipeline and console-plugin pods. Action : You can restart the pods to restore the connectivity. 16.9. Resource troubleshooting 16.10. LokiStack rate limit errors A rate-limit placed on the Loki tenant can result in potential temporary loss of data and a 429 error: Per stream rate limit exceeded (limit:xMB/sec) while attempting to ingest for stream . You might consider having an alert set to notify you of this error. For more information, see "Creating Loki rate limit alerts for the NetObserv dashboard" in the Additional resources of this section. You can update the LokiStack CRD with the perStreamRateLimit and perStreamRateLimitBurst specifications, as shown in the following procedure. Procedure Navigate to Operators Installed Operators , viewing All projects from the Project dropdown. Look for Loki Operator , and select the LokiStack tab. Create or edit an existing LokiStack instance using the YAML view to add the perStreamRateLimit and perStreamRateLimitBurst specifications: apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: loki namespace: netobserv spec: limits: global: ingestion: perStreamRateLimit: 6 1 perStreamRateLimitBurst: 30 2 tenants: mode: openshift-network managementState: Managed 1 The default value for perStreamRateLimit is 3 . 2 The default value for perStreamRateLimitBurst is 15 . Click Save . Verification Once you update the perStreamRateLimit and perStreamRateLimitBurst specifications, the pods in your cluster restart and the 429 rate-limit error no longer occurs. 16.11. Running a large query results in Loki errors When running large queries for a long time, Loki errors can occur, such as a timeout or too many outstanding requests . There is no complete corrective for this issue, but there are several ways to mitigate it: Adapt your query to add an indexed filter With Loki queries, you can query on both indexed and non-indexed fields or labels. Queries that contain filters on labels perform better. For example, if you query for a particular Pod, which is not an indexed field, you can add its Namespace to the query. 
The list of indexed fields can be found in the "Network flows format reference", in the Loki label column. Consider querying Prometheus rather than Loki Prometheus is a better fit than Loki to query on large time ranges. However, whether or not you can use Prometheus instead of Loki depends on the use case. For example, queries on Prometheus are much faster than on Loki, and large time ranges do not impact performance. But Prometheus metrics do not contain as much information as flow logs in Loki. The Network Observability OpenShift web console automatically favors Prometheus over Loki if the query is compatible; otherwise, it defaults to Loki. If your query does not run against Prometheus, you can change some filters or aggregations to make the switch. In the OpenShift web console, you can force the use of Prometheus. An error message is displayed when incompatible queries fail, which can help you figure out which labels to change to make the query compatible. For example, changing a filter or an aggregation from Resource or Pods to Owner . Consider using the FlowMetrics API to create your own metric If the data that you need isn't available as a Prometheus metric, you can use the FlowMetrics API to create your own metric. For more information, see "FlowMetrics API Reference" and "Configuring custom metrics by using FlowMetric API". Configure Loki to improve the query performance If the problem persists, you can consider configuring Loki to improve the query performance. Some options depend on the installation mode you used for Loki, such as using the Operator and LokiStack , or Monolithic mode, or Microservices mode. In LokiStack or Microservices modes, try increasing the number of querier replicas . Increase the query timeout . You must also increase the Network Observability read timeout to Loki in the FlowCollector spec.loki.readTimeout . Additional resources Network flows format reference FlowMetric API reference Configuring custom metrics by using FlowMetric API
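As a minimal sketch of the last point, the read timeout can be raised directly from the command line; the 2m value here is only an assumption, so choose a timeout that matches your longest expected queries: USD oc patch flowcollector cluster --type=merge -p '{"spec":{"loki":{"readTimeout":"2m"}}}' The field path spec.loki.readTimeout is the one mentioned above; the patch leaves the rest of the FlowCollector resource unchanged.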
[ "oc adm must-gather --image-stream=openshift/must-gather --image=quay.io/netobserv/must-gather", "oc -n netobserv get flowcollector cluster -o yaml", "apiVersion: flows.netobserv.io/v1alpha1 kind: FlowCollector metadata: name: cluster spec: consolePlugin: register: false", "oc edit console.operator.openshift.io cluster", "spec: plugins: - netobserv-plugin", "oc -n netobserv edit flowcollector cluster -o yaml", "apiVersion: flows.netobserv.io/v1alpha1 kind: FlowCollector metadata: name: cluster spec: consolePlugin: register: true", "oc get pods -n openshift-console -l app=console", "oc delete pods -n openshift-console -l app=console", "oc get pods -n netobserv -l app=netobserv-plugin", "NAME READY STATUS RESTARTS AGE netobserv-plugin-68c7bbb9bb-b69q6 1/1 Running 0 21s", "oc logs -n netobserv -l app=netobserv-plugin", "time=\"2022-12-13T12:06:49Z\" level=info msg=\"Starting netobserv-console-plugin [build version: , build date: 2022-10-21 15:15] at log level info\" module=main time=\"2022-12-13T12:06:49Z\" level=info msg=\"listening on https://:9001\" module=server", "oc delete pods -n netobserv -l app=flowlogs-pipeline-transformer", "oc edit -n netobserv flowcollector.yaml -o yaml", "apiVersion: flows.netobserv.io/v1alpha1 kind: FlowCollector metadata: name: cluster spec: agent: type: EBPF ebpf: interfaces: [ 'br-int', 'br-ex' ] 1", "oc edit subscription netobserv-operator -n openshift-netobserv-operator", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: netobserv-operator namespace: openshift-netobserv-operator spec: channel: stable config: resources: limits: memory: 800Mi 1 requests: cpu: 100m memory: 100Mi installPlanApproval: Automatic name: netobserv-operator source: redhat-operators sourceNamespace: openshift-marketplace startingCSV: <network_observability_operator_latest_version> 2", "oc exec deployment/netobserv-plugin -n netobserv -- curl -G -s -H 'X-Scope-OrgID:network' -H 'Authorization: Bearer <api_token>' -k https://loki-gateway-http.netobserv.svc:8080/api/logs/v1/network/loki/api/v1/labels | jq", "oc exec deployment/netobserv-plugin -n netobserv -- curl -G -s -H 'X-Scope-OrgID:network' -H 'Authorization: Bearer <api_token>' -k https://loki-gateway-http.netobserv.svc:8080/api/logs/v1/network/loki/api/v1/query --data-urlencode 'query={SrcK8S_Namespace=\"my-namespace\"}' | jq", "apiVersion: loki.grafana.com/v1 kind: LokiStack metadata: name: loki namespace: netobserv spec: limits: global: ingestion: perStreamRateLimit: 6 1 perStreamRateLimitBurst: 30 2 tenants: mode: openshift-network managementState: Managed" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/network_observability/installing-troubleshooting
A.2. Comparing Entries
A.2. Comparing Entries ldapcompare checks entries to see if the specified entry or entries contain an attribute with a specific value. For example, this checks to see if an entry has an sn value of Smith: The compare attribute can be specified in one of three ways: A single attribute:value statement passed on the command line directly A single attribute::base64value statement passed on the command line directly, for attributes like jpegPhoto or to verify certificates or CRLs An attribute:file statement that points to a file containing a list of comparison values for the attribute, and the script iterates through the list The compare operation itself has to be run against a specific entry or group of entries. A single entry DN can be passed through the command line, or a list of DNs to be compared can be given using the -f option. Example A.1. Comparing One Attribute Value to One Entry Both the attribute-value comparison and the DN are passed with the script. Example A.2. Comparing a List of Attribute Values from a File First, create a file of possible sn values. Then, create a list of entries to compare the values to. Then run the script.
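For example, the single-entry form and the file-based form look like the following; both commands are taken from the command reference below and assume the directory server at server.example.com: ldapcompare -D "cn=Directory Manager" -W -p 389 -h server.example.com -x sn:smith uid=jsmith,ou=people,dc=example,dc=com ldapcompare -D "cn=Directory Manager" -W -p 389 -h server.example.com -x sn:/tmp/surnames.txt -f /tmp/names.txt The second command iterates over every sn value in /tmp/surnames.txt for every DN listed in /tmp/names.txt.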
[ "ldapcompare -D \"cn=Directory Manager\" -W -p 389 -h server.example.com -x sn:smith uid=bjensen,ou=people,dc=example,dc=com comparing type: \"sn\" value: \"smith\" in entry \"uid=bjensen,ou=people,dc=example,dc=com\" compare FALSE ldapcompare -D \"cn=Directory Manager\" -W -p 389 -h server.example.com -x sn:smith uid=jsmith,ou=people,dc=example,dc=com comparing type: \"sn\" value: \"smith\" in entry \"uid=jsmith,ou=people,dc=example,dc=com\" compare TRUE", "sn:Smith", "jpegPhoto:dkdkPDKCDdko0eiofk==", "postalCode:/tmp/codes.txt", "ldapcompare -D \"cn=Directory Manager\" -W -p 389 -h server.example.com -x sn:smith uid=jsmith,ou=people,dc=example,dc=com comparing type: \"sn\" value: \"smith\" in entry \"uid=jsmith,ou=people,dc=example,dc=com\" compare TRUE", "jensen johnson johannson jackson jorgenson", "uid=jen200,ou=people,dc=example,dc=com uid=dsj,ou=people,dc=example,dc=com uid=matthewjms,ou=people,dc=example,dc=com uid=john1234,ou=people,dc=example,dc=com uid=jack.son.1990,ou=people,dc=example,dc=com", "ldapcompare -D \"cn=Directory Manager\" -W -p 389 -h server.example.com -x sn:/tmp/surnames.txt -f /tmp/names.txt comparing type: \"sn\" value: \"jensen\" in entry \"uid=jen200,ou=people,dc=example,dc=com\" compare TRUE" ]
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/administration_guide/ldapcompare
Chapter 1. Introduction
Chapter 1. Introduction Security Enhanced Linux (SELinux) provides an additional layer of system security. SELinux fundamentally answers the question: "May <subject> do <action> to <object>", for example: "May a web server access files in users' home directories?". The standard access policy based on the user, group, and other permissions, known as Discretionary Access Control (DAC), does not enable system administrators to create comprehensive and fine-grained security policies, such as restricting specific applications to only viewing log files, while allowing other applications to append new data to the log files. SELinux implements Mandatory Access Control (MAC). Every process and system resource has a special security label called an SELinux context . An SELinux context, sometimes referred to as an SELinux label , is an identifier which abstracts away the system-level details and focuses on the security properties of the entity. Not only does this provide a consistent way of referencing objects in the SELinux policy, but it also removes any ambiguity that can be found in other identification methods; for example, a file can have multiple valid path names on a system that makes use of bind mounts. The SELinux policy uses these contexts in a series of rules which define how processes can interact with each other and the various system resources. By default, the policy does not allow any interaction unless a rule explicitly grants access. Note It is important to remember that SELinux policy rules are checked after DAC rules. SELinux policy rules are not used if DAC rules deny access first, which means that no SELinux denial is logged if the traditional DAC rules prevent the access. SELinux contexts have several fields: user, role, type, and security level. The SELinux type information is perhaps the most important when it comes to the SELinux policy, as the most common policy rule which defines the allowed interactions between processes and system resources uses SELinux types and not the full SELinux context. SELinux types usually end with _t . For example, the type name for the web server is httpd_t . The type context for files and directories normally found in /var/www/html/ is httpd_sys_content_t . The type context for files and directories normally found in /tmp and /var/tmp/ is tmp_t . The type context for web server ports is http_port_t . For example, there is a policy rule that permits Apache (the web server process running as httpd_t ) to access files and directories with a context normally found in /var/www/html/ and other web server directories ( httpd_sys_content_t ). There is no allow rule in the policy for files normally found in /tmp and /var/tmp/ , so access is not permitted. With SELinux, even if Apache is compromised, and a malicious script gains access, it is still not able to access the /tmp directory. Figure 1.1. SELinux allows the Apache process running as httpd_t to access the /var/www/html/ directory, and it denies the same process access to the /data/mysql/ directory because there is no allow rule for the httpd_t and mysqld_db_t type contexts. On the other hand, the MariaDB process running as mysqld_t is able to access the /data/mysql/ directory, and SELinux also correctly denies the process with the mysqld_t type access to the /var/www/html/ directory labeled as httpd_sys_content_t. Additional Resources For more information, see the following documentation: The selinux(8) man page and man pages listed by the apropos selinux command.
Man pages listed by the man -k _selinux command when the selinux-policy-doc package is installed. See Section 11.3.3, "Manual Pages for Services" for more information. The SELinux Coloring Book SELinux Wiki FAQ 1.1. Benefits of running SELinux SELinux provides the following benefits: All processes and files are labeled. SELinux policy rules define how processes interact with files, as well as how processes interact with each other. Access is only allowed if an SELinux policy rule exists that specifically allows it. Fine-grained access control. Stepping beyond traditional UNIX permissions that are controlled at user discretion and based on Linux user and group IDs, SELinux access decisions are based on all available information, such as an SELinux user, role, type, and, optionally, a security level. SELinux policy is administratively-defined and enforced system-wide. Improved mitigation for privilege escalation attacks. Processes run in domains, and are therefore separated from each other. SELinux policy rules define how processes access files and other processes. If a process is compromised, the attacker only has access to the normal functions of that process, and to files the process has been configured to have access to. For example, if the Apache HTTP Server is compromised, an attacker cannot use that process to read files in user home directories, unless a specific SELinux policy rule was added or configured to allow such access. SELinux can be used to enforce data confidentiality and integrity, as well as protecting processes from untrusted inputs. However, SELinux is not: antivirus software, a replacement for passwords, firewalls, or other security systems, or an all-in-one security solution. SELinux is designed to enhance existing security solutions, not replace them. Even when running SELinux, it is important to continue to follow good security practices, such as keeping software up-to-date, using hard-to-guess passwords, and using firewalls.
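As a quick, read-only illustration of the labeling described in this chapter, you can inspect SELinux contexts from a shell; the exact output varies by system: ls -Z /var/www/html/ shows web content labeled with the httpd_sys_content_t type, ps -eZ | grep httpd shows the Apache processes running in the httpd_t domain, and id -Z shows the context of your own shell session. None of these commands change the policy or any labels.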
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/selinux_users_and_administrators_guide/chap-security-enhanced_linux-introduction
Chapter 7. Installation and Booting
Chapter 7. Installation and Booting rpm supports ordered installation based on package tags The OrderWithRequires feature has been added to the RPM Package Manager, which utilizes the new OrderWithRequires package tag. If a package specified in OrderWithRequires is present in a package transaction, it is installed before the package with the corresponding OrderWithRequires tag is installed. However, unlike the Requires package tag, OrderWithRequires does not generate additional dependencies, so if the package specified in the tag is not present in the transaction, it is not downloaded. Anaconda now displays a warning if LDL-formatted DASDs are detected during installation On IBM System z, DASDs with LDL (Linux Disk Layout) format are recognized by the kernel, but the installer does not support them. If one or more such DASDs are detected by Anaconda, it will display a warning about their unsupported status and offer to format them as CDL (Compatibility Disk Layout), which is a fully supported format type.
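For example, a minimal spec file fragment using the new tag might look like the following; the package name mydb-server is purely illustrative: OrderWithRequires: mydb-server With this tag, if mydb-server is part of the same transaction it is installed before this package, but unlike a Requires: mydb-server line it does not add a dependency or cause mydb-server to be downloaded when it is absent.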
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_release_notes/installation_and_booting
Chapter 4. Configuration
Chapter 4. Configuration This chapter describes the process for binding the AMQ Core Protocol JMS implementation to your JMS application and setting configuration options. JMS uses the Java Naming and Directory Interface (JNDI) to register and look up API implementations and other resources. This enables you to write code to the JMS API without tying it to a particular implementation. Configuration options are exposed as query parameters on the connection URI. 4.1. Configuring the JNDI initial context JMS applications use a JNDI InitialContext object obtained from an InitialContextFactory to look up JMS objects such as the connection factory. AMQ Core Protocol JMS provides an implementation of the InitialContextFactory in the org.apache.activemq.artemis.jndi.ActiveMQInitialContextFactory class. The InitialContextFactory implementation is discovered when the InitialContext object is instantiated: javax.naming.Context context = new javax.naming.InitialContext(); To find an implementation, JNDI must be configured in your environment. There are three ways of achieving this: using a jndi.properties file, using a system property, or using the initial context API. Using a jndi.properties file Create a file named jndi.properties and place it on the Java classpath. Add a property with the key java.naming.factory.initial . Example: Setting the JNDI initial context factory using a jndi.properties file java.naming.factory.initial = org.apache.activemq.artemis.jndi.ActiveMQInitialContextFactory In Maven-based projects, the jndi.properties file is placed in the <project-dir> /src/main/resources directory. Using a system property Set the java.naming.factory.initial system property. Example: Setting the JNDI initial context factory using a system property USD java -Djava.naming.factory.initial=org.apache.activemq.artemis.jndi.ActiveMQInitialContextFactory ... Using the initial context API Use the JNDI initial context API to set properties programmatically. Example: Setting JNDI properties programmatically Hashtable<Object, Object> env = new Hashtable<>(); env.put("java.naming.factory.initial", "org.apache.activemq.artemis.jndi.ActiveMQInitialContextFactory"); InitialContext context = new InitialContext(env); Note that you can use the same API to set the JNDI properties for connection factories, queues, and topics. 4.2. Configuring the connection factory The JMS connection factory is the entry point for creating connections. It uses a connection URI that encodes your application-specific configuration settings. To set the factory name and connection URI, create a property in the format below. You can store this configuration in a jndi.properties file or set the corresponding system property. The JNDI property format for connection factories connectionFactory. <lookup-name> = <connection-uri> For example, this is how you might configure a factory named app1 : Example: Setting the connection factory in a jndi.properties file connectionFactory.app1 = tcp://example.net:61616?clientID=backend You can then use the JNDI context to look up your configured connection factory using the name app1 : ConnectionFactory factory = (ConnectionFactory) context.lookup("app1"); 4.3. Connection URIs Connections are configured using a connection URI. The connection URI specifies the remote host, port, and a set of configuration options, which are set as query parameters. For more information about the available options, see Chapter 5, Configuration options .
The connection URI format For example, the following is a connection URI that connects to host example.net at port 61616 and sets the client ID to backend : Example: A connection URI In addition to tcp , AMQ Core Protocol JMS also supports the vm , udp , and jgroups schemes. These represent alternate transports and have corresponding acceptor configuration on the broker. Failover URIs URIs can contain multiple target connection URIs. If the initial connection to one target fails, another is tried. They take the following form: The failover URI format Options outside of the parentheses are applied to all of the connection URIs. 4.4. Configuring queue and topic names JMS provides the option of using JNDI to look up deployment-specific queue and topic resources. To set queue and topic names in JNDI, create properties in the following format. Either place this configuration in a jndi.properties file or set corresponding system properties. The JNDI property format for queues and topics queue. <lookup-name> = <queue-name> topic. <lookup-name> = <topic-name> For example, the following properties define the names jobs and notifications for two deployment-specific resources: Example: Setting queue and topic names in a jndi.properties file queue.jobs = app1/work-items topic.notifications = app1/updates You can then look up the resources by their JNDI names: Queue queue = (Queue) context.lookup("jobs"); Topic topic = (Topic) context.lookup("notifications");
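For example, a failover URI that lists two brokers and applies the clientID option from the earlier example to both connections might look like this; the host names are placeholders: (tcp://broker1.example.net:61616,tcp://broker2.example.net:61616)?clientID=backend If the initial connection to broker1.example.net fails, the client tries broker2.example.net with the same options.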
[ "javax.naming.Context context = new javax.naming.InitialContext();", "java.naming.factory.initial = org.apache.activemq.artemis.jndi.ActiveMQInitialContextFactory", "java -Djava.naming.factory.initial=org.apache.activemq.artemis.jndi.ActiveMQInitialContextFactory", "Hashtable<Object, Object> env = new Hashtable<>(); env.put(\"java.naming.factory.initial\", \"org.apache.activemq.artemis.jndi.ActiveMQInitialContextFactory\"); InitialContext context = new InitialContext(env);", "connectionFactory. <lookup-name> = <connection-uri>", "connectionFactory.app1 = tcp://example.net:61616?clientID=backend", "ConnectionFactory factory = (ConnectionFactory) context.lookup(\"app1\");", "tcp://<host>:<port>[?<option>=<value>[&<option>=<value>...]]", "tcp://example.net:61616?clientID=backend", "(<connection-uri>[,<connection-uri>])[?<option>=<value>[&<option>=<value>...]]", "queue. <lookup-name> = <queue-name> topic. <lookup-name> = <topic-name>", "queue.jobs = app1/work-items topic.notifications = app1/updates", "Queue queue = (Queue) context.lookup(\"jobs\"); Topic topic = (Topic) context.lookup(\"notifications\");" ]
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q1/html/using_the_amq_core_protocol_jms_client/configuration
Installing
Installing Red Hat Enterprise Linux AI 1.2 Installation documentation on various platforms Red Hat RHEL AI Documentation Team
[ "use the embedded container image ostreecontainer --url=/run/install/repo/container --transport=oci --no-signature-verification switch bootc to point to Red Hat container image for upgrades %post bootc switch --mutate-in-place --transport registry registry.redhat.io/rhelai1/bootc-nvidia-rhel9:1.1 touch /etc/cloud/cloud-init.disabled %end ## user customizations follow customize this for your target system network environment network --bootproto=dhcp --device=link --activate customize this for your target system desired disk partitioning clearpart --all --initlabel --disklabel=gpt reqpart --add-boot part / --grow --fstype xfs services can also be customized via Kickstart firewall --disabled services --enabled=sshd optionally add a user user --name=cloud-user --groups=wheel --plaintext --password <password> sshkey --username cloud-user \"ssh-ed25519 AAAAC3Nza.....\" if desired, inject an SSH key for root rootpw --iscrypted locked sshkey --username root \"ssh-ed25519 AAAAC3Nza...\" reboot", "mkksiso rhelai-bootc.ks <downloaded-iso-image> rhelai-bootc-ks.iso", "customize this for your target system network environment network --bootproto=dhcp --device=link --activate customize this for your target system desired disk partitioning clearpart --all --initlabel --disklabel=gpt reqpart --add-boot part / --grow --fstype xfs customize this to include your own bootc container ostreecontainer --url quay.io/<your-user-name>/nvidia-bootc:latest services can also be customized via Kickstart firewall --disabled services --enabled=sshd optionally add a user user --name=cloud-user --groups=wheel --plaintext --password <password> sshkey --username cloud-user \"ssh-ed25519 AAAAC3Nza.....\" if desired, inject an SSH key for root rootpw --iscrypted locked sshkey --username root \"ssh-ed25519 AAAAC3Nza...\" reboot", "mkksiso rhelai-bootc.ks <downloaded-iso-image> rhelai-bootc-ks.iso", "ilab", "ilab Usage: ilab [OPTIONS] COMMAND [ARGS] CLI for interacting with InstructLab. If this is your first time running ilab, it's best to start with `ilab config init` to create the environment. Options: --config PATH Path to a configuration file. [default: /home/auser/.config/instructlab/config.yaml] -v, --verbose Enable debug logging (repeat for even more verbosity) --version Show the version and exit. --help Show this message and exit. Commands: config Command Group for Interacting with the Config of InstructLab. data Command Group for Interacting with the Data generated by model Command Group for Interacting with the Models in InstructLab. system Command group for all system-related command calls taxonomy Command Group for Interacting with the Taxonomy of InstructLab. 
Aliases: chat model chat convert model convert diff taxonomy diff download model download evaluate model evaluate generate data generate init config init list model list serve model serve sysinfo system info test model test train model train", "export BUCKET=<custom_bucket_name> export RAW_AMI=nvidia-bootc.ami export AMI_NAME=\"rhel-ai\" export DEFAULT_VOLUME_SIZE=1000", "aws s3 mb s3://USDBUCKET", "printf '{ \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"Service\": \"vmie.amazonaws.com\" }, \"Action\": \"sts:AssumeRole\", \"Condition\": { \"StringEquals\":{ \"sts:Externalid\": \"vmimport\" } } } ] }' > trust-policy.json", "aws iam create-role --role-name vmimport --assume-role-policy-document file://trust-policy.json", "printf '{ \"Version\":\"2012-10-17\", \"Statement\":[ { \"Effect\":\"Allow\", \"Action\":[ \"s3:GetBucketLocation\", \"s3:GetObject\", \"s3:ListBucket\" ], \"Resource\":[ \"arn:aws:s3:::%s\", \"arn:aws:s3:::%s/*\" ] }, { \"Effect\":\"Allow\", \"Action\":[ \"ec2:ModifySnapshotAttribute\", \"ec2:CopySnapshot\", \"ec2:RegisterImage\", \"ec2:Describe*\" ], \"Resource\":\"*\" } ] }' USDBUCKET USDBUCKET > role-policy.json", "aws iam put-role-policy --role-name vmimport --policy-name vmimport-USDBUCKET --policy-document file://role-policy.json", "curl -Lo disk.raw <link-to-raw-file>", "aws s3 cp disk.raw s3://USDBUCKET/USDRAW_AMI", "printf '{ \"Description\": \"my-image\", \"Format\": \"raw\", \"UserBucket\": { \"S3Bucket\": \"%s\", \"S3Key\": \"%s\" } }' USDBUCKET USDRAW_AMI > containers.json", "task_id=USD(aws ec2 import-snapshot --disk-container file://containers.json | jq -r .ImportTaskId)", "aws ec2 describe-import-snapshot-tasks --filters Name=task-state,Values=active", "snapshot_id=USD(aws ec2 describe-import-snapshot-tasks | jq -r '.ImportSnapshotTasks[] | select(.ImportTaskId==\"'USD{task_id}'\") | .SnapshotTaskDetail.SnapshotId')", "aws ec2 create-tags --resources USDsnapshot_id --tags Key=Name,Value=\"USDAMI_NAME\"", "ami_id=USD(aws ec2 register-image --name \"USDAMI_NAME\" --description \"USDAMI_NAME\" --architecture x86_64 --root-device-name /dev/sda1 --block-device-mappings \"DeviceName=/dev/sda1,Ebs={VolumeSize=USD{DEFAULT_VOLUME_SIZE},SnapshotId=USD{snapshot_id}}\" --virtualization-type hvm --ena-support | jq -r .ImageId)", "aws ec2 create-tags --resources USDami_id --tags Key=Name,Value=\"USDAMI_NAME\"", "aws ec2 describe-images --owners self", "aws ec2 describe-security-groups", "aws ec2 describe-subnets", "instance_name=rhel-ai-instance ami=<ami-id> instance_type=<instance-type-size> key_name=<key-pair-name> security_group=<sg-id> disk_size=<size-of-disk>", "aws ec2 run-instances --image-id USDami --instance-type USDinstance_type --key-name USDkey_name --security-group-ids USDsecurity_group --subnet-id USDsubnet --block-device-mappings DeviceName=/dev/sda1,Ebs='{VolumeSize='USDdisk_size'}' --tag-specifications 'ResourceType=instance,Tags=[{Key=Name,Value='USDinstance_name'}]'", "ilab", "ilab Usage: ilab [OPTIONS] COMMAND [ARGS] CLI for interacting with InstructLab. If this is your first time running ilab, it's best to start with `ilab config init` to create the environment. Options: --config PATH Path to a configuration file. [default: /home/cloud--user/.config/instructlab/config.yaml] -v, --verbose Enable debug logging (repeat for even more verbosity) --version Show the version and exit. --help Show this message and exit. Commands: config Command Group for Interacting with the Config of InstructLab. 
data Command Group for Interacting with the Data generated by model Command Group for Interacting with the Models in InstructLab. system Command group for all system-related command calls. taxonomy Command Group for Interacting with the Taxonomy of InstructLab. Aliases: chat model chat convert model convert diff taxonomy diff download model download evaluate model evaluate generate data generate init config init list model list serve model serve sysinfo system info test model test train model train", "ibmcloud login", "ibmcloud login API endpoint: https://cloud.ibm.com Region: us-east Get a one-time code from https://identity-1.eu-central.iam.cloud.ibm.com/identity/passcode to proceed. Open the URL in the default browser? [Y/n] > One-time code > Authenticating OK Select an account: 1. <account-name> 2. <account-name-2> API endpoint: https://cloud.ibm.com Region: us-east User: <user-name> Account: <selected-account> Resource group: No resource group targeted, use 'ibmcloud target -g RESOURCE_GROUP'", "ibmcloud plugin install cloud-object-storage infrastructure-service", "ibmcloud target -g Default", "ibmcloud target -r us-east", "ibmcloud catalog service cloud-object-storage --output json | jq -r '.[].children[] | select(.children != null) | .children[].name'", "cos_deploy_plan=premium-global-deployment", "cos_si_name=THE_NAME_OF_YOUR_SERVICE_INSTANCE", "ibmcloud resource service-instance-create USD{cos_si_name} cloud-object-storage standard global -d USD{cos_deploy_plan}", "cos_crn=USD(ibmcloud resource service-instance USD{cos_si_name} --output json| jq -r '.[] | select(.crn | contains(\"cloud-object-storage\")) | .crn')", "ibmcloud cos config crn --crn USD{cos_crn} --force", "bucket_name=NAME_OF_MY_BUCKET", "ibmcloud cos bucket-create --bucket USD{bucket_name}", "cos_si_guid=USD(ibmcloud resource service-instance USD{cos_si_name} --output json| jq -r '.[] | select(.crn | contains(\"cloud-object-storage\")) | .guid')", "ibmcloud iam authorization-policy-create is cloud-object-storage Reader --source-resource-type image --target-service-instance-id USD{cos_si_guid}", "curl -Lo disk.qcow2 \"PASTE_HERE_THE_LINK_OF_THE_QCOW2_FILE\"", "image_name=rhel-ai-20240703v0", "ibmcloud cos upload --bucket USD{bucket_name} --key USD{image_name}.qcow2 --file disk.qcow2 --region <region>", "ibmcloud is image-create USD{image_name} --file cos://<region>/USD{bucket_name}/USD{image_name}.qcow2 --os-name red-ai-9-amd64-nvidia-byol", "image_id=USD(ibmcloud is images --visibility private --output json | jq -r '.[] | select(.name==\"'USDimage_name'\") | .id')", "while ibmcloud is image --output json USD{image_id} | jq -r .status | grep -xq pending; do sleep 1; done", "ibmcloud is image USD{image_id}", "ibmcloud login -c <ACCOUNT_ID> -r <REGION> -g <RESOURCE_GROUP>", "ibmcloud plugin install infrastructure-service", "ssh-keygen -f ibmcloud -t ed25519", "ibmcloud is key-create my-ssh-key @ibmcloud.pub --key-type ed25519", "ibmcloud is floating-ip-reserve my-public-ip --zone <region>", "ibmcloud is instance-profiles", "name=my-rhelai-instance vpc=my-vpc-in-us-east zone=us-east-1 subnet=my-subnet-in-us-east-1 instance_profile=gx3-64x320x4l4 image=my-custom-rhelai-image sshkey=my-ssh-key floating_ip=my-public-ip disk_size=250", "ibmcloud is instance-create USDname USDvpc USDzone USDinstance_profile USDsubnet --image USDimage --keys USDsshkey --boot-volume '{\"name\": \"'USD{name}'-boot\", \"volume\": {\"name\": \"'USD{name}'-boot\", \"capacity\": 'USD{disk_size}', \"profile\": {\"name\": \"general-purpose\"}}}' 
--allow-ip-spoofing false", "ibmcloud is floating-ip-update USDfloating_ip --nic primary --in USDname", "ilab", "ilab Usage: ilab [OPTIONS] COMMAND [ARGS] CLI for interacting with InstructLab. If this is your first time running ilab, it's best to start with `ilab config init` to create the environment. Options: --config PATH Path to a configuration file. [default: /home/auser/.config/instructlab/config.yaml] -v, --verbose Enable debug logging (repeat for even more verbosity) --version Show the version and exit. --help Show this message and exit. Commands: config Command Group for Interacting with the Config of InstructLab. data Command Group for Interacting with the Data generated by model Command Group for Interacting with the Models in InstructLab. system Command group for all system-related command calls taxonomy Command Group for Interacting with the Taxonomy of InstructLab. Aliases: chat model chat convert model convert diff taxonomy diff download model download evaluate model evaluate generate data generate init config init list model model_list serve model serve sysinfo system info test model test train model train", "name=my-rhelai-instance", "data_volume_size=1000", "ibmcloud is instance-volume-attachment-add data USD{name} --new-volume-name USD{name}-data --profile general-purpose --capacity USD{data_volume_size}", "lsblk", "disk=/dev/vdb", "sgdisk -n 1:0:0 USDdisk", "mkfs.xfs -L ilab-data USD{disk}1", "echo LABEL=ilab-data /mnt xfs defaults 0 0 >> /etc/fstab", "systemctl daemon-reload", "mount -a", "chmod 1777 /mnt/", "echo 'export ILAB_HOME=/mnt' >> USDHOME/.bash_profile", "source USDHOME/.bash_profile", "gcloud auth login", "gcloud auth login Your browser has been opened to visit: https://accounts.google.com/o/oauth2/auth?XXXXXXXXXXXXXXXXXXXX You are now logged in as [[email protected]]. Your current project is [your-project]. 
You can change this setting by running: USD gcloud config set project PROJECT_ID", "gcloud_project=your-gcloud-project gcloud config set project USDgcloud_project", "gcloud_region=us-central1", "gcloud_bucket=name-for-your-bucket gsutil mb -l USDgcloud_region gs://USDgcloud_bucket", "FROM registry.redhat.io/rhelai1/bootc-nvidia-rhel9:1.2 RUN eval USD(grep VERSION_ID /etc/os-release) && echo -e \"[google-compute-engine]\\nname=Google Compute Engine\\nbaseurl=https://packages.cloud.google.com/yum/repos/google-compute-engine-elUSD{VERSION_ID/.*}-x86_64-stable\\nenabled=1\\ngpgcheck=1\\nrepo_gpgcheck=0\\ngpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg\\n https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg\" > /etc/yum.repos.d/google-cloud.repo && dnf install -y --nobest acpid cloud-init google-compute-engine google-osconfig-agent langpacks-en rng-tools timedatex tuned vim && curl -sSo /tmp/add-google-cloud-ops-agent-repo.sh https://dl.google.com/cloudagents/add-google-cloud-ops-agent-repo.sh && bash /tmp/add-google-cloud-ops-agent-repo.sh --also-install --remove-repo && rm /tmp/add-google-cloud-ops-agent-repo.sh && mkdir -p /var/lib/rpm-state && dnf remove -y irqbalance microcode_ctl && rmdir /var/lib/rpm-state && rm -f /etc/yum.repos.d/google-cloud.repo && sed -i -e '/^pool /c\\server metadata.google.internal iburst' /etc/chrony.conf && echo -e 'PermitRootLogin no\\nPasswordAuthentication no\\nClientAliveInterval 420' >> /etc/ssh/sshd_config && echo -e '[InstanceSetup]\\nset_boto_config = false' > /etc/default/instance_configs.cfg && echo 'blacklist floppy' > /etc/modprobe.d/blacklist_floppy.conf && echo -e '[install]\\nkargs = [\"net.ifnames=0\", \"biosdevname=0\", \"scsi_mod.use_blk_mq=Y\", \"console=ttyS0,38400n8d\", \"cloud-init=disabled\"]' > /usr/lib/bootc/install/05-cloud-kargs.toml", "GCP_BOOTC_IMAGE=quay.io/yourquayusername/bootc-nvidia-rhel9-gcp podman build --file Containerfile --tag USD{GCP_BOOTC_IMAGE} .", "[customizations.kernel] name = \"gcp\" append = \"net.ifnames=0 biosdevname=0 scsi_mod.use_blk_mq=Y console=ttyS0,38400n8d cloud-init=disabled\"", "mkdir -p build/store build/output podman run --rm -ti --privileged --pull newer -v /var/lib/containers/storage:/var/lib/containers/storage -v ./build/store:/store -v ./build/output:/output -v ./config.toml:/config.toml quay.io/centos-bootc/bootc-image-builder --config /config.toml --chown 0:0 --local --type raw --target-arch x86_64 USD{GCP_BOOTC_IMAGE}", "image_name=rhel-ai-1-2", "raw_file=<path-to-raw-file> tar cf rhelai_gcp.tar.gz --transform \"s|USDraw_file|disk.raw|\" --use-compress-program=pigz \"USDraw_file\"", "gsutil cp rhelai_gcp.tar.gz \"gs://USD{gcloud_bucket}/USDimage_name.tar.gz\"", "gcloud compute images create \"USDimage_name\" --source-uri=\"gs://USD{gcloud_bucket}/USDimage_name.tar.gz\" --family \"rhel-ai\" --guest-os-features=GVNIC", "gcloud auth login", "gcloud compute machine-types list --zones=<zone>", "name=my-rhelai-instance zone=us-central1-a machine_type=a3-highgpu-8g accelerator=\"type=nvidia-h100-80gb,count=8\" image=my-custom-rhelai-image disk_size=1024 subnet=default", "gcloud config set compute/zone USDzone", "gcloud compute instances create USD{name} --machine-type USD{machine_type} --image USDimage --zone USDzone --subnet USDsubnet --boot-disk-size USD{disk_size} --boot-disk-device-name USD{name} --accelerator=USDaccelerator", "ilab", "ilab Usage: ilab [OPTIONS] COMMAND [ARGS] CLI for interacting with InstructLab. 
If this is your first time running ilab, it's best to start with `ilab config init` to create the environment. Options: --config PATH Path to a configuration file. [default: /home/auser/.config/instructlab/config.yaml] -v, --verbose Enable debug logging (repeat for even more verbosity) --version Show the version and exit. --help Show this message and exit. Commands: config Command Group for Interacting with the Config of InstructLab. data Command Group for Interacting with the Data generated by model Command Group for Interacting with the Models in InstructLab. system Command group for all system-related command calls taxonomy Command Group for Interacting with the Taxonomy of InstructLab. Aliases: chat model chat convert model convert diff taxonomy diff download model download evaluate model evaluate generate data generate init config init list model list serve model serve sysinfo system info test model test train model train", "az login", "az login A web browser has been opened at https://login.microsoftonline.com/organizations/oauth2/v2.0/authorize. Please continue the login in the web browser. If no web browser is available or if the web browser fails to open, use device code flow with `az login --use-device-code`. [ { \"cloudName\": \"AzureCloud\", \"homeTenantId\": \"c7b976df-89ce-42ec-b3b2-a6b35fd9c0be\", \"id\": \"79d7df51-39ec-48b9-a15e-dcf59043c84e\", \"isDefault\": true, \"managedByTenants\": [], \"name\": \"Team Name\", \"state\": \"Enabled\", \"tenantId\": \"0a873aea-428f-47bd-9120-73ce0c5cc1da\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } } ]", "keyctl new_session azcopy login", "az_location=eastus", "az_resource_group=Default az group create --name USD{az_resource_group} --location USD{az_location}", "az_storage_account=THE_NAME_OF_YOUR_STORAGE_ACCOUNT", "az storage account create --name USD{az_storage_account} --resource-group USD{az_resource_group} --location USD{az_location} --sku Standard_LRS", "az_storage_container=NAME_OF_MY_BUCKET az storage container create --name USD{az_storage_container} --account-name USD{az_storage_account} --public-access off", "az account list --output table", "az_subscription_id=46c08fb3-83c5-4b59-8372-bf9caf15a681", "az role assignment create --assignee [email protected] --role \"Storage Blob Data Contributor\" --scope /subscriptions/USD{az_subscription_id}/resourceGroups/USD{az_resource_group}/providers/Microsoft.Storage/storageAccounts/USD{az_storage_account}/blobServices/default/containers/USD{az_storage_container}", "image_name=rhel-ai-1.2", "az_vhd_url=\"https://USD{az_storage_account}.blob.core.windows.net/USD{az_storage_container}/USD(basename USD{vhd_file})\" azcopy copy \"USDvhd_file\" \"USDaz_vhd_url\"", "az image create --resource-group USDaz_resource_group --name \"USDimage_name\" --source \"USD{az_vhd_url}\" --location USD{az_location} --os-type Linux --hyper-v-generation V2", "az login", "az vm list-sizes --location <region> --output table", "name=my-rhelai-instance az_location=eastus az_resource_group=my_resource_group az_admin_username=azureuser az_vm_size=Standard_ND96isr_H100_v5 az_image=my-custom-rhelai-image sshpubkey=USDHOME/.ssh/id_rsa.pub disk_size=1024", "az vm create --resource-group USDaz_resource_group --name USD{name} --image USD{az_image} --size USD{az_vm_size} --location USD{az_location} --admin-username USD{az_admin_username} --ssh-key-values @USDsshpubkey --authentication-type ssh --nic-delete-option Delete --accelerated-networking true --os-disk-size-gb 1024 --os-disk-name 
USD{name}-USD{az_location}", "ilab", "ilab Usage: ilab [OPTIONS] COMMAND [ARGS] CLI for interacting with InstructLab. If this is your first time running ilab, it's best to start with `ilab config init` to create the environment. Options: --config PATH Path to a configuration file. [default: /home/auser/.config/instructlab/config.yaml] -v, --verbose Enable debug logging (repeat for even more verbosity) --version Show the version and exit. --help Show this message and exit. Commands: config Command Group for Interacting with the Config of InstructLab. data Command Group for Interacting with the Data generated by model Command Group for Interacting with the Models in InstructLab. system Command group for all system-related command calls taxonomy Command Group for Interacting with the Taxonomy of InstructLab. Aliases: chat model chat convert model convert diff taxonomy diff download model download evaluate model evaluate generate data generate init config init list model list serve model serve sysinfo system info test model test train model train" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_ai/1.2/html-single/installing/index
Chapter 1. Activating Red Hat Ansible Automation Platform
Chapter 1. Activating Red Hat Ansible Automation Platform Red Hat Ansible Automation Platform uses available subscriptions or a subscription manifest to authorize the use of Ansible Automation Platform. To obtain a subscription, you can do either of the following: Use your Red Hat customer or Satellite credentials when you launch Ansible Automation Platform. Upload a subscriptions manifest file either using the Red Hat Ansible Automation Platform interface or manually in an Ansible playbook. To activate Ansible Automation Platform using credentials, see Activate with credentials . To activate Ansible Automation Platform with a manifest file, see Activate with a manifest file .
null
https://docs.redhat.com/en/documentation/red_hat_ansible_automation_platform/2.5/html/operating_ansible_automation_platform/assembly-aap-activate
Chapter 6. Fixed Common Vulnerabilities and Exposures
Chapter 6. Fixed Common Vulnerabilities and Exposures This section details Common Vulnerabilities and Exposures (CVEs) fixed in the AMQ Broker 7.12 release. ENTMQBR-8644 - TRIAGE CVE-2023-6717 keycloak: XSS via assertion consumer service URL in SAML POST-binding flow [amq-7] ENTMQBR-8976 - TRIAGE CVE-2024-29025 netty-codec-http: Allocation of Resources Without Limits or Throttling [amq-7] ENTMQBR-8927 - CVE-2024-22259 springframework: URL Parsing with Host Validation [amq-7] ENTMQBR-8740 - CVE-2024-1132 keycloak: path transversal in redirection validation [amq-7] ENTMQBR-8758 - CVE-2024-1249 keycloak: org.keycloak.protocol.oidc: unvalidated cross-origin messages in checkLoginIframe leads to DDoS [amq-7] ENTMQBR-8626 - CVE-2023-6378 logback: serialization vulnerability in logback receiver [amq-7] ENTMQBR-8627 - CVE-2023-6481 logback: A serialization vulnerability in logback receiver [amq-7] ENTMQBR-8953 - CVE-2024-29131 CVE-2024-29133 commons-configuration2: various flaws [amq-7] ENTMQBR-8702 - CVE-2023-44981 zookeeper: Authorization Bypass in Apache ZooKeeper [amq-7] ENTMQBR-8611 - CVE-2022-41678 activemq: Apache ActiveMQ: Deserialization vulnerability on Jolokia that allows authenticated users to perform RCE [amq-7] ENTMQBR-8225 - CVE-2023-24540 amq-broker-rhel8-operator-container: golang: html/template: improper handling of JavaScript whitespace [amq-7] ENTMQBR-8227 - CVE-2022-21698 amq-broker-rhel8-operator-container: prometheus/client_golang: Denial of service using InstrumentHandlerCounter [amq-7] ENTMQBR-8238 - CVE-2022-21698 CVE-2023-24534 amq-broker-rhel8-operator-container: golang: net/http, net/textproto: denial of service from excessive memory allocation [amq-7] ENTMQBR-8239 - CVE-2023-29400 amq-broker-rhel8-operator-container: golang: html/template: improper handling of empty HTML attributes [amq-7] ENTMQBR-8240 - CVE-2023-24539 amq-broker-rhel8-operator-container: golang: html/template: improper sanitization of CSS values [amq-7] ENTMQBR-8228 - CVE-2021-43565 amq-broker-rhel8-operator-container: golang.org/x/crypto: empty plaintext packet causes panic [amq-7] ENTMQBR-8230 - CVE-2022-41723 amq-broker-rhel8-operator-container: net/http, golang.org/x/net/http2: avoid quadratic complexity in HPACK decoding [amq-7] ENTMQBR-8236 - CVE-2023-24536 amq-broker-rhel8-operator-container: golang: net/http, net/textproto, mime/multipart: denial of service from excessive resource consumption [amq-7] ENTMQBR-8237 - CVE-2023-24537 amq-broker-rhel8-operator-container: golang: go/parser: Infinite loop in parsing [amq-7] ENTMQBR-8231 - CVE-2022-2879 amq-broker-rhel8-operator-container: golang: archive/tar: unbounded memory consumption when reading headers [amq-7] ENTMQBR-8229 - CVE-2022-27664 amq-broker-rhel8-operator-container: golang: net/http: handle server errors after sending GOAWAY [amq-7] ENTMQBR-8226 - CVE-2022-32189 amq-broker-rhel8-operator-container: golang: math/big: decoding big.Float and big.Rat types can panic if the encoded message is too short, potentially allowing a denial of service [amq-7] ENTMQBR-8232 - CVE-2022-41715 amq-broker-rhel8-operator-container: golang: regexp/syntax: limit memory used by parsing regexps [amq-7] ENTMQBR-8241 - CVE-2023-24538 amq-broker-rhel8-operator-container: golang: html/template: backticks not treated as string delimiters [amq-7] ENTMQBR-8233 - CVE-2022-2880 amq-broker-rhel8-operator-container: golang: net/http/httputil: ReverseProxy should not forward unparseable query parameters [amq-7] ENTMQBR-8234 - CVE-2022-41724 
amq-broker-rhel8-operator-container: golang: crypto/tls: large handshake records may cause panics [amq-7] ENTMQBR-8608 - CVE-2022-41678 activemq-broker-operator: Apache ActiveMQ: Deserialization vulnerability on Jolokia that allows authenticated users to perform RCE [amq-7] ENTMQBR-8235 - CVE-2022-41725 amq-broker-rhel8-operator-container: golang: net/http, mime/multipart: denial of service from excessive resource consumption [amq-7] ENTMQBR-8671 - CVE-2023-51074 json-path: stack-based buffer overflow in Criteria.parse method [amq-7]
null
https://docs.redhat.com/en/documentation/red_hat_amq_broker/7.12/html/release_notes_for_red_hat_amq_broker_7.12/resolved_cves
Chapter 6. Login Modules for Jakarta Enterprise Beans and Remoting
Chapter 6. Login Modules for Jakarta Enterprise Beans and Remoting 6.1. Remoting Login Module Short name : Remoting Full name : org.jboss.as.security.remoting.RemotingLoginModule Parent : AbstractServer Login Module The Remoting login module allows remote Jakarta Enterprise Beans invocations, coming in over remoting, to perform a SASL-based authentication. This allows the remote user to establish their identity via SASL and have that identity be used for authentication and authorization when making that Jakarta Enterprise Beans invocation. Table 6.1. Remoting Login Module Options Option Type Default Description useClientCert boolean false If true , the login module will obtain the SSLSession of the connection and substitute the peer's X509Certificate in place of the password. 6.2. Client Login Module Short name : Client Full name : org.jboss.security.ClientLoginModule Client login module is an implementation of login module for use by JBoss EAP clients when establishing caller identity and credentials. This creates a new SecurityContext , assigns it a principal and a credential and sets the SecurityContext to the ThreadLocal security context. Client login module is the only supported mechanism for a client to establish the current thread's caller. Both standalone client applications, and server environments, acting as JBoss EAP Jakarta Enterprise Beans clients where the security environment has not been configured to use the JBoss EAP security subsystem transparently, must use Client login module. Warning This login module does not perform any authentication. It merely copies the login information provided to it into the server Jakarta Enterprise Beans invocation layer for subsequent authentication on the server. Within JBoss EAP, this is only supported for the purpose of switching a user's identity for in-JVM calls. This is NOT supported for remote clients to establish an identity. Table 6.2. Client Login Module Options Option Type Default Description multi-threaded true or false true Set to true if each thread has its own principal and credential storage. Set to false to indicate that all threads in the VM share the same identity and credential. password-stacking useFirstPass or false false Set to useFirstPass to indicate that this login module should look for information stored in the LoginContext to use as the identity. This option can be used when stacking other login modules with this one. restore-login-identity true or false false Set to true if the identity and credential seen at the start of the login() method should be restored after the logout() method is invoked.
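The following is a minimal, hypothetical sketch of how a standalone client could establish its identity through the Client login module using the standard JAAS API. The JAAS configuration entry name ( client-login ), the user name, and the password are placeholders and not values mandated by JBoss EAP; the only element taken from this section is the org.jboss.security.ClientLoginModule class that the configuration entry is assumed to reference.

import javax.security.auth.callback.Callback;
import javax.security.auth.callback.CallbackHandler;
import javax.security.auth.callback.NameCallback;
import javax.security.auth.callback.PasswordCallback;
import javax.security.auth.login.LoginContext;

public class ClientLoginExample {
    public static void main(String[] args) throws Exception {
        // Supplies the caller identity and credential to the login module.
        CallbackHandler handler = callbacks -> {
            for (Callback callback : callbacks) {
                if (callback instanceof NameCallback) {
                    ((NameCallback) callback).setName("quickstartUser");
                } else if (callback instanceof PasswordCallback) {
                    ((PasswordCallback) callback).setPassword("secret".toCharArray());
                }
            }
        };

        // "client-login" is assumed to be a JAAS configuration entry that lists
        // org.jboss.security.ClientLoginModule as a required login module.
        LoginContext loginContext = new LoginContext("client-login", handler);

        // No authentication happens here; the identity and credential are only
        // copied into the client-side security context for later invocations.
        loginContext.login();
        try {
            // ... perform Jakarta Enterprise Beans invocations as quickstartUser ...
        } finally {
            loginContext.logout();
        }
    }
}

In a real client, the JAAS configuration containing the client-login entry would typically be supplied through the standard java.security.auth.login.config system property.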
null
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/7.4/html/login_module_reference/login_modules_for_jakarta_enterprise_beans_and_remoting
A.5. Enabling Quota Accounting
A.5. Enabling Quota Accounting It is possible to keep track of disk usage and maintain quota accounting for every user and group without enforcing the limit and warn values. To do this, mount the file system with the quota=account option specified. Usage -o quota=account Specifies that user and group usage statistics are maintained by the file system, even though the quota limits are not enforced. BlockDevice Specifies the block device where the GFS2 file system resides. MountPoint Specifies the directory where the GFS2 file system should be mounted. Example In this example, the GFS2 file system on /dev/vg01/lvol0 is mounted on the /mygfs2 directory with quota accounting enabled.
[ "mount -o quota=account BlockDevice MountPoint", "mount -o quota=account /dev/vg01/lvol0 /mygfs2" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/global_file_system_2/s1-quotaapp-quotaaccount
Chapter 1. Introduction to developing applications with Red Hat build of Apache Camel for Quarkus
Chapter 1. Introduction to developing applications with Red Hat build of Apache Camel for Quarkus This guide is for developers writing Camel applications on top of Red Hat build of Apache Camel for Quarkus. Camel components which are supported in Red Hat build of Apache Camel for Quarkus have an associated Red Hat build of Apache Camel for Quarkus extension. For more information about the Red Hat build of Apache Camel for Quarkus extensions supported in this distribution, see the Red Hat build of Apache Camel for Quarkus Extensions reference guide.
null
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.8/html/developing_applications_with_red_hat_build_of_apache_camel_for_quarkus/introduction_to_developing_applications_with_red_hat_build_of_apache_camel_for_quarkus
20.3. Support in Red Hat Satellite
20.3. Support in Red Hat Satellite System management of Red Hat Enterprise Linux 7.5 for IBM POWER LE (POWER9) is supported in Red Hat Satellite 6 but not in Red Hat Satellite 5.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.5_release_notes/power9-satellite
Chapter 78. Kubernetes Services
Chapter 78. Kubernetes Services Since Camel 2.17 Both producer and consumer are supported The Kubernetes Services component is one of the Kubernetes Components which provides a producer to execute Kubernetes Service operations and a consumer to consume events related to Service objects. 78.1. Dependencies When using kubernetes-services with Red Hat build of Apache Camel for Spring Boot, use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-kubernetes-starter</artifactId> </dependency> 78.2. Configuring Options Camel components are configured on two separate levels: component level endpoint level 78.2.1. Configuring Component Options The component level is the highest level which holds general and common configurations that are inherited by the endpoints. For example a component may have security settings, credentials for authentication, urls for network connection and so forth. Some components only have a few options, and others may have many. Because components typically have pre configured defaults that are commonly used, then you may often only need to configure a few options on a component; or none at all. Configuring components can be done with the Component DSL , in a configuration file (application.properties|yaml), or directly with Java code. 78.2.2. Configuring Endpoint Options Where you find yourself configuring the most is on endpoints, as endpoints often have many options, which allows you to configure what you need the endpoint to do. The options are also categorized into whether the endpoint is used as consumer (from) or as a producer (to), or used for both. Configuring endpoints is most often done directly in the endpoint URI as path and query parameters. You can also use the Endpoint DSL as a type safe way of configuring endpoints. A good practice when configuring options is to use Property Placeholders , which allows to not hardcode urls, port numbers, sensitive information, and other settings. In other words placeholders allows to externalize the configuration from your code, and gives more flexibility and reuse. The following two sections lists all the options, firstly for the component followed by the endpoint. 78.3. Component Options The Kubernetes Services component supports 4 options, which are listed below. Name Description Default Type kubernetesClient (common) Autowired To use an existing kubernetes client. KubernetesClient bridgeErrorHandler (consumer) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean lazyStartProducer (producer) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. 
false boolean autowiredEnabled (advanced) Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true boolean 78.4. Endpoint Options The Kubernetes Services endpoint is configured using URI syntax: with the following path and query parameters: 78.4.1. Path Parameters (1 parameters) Name Description Default Type masterUrl (common) Required Kubernetes Master url. String 78.4.2. Query Parameters (33 parameters) Name Description Default Type apiVersion (common) The Kubernetes API Version to use. String dnsDomain (common) The dns domain, used for ServiceCall EIP. String kubernetesClient (common) Default KubernetesClient to use if provided. KubernetesClient namespace (common) The namespace. String portName (common) The port name, used for ServiceCall EIP. String portProtocol (common) The port protocol, used for ServiceCall EIP. tcp String crdGroup (consumer) The Consumer CRD Resource Group we would like to watch. String crdName (consumer) The Consumer CRD Resource name we would like to watch. String crdPlural (consumer) The Consumer CRD Resource Plural we would like to watch. String crdScope (consumer) The Consumer CRD Resource Scope we would like to watch. String crdVersion (consumer) The Consumer CRD Resource Version we would like to watch. String labelKey (consumer) The Consumer Label key when watching at some resources. String labelValue (consumer) The Consumer Label value when watching at some resources. String poolSize (consumer) The Consumer pool size. 1 int resourceName (consumer) The Consumer Resource Name we would like to watch. String bridgeErrorHandler (consumer (advanced)) Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false boolean exceptionHandler (consumer (advanced)) To let the consumer use a custom ExceptionHandler. Notice if the option bridgeErrorHandler is enabled then this option is not in use. By default the consumer will deal with exceptions, that will be logged at WARN or ERROR level and ignored. ExceptionHandler exchangePattern (consumer (advanced)) Sets the exchange pattern when the consumer creates an exchange. Enum values: InOnly InOut ExchangePattern operation (producer) Producer operation to do on Kubernetes. String lazyStartProducer (producer (advanced)) Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. 
false boolean connectionTimeout (advanced) Connection timeout in milliseconds to use when making requests to the Kubernetes API server. Integer caCertData (security) The CA Cert Data. String caCertFile (security) The CA Cert File. String clientCertData (security) The Client Cert Data. String clientCertFile (security) The Client Cert File. String clientKeyAlgo (security) The Key Algorithm used by the client. String clientKeyData (security) The Client Key data. String clientKeyFile (security) The Client Key file. String clientKeyPassphrase (security) The Client Key Passphrase. String oauthToken (security) The Auth Token. String password (security) Password to connect to Kubernetes. String trustCerts (security) Define if the certs we used are trusted anyway or not. Boolean username (security) Username to connect to Kubernetes. String 78.5. Message Headers The Kubernetes Services component supports 7 message header(s), which is/are listed below: Name Description Default Type CamelKubernetesOperation (producer) Constant: KUBERNETES_OPERATION The Producer operation. String CamelKubernetesNamespaceName (producer) Constant: KUBERNETES_NAMESPACE_NAME The namespace name. String CamelKubernetesServiceLabels (producer) Constant: KUBERNETES_SERVICE_LABELS The service labels. Map CamelKubernetesServiceName (producer) Constant: KUBERNETES_SERVICE_NAME The service name. String CamelKubernetesServiceSpec (producer) Constant: KUBERNETES_SERVICE_SPEC The spec of a service. ServiceSpec CamelKubernetesEventAction (consumer) Constant: KUBERNETES_EVENT_ACTION Action watched by the consumer. Enum values: ADDED MODIFIED DELETED ERROR BOOKMARK Action CamelKubernetesEventTimestamp (consumer) Constant: KUBERNETES_EVENT_TIMESTAMP Timestamp of the action watched by the consumer. long 78.6. Supported producer operation listServices listServicesByLabels getService createService deleteService 78.7. Kubernetes Services Producer Examples listServices: this operation list the services on a kubernetes cluster. from("direct:list"). toF("kubernetes-services:///?kubernetesClient=#kubernetesClient&operation=listServices"). to("mock:result"); This operation returns a List of services from your cluster. listServicesByLabels: this operation list the deployments by labels on a kubernetes cluster. from("direct:listByLabels").process(new Processor() { @Override public void process(Exchange exchange) throws Exception { Map<String, String> labels = new HashMap<>(); labels.put("key1", "value1"); labels.put("key2", "value2"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_SERVICE_LABELS, labels); } }); toF("kubernetes-services:///?kubernetesClient=#kubernetesClient&operation=listServicesByLabels"). to("mock:result"); This operation returns a List of Services from your cluster, using a label selector (with key1 and key2, with value value1 and value2). 78.8. Kubernetes Services Consumer Example fromF("kubernetes-services://%s?oauthToken=%s&namespace=default&resourceName=test", host, authToken).process(new KubernertesProcessor()).to("mock:result"); public class KubernertesProcessor implements Processor { @Override public void process(Exchange exchange) throws Exception { Message in = exchange.getIn(); Service sv = exchange.getIn().getBody(Service.class); log.info("Got event with configmap name: " + sv.getMetadata().getName() + " and action " + in.getHeader(KubernetesConstants.KUBERNETES_EVENT_ACTION)); } } This consumer returns a list of events on the namespace default for the service test. 78.8.1. 
Spring Boot Auto-Configuration The component supports 102 options, which are listed below. Name Description Default Type camel.cluster.kubernetes.attributes Custom service attributes. Map camel.cluster.kubernetes.cluster-labels Set the labels used to identify the pods composing the cluster. Map camel.cluster.kubernetes.config-map-name Set the name of the ConfigMap used to do optimistic locking (defaults to 'leaders'). String camel.cluster.kubernetes.connection-timeout-millis Connection timeout in milliseconds to use when making requests to the Kubernetes API server. Integer camel.cluster.kubernetes.enabled Sets if the Kubernetes cluster service should be enabled or not, default is false. false Boolean camel.cluster.kubernetes.id Cluster Service ID. String camel.cluster.kubernetes.jitter-factor A jitter factor to apply in order to prevent all pods to call Kubernetes APIs in the same instant. Double camel.cluster.kubernetes.kubernetes-namespace Set the name of the Kubernetes namespace containing the pods and the configmap (autodetected by default). String camel.cluster.kubernetes.lease-duration-millis The default duration of the lease for the current leader. Long camel.cluster.kubernetes.master-url Set the URL of the Kubernetes master (read from Kubernetes client properties by default). String camel.cluster.kubernetes.order Service lookup order/priority. Integer camel.cluster.kubernetes.pod-name Set the name of the current pod (autodetected from container host name by default). String camel.cluster.kubernetes.renew-deadline-millis The deadline after which the leader must stop its services because it may have lost the leadership. Long camel.cluster.kubernetes.retry-period-millis The time between two subsequent attempts to check and acquire the leadership. It is randomized using the jitter factor. Long camel.component.kubernetes-config-maps.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-config-maps.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-config-maps.enabled Whether to enable auto configuration of the kubernetes-config-maps component. This is enabled by default. Boolean camel.component.kubernetes-config-maps.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-config-maps.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. 
Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-custom-resources.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-custom-resources.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-custom-resources.enabled Whether to enable auto configuration of the kubernetes-custom-resources component. This is enabled by default. Boolean camel.component.kubernetes-custom-resources.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-custom-resources.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-deployments.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-deployments.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-deployments.enabled Whether to enable auto configuration of the kubernetes-deployments component. This is enabled by default. Boolean camel.component.kubernetes-deployments.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-deployments.lazy-start-producer Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-events.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-events.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-events.enabled Whether to enable auto configuration of the kubernetes-events component. This is enabled by default. Boolean camel.component.kubernetes-events.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-events.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-hpa.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-hpa.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-hpa.enabled Whether to enable auto configuration of the kubernetes-hpa component. This is enabled by default. Boolean camel.component.kubernetes-hpa.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. 
KubernetesClient camel.component.kubernetes-hpa.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-job.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-job.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-job.enabled Whether to enable auto configuration of the kubernetes-job component. This is enabled by default. Boolean camel.component.kubernetes-job.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-job.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-namespaces.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-namespaces.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-namespaces.enabled Whether to enable auto configuration of the kubernetes-namespaces component. This is enabled by default. 
Boolean camel.component.kubernetes-namespaces.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-namespaces.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-nodes.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-nodes.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-nodes.enabled Whether to enable auto configuration of the kubernetes-nodes component. This is enabled by default. Boolean camel.component.kubernetes-nodes.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-nodes.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-persistent-volumes-claims.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-persistent-volumes-claims.enabled Whether to enable auto configuration of the kubernetes-persistent-volumes-claims component. This is enabled by default. Boolean camel.component.kubernetes-persistent-volumes-claims.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. 
KubernetesClient camel.component.kubernetes-persistent-volumes-claims.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-persistent-volumes.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-persistent-volumes.enabled Whether to enable auto configuration of the kubernetes-persistent-volumes component. This is enabled by default. Boolean camel.component.kubernetes-persistent-volumes.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-persistent-volumes.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-pods.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-pods.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-pods.enabled Whether to enable auto configuration of the kubernetes-pods component. This is enabled by default. Boolean camel.component.kubernetes-pods.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-pods.lazy-start-producer Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-replication-controllers.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-replication-controllers.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-replication-controllers.enabled Whether to enable auto configuration of the kubernetes-replication-controllers component. This is enabled by default. Boolean camel.component.kubernetes-replication-controllers.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-replication-controllers.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-resources-quota.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-resources-quota.enabled Whether to enable auto configuration of the kubernetes-resources-quota component. This is enabled by default. Boolean camel.component.kubernetes-resources-quota.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-resources-quota.lazy-start-producer Whether the producer should be started lazy (on the first message). 
By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-secrets.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-secrets.enabled Whether to enable auto configuration of the kubernetes-secrets component. This is enabled by default. Boolean camel.component.kubernetes-secrets.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-secrets.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-service-accounts.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-service-accounts.enabled Whether to enable auto configuration of the kubernetes-service-accounts component. This is enabled by default. Boolean camel.component.kubernetes-service-accounts.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-service-accounts.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.kubernetes-services.autowired-enabled Whether autowiring is enabled. 
This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.kubernetes-services.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.kubernetes-services.enabled Whether to enable auto configuration of the kubernetes-services component. This is enabled by default. Boolean camel.component.kubernetes-services.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.kubernetes-services.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.openshift-build-configs.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.openshift-build-configs.enabled Whether to enable auto configuration of the openshift-build-configs component. This is enabled by default. Boolean camel.component.openshift-build-configs.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.openshift-build-configs.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.openshift-builds.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. 
This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.openshift-builds.enabled Whether to enable auto configuration of the openshift-builds component. This is enabled by default. Boolean camel.component.openshift-builds.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.openshift-builds.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean camel.component.openshift-deploymentconfigs.autowired-enabled Whether autowiring is enabled. This is used for automatic autowiring options (the option must be marked as autowired) by looking up in the registry to find if there is a single instance of matching type, which then gets configured on the component. This can be used for automatic configuring JDBC data sources, JMS connection factories, AWS Clients, etc. true Boolean camel.component.openshift-deploymentconfigs.bridge-error-handler Allows for bridging the consumer to the Camel routing Error Handler, which mean any exceptions occurred while the consumer is trying to pickup incoming messages, or the likes, will now be processed as a message and handled by the routing Error Handler. By default the consumer will use the org.apache.camel.spi.ExceptionHandler to deal with exceptions, that will be logged at WARN or ERROR level and ignored. false Boolean camel.component.openshift-deploymentconfigs.enabled Whether to enable auto configuration of the openshift-deploymentconfigs component. This is enabled by default. Boolean camel.component.openshift-deploymentconfigs.kubernetes-client To use an existing kubernetes client. The option is a io.fabric8.kubernetes.client.KubernetesClient type. KubernetesClient camel.component.openshift-deploymentconfigs.lazy-start-producer Whether the producer should be started lazy (on the first message). By starting lazy you can use this to allow CamelContext and routes to startup in situations where a producer may otherwise fail during starting and cause the route to fail being started. By deferring this startup to be lazy then the startup failure can be handled during routing messages via Camel's routing error handlers. Beware that when the first message is processed then creating and starting the producer may take a little time and prolong the total processing time of the processing. false Boolean
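Beyond the listServices and listServicesByLabels routes shown earlier in this chapter, the following is a minimal sketch of the getService producer operation. Like the other route snippets in this chapter, it belongs inside a RouteBuilder configure() method and uses org.apache.camel.Processor, org.apache.camel.Exchange, and org.apache.camel.component.kubernetes.KubernetesConstants; the direct:get endpoint, the default namespace, and the test service name are placeholder values, and the kubernetesClient bean is assumed to be registered in the Camel registry as in the earlier examples.

from("direct:get").process(new Processor() {
    @Override
    public void process(Exchange exchange) throws Exception {
        // Identify the service to look up by namespace and name.
        exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_NAMESPACE_NAME, "default");
        exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_SERVICE_NAME, "test");
    }
}).
toF("kubernetes-services:///?kubernetesClient=#kubernetesClient&operation=getService").
to("mock:result");

This operation returns the matching Service from your cluster, if one exists, as the message body.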
[ "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-kubernetes-starter</artifactId> </dependency>", "kubernetes-services:masterUrl", "from(\"direct:list\"). toF(\"kubernetes-services:///?kubernetesClient=#kubernetesClient&operation=listServices\"). to(\"mock:result\");", "from(\"direct:listByLabels\").process(new Processor() { @Override public void process(Exchange exchange) throws Exception { Map<String, String> labels = new HashMap<>(); labels.put(\"key1\", \"value1\"); labels.put(\"key2\", \"value2\"); exchange.getIn().setHeader(KubernetesConstants.KUBERNETES_SERVICE_LABELS, labels); } }); toF(\"kubernetes-services:///?kubernetesClient=#kubernetesClient&operation=listServicesByLabels\"). to(\"mock:result\");", "fromF(\"kubernetes-services://%s?oauthToken=%s&namespace=default&resourceName=test\", host, authToken).process(new KubernertesProcessor()).to(\"mock:result\"); public class KubernertesProcessor implements Processor { @Override public void process(Exchange exchange) throws Exception { Message in = exchange.getIn(); Service sv = exchange.getIn().getBody(Service.class); log.info(\"Got event with configmap name: \" + sv.getMetadata().getName() + \" and action \" + in.getHeader(KubernetesConstants.KUBERNETES_EVENT_ACTION)); } }" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel/4.4/html/red_hat_build_of_apache_camel_for_spring_boot_reference/csb-camel-kubernetes-services-component-starter
Chapter 30. Delegating permissions to user groups to manage users using IdM WebUI
Chapter 30. Delegating permissions to user groups to manage users using IdM WebUI Delegation is one of the access control methods in IdM, along with self-service rules and role-based access control (RBAC). You can use delegation to assign permissions to one group of users to manage entries for another group of users. This section covers the following topics: Delegation rules Creating a delegation rule using IdM WebUI Viewing existing delegation rules using IdM WebUI Modifying a delegation rule using IdM WebUI Deleting a delegation rule using IdM WebUI 30.1. Delegation rules You can delegate permissions to user groups to manage users by creating delegation rules . Delegation rules allow a specific user group to perform write (edit) operations on specific attributes for users in another user group. This form of access control rule is limited to editing the values of a subset of attributes you specify in a delegation rule; it does not grant the ability to add or remove whole entries or control over unspecified attributes. Delegation rules grant permissions to existing user groups in IdM. You can use delegation to, for example, allow the managers user group to manage selected attributes of users in the employees user group. 30.2. Creating a delegation rule using IdM WebUI Follow this procedure to create a delegation rule using the IdM WebUI. Prerequisites You are logged in to the IdM Web UI as a member of the admins group. Procedure From the IPA Server menu, click Role-Based Access Control Delegations . Click Add . In the Add delegation window, do the following: Name the new delegation rule. Set the permissions by selecting the check boxes that indicate whether users will have the right to view the given attributes ( read ) and add or change the given attributes ( write ). In the User group drop-down menu, select the group who is being granted permissions to view or edit the entries of users in the member group. In the Member user group drop-down menu, select the group whose entries can be edited by members of the delegation group. In the attributes box, select the check boxes by the attributes to which you want to grant permissions. Click the Add button to save the new delegation rule. 30.3. Viewing existing delegation rules using IdM WebUI Follow this procedure to view existing delegation rules using the IdM WebUI. Prerequisites You are logged in to the IdM Web UI as a member of the admins group. Procedure From the IPA Server menu, click Role-Based Access Control Delegations . 30.4. Modifying a delegation rule using IdM WebUI Follow this procedure to modify an existing delegation rule using the IdM WebUI. Prerequisites You are logged in to the IdM Web UI as a member of the admins group. Procedure From the IPA Server menu, click Role-Based Access Control Delegations . Click on the rule you want to modify. Make the desired changes: Change the name of the rule. Change granted permissions by selecting the check boxes that indicate whether users will have the right to view the given attributes ( read ) and add or change the given attributes ( write ). In the User group drop-down menu, select the group who is being granted permissions to view or edit the entries of users in the member group. In the Member user group drop-down menu, select the group whose entries can be edited by members of the delegation group. In the attributes box, select the check boxes by the attributes to which you want to grant permissions. To remove permissions to an attribute, uncheck the relevant check box. 
Click the Save button to save the changes. 30.5. Deleting a delegation rule using IdM WebUI Follow this procedure to delete an existing delegation rule using the IdM WebUI. Prerequisites You are logged in to the IdM Web UI as a member of the admins group. Procedure From the IPA Server menu, click Role-Based Access Control Delegations . Select the check box next to the rule you want to remove. Click Delete . Click Delete to confirm.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html/managing_idm_users_groups_hosts_and_access_control_rules/delegating-permissions-to-user-groups-to-manage-users-using-idm-webui_managing-users-groups-hosts
2.6.2.2.4. Expansions
2.6.2.2.4. Expansions Expansions, when used in conjunction with the spawn and twist directives, provide information about the client, server, and processes involved. The following is a list of supported expansions: %a - Returns the client's IP address. %A - Returns the server's IP address. %c - Returns a variety of client information, such as the user name and hostname, or the user name and IP address. %d - Returns the daemon process name. %h - Returns the client's hostname (or IP address, if the hostname is unavailable). %H - Returns the server's hostname (or IP address, if the hostname is unavailable). %n - Returns the client's hostname. If unavailable, unknown is printed. If the client's hostname and host address do not match, paranoid is printed. %N - Returns the server's hostname. If unavailable, unknown is printed. If the server's hostname and host address do not match, paranoid is printed. %p - Returns the daemon's process ID. %s - Returns various types of server information, such as the daemon process and the host or IP address of the server. %u - Returns the client's user name. If unavailable, unknown is printed. The following sample rule uses an expansion in conjunction with the spawn command to identify the client host in a customized log file. When connections to the SSH daemon ( sshd ) are attempted from a host in the example.com domain, execute the echo command to log the attempt, including the client hostname (by using the %h expansion), to a special file: Similarly, expansions can be used to personalize messages back to the client. In the following example, clients attempting to access FTP services from the example.com domain are informed that they have been banned from the server: For a full explanation of available expansions, as well as additional access control options, see section 5 of the man pages for hosts_access ( man 5 hosts_access ) and the man page for hosts_options . Refer to Section 2.6.5, "Additional Resources" for more information about TCP Wrappers.
[ "sshd : .example.com : spawn /bin/echo `/bin/date` access denied to %h>>/var/log/sshd.log : deny", "vsftpd : .example.com : twist /bin/echo \"421 %h has been banned from this server!\"" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/sect-Security_Guide-Option_Fields-Expansions
Chapter 30. Multitenancy
Chapter 30. Multitenancy Red Hat 3scale API Management allows multiple independent instances of 3scale accounts to exist on a single on-premises deployment. Accounts operate independently from one another, and cannot share information among themselves. 30.1. Master Admin Portal A master administrator monitors and manages the 3scale accounts through the Master Admin Portal and API endpoints. Similar to the standard Admin Portal , the Master Admin Portal contains information about all the accounts of a deployment, and allows for administration of accounts and users through a unique account page. For details on account administrator operations, refer to the Accounts guide. 30.1.1. Accessing the Master Admin Portal To access the Master Admin Portal, you need to use the credentials and URL specifically defined for the Master Admin Portal during the on-premises installation process. The Master Admin Portal URL consists of the MASTER_NAME ( master by default in the template) and the WILDCARD_DOMAIN : You can identify the Master Admin Portal by the Master flag. 30.1.2. Adding an account through the Master Admin Portal To add an account through the Master Admin Portal, follow these steps: Log in to the Master Admin Portal . Navigate to Accounts . Click Create . Indicate the required information for the user: Username Email Password Password confirmation Indicate the required information for the organization: Organization/Group Name Click Create . After these steps, Red Hat 3scale API Management creates an account subdomain for your account based on the Organization/Group Name field. Additionally, you can see a page containing the details of the account you created. 30.1.3. Creating a single gateway with the Master Admin Portal With the Master Admin Portal, you can create a single gateway for all tenants by configuring the THREESCALE_PORTAL_ENDPOINT environment variable. This is similar to the Hosted APIcast in the Hosted 3scale (SaaS), where the default apicast-staging and apicast-production gateways deployed with the OpenShift template are configured in this way. To create a single gateway with the Master Admin Portal, follow these steps: Fetch the value of ACCESS_TOKEN out of the system-master-apicast secret in your 3scale project. Alternatively, you can create new access tokens in the Master Admin Portal. Use the following command when you deploy APIcast: The end of the URL looks like /master/api/proxy/configs because the master holds the configurations at a different endpoint than the default /admin/api/services.json . 30.2. Managing accounts You can manage accounts through the Master Admin Portal or through API calls. 30.2.1. Managing accounts through the Master Admin Portal To manage the accounts through the Master Admin Portal, you need to do the following: Log in to the Master Admin Portal . Navigate to the Accounts page. Select the group or organization you want to manage. From the Accounts page, you can perform administrative actions, such as impersonating an admin account or suspending an account. You can also manage the following account attributes: Applications Users Invitations Group Memberships Organization/Group Name 30.2.2. Managing accounts through API calls You can manage accounts through the Master Admin API calls. For information on these calls, refer to the Master API section by clicking the question mark (?) icon located in the upper-right corner of the Master Admin Portal, and then choosing 3scale API Docs . 30.3.
Understanding multitenancy subdomains As a result of multiple accounts existing under the same OpenShift cluster domain, individual account names are prepended to the OpenShift cluster domain name as subdomains. For example, the route for an account named user on a cluster with a domain of example.com appears as: A standard multitenant deployment will include: A master admin user A master admin portal route, defined by the MASTER_NAME parameter: An account admin user An account admin portal route, defined by the TENANT_NAME parameter: A developer portal route for the account: Routes for the production and staging embedded APIcast gateway: Additional accounts added by the master admin will be assigned a subdomain based on their names. 30.4. Deleting tenant accounts 30.4.1. Deleting an account via the Admin Portal With this procedure, accounts are scheduled for deletion and will be deleted after 15 days. While the account is scheduled for deletion: Users cannot log in to the account. The account cannot be edited, but the master can resume the account to the approved status. Additionally, the domains of the tenant (admin domain and developer portal) are not available, similar to a real deletion. Prerequisites: Log in to your master admin account . Procedure To see the list of accounts, navigate to Accounts . Click the account you want to delete. Click Edit , next to the account's name. In the account details page, click the Delete icon. Confirm the deletion. 30.4.2. Deleting a tenant via the console If you want to delete the account with an immediate effect, you can do so via the console: Open the console with these commands: USD oc rsh -c system-master "USD(oc get pods --selector deploymentconfig=system-app -o name)" bundle exec rails console Delete immediately with these lines: tenant = Account.find(PROVIDER_ID) tenant.schedule_for_deletion! DeleteAccountHierarchyWorker.perform_later(tenant) This is how each line works: Line 1: finds the account and saves it in the variable tenant . Line 2: schedules the account for deletion. This is only necessary if you have not scheduled the deletion through the Admin Portal. Line 3: deletes the tenant in a background process only if you have scheduled the account for deletion or it is suspended. Deletion will not proceed if the account is in approved status. 30.5. Resuming tenant accounts Resuming a tenant account implies restoring an account scheduled for deletion. You can resume a tenant account up to 15 days after you have scheduled it for deletion. After resuming an account: All apps exist. All historical stats remain. All tokens that should be valid are valid again. Apps start authorizing again. Prerequisites: Log in to your master admin account. Procedure To see the list of accounts, navigate to Accounts . Click the account you want to resume. Under the account details, click Resume . Click Ok to confirm you want to resume the account.
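To verify which subdomain routes actually exist for the tenants in such a deployment, you can list the routes in the 3scale project with the OpenShift CLI. This is a generic sketch; the project name 3scale-project is an assumption and should be replaced with the name of your own project:
oc get routes -n 3scale-project
The output should include the master admin portal route, each tenant's admin portal and developer portal routes, and the apicast-staging and apicast-production routes.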
[ "<MASTER_NAME> .<WILDCARD_DOMAIN>", "THREESCALE_PORTAL_ENDPOINT=\"https://<ACCESS_TOKEN>@<public url to master admin portal>/master/api/proxy/configs\"", "user.example.com", "<MASTER_NAME>.<WILDCARD_DOMAIN>", "<TENANT_NAME>-admin.<WILDCARD_DOMAIN>", "<TENANT_NAME>.<WILDCARD_DOMAIN>", "<API_NAME>-<TENANT_NAME>-apicast-staging.<WILDCARD_DOMAIN> <API_NAME>-<TENANT_NAME>-apicast-production.<WILDCARD_DOMAIN>", "This example illustrates the output users and routes of a standard multitenant deployment of 3scale:", "---- --> Deploying template \"3scale-project/3scale-api-management\" for \"amp.yml\" to project project", "3scale API Management --------- 3scale API Management main system", "Login on https://user-admin.3scale-project.example.com as admin/xXxXyz123 * With parameters: * ADMIN_PASSWORD=xXxXyz123 # generated * ADMIN_USERNAME=admin * TENANT_NAME=user * MASTER_NAME=master * MASTER_USER=master * MASTER_PASSWORD=xXxXyz123 # generated --> Success Access your application via route 'user-admin.3scale-project.example.com' Access your application via route 'master-admin.3scale-project.example.com' Access your application via route 'backend-user.3scale-project.example.com' Access your application via route 'user.3scale-project.example.com' Access your application via route 'api-user-apicast-staging.3scale-project.example.com' Access your application via route 'api-user-apicast-production.3scale-project.example.com' Access your application via route 'apicast-wildcard.3scale-project.example.com' ----", "oc rsh -c system-master \"USD(oc get pods --selector deploymentconfig=system-app -o name)\" bundle exec rails console", "tenant = Account.find(PROVIDER_ID) tenant.schedule_for_deletion! DeleteAccountHierarchyWorker.perform_later(tenant)" ]
https://docs.redhat.com/en/documentation/red_hat_3scale_api_management/2.15/html/admin_portal_guide/multitenancy_2
Chapter 1. Web Console Overview
Chapter 1. Web Console Overview The Red Hat OpenShift Container Platform web console provides a graphical user interface to visualize your project data and perform administrative, management, and troubleshooting tasks. The web console runs as pods on the control plane nodes in the openshift-console project. It is managed by a console-operator pod. Both Administrator and Developer perspectives are supported. Both Administrator and Developer perspectives enable you to create quick start tutorials for OpenShift Container Platform. A quick start is a guided tutorial with user tasks and is useful for getting oriented with an application, Operator, or other product offering. 1.1. About the Administrator perspective in the web console The Administrator perspective enables you to view the cluster inventory, capacity, general and specific utilization information, and the stream of important events, all of which help you to simplify planning and troubleshooting tasks. Both project administrators and cluster administrators can view the Administrator perspective. Cluster administrators can also open an embedded command line terminal instance with the web terminal Operator in OpenShift Container Platform 4.7 and later. Note The default web console perspective that is shown depends on the role of the user. The Administrator perspective is displayed by default if the user is recognized as an administrator. The Administrator perspective provides workflows specific to administrator use cases, such as the ability to: Manage workload, storage, networking, and cluster settings. Install and manage Operators using the Operator Hub. Add identity providers that allow users to log in and manage user access through roles and role bindings. View and manage a variety of advanced settings such as cluster updates, partial cluster updates, cluster Operators, custom resource definitions (CRDs), role bindings, and resource quotas. Access and manage monitoring features such as metrics, alerts, and monitoring dashboards. View and manage logging, metrics, and high-status information about the cluster. Visually interact with applications, components, and services associated with the Administrator perspective in OpenShift Container Platform. 1.2. About the Developer perspective in the web console The Developer perspective offers several built-in ways to deploy applications, services, and databases. In the Developer perspective, you can: View real-time visualization of rolling and recreating rollouts on the component. View the application status, resource utilization, project event streaming, and quota consumption. Share your project with others. Troubleshoot problems with your applications by running Prometheus Query Language (PromQL) queries on your project and examining the metrics visualized on a plot. The metrics provide information about the state of a cluster and any user-defined workloads that you are monitoring. Cluster administrators can also open an embedded command line terminal instance in the web console in OpenShift Container Platform 4.7 and later. Note The default web console perspective that is shown depends on the role of the user. The Developer perspective is displayed by default if the user is recognised as a developer. The Developer perspective provides workflows specific to developer use cases, such as the ability to: Create and deploy applications on OpenShift Container Platform by importing existing codebases, images, and container files. 
Visually interact with applications, components, and services associated with them within a project and monitor their deployment and build status. Group components within an application and connect the components within and across applications. Integrate serverless capabilities (Technology Preview). Create workspaces to edit your application code using Eclipse Che. You can use the Topology view to display applications, components, and workloads of your project. If you have no workloads in the project, the Topology view will show some links to create or import them. You can also use the Quick Search to import components directly. Additional resources See Viewing application composition using the Topology view for more information on using the Topology view in the Developer perspective. 1.3. Accessing the Perspectives You can access the Administrator and Developer perspectives from the web console as follows: Prerequisites To access a perspective, ensure that you have logged in to the web console. Your default perspective is automatically determined by the user's permissions. The Administrator perspective is selected for users with access to all projects, while the Developer perspective is selected for users with limited access to their own projects. Additional resources See Adding User Preferences for more information on changing perspectives. Procedure Use the perspective switcher to switch to the Administrator or Developer perspective. Select an existing project from the Project drop-down list. You can also create a new project from this dropdown. Note You can use the perspective switcher only as cluster-admin . Additional resources Learn more about Cluster Administrator Overview of the Administrator perspective Creating and deploying applications on OpenShift Container Platform using the Developer perspective Viewing the applications in your project, verifying their deployment status, and interacting with them in the Topology view Viewing cluster information Configuring the web console Customizing the web console Using the web terminal Creating quick start tutorials Disabling the web console
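As an illustration of the Prometheus Query Language (PromQL) queries mentioned above for the Developer perspective, a query such as the following plots per-pod CPU usage within a project; the namespace value is a placeholder and the exact metric names available depend on the cluster's monitoring configuration:
sum(rate(container_cpu_usage_seconds_total{namespace="my-project"}[5m])) by (pod)
You can enter such a query on the metrics page of the Developer perspective, typically under Observe, and adjust the label selectors to match your own workloads.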
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/web_console/web-console-overview
2.4. Creating the Resources and Resource Groups with the pcs Command
2.4. Creating the Resources and Resource Groups with the pcs Command This use case requires that you create four cluster resources. To ensure these resources all run on the same node, they are configured as part of the resource group apachegroup . The resources to create are as follows, listed in the order in which they will start. An LVM resource named my_lvm that uses the LVM volume group you created in Section 2.1, "Configuring an LVM Volume with an ext4 File System" . A Filesystem resource named my_fs , that uses the file system device /dev/my_vg/my_lv you created in Section 2.1, "Configuring an LVM Volume with an ext4 File System" . An IPaddr2 resource, which is a floating IP address for the apachegroup resource group. The IP address must not be one already associated with a physical node. If the IPaddr2 resource's NIC device is not specified, the floating IP must reside on the same network as the statically assigned IP addresses used by the cluster nodes, otherwise the NIC device to assign the floating IP address cannot be properly detected. An apache resource named Website that uses the index.html file and the Apache configuration you defined in Section 2.2, "Web Server Configuration" . The following procedure creates the resource group apachegroup and the resources that the group contains. The resources will start in the order in which you add them to the group, and they will stop in the reverse order in which they are added to the group. Run this procedure from one node of the cluster only. The following command creates the LVM resource my_lvm . This command specifies the exclusive=true parameter to ensure that only the cluster is capable of activating the LVM logical volume. Because the resource group apachegroup does not yet exist, this command creates the resource group. When you create a resource, the resource is started automatically. You can use the following command to confirm that the resource was created and has started. You can manually stop and start an individual resource with the pcs resource disable and pcs resource enable commands. The following commands create the remaining resources for the configuration, adding them to the existing resource group apachegroup . After creating the resources and the resource group that contains them, you can check the status of the cluster. Note that all four resources are running on the same node. Note that if you have not configured a fencing device for your cluster, as described in Section 1.3, "Fencing Configuration" , by default the resources do not start. Once the cluster is up and running, you can point a browser to the IP address you defined as the IPaddr2 resource to view the sample display, consisting of the simple word "Hello". If you find that the resources you configured are not running, you can run the pcs resource debug-start resource command to test the resource configuration. For information on the pcs resource debug-start command, see the High Availability Add-On Reference manual.
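If a resource fails to start, for example the Website resource, you can test its configuration outside of cluster control with the debug-start command mentioned above. This invocation is illustrative and uses the resource name defined in this procedure:
pcs resource debug-start Website
The command prints the errors reported by the resource agent, which usually points to problems such as a wrong configfile path or an unreachable statusurl.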
[ "pcs resource create my_lvm LVM volgrpname=my_vg exclusive=true --group apachegroup", "pcs resource show Resource Group: apachegroup my_lvm (ocf::heartbeat:LVM): Started", "pcs resource create my_fs Filesystem device=\"/dev/my_vg/my_lv\" directory=\"/var/www\" fstype=\"ext4\" --group apachegroup pcs resource create VirtualIP IPaddr2 ip=198.51.100.3 cidr_netmask=24 --group apachegroup pcs resource create Website apache configfile=\"/etc/httpd/conf/httpd.conf\" statusurl=\"http://127.0.0.1/server-status\" --group apachegroup", "pcs status Cluster name: my_cluster Last updated: Wed Jul 31 16:38:51 2013 Last change: Wed Jul 31 16:42:14 2013 via crm_attribute on z1.example.com Stack: corosync Current DC: z2.example.com (2) - partition with quorum Version: 1.1.10-5.el7-9abe687 2 Nodes configured 6 Resources configured Online: [ z1.example.com z2.example.com ] Full list of resources: myapc (stonith:fence_apc_snmp): Started z1.example.com Resource Group: apachegroup my_lvm (ocf::heartbeat:LVM): Started z1.example.com my_fs (ocf::heartbeat:Filesystem): Started z1.example.com VirtualIP (ocf::heartbeat:IPaddr2): Started z1.example.com Website (ocf::heartbeat:apache): Started z1.example.com", "Hello" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/high_availability_add-on_administration/s1-resourcegroupcreate-HAAA
Chapter 7. Configuring Identity Management for smart card authentication
Chapter 7. Configuring Identity Management for smart card authentication Identity Management (IdM) supports smart card authentication with: User certificates issued by the IdM certificate authority User certificates issued by an external certificate authority You can configure smart card authentication in IdM for both types of certificates. In this scenario, the rootca.pem CA certificate is the file containing the certificate of a trusted external certificate authority. For information about smart card authentication in IdM, see Understanding smart card authentication . For more details on configuring smart card authentication: Configuring the IdM server for smart card authentication Configuring the IdM client for smart card authentication Adding a certificate to a user entry in the IdM Web UI Adding a certificate to a user entry in the IdM CLI Installing tools for managing and using smart cards Storing a certificate on a smart card Logging in to IdM with smart cards Configuring GDM access using smart card authentication Configuring su access using smart card authentication 7.1. Configuring the IdM server for smart card authentication If you want to enable smart card authentication for users whose certificates have been issued by the certificate authority (CA) of the <EXAMPLE.ORG> domain that your Identity Management (IdM) CA trusts, you must obtain the following certificates so that you can add them when running the ipa-advise script that configures the IdM server: The certificate of the root CA that has either issued the certificate for the <EXAMPLE.ORG> CA directly, or through one or more of its sub-CAs. You can download the certificate chain from a web page whose certificate has been issued by the authority. For details, see Steps 1 - 4a in Configuring a browser to enable certificate authentication . The IdM CA certificate. You can obtain the CA certificate from the /etc/ipa/ca.crt file on the IdM server on which an IdM CA instance is running. The certificates of all of the intermediate CAs; that is, intermediate between the <EXAMPLE.ORG> CA and the IdM CA. To configure an IdM server for smart card authentication: Obtain files with the CA certificates in the PEM format. Run the built-in ipa-advise script. Reload the system configuration. Prerequisites You have root access to the IdM server. You have the root CA certificate and all the intermediate CA certificates. Procedure Create a directory in which you will do the configuration: Navigate to the directory: Obtain the relevant CA certificates stored in files in PEM format. If your CA certificate is stored in a file of a different format, such as DER, convert it to PEM format. The IdM Certificate Authority certificate is in PEM format and is located in the /etc/ipa/ca.crt file. Convert a DER file to a PEM file: For convenience, copy the certificates to the directory in which you want to do the configuration: Optional: If you use certificates of external certificate authorities, use the openssl x509 utility to view the contents of the files in the PEM format to check that the Issuer and Subject values are correct: Generate a configuration script with the in-built ipa-advise utility, using the administrator's privileges: The config-server-for-smart-card-auth.sh script performs the following actions: It configures the IdM Apache HTTP Server. It enables Public Key Cryptography for Initial Authentication in Kerberos (PKINIT) on the Key Distribution Center (KDC). It configures the IdM Web UI to accept smart card authorization requests. 
Execute the script, adding the PEM files containing the root CA and sub CA certificates as arguments: Note Ensure that you add the root CA's certificate as an argument before any sub CA certificates and that the CA or sub CA certificates have not expired. Optional: If the certificate authority that issued the user certificate does not provide any Online Certificate Status Protocol (OCSP) responder, you may need to disable OCSP check for authentication to the IdM Web UI: Set the SSLOCSPEnable parameter to off in the /etc/httpd/conf.d/ssl.conf file: Restart the Apache daemon (httpd) for the changes to take effect immediately: Warning Do not disable the OCSP check if you only use user certificates issued by the IdM CA. OCSP responders are part of IdM. For instructions on how to keep the OCSP check enabled, and yet prevent a user certificate from being rejected by the IdM server if it does not contain the information about the location at which the CA that issued the user certificate listens for OCSP service requests, see the SSLOCSPDefaultResponder directive in Apache mod_ssl configuration options . The server is now configured for smart card authentication. Note To enable smart card authentication in the whole topology, run the procedure on each IdM server. 7.2. Using Ansible to configure the IdM server for smart card authentication You can use Ansible to enable smart card authentication for users whose certificates have been issued by the certificate authority (CA) of the <EXAMPLE.ORG> domain that your Identity Management (IdM) CA trusts. To do that, you must obtain the following certificates so that you can use them when running an Ansible playbook with the ipasmartcard_server ansible-freeipa role script: The certificate of the root CA that has either issued the certificate for the <EXAMPLE.ORG> CA directly, or through one or more of its sub-CAs. You can download the certificate chain from a web page whose certificate has been issued by the authority. For details, see Step 4 in Configuring a browser to enable certificate authentication . The IdM CA certificate. You can obtain the CA certificate from the /etc/ipa/ca.crt file on any IdM CA server. The certificates of all of the CAs that are intermediate between the <EXAMPLE.ORG> CA and the IdM CA. Prerequisites You have root access to the IdM server. You know the IdM admin password. You have the root CA certificate, the IdM CA certificate, and all the intermediate CA certificates. You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica. Procedure If your CA certificates are stored in files of a different format, such as DER , convert them to PEM format: The IdM Certificate Authority certificate is in PEM format and is located in the /etc/ipa/ca.crt file. 
Optional: Use the openssl x509 utility to view the contents of the files in the PEM format to check that the Issuer and Subject values are correct: Navigate to your ~/ MyPlaybooks / directory: Create a subdirectory dedicated to the CA certificates: For convenience, copy all the required certificates to the ~/MyPlaybooks/SmartCard/ directory: In your Ansible inventory file, specify the following: The IdM servers that you want to configure for smart card authentication. The IdM administrator password. The paths to the certificates of the CAs in the following order: The root CA certificate file The intermediate CA certificates files The IdM CA certificate file The file can look as follows: Create an install-smartcard-server.yml playbook with the following content: Save the file. Run the Ansible playbook. Specify the playbook file, the file storing the password protecting the secret.yml file, and the inventory file: The ipasmartcard_server Ansible role performs the following actions: It configures the IdM Apache HTTP Server. It enables Public Key Cryptography for Initial Authentication in Kerberos (PKINIT) on the Key Distribution Center (KDC). It configures the IdM Web UI to accept smart card authorization requests. Optional: If the certificate authority that issued the user certificate does not provide any Online Certificate Status Protocol (OCSP) responder, you may need to disable OCSP check for authentication to the IdM Web UI: Connect to the IdM server as root : Set the SSLOCSPEnable parameter to off in the /etc/httpd/conf.d/ssl.conf file: Restart the Apache daemon (httpd) for the changes to take effect immediately: Warning Do not disable the OCSP check if you only use user certificates issued by the IdM CA. OCSP responders are part of IdM. For instructions on how to keep the OCSP check enabled, and yet prevent a user certificate from being rejected by the IdM server if it does not contain the information about the location at which the CA that issued the user certificate listens for OCSP service requests, see the SSLOCSPDefaultResponder directive in Apache mod_ssl configuration options . The server listed in the inventory file is now configured for smart card authentication. Note To enable smart card authentication in the whole topology, set the hosts variable in the Ansible playbook to ipacluster : Additional resources Sample playbooks using the ipasmartcard_server role in the /usr/share/doc/ansible-freeipa/playbooks/ directory 7.3. Configuring the IdM client for smart card authentication Follow this procedure to configure IdM clients for smart card authentication. The procedure needs to be run on each IdM system, a client or a server, to which you want to connect while using a smart card for authentication. For example, to enable an ssh connection from host A to host B, the script needs to be run on host B. As an administrator, run this procedure to enable smart card authentication using The ssh protocol For details see Configuring SSH access using smart card authentication . The console login The GNOME Display Manager (GDM) The su command This procedure is not required for authenticating to the IdM Web UI. Authenticating to the IdM Web UI involves two hosts, neither of which needs to be an IdM client: The machine on which the browser is running. The machine can be outside of the IdM domain. The IdM server on which httpd is running. The following procedure assumes that you are configuring smart card authentication on an IdM client, not an IdM server. 
For this reason, you need two computers: an IdM server to generate the configuration script, and the IdM client on which to run the script. Prerequisites Your IdM server has been configured for smart card authentication, as described in Configuring the IdM server for smart card authentication . You have root access to the IdM server and the IdM client. You have the root CA certificate and all the intermediate CA certificates. You installed the IdM client with the --mkhomedir option to ensure remote users can log in successfully. If you do not create a home directory, the default login location is the root of the directory structure, / . Procedure On an IdM server, generate a configuration script with ipa-advise using the administrator's privileges: The config-client-for-smart-card-auth.sh script performs the following actions: It configures the smart card daemon. It sets the system-wide truststore. It configures the System Security Services Daemon (SSSD) to allow users to authenticate with either their user name and password or with their smart card. For more details on SSSD profile options for smart card authentication, see Smart card authentication options in RHEL . From the IdM server, copy the script to a directory of your choice on the IdM client machine: From the IdM server, copy the CA certificate files in PEM format for convenience to the same directory on the IdM client machine as used in the previous step: On the client machine, execute the script, adding the PEM files containing the CA certificates as arguments: Note Ensure that you add the root CA's certificate as an argument before any sub CA certificates and that the CA or sub CA certificates have not expired. The client is now configured for smart card authentication. 7.4. Using Ansible to configure IdM clients for smart card authentication Follow this procedure to use the ansible-freeipa ipasmartcard_client module to configure specific Identity Management (IdM) clients to permit IdM users to authenticate with a smart card. Run this procedure to enable smart card authentication for IdM users that use any of the following to access IdM: The ssh protocol For details see Configuring SSH access using smart card authentication . The console login The GNOME Display Manager (GDM) The su command Note This procedure is not required for authenticating to the IdM Web UI. Authenticating to the IdM Web UI involves two hosts, neither of which needs to be an IdM client: The machine on which the browser is running. The machine can be outside of the IdM domain. The IdM server on which httpd is running. Prerequisites Your IdM server has been configured for smart card authentication, as described in Using Ansible to configure the IdM server for smart card authentication . You have root access to the IdM server and the IdM client. You have the root CA certificate, the IdM CA certificate, and all the intermediate CA certificates. You have configured your Ansible control node to meet the following requirements: You are using Ansible version 2.13 or later. You have installed the ansible-freeipa package. The example assumes that in the ~/ MyPlaybooks / directory, you have created an Ansible inventory file with the fully-qualified domain name (FQDN) of the IdM server. The example assumes that the secret.yml Ansible vault stores your ipaadmin_password . The target node, that is the node on which the ansible-freeipa module is executed, is part of the IdM domain as an IdM client, server or replica.
Procedure If your CA certificates are stored in files of a different format, such as DER , convert them to PEM format: The IdM CA certificate is in PEM format and is located in the /etc/ipa/ca.crt file. Optional: Use the openssl x509 utility to view the contents of the files in the PEM format to check that the Issuer and Subject values are correct: On your Ansible control node, navigate to your ~/ MyPlaybooks / directory: Create a subdirectory dedicated to the CA certificates: For convenience, copy all the required certificates to the ~/MyPlaybooks/SmartCard/ directory, for example: In your Ansible inventory file, specify the following: The IdM clients that you want to configure for smart card authentication. The IdM administrator password. The paths to the certificates of the CAs in the following order: The root CA certificate file The intermediate CA certificates files The IdM CA certificate file The file can look as follows: Create an install-smartcard-clients.yml playbook with the following content: Save the file. Run the Ansible playbook. Specify the playbook and inventory files: The ipasmartcard_client Ansible role performs the following actions: It configures the smart card daemon. It sets the system-wide truststore. It configures the System Security Services Daemon (SSSD) to allow users to authenticate with either their user name and password or their smart card. For more details on SSSD profile options for smart card authentication, see Smart card authentication options in RHEL . The clients listed in the ipaclients section of the inventory file are now configured for smart card authentication. Note If you have installed the IdM clients with the --mkhomedir option, remote users will be able to log in to their home directories. Otherwise, the default login location is the root of the directory structure, / . Additional resources Sample playbooks using the ipasmartcard_server role in the /usr/share/doc/ansible-freeipa/playbooks/ directory 7.5. Adding a certificate to a user entry in the IdM Web UI Follow this procedure to add an external certificate to a user entry in IdM Web UI. Note Instead of uploading the whole certificate, it is also possible to upload certificate mapping data to a user entry in IdM. User entries containing either full certificates or certificate mapping data can be used in conjunction with corresponding certificate mapping rules to facilitate the configuration of smart card authentication for system administrators. For details, see Certificate mapping rules for configuring authentication . Note If the user's certificate has been issued by the IdM Certificate Authority, the certificate is already stored in the user entry, and you do not need to follow this procedure. Prerequisites You have the certificate that you want to add to the user entry at your disposal. Procedure Log into the IdM Web UI as an administrator if you want to add a certificate to another user. For adding a certificate to your own profile, you do not need the administrator's credentials. Navigate to Users Active users sc_user . Find the Certificate option and click Add . On the command line, display the certificate in the PEM format using the cat utility or a text editor: Copy and paste the certificate from the CLI into the window that has opened in the Web UI. Click Add . Figure 7.1. Adding a new certificate in the IdM Web UI The sc_user entry now contains an external certificate. 7.6. 
Adding a certificate to a user entry in the IdM CLI Follow this procedure to add an external certificate to a user entry in the IdM CLI. Note Instead of uploading the whole certificate, it is also possible to upload certificate mapping data to a user entry in IdM. User entries containing either full certificates or certificate mapping data can be used in conjunction with corresponding certificate mapping rules to facilitate the configuration of smart card authentication for system administrators. For details, see Certificate mapping rules for configuring authentication . Note If the user's certificate has been issued by the IdM Certificate Authority, the certificate is already stored in the user entry, and you do not need to follow this procedure. Prerequisites You have the certificate that you want to add to the user entry at your disposal. Procedure Log into the IdM CLI as an administrator if you want to add a certificate to another user: For adding a certificate to your own profile, you do not need the administrator's credentials: Create an environment variable containing the certificate with the header and footer removed and concatenated into a single line, which is the format expected by the ipa user-add-cert command: Note that the certificate in the testuser.crt file must be in the PEM format. Add the certificate to the profile of sc_user using the ipa user-add-cert command: The sc_user entry now contains an external certificate. 7.7. Installing tools for managing and using smart cards Prerequisites The gnutls-utils package is installed. The opensc package is installed. The pcscd service is running. Before you can configure your smart card, you must install the corresponding tools, which can generate certificates and start the pcscd service. Procedure Install the opensc and gnutls-utils packages: Start the pcscd service. Verification Verify that the pcscd service is up and running: 7.8. Preparing your smart card and uploading your certificates and keys to your smart card Follow this procedure to configure your smart card with the pkcs15-init tool, which helps you to configure: Erasing your smart card Setting new PINs and optional PIN Unblocking Keys (PUKs) Creating a new slot on the smart card Storing the certificate, private key, and public key in the slot If required, locking the smart card settings as certain smart cards require this type of finalization Note The pkcs15-init tool may not work with all smart cards. You must use the tools that work with the smart card you are using. Prerequisites The opensc package, which includes the pkcs15-init tool, is installed. For more details, see Installing tools for managing and using smart cards . The card is inserted in the reader and connected to the computer. You have a private key, a public key, and a certificate to store on the smart card. In this procedure, testuser.key , testuserpublic.key , and testuser.crt are the names used for the private key, public key, and the certificate. You have your current smart card user PIN and Security Officer PIN (SO-PIN). Procedure Erase your smart card and authenticate yourself with your PIN: The card has been erased. Initialize your smart card, set your user PIN and PUK, and your Security Officer PIN and PUK: The pkcs15-init tool creates a new slot on the smart card. Set a label and the authentication ID for the slot: The label is set to a human-readable value, in this case, testuser . The auth-id must be two hexadecimal digits, in this case it is set to 01 .
Store and label the private key in the new slot on the smart card: Note The value you specify for --id must be the same when storing your private key and storing your certificate in the next step. Specifying your own value for --id is recommended as otherwise a more complicated value is calculated by the tool. Store and label the certificate in the new slot on the smart card: Optional: Store and label the public key in the new slot on the smart card: Note If the public key corresponds to a private key or certificate, specify the same ID as the ID of the private key or certificate. Optional: Certain smart cards require you to finalize the card by locking the settings: At this stage, your smart card includes the certificate, private key, and public key in the newly created slot. You have also created your user PIN and PUK and the Security Officer PIN and PUK. 7.9. Logging in to IdM with smart cards Follow this procedure to use smart cards for logging in to the IdM Web UI. Prerequisites The web browser is configured for using smart card authentication. The IdM server is configured for smart card authentication. The certificate installed on your smart card is either issued by the IdM server or has been added to the user entry in IdM. You know the PIN required to unlock the smart card. The smart card has been inserted into the reader. Procedure Open the IdM Web UI in the browser. Click Log In Using Certificate . If the Password Required dialog box opens, add the PIN to unlock the smart card and click the OK button. The User Identification Request dialog box opens. If the smart card contains more than one certificate, select the certificate you want to use for authentication in the drop-down list below Choose a certificate to present as identification . Click the OK button. Now you are successfully logged in to the IdM Web UI. 7.10. Logging in to GDM using smart card authentication on an IdM client The GNOME Display Manager (GDM) requires authentication. You can use your password; however, you can also use a smart card for authentication. Follow this procedure to use smart card authentication to access GDM. Prerequisites The system has been configured for smart card authentication. For details, see Configuring the IdM client for smart card authentication . The smart card contains your certificate and private key. The user account is a member of the IdM domain. The certificate on the smart card maps to the user entry through: Assigning the certificate to a particular user entry. For details, see Adding a certificate to a user entry in the IdM Web UI or Adding a certificate to a user entry in the IdM CLI . The certificate mapping data being applied to the account. For details, see Certificate mapping rules for configuring authentication on smart cards . Procedure Insert the smart card in the reader. Enter the smart card PIN. Click Sign In . You are successfully logged in to the RHEL system and you have a TGT provided by the IdM server. Verification In the Terminal window, enter klist and check the result: 7.11. Using smart card authentication with the su command Changing to a different user requires authentication. You can use a password or a certificate. Follow this procedure to use your smart card with the su command. This means that after you enter the su command, you are prompted for the smart card PIN. Prerequisites Your IdM server and client have been configured for smart card authentication.
See Configuring the IdM server for smart card authentication See Configuring the IdM client for smart card authentication The smart card contains your certificate and private key. See Storing a certificate on a smart card The card is inserted in the reader and connected to the computer. Procedure In a terminal window, change to a different user with the su command: If the configuration is correct, you are prompted to enter the smart card PIN.
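As an optional check, not part of the original procedure, you can confirm that the key and certificate were stored on the card as described in section 7.8 by dumping the card contents with the opensc tools:
pkcs15-tool --dump
pkcs15-tool --list-certificates
The output should show the testuser_key and testuser_crt objects in the slot that you created.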
[ "mkdir ~/SmartCard/", "cd ~/SmartCard/", "openssl x509 -in <filename>.der -inform DER -out <filename>.pem -outform PEM", "cp /tmp/rootca.pem ~/SmartCard/ cp /tmp/subca.pem ~/SmartCard/ cp /tmp/issuingca.pem ~/SmartCard/", "openssl x509 -noout -text -in rootca.pem | more", "kinit admin ipa-advise config-server-for-smart-card-auth > config-server-for-smart-card-auth.sh", "chmod +x config-server-for-smart-card-auth.sh ./config-server-for-smart-card-auth.sh rootca.pem subca.pem issuingca.pem Ticket cache:KEYRING:persistent:0:0 Default principal: [email protected] [...] Systemwide CA database updated. The ipa-certupdate command was successful", "SSLOCSPEnable off", "systemctl restart httpd", "openssl x509 -in <filename>.der -inform DER -out <filename>.pem -outform PEM", "openssl x509 -noout -text -in root-ca.pem | more", "cd ~/ MyPlaybooks /", "mkdir SmartCard/", "cp /tmp/root-ca.pem ~/MyPlaybooks/SmartCard/ cp /tmp/intermediate-ca.pem ~/MyPlaybooks/SmartCard/ cp /etc/ipa/ca.crt ~/MyPlaybooks/SmartCard/ipa-ca.crt", "[ipaserver] ipaserver.idm.example.com [ipareplicas] ipareplica1.idm.example.com ipareplica2.idm.example.com [ipacluster:children] ipaserver ipareplicas [ipacluster:vars] ipaadmin_password= \"{{ ipaadmin_password }}\" ipasmartcard_server_ca_certs=/home/<user_name>/MyPlaybooks/SmartCard/root-ca.pem,/home/<user_name>/MyPlaybooks/SmartCard/intermediate-ca.pem,/home/<user_name>/MyPlaybooks/SmartCard/ipa-ca.crt", "--- - name: Playbook to set up smart card authentication for an IdM server hosts: ipaserver become: true roles: - role: ipasmartcard_server state: present", "ansible-playbook --vault-password-file=password_file -v -i inventory install-smartcard-server.yml", "ssh [email protected]", "SSLOCSPEnable off", "systemctl restart httpd", "--- - name: Playbook to setup smartcard for IPA server and replicas hosts: ipacluster [...]", "kinit admin ipa-advise config-client-for-smart-card-auth > config-client-for-smart-card-auth.sh", "scp config-client-for-smart-card-auth.sh root @ client.idm.example.com:/root/SmartCard/ Password: config-client-for-smart-card-auth.sh 100% 2419 3.5MB/s 00:00", "scp {rootca.pem,subca.pem,issuingca.pem} root @ client.idm.example.com:/root/SmartCard/ Password: rootca.pem 100% 1237 9.6KB/s 00:00 subca.pem 100% 2514 19.6KB/s 00:00 issuingca.pem 100% 2514 19.6KB/s 00:00", "kinit admin chmod +x config-client-for-smart-card-auth.sh ./config-client-for-smart-card-auth.sh rootca.pem subca.pem issuingca.pem Ticket cache:KEYRING:persistent:0:0 Default principal: [email protected] [...] Systemwide CA database updated. 
The ipa-certupdate command was successful", "openssl x509 -in <filename>.der -inform DER -out <filename>.pem -outform PEM", "openssl x509 -noout -text -in root-ca.pem | more", "cd ~/ MyPlaybooks /", "mkdir SmartCard/", "cp /tmp/root-ca.pem ~/MyPlaybooks/SmartCard/ cp /tmp/intermediate-ca.pem ~/MyPlaybooks/SmartCard/ cp /etc/ipa/ca.crt ~/MyPlaybooks/SmartCard/ipa-ca.crt", "[ipaclients] ipaclient1.example.com ipaclient2.example.com [ipaclients:vars] ipaadmin_password=SomeADMINpassword ipasmartcard_client_ca_certs=/home/<user_name>/MyPlaybooks/SmartCard/root-ca.pem,/home/<user_name>/MyPlaybooks/SmartCard/intermediate-ca.pem,/home/<user_name>/MyPlaybooks/SmartCard/ipa-ca.crt", "--- - name: Playbook to set up smart card authentication for an IdM client hosts: ipaclients become: true roles: - role: ipasmartcard_client state: present", "ansible-playbook --vault-password-file=password_file -v -i inventory install-smartcard-clients.yml", "[user@client SmartCard]USD cat testuser.crt", "[user@client SmartCard]USD kinit admin", "[user@client SmartCard]USD kinit sc_user", "[user@client SmartCard]USD export CERT=`openssl x509 -outform der -in testuser.crt | base64 -w0 -`", "[user@client SmartCard]USD ipa user-add-cert sc_user --certificate=USDCERT", "yum -y install opensc gnutls-utils", "systemctl start pcscd", "systemctl status pcscd", "pkcs15-init --erase-card --use-default-transport-keys Using reader with a card: Reader name PIN [Security Officer PIN] required. Please enter PIN [Security Officer PIN]:", "pkcs15-init --create-pkcs15 --use-default-transport-keys --pin 963214 --puk 321478 --so-pin 65498714 --so-puk 784123 Using reader with a card: Reader name", "pkcs15-init --store-pin --label testuser --auth-id 01 --so-pin 65498714 --pin 963214 --puk 321478 Using reader with a card: Reader name", "pkcs15-init --store-private-key testuser.key --label testuser_key --auth-id 01 --id 01 --pin 963214 Using reader with a card: Reader name", "pkcs15-init --store-certificate testuser.crt --label testuser_crt --auth-id 01 --id 01 --format pem --pin 963214 Using reader with a card: Reader name", "pkcs15-init --store-public-key testuserpublic.key --label testuserpublic_key --auth-id 01 --id 01 --pin 963214 Using reader with a card: Reader name", "pkcs15-init -F", "klist Ticket cache: KEYRING:persistent:1358900015:krb_cache_TObtNMd Default principal: [email protected] Valid starting Expires Service principal 04/20/2020 13:58:24 04/20/2020 23:58:24 krbtgt/[email protected] renew until 04/27/2020 08:58:15", "su - example.user PIN for smart_card" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/managing_certificates_in_idm/configuring-idm-for-smart-card-auth_working-with-idm-certificates
Deploying OpenShift Data Foundation in external mode
Deploying OpenShift Data Foundation in external mode Red Hat OpenShift Data Foundation 4.16 Instructions for deploying OpenShift Data Foundation to use an external Red Hat Ceph Storage cluster and IBM FlashSystem. Red Hat Storage Documentation Team Abstract Read this document for instructions on installing Red Hat OpenShift Data Foundation to use an external Red Hat Ceph Storage cluster or IBM FlashSystem.
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.16/html/deploying_openshift_data_foundation_in_external_mode/index
Chapter 1. Customizing your taxonomy tree
Chapter 1. Customizing your taxonomy tree You can modify a taxonomy tree with knowledge or skills data in your RHEL AI environment to create your own custom Granite Large Language Model (LLM). On RHEL AI, the data sets you create for knowledge and skills are formatted in YAML. This YAML configuration is called a qna.yaml file, where "qna" stands for question and answer. A taxonomy tree is a categorization and information classification method that holds your qna.yaml files. The following documentation sections describe how to create skill and knowledge qna.yaml files for your taxonomy tree. Adding knowledge to your taxonomy tree Adding skills to your taxonomy tree There are a few supported knowledge document types that you can use for training the starter Granite LLM. The currently supported document types include: Markdown PDF 1.1. Overview of skill and knowledge You can use skill and knowledge sets and specify domain-specific information to teach your custom model. Knowledge A dataset that consists of information and facts. When creating knowledge data for a model, you are providing it with additional data and information so the model can answer questions more accurately. Skills A dataset where you can teach the model how to do a task. Skills on RHEL AI are split into categories: Compositional skills: Compositional skills allow AI models to perform specific tasks or functions. There are two types of compositional skills: Freeform compositional skills: These are performative skills that do not require additional context or information to function. Grounded compositional skills: These are performative skills that require additional context. For example, you can teach the model to read a table, where the additional context is an example of the table layout. Foundational skills: Foundational skills are skills that involve math, reasoning, and coding. Additional Resources Sample knowledge specifications
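The following is a minimal sketch of what a freeform skill qna.yaml file can look like. The field names follow the commonly used InstructLab taxonomy schema and the values are purely illustrative; consult the sample specifications referenced above for the exact schema that your RHEL AI version expects:
version: 2
task_description: 'Teach the model to write a one-sentence summary of a short paragraph'
created_by: example-user
seed_examples:
  - question: Summarize the following paragraph in one sentence. <paragraph text>
    answer: <one-sentence summary>
  - question: <another example question>
    answer: <another example answer>
Knowledge qna.yaml files additionally reference the source document and domain, which the sample knowledge specifications describe in detail.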
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux_ai/1.4/html/creating_skills_and_knowledge_yaml_files/customize_taxonomy_tree
Chapter 3. Upgrading RHCS 5 to RHCS 7 involving RHEL 8 to RHEL 9 upgrades with stretch mode enabled
Chapter 3. Upgrading RHCS 5 to RHCS 7 involving RHEL 8 to RHEL 9 upgrades with stretch mode enabled You can perform an upgrade from Red Hat Ceph Storage 5 to Red Hat Ceph Storage 8 involving Red Hat Enterprise Linux 8 to Red Hat Enterprise Linux 9 with the stretch mode enabled. Important Upgrade to the latest version of Red Hat Ceph Storage 5 prior to upgrading to the latest version of Red Hat Ceph Storage 8. Prerequisites Red Hat Ceph Storage 5 on Red Hat Enterprise Linux 8 with necessary hosts and daemons running with stretch mode enabled. Backup of Ceph binary ( /usr/sbin/cephadm ), ceph.pub ( /etc/ceph ), and the Ceph cluster's public SSH keys from the admin node. Procedure Log into the Cephadm shell: Example Label a second node as the admin in the cluster to manage the cluster when the admin node is re-provisioned. Syntax Example Set the noout flag. Example Drain all the daemons from the host: Syntax Example The _no_schedule label is automatically applied to the host, which blocks deployment. Check if all the daemons are removed from the storage cluster: Syntax Example Zap the devices so that if the hosts being drained have OSDs present, then they can be used to re-deploy OSDs when the host is added back. Syntax Example Check the status of OSD removal: Example When no placement groups (PG) are left on the OSD, the OSD is decommissioned and removed from the storage cluster. Remove the host from the cluster: Syntax Example Re-provision the respective hosts from RHEL 8 to RHEL 9 as described in Upgrading from RHEL 8 to RHEL 9 . Run the preflight playbook with the --limit option: Syntax Example The preflight playbook installs podman , lvm2 , chronyd , and cephadm on the new host. After installation is complete, cephadm resides in the /usr/sbin/ directory. Extract the cluster's public SSH keys to a folder: Syntax Example Copy the Ceph cluster's public SSH keys to the re-provisioned node: Syntax Example Optional: If the removed host has a monitor daemon, then, before adding the host to the cluster, add the --unmanaged flag to the monitor deployment. Syntax Add the host again to the cluster and add the labels present earlier: Syntax Optional: If the removed host had a monitor daemon deployed originally, the monitor daemon needs to be added back manually with the location attributes as described in Replacing the tiebreaker with a new monitor . Syntax Example Syntax Example Verify that the daemons on the re-provisioned host are running successfully with the same Ceph version: Syntax Set back the monitor daemon placement to managed . Syntax Repeat the above steps for all hosts. The arbiter monitor cannot be drained or removed from the host. Hence, the arbiter mon needs to be re-provisioned to another tie-breaker node, and then drained or removed from the host as described in Replacing the tiebreaker with a new monitor . Follow the same approach to re-provision admin nodes and use a second admin node to manage clusters. Add the backup files again to the node. Add admin nodes again to the cluster using the second admin node. Set the mon deployment to unmanaged . Follow Replacing the tiebreaker with a new monitor to add back the old arbiter mon and remove the temporary mon created earlier. Unset the noout flag. Syntax Verify the Ceph version and the cluster status to ensure that all daemons are working as expected after the Red Hat Enterprise Linux upgrade. Follow Upgrade a Red Hat Ceph Storage cluster using cephadm to perform the Red Hat Ceph Storage 5 to Red Hat Ceph Storage 8 upgrade.
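As a final check after re-provisioning and before starting the Red Hat Ceph Storage upgrade itself, the version and status verification mentioned above can be done with standard Ceph commands; these are generic examples rather than output captured from this procedure:
ceph versions
ceph -s
All daemons should report the same Ceph release, and the cluster status should show a healthy state with both data centers and the tiebreaker monitor present.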
[ "cephadm shell", "ceph orch host label add HOSTNAME _admin", "ceph orch host label add host02_admin", "ceph osd set noout", "ceph orch host drain HOSTNAME --force", "ceph orch host drain host02 --force", "ceph orch ps HOSTNAME", "ceph orch ps host02", "ceph orch device zap HOSTNAME DISK --force", "ceph orch device zap ceph-host02 /dev/vdb --force zap successful for /dev/vdb on ceph-host02", "ceph orch osd rm status", "ceph orch host rm HOSTNAME --force", "ceph orch host rm host02 --force", "ansible-playbook -i INVENTORY_FILE cephadm-preflight.yml --limit NEWHOST_NAME", "ansible-playbook -i hosts cephadm-preflight.yml --extra-vars \"ceph_origin={storage-product}\" --limit host02", "ceph cephadm get-pub-key ~/ PATH", "ceph cephadm get-pub-key ~/ceph.pub", "ssh-copy-id -f -i ~/ PATH root@ HOST_NAME_2", "ssh-copy-id -f -i ~/ceph.pub root@host02", "ceph orch apply mon PLACEMENT --unmanaged", "ceph orch host add HOSTNAME IP_ADDRESS --labels= LABELS", "ceph mon add HOSTNAME IP LOCATION", "ceph mon add ceph-host02 10.0.211.62 datacenter=DC2", "ceph orch daemon add mon HOSTNAME", "ceph orch daemon add mon ceph-host02", "ceph orch ps", "ceph orch apply mon PLACEMENT", "ceph osd unset noout" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html/upgrade_guide/upgrading-rhcs5-to-rhcs7-involving-rhel8-to-rhel9-upgrades-with-stretch-mode-enabled_upgrade
Chapter 5. Stopping JBoss EAP
Chapter 5. Stopping JBoss EAP The following procedure uses the Management command line interface (CLI) to stop JBoss EAP. Prerequisites JBoss EAP 7.4.16 is running. Procedure Launch the Management CLI by running: Connect to the server by running the connect command: Stop the server by running the shutdown command: Close the Management CLI by running the quit command: Alternative Here is another way to stop JBoss EAP: Navigate to the terminal where JBoss EAP is running. Press Ctrl+C to stop JBoss Enterprise Application Platform.
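If you prefer not to open an interactive session, the Management CLI can also connect and issue the shutdown operation in a single invocation. This one-liner is an illustrative alternative to the interactive steps above:
EAP_HOME/bin/jboss-cli.sh --connect command=:shutdown
On Windows hosts, use the equivalent jboss-cli.bat script.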
[ "EAP_HOME/bin/jboss-cli.sh", "[disconnected /] connect", "[standalone@localhost:9999 /] shutdown", "[standalone@localhost:9999 /] quit" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/installing_on_jboss_eap/stopping-jboss-eap
7.175. quota
7.175. quota 7.175.1. RHBA-2015:1262 - quota bug fix update Updated quota packages that fix several bugs are now available for Red Hat Enterprise Linux 6. The quota packages contain a suite of system administration tools for monitoring and limiting user and group disk usage on file systems. Bug Fixes BZ# 1007785 A regression caused incomplete synchronization of the clustered Global File System 2 (GFS2). As a consequence, queries for quota limits over the network timed out. With this update, the algorithm for translating quota values to the network format has been changed to prevent indefinite cycling in the rpc.rquotad server. As a result, a file system with negative quota values can no longer make the remote procedure call quota service unresponsive. BZ# 1009397 Previously, when disk usage was listed on a clustered GFS2 file system while a local node was not fully synchronized, the reported disk usage could exceed the file system capacity. Now, disk usage and quotas are printed as signed numbers to reflect the fact that negative fluctuations in disk usage accounting do occur in unsynchronized nodes of clustered file systems. As a result, negative disk usage values are properly reported. BZ# 1024097 Prior to this update, the rpc.rquotad server terminated with the "Too many autofs mount points." error when disk quotas were queried over the network on a server that had automounted more than 64 file systems. To fix this bug, the code enumerating automounted file systems has been altered. Now, the quota tools that suppress automounted file systems do not impose any limit on their number. Users of quota are advised to upgrade to these updated packages, which fix these bugs.
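A minimal sketch of applying the update on an affected Red Hat Enterprise Linux 6 system and confirming the installed package version afterwards (the exact version string depends on the errata available in your channel):

yum update quota
rpm -q quota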
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.7_technical_notes/package-quota
Red Hat Data Grid
Red Hat Data Grid Data Grid is a high-performance, distributed in-memory data store. Schemaless data structure Flexibility to store different objects as key-value pairs. Grid-based data storage Designed to distribute and replicate data across clusters. Elastic scaling Dynamically adjust the number of nodes to meet demand without service disruption. Data interoperability Store, retrieve, and query data in the grid from different endpoints.
null
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.4/html/upgrading_data_grid/red-hat-data-grid
Chapter 3. Basic Security
Chapter 3. Basic Security This chapter describes the basic steps to configure security before you start Karaf for the first time. By default, Karaf is secure, but none of its services are remotely accessible. This chapter explains how to enable secure access to the ports exposed by Karaf. 3.1. Configuring Basic Security 3.1.1. Overview The Apache Karaf runtime is secured against network attack by default, because all of its exposed ports require user authentication and no users are defined initially. In other words, the Apache Karaf runtime is remotely inaccessible by default. If you want to access the runtime remotely, you must first customize the security configuration, as described here. 3.1.2. Before you start the container If you want to enable remote access to the Karaf container, you must create a secure JAAS user before starting the container: 3.1.3. Create a secure JAAS user By default, no JAAS users are defined for the container, which effectively disables remote access (it is impossible to log on). To create a secure JAAS user, edit the InstallDir/etc/users.properties file and add a new user field, as follows: Where Username and Password are the new user credentials. The admin role gives this user the privileges to access all administration and management functions of the container. Do not define a numeric username with a leading zero. Such usernames will always cause a login attempt to fail. This is because the Karaf shell, which the console uses, drops leading zeros when the input appears to be a number. For example: Warning It is strongly recommended that you define custom user credentials with a strong password. 3.1.4. Role-based access control The Karaf container supports role-based access control, which regulates access through the JMX protocol, the Karaf command console, and the Fuse Management console. When assigning roles to users, you can choose from the set of standard roles, which provide the levels of access described in Table 3.1, "Standard Roles for Access Control" . Table 3.1. Standard Roles for Access Control Roles Description viewer Grants read-only access to the container. manager Grants read-write access at the appropriate level for ordinary users, who want to deploy and run applications. But blocks access to sensitive container configuration settings. admin Grants unrestricted access to the container. ssh Grants permission for remote console access through the SSH port. For more details about role-based access control, see Role-Based Access Control . 3.1.5. Ports exposed by the Apache Karaf container The following ports are exposed by the container: Console port - enables remote control of a container instance, through Apache Karaf shell commands. This port is enabled by default and is secured both by JAAS authentication and by SSH. JMX port - enables management of the container through the JMX protocol. This port is enabled by default and is secured by JAAS authentication. Web console port - provides access to an embedded Undertow container that can host Web console servlets. By default, the Fuse Console is installed in the Undertow container. 3.1.6. Enabling the remote console port You can access the remote console port whenever both of the following conditions are true: JAAS is configured with at least one set of login credentials. The Karaf runtime has not been started in client mode (client mode disables the remote console port completely). 
For example, to log on to the remote console port from the same machine where the container is running, enter the following command: Where the Username and Password are the credentials of a JAAS user with the ssh role. When accessing the Karaf console through the remote port, your privileges depend on the roles assigned to the user in the etc/users.properties file. If you want access to the complete set of console commands, the user account must have the admin role. 3.1.7. Strengthening security on the remote console port You can employ the following measures to strengthen security on the remote console port: Make sure that the JAAS user credentials have strong passwords. Customize the X.509 certificate (replace the Java keystore file, InstallDir/etc/host.key , with a custom key pair). 3.1.8. Enabling the JMX port The JMX port is enabled by default and secured by JAAS authentication. In order to access the JMX port, you must have configured JAAS with at least one set of login credentials. To connect to the JMX port, open a JMX client (for example, jconsole ) and connect to the following JMX URI: You must also provide valid JAAS credentials to the JMX client in order to connect. Note In general, the tail of the JMX URI has the format /karaf- ContainerName . If you change the container name from root to some other name, you must modify the JMX URI accordingly. 3.1.9. Strengthening security on the Fuse Console port The Fuse Console is already secured by JAAS authentication. To add SSL security, see Securing the Undertow HTTP Server .
[ "Username=Password,admin", "karaf@root> echo 0123 123 karaf@root> echo 00.123 0.123 karaf@root>", "./client -u Username -p Password", "service:jmx:rmi:///jndi/rmi://localhost:1099/karaf-root" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/deploying_into_apache_karaf/ESBRuntimeBasicSec
7.8.3. Related Documentation
7.8.3. Related Documentation Red Hat Linux Firewalls , by Bill McCarty; Red Hat Press - a comprehensive reference to building network and server firewalls using open source packet filtering technology such as Netfilter and iptables . It includes such topics as analyzing firewall logs, developing firewall rules, and customizing your firewall with graphical tools such as lokkit . Linux Firewalls , by Robert Ziegler; New Riders Press - contains a wealth of information on building firewalls using both 2.2 kernel ipchains as well as Netfilter and iptables . Additional security topics such as remote access issues and intrusion detection systems are also covered.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/security_guide/s2-firewall-moreinfo-bk
Chapter 5. Installing a cluster on Azure Stack Hub using ARM templates
Chapter 5. Installing a cluster on Azure Stack Hub using ARM templates In OpenShift Container Platform version 4.12, you can install a cluster on Microsoft Azure Stack Hub by using infrastructure that you provide. Several Azure Resource Manager (ARM) templates are provided to assist in completing these steps or to help model your own. Important The steps for performing a user-provisioned infrastructure installation are provided as an example only. Installing a cluster with infrastructure you provide requires knowledge of the cloud provider and the installation process of OpenShift Container Platform. Several ARM templates are provided to assist in completing these steps or to help model your own. You are also free to create the required resources through other methods; the templates are just an example. 5.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an Azure Stack Hub account to host the cluster. You downloaded the Azure CLI and installed it on your computer. See Install the Azure CLI in the Azure documentation. The documentation below was tested using version 2.28.0 of the Azure CLI. Azure CLI commands might perform differently based on the version you use. If you use a firewall and plan to use the Telemetry service, you configured the firewall to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. 5.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.12, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 5.3. Configuring your Azure Stack Hub project Before you can install OpenShift Container Platform, you must configure an Azure project to host it. Important All Azure Stack Hub resources that are available through public endpoints are subject to resource name restrictions, and you cannot create resources that use certain terms. For a list of terms that Azure Stack Hub restricts, see Resolve reserved resource name errors in the Azure documentation. 5.3.1. Azure Stack Hub account limits The OpenShift Container Platform cluster uses a number of Microsoft Azure Stack Hub components, and the default Quota types in Azure Stack Hub affect your ability to install OpenShift Container Platform clusters. The following table summarizes the Azure Stack Hub components whose limits can impact your ability to install and run OpenShift Container Platform clusters. 
Component Number of components required by default Description vCPU 56 A default cluster requires 56 vCPUs, so you must increase the account limit. By default, each cluster creates the following instances: One bootstrap machine, which is removed after installation Three control plane machines Three compute machines Because the bootstrap, control plane, and worker machines use Standard_DS4_v2 virtual machines, which use 8 vCPUs, a default cluster requires 56 vCPUs. The bootstrap node VM is used only during installation. To deploy more worker nodes, enable autoscaling, deploy large workloads, or use a different instance type, you must further increase the vCPU limit for your account to ensure that your cluster can deploy the machines that you require. VNet 1 Each default cluster requires one Virtual Network (VNet), which contains two subnets. Network interfaces 7 Each default cluster requires seven network interfaces. If you create more machines or your deployed workloads create load balancers, your cluster uses more network interfaces. Network security groups 2 Each cluster creates network security groups for each subnet in the VNet. The default cluster creates network security groups for the control plane and for the compute node subnets: controlplane Allows the control plane machines to be reached on port 6443 from anywhere node Allows worker nodes to be reached from the internet on ports 80 and 443 Network load balancers 3 Each cluster creates the following load balancers : default Public IP address that load balances requests to ports 80 and 443 across worker machines internal Private IP address that load balances requests to ports 6443 and 22623 across control plane machines external Public IP address that load balances requests to port 6443 across control plane machines If your applications create more Kubernetes LoadBalancer service objects, your cluster uses more load balancers. Public IP addresses 2 The public load balancer uses a public IP address. The bootstrap machine also uses a public IP address so that you can SSH into the machine to troubleshoot issues during installation. The IP address for the bootstrap node is used only during installation. Private IP addresses 7 The internal load balancer, each of the three control plane machines, and each of the three worker machines each use a private IP address. Additional resources Optimizing storage . 5.3.2. Configuring a DNS zone in Azure Stack Hub To successfully install OpenShift Container Platform on Azure Stack Hub, you must create DNS records in an Azure Stack Hub DNS zone. The DNS zone must be authoritative for the domain. To delegate a registrar's DNS zone to Azure Stack Hub, see Microsoft's documentation for Azure Stack Hub datacenter DNS integration . You can view Azure's DNS solution by visiting this example for creating DNS zones . 5.3.3. Certificate signing requests management Because your cluster has limited access to automatic machine management when you use infrastructure that you provision, you must provide a mechanism for approving cluster certificate signing requests (CSRs) after installation. The kube-controller-manager only approves the kubelet client CSRs. The machine-approver cannot guarantee the validity of a serving certificate that is requested by using kubelet credentials because it cannot confirm that the correct machine issued the request. You must determine and implement a method of verifying the validity of the kubelet serving certificate requests and approving them. 5.3.4. 
Required Azure Stack Hub roles Your Microsoft Azure Stack Hub account must have the following roles for the subscription that you use: Owner To set roles on the Azure portal, see the Manage access to resources in Azure Stack Hub with role-based access control in the Microsoft documentation. 5.3.5. Creating a service principal Because OpenShift Container Platform and its installation program create Microsoft Azure resources by using the Azure Resource Manager, you must create a service principal to represent it. Prerequisites Install or update the Azure CLI . Your Azure account has the required roles for the subscription that you use. Procedure Register your environment: USD az cloud register -n AzureStackCloud --endpoint-resource-manager <endpoint> 1 1 Specify the Azure Resource Manager endpoint, `https://management.<region>.<fqdn>/`. See the Microsoft documentation for details. Set the active environment: USD az cloud set -n AzureStackCloud Update your environment configuration to use the specific API version for Azure Stack Hub: USD az cloud update --profile 2019-03-01-hybrid Log in to the Azure CLI: USD az login If you are in a multitenant environment, you must also supply the tenant ID. If your Azure account uses subscriptions, ensure that you are using the right subscription: View the list of available accounts and record the tenantId value for the subscription you want to use for your cluster: USD az account list --refresh Example output [ { "cloudName": "AzureStackCloud", "id": "9bab1460-96d5-40b3-a78e-17b15e978a80", "isDefault": true, "name": "Subscription Name", "state": "Enabled", "tenantId": "6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee", "user": { "name": "[email protected]", "type": "user" } } ] View your active account details and confirm that the tenantId value matches the subscription you want to use: USD az account show Example output { "environmentName": "AzureStackCloud", "id": "9bab1460-96d5-40b3-a78e-17b15e978a80", "isDefault": true, "name": "Subscription Name", "state": "Enabled", "tenantId": "6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee", 1 "user": { "name": "[email protected]", "type": "user" } } 1 Ensure that the value of the tenantId parameter is the correct subscription ID. If you are not using the right subscription, change the active subscription: USD az account set -s <subscription_id> 1 1 Specify the subscription ID. Verify the subscription ID update: USD az account show Example output { "environmentName": "AzureStackCloud", "id": "33212d16-bdf6-45cb-b038-f6565b61edda", "isDefault": true, "name": "Subscription Name", "state": "Enabled", "tenantId": "8049c7e9-c3de-762d-a54e-dc3f6be6a7ee", "user": { "name": "[email protected]", "type": "user" } } Record the tenantId and id parameter values from the output. You need these values during the OpenShift Container Platform installation. Create the service principal for your account: USD az ad sp create-for-rbac --role Contributor --name <service_principal> \ 1 --scopes /subscriptions/<subscription_id> 2 --years <years> 3 1 Specify the service principal name. 2 Specify the subscription ID. 3 Specify the number of years. By default, a service principal expires in one year. By using the --years option you can extend the validity of your service principal. Example output Creating 'Contributor' role assignment under scope '/subscriptions/<subscription_id>' The output includes credentials that you must protect. Be sure that you do not include these credentials in your code or check the credentials into your source control.
For more information, see https://aka.ms/azadsp-cli { "appId": "ac461d78-bf4b-4387-ad16-7e32e328aec6", "displayName": "<service_principal>", "password": "00000000-0000-0000-0000-000000000000", "tenantId": "8049c7e9-c3de-762d-a54e-dc3f6be6a7ee" } Record the values of the appId and password parameters from the output. You need these values during OpenShift Container Platform installation. Additional resources For more information about CCO modes, see About the Cloud Credential Operator . 5.4. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select Azure as the cloud provider. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 5.5. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one.
For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses FIPS validated or Modules In Process cryptographic libraries on the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 5.6. Creating the installation files for Azure Stack Hub To install OpenShift Container Platform on Microsoft Azure Stack Hub using user-provisioned infrastructure, you must generate the files that the installation program needs to deploy your cluster and modify them so that the cluster creates only the machines that it will use. You manually create the install-config.yaml file, and then generate and customize the Kubernetes manifests and Ignition config files. You also have the option to first set up a separate /var partition during the preparation phases of installation. 5.6.1. Manually creating the installation configuration file Installing the cluster requires that you manually create the installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates, have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version.
Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Make the following modifications for Azure Stack Hub: Set the replicas parameter to 0 for the compute pool: compute: - hyperthreading: Enabled name: worker platform: {} replicas: 0 1 1 Set to 0 . The compute machines will be provisioned manually later. Update the platform.azure section of the install-config.yaml file to configure your Azure Stack Hub configuration: platform: azure: armEndpoint: <azurestack_arm_endpoint> 1 baseDomainResourceGroupName: <resource_group> 2 cloudName: AzureStackCloud 3 region: <azurestack_region> 4 1 Specify the Azure Resource Manager endpoint of your Azure Stack Hub environment, like https://management.local.azurestack.external . 2 Specify the name of the resource group that contains the DNS zone for your base domain. 3 Specify the Azure Stack Hub environment, which is used to configure the Azure SDK with the appropriate Azure API endpoints. 4 Specify the name of your Azure Stack Hub region. Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the step of the installation process. You must back it up now. 5.6.2. Sample customized install-config.yaml file for Azure Stack Hub You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. Use it as a resource to enter parameter values into the installation configuration file that you created manually. apiVersion: v1 baseDomain: example.com controlPlane: 1 name: master platform: azure: osDisk: diskSizeGB: 1024 2 diskType: premium_LRS replicas: 3 compute: 3 - name: worker platform: azure: osDisk: diskSizeGB: 512 4 diskType: premium_LRS replicas: 0 metadata: name: test-cluster 5 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 6 serviceNetwork: - 172.30.0.0/16 platform: azure: armEndpoint: azurestack_arm_endpoint 7 baseDomainResourceGroupName: resource_group 8 region: azure_stack_local_region 9 resourceGroupName: existing_resource_group 10 outboundType: Loadbalancer cloudName: AzureStackCloud 11 pullSecret: '{"auths": ...}' 12 fips: false 13 additionalTrustBundle: | 14 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- sshKey: ssh-ed25519 AAAA... 15 1 3 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 2 4 You can specify the size of the disk to use in GB. Minimum recommendation for control plane nodes is 1024 GB. 5 Specify the name of the cluster. 6 The cluster network plugin to install. The supported values are OVNKubernetes and OpenShiftSDN . The default value is OVNKubernetes . 7 Specify the Azure Resource Manager endpoint that your Azure Stack Hub operator provides. 8 Specify the name of the resource group that contains the DNS zone for your base domain. 9 Specify the name of your Azure Stack Hub local region. 10 Specify the name of an already existing resource group to install your cluster to. 
If undefined, a new resource group is created for the cluster. 11 Specify the Azure Stack Hub environment as your target platform. 12 Specify the pull secret required to authenticate your cluster. 13 Whether to enable or disable FIPS mode. By default, FIPS mode is not enabled. If FIPS mode is enabled, the Red Hat Enterprise Linux CoreOS (RHCOS) machines that OpenShift Container Platform runs on bypass the default Kubernetes cryptography suite and use the cryptography modules that are provided with RHCOS instead. Important To enable FIPS mode for your cluster, you must run the installation program from a Red Hat Enterprise Linux (RHEL) computer configured to operate in FIPS mode. For more information about configuring FIPS mode on RHEL, see Installing the system in FIPS mode . The use of FIPS validated or Modules In Process cryptographic libraries is only supported on OpenShift Container Platform deployments on the x86_64 , ppc64le , and s390x architectures. 14 If your Azure Stack Hub environment uses an internal certificate authority (CA), add the necessary certificate bundle in .pem format. 15 You can optionally provide the sshKey value that you use to access the machines in your cluster. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 5.6.3. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 
4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 5.6.4. Exporting common variables for ARM templates You must export a common set of variables that are used with the provided Azure Resource Manager (ARM) templates used to assist in completing a user-provided infrastructure install on Microsoft Azure Stack Hub. Note Specific ARM templates can also require additional exported variables, which are detailed in their related procedures. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Export common variables found in the install-config.yaml to be used by the provided ARM templates: USD export CLUSTER_NAME=<cluster_name> 1 USD export AZURE_REGION=<azure_region> 2 USD export SSH_KEY=<ssh_key> 3 USD export BASE_DOMAIN=<base_domain> 4 USD export BASE_DOMAIN_RESOURCE_GROUP=<base_domain_resource_group> 5 1 The value of the .metadata.name attribute from the install-config.yaml file. 2 The region to deploy the cluster into. This is the value of the .platform.azure.region attribute from the install-config.yaml file. 3 The SSH RSA public key file as a string. You must enclose the SSH key in quotes since it contains spaces. This is the value of the .sshKey attribute from the install-config.yaml file. 4 The base domain to deploy the cluster to. The base domain corresponds to the DNS zone that you created for your cluster. This is the value of the .baseDomain attribute from the install-config.yaml file. 5 The resource group where the DNS zone exists. This is the value of the .platform.azure.baseDomainResourceGroupName attribute from the install-config.yaml file. 
For example: USD export CLUSTER_NAME=test-cluster USD export AZURE_REGION=centralus USD export SSH_KEY="ssh-rsa xxx/xxx/xxx= [email protected]" USD export BASE_DOMAIN=example.com USD export BASE_DOMAIN_RESOURCE_GROUP=ocp-cluster Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 5.6.5. Creating the Kubernetes manifest and Ignition config files Because you must modify some cluster definition files and manually start the cluster machines, you must generate the Kubernetes manifest and Ignition config files that the cluster needs to configure the machines. The installation configuration file transforms into the Kubernetes manifests. The manifests wrap into the Ignition configuration files, which are later used to configure the cluster machines. Important The Ignition config files that the OpenShift Container Platform installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Prerequisites You obtained the OpenShift Container Platform installation program. You created the install-config.yaml installation configuration file. Procedure Change to the directory that contains the OpenShift Container Platform installation program and generate the Kubernetes manifests for the cluster: USD ./openshift-install create manifests --dir <installation_directory> 1 1 For <installation_directory> , specify the installation directory that contains the install-config.yaml file you created. Remove the Kubernetes manifest files that define the control plane machines: USD rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml By removing these files, you prevent the cluster from automatically generating control plane machines. Remove the Kubernetes manifest files that define the worker machines: USD rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml Because you create and manage the worker machines yourself, you do not need to initialize these machines. Check that the mastersSchedulable parameter in the <installation_directory>/manifests/cluster-scheduler-02-config.yml Kubernetes manifest file is set to false . This setting prevents pods from being scheduled on the control plane machines: Open the <installation_directory>/manifests/cluster-scheduler-02-config.yml file. Locate the mastersSchedulable parameter and ensure that it is set to false . Save and exit the file. 
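A quick way to confirm the scheduler setting without reopening the editor is to search the manifest directly; a minimal sketch, using the manifest path referenced in the step above:

grep mastersSchedulable <installation_directory>/manifests/cluster-scheduler-02-config.yml

The command should print a line containing mastersSchedulable: false once the change has been saved.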
Optional: If you do not want the Ingress Operator to create DNS records on your behalf, remove the privateZone and publicZone sections from the <installation_directory>/manifests/cluster-dns-02-config.yml DNS configuration file: apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone publicZone: 2 id: example.openshift.com status: {} 1 2 Remove this section completely. If you do so, you must add ingress DNS records manually in a later step. Optional: If your Azure Stack Hub environment uses an internal certificate authority (CA), you must update the .spec.trustedCA.name field in the <installation_directory>/manifests/cluster-proxy-01-config.yaml file to use user-ca-bundle : ... spec: trustedCA: name: user-ca-bundle ... Later, you must update your bootstrap ignition to include the CA. When configuring Azure on user-provisioned infrastructure, you must export some common variables defined in the manifest files to use later in the Azure Resource Manager (ARM) templates: Export the infrastructure ID by using the following command: USD export INFRA_ID=<infra_id> 1 1 The OpenShift Container Platform cluster has been assigned an identifier ( INFRA_ID ) in the form of <cluster_name>-<random_string> . This will be used as the base name for most resources created using the provided ARM templates. This is the value of the .status.infrastructureName attribute from the manifests/cluster-infrastructure-02-config.yml file. Export the resource group by using the following command: USD export RESOURCE_GROUP=<resource_group> 1 1 All resources created in this Azure deployment exists as part of a resource group . The resource group name is also based on the INFRA_ID , in the form of <cluster_name>-<random_string>-rg . This is the value of the .status.platformStatus.azure.resourceGroupName attribute from the manifests/cluster-infrastructure-02-config.yml file. Manually create your cloud credentials. From the directory that contains the installation program, obtain details of the OpenShift Container Platform release image that your openshift-install binary is built to use: USD openshift-install version Example output release image quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64 Locate all CredentialsRequest objects in this release image that target the cloud you are deploying on: USD oc adm release extract quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64 --credentials-requests --cloud=azure This command creates a YAML file for each CredentialsRequest object. Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: "1.0" name: openshift-image-registry-azure namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor Create YAML files for secrets in the openshift-install manifests directory that you generated previously. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object. The format for the secret data varies for each cloud provider. 
Sample secrets.yaml file: apiVersion: v1 kind: Secret metadata: name: USD{secret_name} namespace: USD{secret_namespace} stringData: azure_subscription_id: USD{subscription_id} azure_client_id: USD{app_id} azure_client_secret: USD{client_secret} azure_tenant_id: USD{tenant_id} azure_resource_prefix: USD{cluster_name} azure_resourcegroup: USD{resource_group} azure_region: USD{azure_region} Optional: If you manually created a cloud identity and access management (IAM) role, locate any CredentialsRequest objects with the TechPreviewNoUpgrade annotation in the release image by running the following command: USD oc adm release extract quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64 --credentials-requests --cloud=<platform_name> Example output 0000_30_capi-operator_00_credentials-request.yaml: release.openshift.io/feature-set: TechPreviewNoUpgrade Important The release image includes CredentialsRequest objects for Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set. You can identify these objects by their use of the release.openshift.io/feature-set: TechPreviewNoUpgrade annotation. If you are not using any of these features, do not create secrets for these objects. Creating secrets for Technology Preview features that you are not using can cause the installation to fail. If you are using any of these features, you must create secrets for the corresponding objects. Delete all CredentialsRequest objects that have the TechPreviewNoUpgrade annotation. Create a cco-configmap.yaml file in the manifests directory with the Cloud Credential Operator (CCO) disabled: Sample ConfigMap object apiVersion: v1 kind: ConfigMap metadata: name: cloud-credential-operator-config namespace: openshift-cloud-credential-operator annotations: release.openshift.io/create-only: "true" data: disabled: "true" To create the Ignition configuration files, run the following command from the directory that contains the installation program: USD ./openshift-install create ignition-configs --dir <installation_directory> 1 1 For <installation_directory> , specify the same installation directory. Ignition config files are created for the bootstrap, control plane, and compute nodes in the installation directory. The kubeadmin-password and kubeconfig files are created in the ./<installation_directory>/auth directory: Additional resources Manually creating IAM 5.6.6. Optional: Creating a separate /var partition It is recommended that disk partitioning for OpenShift Container Platform be left to the installer. However, there are cases where you might want to create separate partitions in a part of the filesystem that you expect to grow. OpenShift Container Platform supports the addition of a single partition to attach storage to either the /var partition or a subdirectory of /var . For example: /var/lib/containers : Holds container-related content that can grow as more images and containers are added to a system. /var/lib/etcd : Holds data that you might want to keep separate for purposes such as performance optimization of etcd storage. /var : Holds data that you might want to keep separate for purposes such as auditing. Storing the contents of a /var directory separately makes it easier to grow storage for those areas as needed and reinstall OpenShift Container Platform at a later date and keep that data intact. With this method, you will not have to pull all your containers again, nor will you have to copy massive log files when you update systems. 
Because /var must be in place before a fresh installation of Red Hat Enterprise Linux CoreOS (RHCOS), the following procedure sets up the separate /var partition by creating a machine config manifest that is inserted during the openshift-install preparation phases of an OpenShift Container Platform installation. Important If you follow the steps to create a separate /var partition in this procedure, it is not necessary to create the Kubernetes manifest and Ignition config files again as described later in this section. Procedure Create a directory to hold the OpenShift Container Platform installation files: USD mkdir USDHOME/clusterconfig Run openshift-install to create a set of files in the manifest and openshift subdirectories. Answer the system questions as you are prompted: USD openshift-install create manifests --dir USDHOME/clusterconfig Example output ? SSH Public Key ... INFO Credentials loaded from the "myprofile" profile in file "/home/myuser/.aws/credentials" INFO Consuming Install Config from target directory INFO Manifests created in: USDHOME/clusterconfig/manifests and USDHOME/clusterconfig/openshift Optional: Confirm that the installation program created manifests in the clusterconfig/openshift directory: USD ls USDHOME/clusterconfig/openshift/ Example output 99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml ... Create a Butane config that configures the additional partition. For example, name the file USDHOME/clusterconfig/98-var-partition.bu , change the disk device name to the name of the storage device on the worker systems, and set the storage size as appropriate. This example places the /var directory on a separate partition: variant: openshift version: 4.12.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_id> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true 1 The storage device name of the disk that you want to partition. 2 When adding a data partition to the boot disk, a minimum value of 25000 MiB (Mebibytes) is recommended. The root file system is automatically resized to fill all available space up to the specified offset. If no value is specified, or if the specified value is smaller than the recommended minimum, the resulting root file system will be too small, and future reinstalls of RHCOS might overwrite the beginning of the data partition. 3 The size of the data partition in mebibytes. 4 The prjquota mount option must be enabled for filesystems used for container storage. Note When creating a separate /var partition, you cannot use different instance types for worker nodes, if the different instance types do not have the same device name. Create a manifest from the Butane config and save it to the clusterconfig/openshift directory. 
For example, run the following command: USD butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml Run openshift-install again to create Ignition configs from a set of files in the manifest and openshift subdirectories: USD openshift-install create ignition-configs --dir USDHOME/clusterconfig USD ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign Now you can use the Ignition config files as input to the installation procedures to install Red Hat Enterprise Linux CoreOS (RHCOS) systems. 5.7. Creating the Azure resource group You must create a Microsoft Azure resource group . This is used during the installation of your OpenShift Container Platform cluster on Azure Stack Hub. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Procedure Create the resource group in a supported Azure region: USD az group create --name USD{RESOURCE_GROUP} --location USD{AZURE_REGION} 5.8. Uploading the RHCOS cluster image and bootstrap Ignition config file The Azure client does not support deployments based on files existing locally. You must copy and store the RHCOS virtual hard disk (VHD) cluster image and bootstrap Ignition config file in a storage container so they are accessible during deployment. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Procedure Create an Azure storage account to store the VHD cluster image: USD az storage account create -g USD{RESOURCE_GROUP} --location USD{AZURE_REGION} --name USD{CLUSTER_NAME}sa --kind Storage --sku Standard_LRS Warning The Azure storage account name must be between 3 and 24 characters in length and use numbers and lower-case letters only. If your CLUSTER_NAME variable does not follow these restrictions, you must manually define the Azure storage account name. For more information on Azure storage account name restrictions, see Resolve errors for storage account names in the Azure documentation. Export the storage account key as an environment variable: USD export ACCOUNT_KEY=`az storage account keys list -g USD{RESOURCE_GROUP} --account-name USD{CLUSTER_NAME}sa --query "[0].value" -o tsv` Export the URL of the RHCOS VHD to an environment variable: USD export COMPRESSED_VHD_URL=USD(openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.artifacts.azurestack.formats."vhd.gz".disk.location') Important The RHCOS images might not change with every release of OpenShift Container Platform. You must specify an image with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image version that matches your OpenShift Container Platform version if it is available. Create the storage container for the VHD: USD az storage container create --name vhd --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} Download the compressed RHCOS VHD file locally: USD curl -O -L USD{COMPRESSED_VHD_URL} Decompress the VHD file. Note The decompressed VHD file is approximately 16 GB, so be sure that your host system has 16 GB of free space available. You can delete the VHD file after you upload it. 
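For example, a minimal decompression sketch, assuming the file downloaded in the previous step keeps its upstream vhd.gz name and that the version placeholder matches the upload step that follows:

gzip -d rhcos-<rhcos_version>-azurestack.x86_64.vhd.gz

This produces the rhcos-<rhcos_version>-azurestack.x86_64.vhd file that is uploaded to the blob in the next step.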
Copy the local VHD to a blob: USD az storage blob upload --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -c vhd -n "rhcos.vhd" -f rhcos-<rhcos_version>-azurestack.x86_64.vhd Create a blob storage container and upload the generated bootstrap.ign file: USD az storage container create --name files --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} USD az storage blob upload --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -c "files" -f "<installation_directory>/bootstrap.ign" -n "bootstrap.ign" 5.9. Example for creating DNS zones DNS records are required for clusters that use user-provisioned infrastructure. You should choose the DNS strategy that fits your scenario. For this example, Azure Stack Hub's datacenter DNS integration is used, so you will create a DNS zone. Note The DNS zone is not required to exist in the same resource group as the cluster deployment and might already exist in your organization for the desired base domain. If that is the case, you can skip creating the DNS zone; be sure the installation config you generated earlier reflects that scenario. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Procedure Create the new DNS zone in the resource group exported in the BASE_DOMAIN_RESOURCE_GROUP environment variable: USD az network dns zone create -g USD{BASE_DOMAIN_RESOURCE_GROUP} -n USD{CLUSTER_NAME}.USD{BASE_DOMAIN} You can skip this step if you are using a DNS zone that already exists. You can learn more about configuring a DNS zone in Azure Stack Hub by visiting that section. 5.10. Creating a VNet in Azure Stack Hub You must create a virtual network (VNet) in Microsoft Azure Stack Hub for your OpenShift Container Platform cluster to use. You can customize the VNet to meet your requirements. One way to create the VNet is to modify the provided Azure Resource Manager (ARM) template. Note If you do not use the provided ARM template to create your Azure Stack Hub infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Procedure Copy the template from the ARM template for the VNet section of this topic and save it as 01_vnet.json in your cluster's installation directory. This template describes the VNet that your cluster requires. Create the deployment by using the az CLI: USD az deployment group create -g USD{RESOURCE_GROUP} \ --template-file "<installation_directory>/01_vnet.json" \ --parameters baseName="USD{INFRA_ID}" 1 1 The base name to be used in resource names; this is usually the cluster's infrastructure ID. 5.10.1. ARM template for the VNet You can use the following Azure Resource Manager (ARM) template to deploy the VNet that you need for your OpenShift Container Platform cluster: Example 5.1. 01_vnet.json ARM template link:https://raw.githubusercontent.com/openshift/installer/release-4.12/upi/azurestack/01_vnet.json[] 5.11. Deploying the RHCOS cluster image for the Azure Stack Hub infrastructure You must use a valid Red Hat Enterprise Linux CoreOS (RHCOS) image for Microsoft Azure Stack Hub for your OpenShift Container Platform nodes. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Store the RHCOS virtual hard disk (VHD) cluster image in an Azure storage container. 
Store the bootstrap Ignition config file in an Azure storage container. Procedure Copy the template from the ARM template for image storage section of this topic and save it as 02_storage.json in your cluster's installation directory. This template describes the image storage that your cluster requires. Export the RHCOS VHD blob URL as a variable: USD export VHD_BLOB_URL=`az storage blob url --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -c vhd -n "rhcos.vhd" -o tsv` Deploy the cluster image: USD az deployment group create -g USD{RESOURCE_GROUP} \ --template-file "<installation_directory>/02_storage.json" \ --parameters vhdBlobURL="USD{VHD_BLOB_URL}" \ 1 --parameters baseName="USD{INFRA_ID}" 2 1 The blob URL of the RHCOS VHD to be used to create master and worker machines. 2 The base name to be used in resource names; this is usually the cluster's infrastructure ID. 5.11.1. ARM template for image storage You can use the following Azure Resource Manager (ARM) template to deploy the stored Red Hat Enterprise Linux CoreOS (RHCOS) image that you need for your OpenShift Container Platform cluster: Example 5.2. 02_storage.json ARM template link:https://raw.githubusercontent.com/openshift/installer/release-4.12/upi/azurestack/02_storage.json[] 5.12. Networking requirements for user-provisioned infrastructure All the Red Hat Enterprise Linux CoreOS (RHCOS) machines require networking to be configured in initramfs during boot to fetch their Ignition config files. 5.12.1. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Each machine must be able to resolve the hostnames of all other machines in the cluster. This section provides details about the ports that are required. Important In connected OpenShift Container Platform environments, all nodes are required to have internet access to pull images for platform containers and provide telemetry data to Red Hat. Table 5.1. Ports used for all-machine to all-machine communications Protocol Port Description ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves 10256 openshift-sdn UDP 4789 VXLAN 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets 123 Network Time Protocol (NTP) on UDP port 123 If an external NTP time server is configured, you must open UDP port 123 . TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 5.2. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 5.3. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports 5.13. Creating networking and load balancing components in Azure Stack Hub You must configure networking and load balancing in Microsoft Azure Stack Hub for your OpenShift Container Platform cluster to use. One way to create these components is to modify the provided Azure Resource Manager (ARM) template. Load balancing requires the following DNS records: An api DNS record for the API public load balancer in the DNS zone. An api-int DNS record for the API internal load balancer in the DNS zone. 
Note If you do not use the provided ARM template to create your Azure Stack Hub infrastructure, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Create and configure a VNet and associated subnets in Azure Stack Hub. Procedure Copy the template from the ARM template for the network and load balancers section of this topic and save it as 03_infra.json in your cluster's installation directory. This template describes the networking and load balancing objects that your cluster requires. Create the deployment by using the az CLI: USD az deployment group create -g USD{RESOURCE_GROUP} \ --template-file "<installation_directory>/03_infra.json" \ --parameters baseName="USD{INFRA_ID}" 1 1 The base name to be used in resource names; this is usually the cluster's infrastructure ID. Create an api DNS record and an api-int DNS record. When creating the API DNS records, the USD{BASE_DOMAIN_RESOURCE_GROUP} variable must point to the resource group where the DNS zone exists. Export the following variable: USD export PUBLIC_IP=`az network public-ip list -g USD{RESOURCE_GROUP} --query "[?name=='USD{INFRA_ID}-master-pip'] | [0].ipAddress" -o tsv` Export the following variable: USD export PRIVATE_IP=`az network lb frontend-ip show -g "USDRESOURCE_GROUP" --lb-name "USD{INFRA_ID}-internal" -n internal-lb-ip --query "privateIpAddress" -o tsv` Create the api DNS record in a new DNS zone: USD az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n api -a USD{PUBLIC_IP} --ttl 60 If you are adding the cluster to an existing DNS zone, you can create the api DNS record in it instead: USD az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{BASE_DOMAIN} -n api.USD{CLUSTER_NAME} -a USD{PUBLIC_IP} --ttl 60 Create the api-int DNS record in a new DNS zone: USD az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z "USD{CLUSTER_NAME}.USD{BASE_DOMAIN}" -n api-int -a USD{PRIVATE_IP} --ttl 60 If you are adding the cluster to an existing DNS zone, you can create the api-int DNS record in it instead: USD az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{BASE_DOMAIN} -n api-int.USD{CLUSTER_NAME} -a USD{PRIVATE_IP} --ttl 60 5.13.1. ARM template for the network and load balancers You can use the following Azure Resource Manager (ARM) template to deploy the networking objects and load balancers that you need for your OpenShift Container Platform cluster: Example 5.3. 03_infra.json ARM template link:https://raw.githubusercontent.com/openshift/installer/release-4.12/upi/azurestack/03_infra.json[] 5.14. Creating the bootstrap machine in Azure Stack Hub You must create the bootstrap machine in Microsoft Azure Stack Hub to use during OpenShift Container Platform cluster initialization. One way to create this machine is to modify the provided Azure Resource Manager (ARM) template. Note If you do not use the provided ARM template to create your bootstrap machine, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, you might have to contact Red Hat support with your installation logs. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. 
Create and configure a VNet and associated subnets in Azure Stack Hub. Create and configure networking and load balancers in Azure Stack Hub. Create control plane and compute roles. Procedure Copy the template from the ARM template for the bootstrap machine section of this topic and save it as 04_bootstrap.json in your cluster's installation directory. This template describes the bootstrap machine that your cluster requires. Export the bootstrap URL variable: USD bootstrap_url_expiry=`date -u -d "10 hours" '+%Y-%m-%dT%H:%MZ'` USD export BOOTSTRAP_URL=`az storage blob generate-sas -c 'files' -n 'bootstrap.ign' --https-only --full-uri --permissions r --expiry USDbootstrap_url_expiry --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -o tsv` Export the bootstrap ignition variable: If your environment uses a public certificate authority (CA), run this command: USD export BOOTSTRAP_IGNITION=`jq -rcnM --arg v "3.2.0" --arg url USD{BOOTSTRAP_URL} '{ignition:{version:USDv,config:{replace:{source:USDurl}}}}' | base64 | tr -d '\n'` If your environment uses an internal CA, you must add your PEM encoded bundle to the bootstrap ignition stub so that your bootstrap virtual machine can pull the bootstrap ignition from the storage account. Run the following commands, which assume your CA is in a file called CA.pem : USD export CA="data:text/plain;charset=utf-8;base64,USD(cat CA.pem |base64 |tr -d '\n')" USD export BOOTSTRAP_IGNITION=`jq -rcnM --arg v "3.2.0" --arg url "USDBOOTSTRAP_URL" --arg cert "USDCA" '{ignition:{version:USDv,security:{tls:{certificateAuthorities:[{source:USDcert}]}},config:{replace:{source:USDurl}}}}' | base64 | tr -d '\n'` Create the deployment by using the az CLI: USD az deployment group create --verbose -g USD{RESOURCE_GROUP} \ --template-file "<installation_directory>/04_bootstrap.json" \ --parameters bootstrapIgnition="USD{BOOTSTRAP_IGNITION}" \ 1 --parameters baseName="USD{INFRA_ID}" \ 2 --parameters diagnosticsStorageAccountName="USD{CLUSTER_NAME}sa" 3 1 The bootstrap Ignition content for the bootstrap cluster. 2 The base name to be used in resource names; this is usually the cluster's infrastructure ID. 3 The name of the storage account for your cluster. 5.14.1. ARM template for the bootstrap machine You can use the following Azure Resource Manager (ARM) template to deploy the bootstrap machine that you need for your OpenShift Container Platform cluster: Example 5.4. 04_bootstrap.json ARM template link:https://raw.githubusercontent.com/openshift/installer/release-4.12/upi/azurestack/04_bootstrap.json[] 5.15. Creating the control plane machines in Azure Stack Hub You must create the control plane machines in Microsoft Azure Stack Hub for your cluster to use. One way to create these machines is to modify the provided Azure Resource Manager (ARM) template. If you do not use the provided ARM template to create your control plane machines, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, consider contacting Red Hat support with your installation logs. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Create and configure a VNet and associated subnets in Azure Stack Hub. Create and configure networking and load balancers in Azure Stack Hub. Create control plane and compute roles. Create the bootstrap machine. 
Procedure Copy the template from the ARM template for control plane machines section of this topic and save it as 05_masters.json in your cluster's installation directory. This template describes the control plane machines that your cluster requires. Export the following variable needed by the control plane machine deployment: USD export MASTER_IGNITION=`cat <installation_directory>/master.ign | base64 | tr -d '\n'` Create the deployment by using the az CLI: USD az deployment group create -g USD{RESOURCE_GROUP} \ --template-file "<installation_directory>/05_masters.json" \ --parameters masterIgnition="USD{MASTER_IGNITION}" \ 1 --parameters baseName="USD{INFRA_ID}" \ 2 --parameters diagnosticsStorageAccountName="USD{CLUSTER_NAME}sa" 3 1 The Ignition content for the control plane nodes (also known as the master nodes). 2 The base name to be used in resource names; this is usually the cluster's infrastructure ID. 3 The name of the storage account for your cluster. 5.15.1. ARM template for control plane machines You can use the following Azure Resource Manager (ARM) template to deploy the control plane machines that you need for your OpenShift Container Platform cluster: Example 5.5. 05_masters.json ARM template link:https://raw.githubusercontent.com/openshift/installer/release-4.12/upi/azurestack/05_masters.json[] 5.16. Wait for bootstrap completion and remove bootstrap resources in Azure Stack Hub After you create all of the required infrastructure in Microsoft Azure Stack Hub, wait for the bootstrap process to complete on the machines that you provisioned by using the Ignition config files that you generated with the installation program. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Create and configure a VNet and associated subnets in Azure Stack Hub. Create and configure networking and load balancers in Azure Stack Hub. Create control plane and compute roles. Create the bootstrap machine. Create the control plane machines. Procedure Change to the directory that contains the installation program and run the following command: USD ./openshift-install wait-for bootstrap-complete --dir <installation_directory> \ 1 --log-level info 2 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. 2 To view different installation details, specify warn , debug , or error instead of info . If the command exits without a FATAL warning, your production control plane has initialized. Delete the bootstrap resources: USD az network nsg rule delete -g USD{RESOURCE_GROUP} --nsg-name USD{INFRA_ID}-nsg --name bootstrap_ssh_in USD az vm stop -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap USD az vm deallocate -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap USD az vm delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap --yes USD az disk delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap_OSDisk --no-wait --yes USD az network nic delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap-nic --no-wait USD az storage blob delete --account-key USD{ACCOUNT_KEY} --account-name USD{CLUSTER_NAME}sa --container-name files --name bootstrap.ign USD az network public-ip delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap-ssh-pip Note If you do not delete the bootstrap server, installation may not succeed due to API traffic being routed to the bootstrap server. 5.17. 
Creating additional worker machines in Azure Stack Hub You can create worker machines in Microsoft Azure Stack Hub for your cluster to use by launching individual instances discretely or by automated processes outside the cluster, such as auto scaling groups. You can also take advantage of the built-in cluster scaling mechanisms and the machine API in OpenShift Container Platform. In this example, you manually launch one instance by using the Azure Resource Manager (ARM) template. Additional instances can be launched by including additional resources of type 06_workers.json in the file. If you do not use the provided ARM template to create your control plane machines, you must review the provided information and manually create the infrastructure. If your cluster does not initialize correctly, consider contacting Red Hat support with your installation logs. Prerequisites Configure an Azure account. Generate the Ignition config files for your cluster. Create and configure a VNet and associated subnets in Azure Stack Hub. Create and configure networking and load balancers in Azure Stack Hub. Create control plane and compute roles. Create the bootstrap machine. Create the control plane machines. Procedure Copy the template from the ARM template for worker machines section of this topic and save it as 06_workers.json in your cluster's installation directory. This template describes the worker machines that your cluster requires. Export the following variable needed by the worker machine deployment: USD export WORKER_IGNITION=`cat <installation_directory>/worker.ign | base64 | tr -d '\n'` Create the deployment by using the az CLI: USD az deployment group create -g USD{RESOURCE_GROUP} \ --template-file "<installation_directory>/06_workers.json" \ --parameters workerIgnition="USD{WORKER_IGNITION}" \ 1 --parameters baseName="USD{INFRA_ID}" 2 --parameters diagnosticsStorageAccountName="USD{CLUSTER_NAME}sa" 3 1 The Ignition content for the worker nodes. 2 The base name to be used in resource names; this is usually the cluster's infrastructure ID. 3 The name of the storage account for your cluster. 5.17.1. ARM template for worker machines You can use the following Azure Resource Manager (ARM) template to deploy the worker machines that you need for your OpenShift Container Platform cluster: Example 5.6. 06_workers.json ARM template link:https://raw.githubusercontent.com/openshift/installer/release-4.12/upi/azurestack/06_workers.json[] 5.18. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.12. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.12 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . 
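For example, one way to place the binary and confirm the client works is shown below; /usr/local/bin is only a common choice, and any directory on your PATH is fine. # Install the unpacked binary into a directory on the PATH and check the client version
sudo install -m 0755 oc /usr/local/bin/oc
oc version --client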
To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.12 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.12 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.12 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 5.19. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 5.20. Approving the certificate signing requests for your machines When you add machines to a cluster, two pending certificate signing requests (CSRs) are generated for each machine that you added. You must confirm that these CSRs are approved or, if necessary, approve them yourself. The client requests must be approved first, followed by the server requests. Prerequisites You added machines to your cluster. Procedure Confirm that the cluster recognizes the machines: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.25.0 master-1 Ready master 63m v1.25.0 master-2 Ready master 64m v1.25.0 The output lists all of the machines that you created. Note The preceding output might not include the compute nodes, also known as worker nodes, until some CSRs are approved. 
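While you work through the approval steps that follow, some administrators prefer to poll for pending CSRs in a small loop. This is only a convenience sketch built from the same go-template command used below, not the documented procedure, and it approves every pending CSR it sees, so use it only when that is acceptable for your cluster. # Poll for pending CSRs and approve them until none remain pending (convenience sketch)
while true; do
  pending=$(oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}')
  [ -z "$pending" ] && break
  echo "$pending" | xargs oc adm certificate approve
  sleep 30
done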
Review the pending CSRs and ensure that you see the client requests with the Pending or Approved status for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending ... In this example, two machines are joining the cluster. You might see more approved CSRs in the list. If the CSRs were not approved, after all of the pending CSRs for the machines you added are in Pending status, approve the CSRs for your cluster machines: Note Because the CSRs rotate automatically, approve your CSRs within an hour of adding the machines to the cluster. If you do not approve them within an hour, the certificates will rotate, and more than two certificates will be present for each node. You must approve all of these certificates. After the client CSR is approved, the Kubelet creates a secondary CSR for the serving certificate, which requires manual approval. Then, subsequent serving certificate renewal requests are automatically approved by the machine-approver if the Kubelet requests a new certificate with identical parameters. Note For clusters running on platforms that are not machine API enabled, such as bare metal and other user-provisioned infrastructure, you must implement a method of automatically approving the kubelet serving certificate requests (CSRs). If a request is not approved, then the oc exec , oc rsh , and oc logs commands cannot succeed, because a serving certificate is required when the API server connects to the kubelet. Any operation that contacts the Kubelet endpoint requires this certificate approval to be in place. The method must watch for new CSRs, confirm that the CSR was submitted by the node-bootstrapper service account in the system:node or system:admin groups, and confirm the identity of the node. To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve Note Some Operators might not become available until some CSRs are approved. Now that your client requests are approved, you must review the server requests for each machine that you added to the cluster: USD oc get csr Example output NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending ... If the remaining CSRs are not approved, and are in the Pending status, approve the CSRs for your cluster machines: To approve them individually, run the following command for each valid CSR: USD oc adm certificate approve <csr_name> 1 1 <csr_name> is the name of a CSR from the list of current CSRs. To approve all pending CSRs, run the following command: USD oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{"\n"}}{{end}}{{end}}' | xargs oc adm certificate approve After all client and server CSRs have been approved, the machines have the Ready status. 
Verify this by running the following command: USD oc get nodes Example output NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.25.0 master-1 Ready master 73m v1.25.0 master-2 Ready master 74m v1.25.0 worker-0 Ready worker 11m v1.25.0 worker-1 Ready worker 11m v1.25.0 Note It can take a few minutes after approval of the server CSRs for the machines to transition to the Ready status. Additional information For more information on CSRs, see Certificate Signing Requests . 5.21. Adding the Ingress DNS records If you removed the DNS Zone configuration when creating Kubernetes manifests and generating Ignition configs, you must manually create DNS records that point at the Ingress load balancer. You can create either a wildcard *.apps.{baseDomain}. or specific records. You can use A, CNAME, and other records per your requirements. Prerequisites You deployed an OpenShift Container Platform cluster on Microsoft Azure Stack Hub by using infrastructure that you provisioned. Install the OpenShift CLI ( oc ). Install or update the Azure CLI . Procedure Confirm the Ingress router has created a load balancer and populated the EXTERNAL-IP field: USD oc -n openshift-ingress get service router-default Example output NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.20.10 35.130.120.110 80:32288/TCP,443:31215/TCP 20 Export the Ingress router IP as a variable: USD export PUBLIC_IP_ROUTER=`oc -n openshift-ingress get service router-default --no-headers | awk '{print USD4}'` Add a *.apps record to the DNS zone. If you are adding this cluster to a new DNS zone, run: USD az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n *.apps -a USD{PUBLIC_IP_ROUTER} --ttl 300 If you are adding this cluster to an already existing DNS zone, run: USD az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{BASE_DOMAIN} -n *.apps.USD{CLUSTER_NAME} -a USD{PUBLIC_IP_ROUTER} --ttl 300 If you prefer to add explicit domains instead of using a wildcard, you can create entries for each of the cluster's current routes: USD oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{"\n"}{end}{end}' routes Example output oauth-openshift.apps.cluster.basedomain.com console-openshift-console.apps.cluster.basedomain.com downloads-openshift-console.apps.cluster.basedomain.com alertmanager-main-openshift-monitoring.apps.cluster.basedomain.com prometheus-k8s-openshift-monitoring.apps.cluster.basedomain.com 5.22. Completing an Azure Stack Hub installation on user-provisioned infrastructure After you start the OpenShift Container Platform installation on Microsoft Azure Stack Hub user-provisioned infrastructure, you can monitor the cluster events until the cluster is ready. Prerequisites Deploy the bootstrap machine for an OpenShift Container Platform cluster on user-provisioned Azure Stack Hub infrastructure. Install the oc CLI and log in. Procedure Complete the cluster installation: USD ./openshift-install --dir <installation_directory> wait-for install-complete 1 Example output INFO Waiting up to 30m0s for the cluster to initialize... 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. 
If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. Additional resources See About remote health monitoring for more information about the Telemetry service.
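While the wait-for install-complete command runs, it can be helpful to watch the cluster operators converge in a second terminal. This is an optional sketch that uses standard oc commands; all operators should eventually report Available. # Check cluster operator status once
oc get clusteroperators
# Or poll every 30 seconds until every operator reports Available
watch -n 30 'oc get clusteroperators'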
[ "az cloud register -n AzureStackCloud --endpoint-resource-manager <endpoint> 1", "az cloud set -n AzureStackCloud", "az cloud update --profile 2019-03-01-hybrid", "az login", "az account list --refresh", "[ { \"cloudName\": AzureStackCloud\", \"id\": \"9bab1460-96d5-40b3-a78e-17b15e978a80\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } } ]", "az account show", "{ \"environmentName\": AzureStackCloud\", \"id\": \"9bab1460-96d5-40b3-a78e-17b15e978a80\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"6057c7e9-b3ae-489d-a54e-de3f6bf6a8ee\", 1 \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } }", "az account set -s <subscription_id> 1", "az account show", "{ \"environmentName\": AzureStackCloud\", \"id\": \"33212d16-bdf6-45cb-b038-f6565b61edda\", \"isDefault\": true, \"name\": \"Subscription Name\", \"state\": \"Enabled\", \"tenantId\": \"8049c7e9-c3de-762d-a54e-dc3f6be6a7ee\", \"user\": { \"name\": \"[email protected]\", \"type\": \"user\" } }", "az ad sp create-for-rbac --role Contributor --name <service_principal> \\ 1 --scopes /subscriptions/<subscription_id> 2 --years <years> 3", "Creating 'Contributor' role assignment under scope '/subscriptions/<subscription_id>' The output includes credentials that you must protect. Be sure that you do not include these credentials in your code or check the credentials into your source control. For more information, see https://aka.ms/azadsp-cli { \"appId\": \"ac461d78-bf4b-4387-ad16-7e32e328aec6\", \"displayName\": <service_principal>\", \"password\": \"00000000-0000-0000-0000-000000000000\", \"tenantId\": \"8049c7e9-c3de-762d-a54e-dc3f6be6a7ee\" }", "tar -xvf openshift-install-linux.tar.gz", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "mkdir <installation_directory>", "compute: - hyperthreading: Enabled name: worker platform: {} replicas: 0 1", "platform: azure: armEndpoint: <azurestack_arm_endpoint> 1 baseDomainResourceGroupName: <resource_group> 2 cloudName: AzureStackCloud 3 region: <azurestack_region> 4", "apiVersion: v1 baseDomain: example.com controlPlane: 1 name: master platform: azure: osDisk: diskSizeGB: 1024 2 diskType: premium_LRS replicas: 3 compute: 3 - name: worker platform: azure: osDisk: diskSizeGB: 512 4 diskType: premium_LRS replicas: 0 metadata: name: test-cluster 5 networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 networkType: OVNKubernetes 6 serviceNetwork: - 172.30.0.0/16 platform: azure: armEndpoint: azurestack_arm_endpoint 7 baseDomainResourceGroupName: resource_group 8 region: azure_stack_local_region 9 resourceGroupName: existing_resource_group 10 outboundType: Loadbalancer cloudName: AzureStackCloud 11 pullSecret: '{\"auths\": ...}' 12 fips: false 13 additionalTrustBundle: | 14 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- sshKey: ssh-ed25519 AAAA... 
15", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "export CLUSTER_NAME=<cluster_name> 1 export AZURE_REGION=<azure_region> 2 export SSH_KEY=<ssh_key> 3 export BASE_DOMAIN=<base_domain> 4 export BASE_DOMAIN_RESOURCE_GROUP=<base_domain_resource_group> 5", "export CLUSTER_NAME=test-cluster export AZURE_REGION=centralus export SSH_KEY=\"ssh-rsa xxx/xxx/xxx= [email protected]\" export BASE_DOMAIN=example.com export BASE_DOMAIN_RESOURCE_GROUP=ocp-cluster", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "./openshift-install create manifests --dir <installation_directory> 1", "rm -f <installation_directory>/openshift/99_openshift-cluster-api_master-machines-*.yaml", "rm -f <installation_directory>/openshift/99_openshift-cluster-api_worker-machineset-*.yaml", "apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: null name: cluster spec: baseDomain: example.openshift.com privateZone: 1 id: mycluster-100419-private-zone publicZone: 2 id: example.openshift.com status: {}", "spec: trustedCA: name: user-ca-bundle", "export INFRA_ID=<infra_id> 1", "export RESOURCE_GROUP=<resource_group> 1", "openshift-install version", "release image quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64", "oc adm release extract quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64 --credentials-requests --cloud=azure", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: \"1.0\" name: openshift-image-registry-azure namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor", "apiVersion: v1 kind: Secret metadata: name: USD{secret_name} namespace: USD{secret_namespace} stringData: azure_subscription_id: USD{subscription_id} azure_client_id: USD{app_id} azure_client_secret: USD{client_secret} azure_tenant_id: USD{tenant_id} azure_resource_prefix: USD{cluster_name} azure_resourcegroup: USD{resource_group} azure_region: USD{azure_region}", "oc adm release extract quay.io/openshift-release-dev/ocp-release:4.y.z-x86_64 --credentials-requests --cloud=<platform_name>", "0000_30_capi-operator_00_credentials-request.yaml: release.openshift.io/feature-set: TechPreviewNoUpgrade", "apiVersion: v1 kind: ConfigMap metadata: name: cloud-credential-operator-config namespace: openshift-cloud-credential-operator annotations: release.openshift.io/create-only: \"true\" data: disabled: \"true\"", "./openshift-install create ignition-configs --dir <installation_directory> 1", ". β”œβ”€β”€ auth β”‚ β”œβ”€β”€ kubeadmin-password β”‚ └── kubeconfig β”œβ”€β”€ bootstrap.ign β”œβ”€β”€ master.ign β”œβ”€β”€ metadata.json └── worker.ign", "mkdir USDHOME/clusterconfig", "openshift-install create manifests --dir USDHOME/clusterconfig", "? 
SSH Public Key INFO Credentials loaded from the \"myprofile\" profile in file \"/home/myuser/.aws/credentials\" INFO Consuming Install Config from target directory INFO Manifests created in: USDHOME/clusterconfig/manifests and USDHOME/clusterconfig/openshift", "ls USDHOME/clusterconfig/openshift/", "99_kubeadmin-password-secret.yaml 99_openshift-cluster-api_master-machines-0.yaml 99_openshift-cluster-api_master-machines-1.yaml 99_openshift-cluster-api_master-machines-2.yaml", "variant: openshift version: 4.12.0 metadata: labels: machineconfiguration.openshift.io/role: worker name: 98-var-partition storage: disks: - device: /dev/disk/by-id/<device_id> 1 partitions: - label: var start_mib: <partition_start_offset> 2 size_mib: <partition_size> 3 number: 5 filesystems: - device: /dev/disk/by-partlabel/var path: /var format: xfs mount_options: [defaults, prjquota] 4 with_mount_unit: true", "butane USDHOME/clusterconfig/98-var-partition.bu -o USDHOME/clusterconfig/openshift/98-var-partition.yaml", "openshift-install create ignition-configs --dir USDHOME/clusterconfig ls USDHOME/clusterconfig/ auth bootstrap.ign master.ign metadata.json worker.ign", "az group create --name USD{RESOURCE_GROUP} --location USD{AZURE_REGION}", "az storage account create -g USD{RESOURCE_GROUP} --location USD{AZURE_REGION} --name USD{CLUSTER_NAME}sa --kind Storage --sku Standard_LRS", "export ACCOUNT_KEY=`az storage account keys list -g USD{RESOURCE_GROUP} --account-name USD{CLUSTER_NAME}sa --query \"[0].value\" -o tsv`", "export COMPRESSED_VHD_URL=USD(openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.artifacts.azurestack.formats.\"vhd.gz\".disk.location')", "az storage container create --name vhd --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY}", "curl -O -L USD{COMPRESSED_VHD_URL}", "az storage blob upload --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -c vhd -n \"rhcos.vhd\" -f rhcos-<rhcos_version>-azurestack.x86_64.vhd", "az storage container create --name files --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY}", "az storage blob upload --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -c \"files\" -f \"<installation_directory>/bootstrap.ign\" -n \"bootstrap.ign\"", "az network dns zone create -g USD{BASE_DOMAIN_RESOURCE_GROUP} -n USD{CLUSTER_NAME}.USD{BASE_DOMAIN}", "az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/01_vnet.json\" --parameters baseName=\"USD{INFRA_ID}\" 1", "link:https://raw.githubusercontent.com/openshift/installer/release-4.12/upi/azurestack/01_vnet.json[]", "export VHD_BLOB_URL=`az storage blob url --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -c vhd -n \"rhcos.vhd\" -o tsv`", "az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/02_storage.json\" --parameters vhdBlobURL=\"USD{VHD_BLOB_URL}\" \\ 1 --parameters baseName=\"USD{INFRA_ID}\" 2", "link:https://raw.githubusercontent.com/openshift/installer/release-4.12/upi/azurestack/02_storage.json[]", "az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/03_infra.json\" --parameters baseName=\"USD{INFRA_ID}\" 1", "export PUBLIC_IP=`az network public-ip list -g USD{RESOURCE_GROUP} --query \"[?name=='USD{INFRA_ID}-master-pip'] | [0].ipAddress\" -o tsv`", "export PRIVATE_IP=`az network lb frontend-ip show -g \"USDRESOURCE_GROUP\" --lb-name \"USD{INFRA_ID}-internal\" -n internal-lb-ip --query \"privateIpAddress\" -o 
tsv`", "az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n api -a USD{PUBLIC_IP} --ttl 60", "az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{BASE_DOMAIN} -n api.USD{CLUSTER_NAME} -a USD{PUBLIC_IP} --ttl 60", "az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z \"USD{CLUSTER_NAME}.USD{BASE_DOMAIN}\" -n api-int -a USD{PRIVATE_IP} --ttl 60", "az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{BASE_DOMAIN} -n api-int.USD{CLUSTER_NAME} -a USD{PRIVATE_IP} --ttl 60", "link:https://raw.githubusercontent.com/openshift/installer/release-4.12/upi/azurestack/03_infra.json[]", "bootstrap_url_expiry=`date -u -d \"10 hours\" '+%Y-%m-%dT%H:%MZ'`", "export BOOTSTRAP_URL=`az storage blob generate-sas -c 'files' -n 'bootstrap.ign' --https-only --full-uri --permissions r --expiry USDbootstrap_url_expiry --account-name USD{CLUSTER_NAME}sa --account-key USD{ACCOUNT_KEY} -o tsv`", "export BOOTSTRAP_IGNITION=`jq -rcnM --arg v \"3.2.0\" --arg url USD{BOOTSTRAP_URL} '{ignition:{version:USDv,config:{replace:{source:USDurl}}}}' | base64 | tr -d '\\n'`", "export CA=\"data:text/plain;charset=utf-8;base64,USD(cat CA.pem |base64 |tr -d '\\n')\"", "export BOOTSTRAP_IGNITION=`jq -rcnM --arg v \"3.2.0\" --arg url \"USDBOOTSTRAP_URL\" --arg cert \"USDCA\" '{ignition:{version:USDv,security:{tls:{certificateAuthorities:[{source:USDcert}]}},config:{replace:{source:USDurl}}}}' | base64 | tr -d '\\n'`", "az deployment group create --verbose -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/04_bootstrap.json\" --parameters bootstrapIgnition=\"USD{BOOTSTRAP_IGNITION}\" \\ 1 --parameters baseName=\"USD{INFRA_ID}\" \\ 2 --parameters diagnosticsStorageAccountName=\"USD{CLUSTER_NAME}sa\" 3", "link:https://raw.githubusercontent.com/openshift/installer/release-4.12/upi/azurestack/04_bootstrap.json[]", "export MASTER_IGNITION=`cat <installation_directory>/master.ign | base64 | tr -d '\\n'`", "az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/05_masters.json\" --parameters masterIgnition=\"USD{MASTER_IGNITION}\" \\ 1 --parameters baseName=\"USD{INFRA_ID}\" \\ 2 --parameters diagnosticsStorageAccountName=\"USD{CLUSTER_NAME}sa\" 3", "link:https://raw.githubusercontent.com/openshift/installer/release-4.12/upi/azurestack/05_masters.json[]", "./openshift-install wait-for bootstrap-complete --dir <installation_directory> \\ 1 --log-level info 2", "az network nsg rule delete -g USD{RESOURCE_GROUP} --nsg-name USD{INFRA_ID}-nsg --name bootstrap_ssh_in az vm stop -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap az vm deallocate -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap az vm delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap --yes az disk delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap_OSDisk --no-wait --yes az network nic delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap-nic --no-wait az storage blob delete --account-key USD{ACCOUNT_KEY} --account-name USD{CLUSTER_NAME}sa --container-name files --name bootstrap.ign az network public-ip delete -g USD{RESOURCE_GROUP} --name USD{INFRA_ID}-bootstrap-ssh-pip", "export WORKER_IGNITION=`cat <installation_directory>/worker.ign | base64 | tr -d '\\n'`", "az deployment group create -g USD{RESOURCE_GROUP} --template-file \"<installation_directory>/06_workers.json\" --parameters workerIgnition=\"USD{WORKER_IGNITION}\" \\ 1 --parameters 
baseName=\"USD{INFRA_ID}\" 2 --parameters diagnosticsStorageAccountName=\"USD{CLUSTER_NAME}sa\" 3", "link:https://raw.githubusercontent.com/openshift/installer/release-4.12/upi/azurestack/06_workers.json[]", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 63m v1.25.0 master-1 Ready master 63m v1.25.0 master-2 Ready master 64m v1.25.0", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-8b2br 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending csr-8vnps 15m system:serviceaccount:openshift-machine-config-operator:node-bootstrapper Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs --no-run-if-empty oc adm certificate approve", "oc get csr", "NAME AGE REQUESTOR CONDITION csr-bfd72 5m26s system:node:ip-10-0-50-126.us-east-2.compute.internal Pending csr-c57lv 5m26s system:node:ip-10-0-95-157.us-east-2.compute.internal Pending", "oc adm certificate approve <csr_name> 1", "oc get csr -o go-template='{{range .items}}{{if not .status}}{{.metadata.name}}{{\"\\n\"}}{{end}}{{end}}' | xargs oc adm certificate approve", "oc get nodes", "NAME STATUS ROLES AGE VERSION master-0 Ready master 73m v1.25.0 master-1 Ready master 73m v1.25.0 master-2 Ready master 74m v1.25.0 worker-0 Ready worker 11m v1.25.0 worker-1 Ready worker 11m v1.25.0", "oc -n openshift-ingress get service router-default", "NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE router-default LoadBalancer 172.30.20.10 35.130.120.110 80:32288/TCP,443:31215/TCP 20", "export PUBLIC_IP_ROUTER=`oc -n openshift-ingress get service router-default --no-headers | awk '{print USD4}'`", "az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{CLUSTER_NAME}.USD{BASE_DOMAIN} -n *.apps -a USD{PUBLIC_IP_ROUTER} --ttl 300", "az network dns record-set a add-record -g USD{BASE_DOMAIN_RESOURCE_GROUP} -z USD{BASE_DOMAIN} -n *.apps.USD{CLUSTER_NAME} -a USD{PUBLIC_IP_ROUTER} --ttl 300", "oc get --all-namespaces -o jsonpath='{range .items[*]}{range .status.ingress[*]}{.host}{\"\\n\"}{end}{end}' routes", "oauth-openshift.apps.cluster.basedomain.com console-openshift-console.apps.cluster.basedomain.com downloads-openshift-console.apps.cluster.basedomain.com alertmanager-main-openshift-monitoring.apps.cluster.basedomain.com prometheus-k8s-openshift-monitoring.apps.cluster.basedomain.com", "./openshift-install --dir <installation_directory> wait-for install-complete 1", "INFO Waiting up to 30m0s for the cluster to initialize" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/installing_on_azure_stack_hub/installing-azure-stack-hub-user-infra
Security APIs
Security APIs OpenShift Container Platform 4.16 Reference guide for security APIs Red Hat OpenShift Documentation Team
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html-single/security_apis/index
Chapter 10. Understanding and creating service accounts
Chapter 10. Understanding and creating service accounts 10.1. Service accounts overview A service account is an OpenShift Container Platform account that allows a component to directly access the API. Service accounts are API objects that exist within each project. Service accounts provide a flexible way to control API access without sharing a regular user's credentials. When you use the OpenShift Container Platform CLI or web console, your API token authenticates you to the API. You can associate a component with a service account so that they can access the API without using a regular user's credentials. For example, service accounts can allow: Replication controllers to make API calls to create or delete pods Applications inside containers to make API calls for discovery purposes External applications to make API calls for monitoring or integration purposes Each service account's user name is derived from its project and name: system:serviceaccount:<project>:<name> Every service account is also a member of two groups: Group Description system:serviceaccounts Includes all service accounts in the system. system:serviceaccounts:<project> Includes all service accounts in the specified project. 10.1.1. Automatically generated image pull secrets By default, OpenShift Container Platform creates an image pull secret for each service account. Note Prior to OpenShift Container Platform 4.16, a long-lived service account API token secret was also generated for each service account that was created. Starting with OpenShift Container Platform 4.16, this service account API token secret is no longer created. After upgrading to 4.16, any existing long-lived service account API token secrets are not deleted and will continue to function. For information about detecting long-lived API tokens that are in use in your cluster or deleting them if they are not needed, see the Red Hat Knowledgebase article Long-lived service account API tokens in OpenShift Container Platform . This image pull secret is necessary to integrate the OpenShift image registry into the cluster's user authentication and authorization system. However, if you do not enable the ImageRegistry capability or if you disable the integrated OpenShift image registry in the Cluster Image Registry Operator's configuration, an image pull secret is not generated for each service account. When the integrated OpenShift image registry is disabled on a cluster that previously had it enabled, the previously generated image pull secrets are deleted automatically. 10.2. Creating service accounts You can create a service account in a project and grant it permissions by binding it to a role. Procedure Optional: To view the service accounts in the current project: USD oc get sa Example output NAME SECRETS AGE builder 1 2d default 1 2d deployer 1 2d To create a new service account in the current project: USD oc create sa <service_account_name> 1 1 To create a service account in a different project, specify -n <project_name> . 
Example output serviceaccount "robot" created Tip You can alternatively apply the following YAML to create the service account: apiVersion: v1 kind: ServiceAccount metadata: name: <service_account_name> namespace: <current_project> Optional: View the secrets for the service account: USD oc describe sa robot Example output Name: robot Namespace: project1 Labels: <none> Annotations: openshift.io/internal-registry-pull-secret-ref: robot-dockercfg-qzbhb Image pull secrets: robot-dockercfg-qzbhb Mountable secrets: robot-dockercfg-qzbhb Tokens: <none> Events: <none> 10.3. Granting roles to service accounts You can grant roles to service accounts in the same way that you grant roles to a regular user account. You can modify the service accounts for the current project. For example, to add the view role to the robot service account in the top-secret project: USD oc policy add-role-to-user view system:serviceaccount:top-secret:robot Tip You can alternatively apply the following YAML to add the role: apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: view namespace: top-secret roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: view subjects: - kind: ServiceAccount name: robot namespace: top-secret You can also grant access to a specific service account in a project. For example, from the project to which the service account belongs, use the -z flag and specify the <service_account_name> USD oc policy add-role-to-user <role_name> -z <service_account_name> Important If you want to grant access to a specific service account in a project, use the -z flag. Using this flag helps prevent typos and ensures that access is granted to only the specified service account. Tip You can alternatively apply the following YAML to add the role: apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: <rolebinding_name> namespace: <current_project_name> roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: <role_name> subjects: - kind: ServiceAccount name: <service_account_name> namespace: <current_project_name> To modify a different namespace, you can use the -n option to indicate the project namespace it applies to, as shown in the following examples. For example, to allow all service accounts in all projects to view resources in the my-project project: USD oc policy add-role-to-group view system:serviceaccounts -n my-project Tip You can alternatively apply the following YAML to add the role: apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: view namespace: my-project roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: view subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: system:serviceaccounts To allow all service accounts in the managers project to edit resources in the my-project project: USD oc policy add-role-to-group edit system:serviceaccounts:managers -n my-project Tip You can alternatively apply the following YAML to add the role: apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: edit namespace: my-project roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: edit subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: system:serviceaccounts:managers
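Because OpenShift Container Platform 4.16 no longer auto-generates long-lived API token secrets for service accounts, a common follow-up is to request a short-lived, bound token when one is needed. The following sketch uses the standard oc create token command; the service account name, namespace, and duration are examples only. # Request a short-lived bound token for the robot service account (example values)
oc create token robot -n project1 --duration=1h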
[ "system:serviceaccount:<project>:<name>", "oc get sa", "NAME SECRETS AGE builder 1 2d default 1 2d deployer 1 2d", "oc create sa <service_account_name> 1", "serviceaccount \"robot\" created", "apiVersion: v1 kind: ServiceAccount metadata: name: <service_account_name> namespace: <current_project>", "oc describe sa robot", "Name: robot Namespace: project1 Labels: <none> Annotations: openshift.io/internal-registry-pull-secret-ref: robot-dockercfg-qzbhb Image pull secrets: robot-dockercfg-qzbhb Mountable secrets: robot-dockercfg-qzbhb Tokens: <none> Events: <none>", "oc policy add-role-to-user view system:serviceaccount:top-secret:robot", "apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: view namespace: top-secret roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: view subjects: - kind: ServiceAccount name: robot namespace: top-secret", "oc policy add-role-to-user <role_name> -z <service_account_name>", "apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: <rolebinding_name> namespace: <current_project_name> roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: <role_name> subjects: - kind: ServiceAccount name: <service_account_name> namespace: <current_project_name>", "oc policy add-role-to-group view system:serviceaccounts -n my-project", "apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: view namespace: my-project roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: view subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: system:serviceaccounts", "oc policy add-role-to-group edit system:serviceaccounts:managers -n my-project", "apiVersion: rbac.authorization.k8s.io/v1 kind: RoleBinding metadata: name: edit namespace: my-project roleRef: apiGroup: rbac.authorization.k8s.io kind: ClusterRole name: edit subjects: - apiGroup: rbac.authorization.k8s.io kind: Group name: system:serviceaccounts:managers" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/authentication_and_authorization/understanding-and-creating-service-accounts
2.8. SSSD Clients and Active Directory DNS Site Autodiscovery
2.8. SSSD Clients and Active Directory DNS Site Autodiscovery Active Directory forests can be very large, with numerous different domain controllers, domains and child domains, and physical sites. Active Directory uses the concept of sites to identify the physical location for its domain controllers. This enables clients to connect to the domain controller that is geographically closest, which increases client performance. By default, SSSD clients use autodiscovery to find their AD site and connect to the closest domain controller. The process consists of these steps: SSSD queries SRV records from the DNS server in the AD forest. The returned records contain the names of DCs in the forest. SSSD sends an LDAP ping to each of these DCs. If a DC does not respond within a configured interval, the request times out and SSSD sends the LDAP ping to the next one. If the connection succeeds, the response contains information about the AD site the SSSD client belongs to. SSSD then queries SRV records from the DNS server to locate DCs within the site it belongs to, and connects to one of them. Note SSSD remembers the AD site it belongs to by default. In this way, SSSD can send the LDAP ping directly to a DC in this site during the autodiscovery process to refresh the site information. Consequently, autodiscovery is normally very fast because no timeouts occur. If the site no longer exists or the client has meanwhile been assigned to a different site, SSSD starts querying for SRV records in the forest and goes through the whole process again. To override the autodiscovery, specify the AD site to which you want the client to connect by using the ad_site option in the [domain] section of the /etc/sssd/sssd.conf file. Additional Resources See the sssd-ad (5) man page for details on ad_site . For environments with a trust between Identity Management and Active Directory, see Section 5.6, "Restricting Identity Management or SSSD to Selected Active Directory Servers or Sites in a Trusted Active Directory Domain" .
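For reference, overriding the site autodiscovery might look like the following sssd.conf fragment; the domain and site names are placeholders, not values from this guide. [domain/ad.example.com]
# Pin this client to a specific Active Directory site instead of autodiscovering it
ad_site = ExampleSite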
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/windows_integration_guide/sssd-ad-dns-sites
5.3.7. Removing Physical Volumes from a Volume Group
5.3.7. Removing Physical Volumes from a Volume Group To remove unused physical volumes from a volume group, use the vgreduce command. The vgreduce command shrinks a volume group's capacity by removing one or more empty physical volumes. This frees those physical volumes to be used in different volume groups or to be removed from the system. Before removing a physical volume from a volume group, you can make sure that the physical volume is not used by any logical volumes by using the pvdisplay command. If the physical volume is still being used, you must first migrate the data to another physical volume by using the pvmove command. Then use the vgreduce command to remove the physical volume. The following command removes the physical volume /dev/hda1 from the volume group my_volume_group . If a logical volume contains a physical volume that fails, you cannot use that logical volume. To remove missing physical volumes from a volume group, you can use the --removemissing parameter of the vgreduce command, if there are no logical volumes that are allocated on the missing physical volumes.
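As an illustration of the full sequence on a physical volume that is still in use, the commands might look like the following sketch; the device and volume group names are the same examples used above. # Move any allocated extents off the physical volume, then remove it from the volume group
pvmove /dev/hda1
vgreduce my_volume_group /dev/hda1
# If a physical volume has gone missing and no logical volumes are allocated on it,
# drop it from the volume group metadata
vgreduce --removemissing my_volume_group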
[ "pvdisplay /dev/hda1 -- Physical volume --- PV Name /dev/hda1 VG Name myvg PV Size 1.95 GB / NOT usable 4 MB [LVM: 122 KB] PV# 1 PV Status available Allocatable yes (but full) Cur LV 1 PE Size (KByte) 4096 Total PE 499 Free PE 0 Allocated PE 499 PV UUID Sd44tK-9IRw-SrMC-MOkn-76iP-iftz-OVSen7", "vgreduce my_volume_group /dev/hda1" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/VG_remove_PV
5.7. Performing Asynchronous Tasks
5.7. Performing Asynchronous Tasks From the Red Hat Virtualization Manager 3.5 release onwards, you can perform asynchronous tasks on the Red Hat Gluster Storage volume such as rebalance and remove brick operations.
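For reference, the equivalent operations are also available from the gluster command line; this is only an illustrative sketch with placeholder volume and brick names, and this section itself describes running the tasks from the Manager. # Start a rebalance on a volume and check its progress (placeholder names)
gluster volume rebalance VOLNAME start
gluster volume rebalance VOLNAME status
# Begin removing a brick, monitor the data migration, then commit the removal
gluster volume remove-brick VOLNAME HOSTNAME:/path/to/brick start
gluster volume remove-brick VOLNAME HOSTNAME:/path/to/brick status
gluster volume remove-brick VOLNAME HOSTNAME:/path/to/brick commit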
null
https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/configuring_red_hat_virtualization_with_red_hat_gluster_storage/performing_asynchronous_tasks
6.6 Release Notes
6.6 Release Notes Red Hat Enterprise Linux 6 Release Notes for Red Hat Enterprise Linux 6.6 Edition 6 Red Hat Customer Content Services
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.6_release_notes/index
3.3. Putting the Configuration Together
3.3. Putting the Configuration Together After determining which of the preceding routing methods to use, the hardware should be connected and configured. Important The network adapters on the LVS routers must be configured to access the same networks. For instance, if eth0 connects to the public network and eth1 connects to the private network, then these same devices on the backup LVS router must connect to the same networks. Also, the gateway listed in the first interface to come up at boot time is added to the routing table, and subsequent gateways listed in other interfaces are ignored. This is especially important to consider when configuring the real servers. After connecting the hardware to the network, configure the network interfaces on the primary and backup LVS routers. This should be done by editing the network configuration files manually. For more information about working with network configuration files, see the Red Hat Enterprise Linux 7 Networking Guide . 3.3.1. General Load Balancer Networking Tips Configure the real IP addresses for both the public and private networks on the LVS routers before attempting to configure Load Balancer using Keepalived. The sections on each topology give example network addresses, but the actual network addresses are needed. Below are some useful commands for bringing up network interfaces or checking their status. Bringing Up Real Network Interfaces To open a real network interface, use the following command as root , replacing N with the number corresponding to the interface ( eth0 and eth1 ). ifup eth N Warning Do not use the ifup scripts to open any floating IP addresses you may configure using Keepalived ( eth0:1 or eth1:1 ). Use the service or systemctl command to start keepalived instead. Bringing Down Real Network Interfaces To bring down a real network interface, use the following command as root , replacing N with the number corresponding to the interface ( eth0 and eth1 ). ifdown eth N Checking the Status of Network Interfaces If you need to check which network interfaces are up at any given time, enter the following command: ip link To view the routing table for a machine, issue the following command: ip route 3.3.2. Firewall Requirements If you are running a firewall (by means of firewalld or iptables ), you must allow VRRP traffic to pass between the keepalived nodes. To configure the firewall to allow the VRRP traffic with firewalld , run the following commands: If the zone is omitted, the default zone is used. If, however, you need to allow the VRRP traffic with iptables , run the following commands:
[ "firewall-cmd --add-rich-rule='rule protocol value=\"vrrp\" accept' --permanent firewall-cmd --reload", "iptables -I INPUT -p vrrp -j ACCEPT iptables-save > /etc/sysconfig/iptables systemctl restart iptables" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/load_balancer_administration/s1-lvs-connect-vsa
4.6. Removing the Cluster Configuration
4.6. Removing the Cluster Configuration To remove all cluster configuration files and stop all cluster services, thus permanently destroying a cluster, use the following command. Warning This command permanently removes any cluster configuration that has been created. It is recommended that you run pcs cluster stop before destroying the cluster.
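Following that recommendation, a typical sequence might look like the sketch below; the --all flag stops the cluster services on every node and is shown here only as an example. # Stop cluster services on all nodes first, then permanently remove the configuration
pcs cluster stop --all
pcs cluster destroy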
[ "pcs cluster destroy" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_reference/s1-clusterremove-HAAR