4.350. xinetd
4.350. xinetd 4.350.1. RHBA-2012:1161 - xinetd bug fix update An updated xinetd package that fixes one bug is now available for Red Hat Enterprise Linux 6 Extended Update Support. Xinetd is a secure replacement for inetd, the Internet services daemon. Xinetd provides access control for all services based on the address of the remote host and/or on time of access, and can prevent denial-of-access attacks. Xinetd provides extensive logging, has no limit on the number of server arguments, and allows users to bind specific services to specific IP addresses on a host machine. Each service has its own specific configuration file for Xinetd; the files are located in the /etc/xinetd.d directory. Bug Fix BZ# 841915 Due to incorrect handling of a file descriptor array in the service.c source file, some of the descriptors remained open when xinetd was under heavy load. Additionally, the system log was filled with a large number of messages that took up a lot of disk space over time. This bug has been fixed in the code; xinetd now handles the file descriptors correctly and no longer fills the system log. All users of xinetd are advised to upgrade to this updated package, which fixes this bug. 4.350.2. RHBA-2011:1713 - xinetd bug fix update An updated xinetd package that fixes two bugs is now available for Red Hat Enterprise Linux 6. The xinetd daemon is a secure replacement for inetd, the Internet services daemon. It provides access control for all services based on the address of the remote host and/or on time of access, and can prevent denial of service (DoS) attacks. Bug Fixes BZ# 706976 Previously, the configuration files of the xinetd utility were readable by all users. This update makes the permissions more restrictive, and the configuration files are now readable only by root. BZ# 738662 Previously, the /etc/xinetd.d/ directory was owned by both the filesystem and xinetd packages. This bug has been fixed, and the directory is now owned only by the filesystem package. Users of xinetd are advised to upgrade to this updated package, which fixes these bugs.
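As a quick post-update check, the permission and ownership changes described in BZ#706976 and BZ#738662 can be verified from a shell. This is a minimal sketch; the exact output depends on the packages installed on your system:

    # Configuration files should now be readable only by root (BZ#706976)
    ls -l /etc/xinetd.d/
    # The directory should be owned by the filesystem package only (BZ#738662)
    rpm -qf /etc/xinetd.d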
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.2_technical_notes/xinetd
OperatorHub APIs
OperatorHub APIs OpenShift Container Platform 4.13 Reference guide for OperatorHub APIs Red Hat OpenShift Documentation Team
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html-single/operatorhub_apis/index
Getting started with Red Hat OpenShift AI Cloud Service
Getting started with Red Hat OpenShift AI Cloud Service Red Hat OpenShift AI Cloud Service 1 Learn how to work in an OpenShift AI environment
https://docs.redhat.com/en/documentation/red_hat_openshift_ai_cloud_service/1/html/getting_started_with_red_hat_openshift_ai_cloud_service/index
Chapter 3. Installing a cluster on vSphere with customizations
Chapter 3. Installing a cluster on vSphere with customizations In OpenShift Container Platform version 4.13, you can install a cluster on your VMware vSphere instance by using installer-provisioned infrastructure. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. Note OpenShift Container Platform supports deploying a cluster to a single VMware vCenter only. Deploying a cluster with machines/machine sets on multiple vCenters is not supported. 3.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You provisioned persistent storage for your cluster. To deploy a private image registry, your storage must provide ReadWriteMany access modes. The OpenShift Container Platform installer requires access to port 443 on the vCenter and ESXi hosts. You verified that port 443 is accessible. If you use a firewall, you confirmed with the administrator that port 443 is accessible. Control plane nodes must be able to reach vCenter and ESXi hosts on port 443 for the installation to succeed. If you use a firewall, you configured it to allow the sites that your cluster requires access to. Note Be sure to also review this site list if you are configuring a proxy. 3.2. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.13, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager Hybrid Cloud Console to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 3.3. VMware vSphere infrastructure requirements You must install an OpenShift Container Platform cluster on one of the following versions of a VMware vSphere instance that meets the requirements for the components that you use: Version 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later Version 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later You can host the VMware vSphere infrastructure on-premise or on a VMware Cloud Verified provider that meets the requirements outlined in the following table: Table 3.1. Version requirements for vSphere virtual environments Virtual environment product Required version VMware virtual hardware 15 or later vSphere ESXi hosts 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later vCenter host 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later Table 3.2. 
Minimum supported vSphere version for VMware components Component Minimum supported versions Description Hypervisor vSphere 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; vSphere 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later with virtual hardware version 15 This hypervisor version is the minimum version that Red Hat Enterprise Linux CoreOS (RHCOS) supports. For more information about supported hardware on the latest version of Red Hat Enterprise Linux (RHEL) that is compatible with RHCOS, see Hardware on the Red Hat Customer Portal. Storage with in-tree drivers vSphere 7.0 Update 2 and later; 8.0 Update 1 or later This plugin creates vSphere storage by using the in-tree storage drivers for vSphere included in OpenShift Container Platform. Optional: Networking (NSX-T) vSphere 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; vSphere 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later For more information about the compatibility of NSX and OpenShift Container Platform, see the Release Notes section of VMware's NSX container plugin documentation . CPU micro-architecture x86-64-v2 or higher OpenShift 4.13 and later are based on RHEL 9.2 host operating system which raised the microarchitecture requirements to x86-64-v2. See the RHEL Microarchitecture requirements documentation . You can verify compatibility by following the procedures outlined in this KCS article . Important You must ensure that the time on your ESXi hosts is synchronized before you install OpenShift Container Platform. See Edit Time Configuration for a Host in the VMware documentation. Additional resources For more information about CSI automatic migration, see "Overview" in VMware vSphere CSI Driver Operator . 3.4. Network connectivity requirements You must configure the network connectivity between machines to allow OpenShift Container Platform cluster components to communicate. Review the following details about the required network ports. Table 3.3. Ports used for all-machine to all-machine communications Protocol Port Description VRRP N/A Required for keepalived ICMP N/A Network reachability tests TCP 1936 Metrics 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 and the Cluster Version Operator on port 9099 . 10250 - 10259 The default ports that Kubernetes reserves 10256 openshift-sdn UDP 4789 virtual extensible LAN (VXLAN) 6081 Geneve 9000 - 9999 Host level services, including the node exporter on ports 9100 - 9101 . 500 IPsec IKE packets 4500 IPsec NAT-T packets TCP/UDP 30000 - 32767 Kubernetes node port ESP N/A IPsec Encapsulating Security Payload (ESP) Table 3.4. Ports used for all-machine to control plane communications Protocol Port Description TCP 6443 Kubernetes API Table 3.5. Ports used for control plane machine to control plane machine communications Protocol Port Description TCP 2379 - 2380 etcd server and peer ports 3.5. 
VMware vSphere CSI Driver Operator requirements To install the vSphere CSI Driver Operator, the following requirements must be met: VMware vSphere version: 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later vCenter version: 7.0 Update 2 or later, or VMware Cloud Foundation 4.3 or later; 8.0 Update 1 or later, or VMware Cloud Foundation 5.0 or later Virtual machines of hardware version 15 or later No third-party vSphere CSI driver already installed in the cluster If a third-party vSphere CSI driver is present in the cluster, OpenShift Container Platform does not overwrite it. The presence of a third-party vSphere CSI driver prevents OpenShift Container Platform from updating to OpenShift Container Platform 4.13 or later. Note The VMware vSphere CSI Driver Operator is supported only on clusters deployed with platform: vsphere in the installation manifest. Additional resources To remove a third-party vSphere CSI driver, see Removing a third-party vSphere CSI Driver . To update the hardware version for your vSphere nodes, see Updating hardware on nodes running in vSphere . 3.6. vCenter requirements Before you install an OpenShift Container Platform cluster on your vCenter that uses infrastructure that the installer provisions, you must prepare your environment. Required vCenter account privileges To install an OpenShift Container Platform cluster in a vCenter, the installation program requires access to an account with privileges to read and create the required resources. Using an account that has global administrative privileges is the simplest way to access all of the necessary permissions. If you cannot use an account with global administrative privileges, you must create roles to grant the privileges necessary for OpenShift Container Platform cluster installation. While most of the privileges are always required, some are required only if you plan for the installation program to provision a folder to contain the OpenShift Container Platform cluster on your vCenter instance, which is the default behavior. You must create or amend vSphere roles for the specified objects to grant the required privileges. An additional role is required if the installation program is to create a vSphere virtual machine folder. Example 3.1. 
Roles and privileges required for installation in vSphere API vSphere object for role When required Required privileges in vSphere API vSphere vCenter Always Cns.Searchable InventoryService.Tagging.AttachTag InventoryService.Tagging.CreateCategory InventoryService.Tagging.CreateTag InventoryService.Tagging.DeleteCategory InventoryService.Tagging.DeleteTag InventoryService.Tagging.EditCategory InventoryService.Tagging.EditTag Sessions.ValidateSession StorageProfile.Update StorageProfile.View vSphere vCenter Cluster If VMs will be created in the cluster root Host.Config.Storage Resource.AssignVMToPool VApp.AssignResourcePool VApp.Import VirtualMachine.Config.AddNewDisk vSphere vCenter Resource Pool If an existing resource pool is provided Host.Config.Storage Resource.AssignVMToPool VApp.AssignResourcePool VApp.Import VirtualMachine.Config.AddNewDisk vSphere Datastore Always Datastore.AllocateSpace Datastore.Browse Datastore.FileManagement InventoryService.Tagging.ObjectAttachable vSphere Port Group Always Network.Assign Virtual Machine Folder Always InventoryService.Tagging.ObjectAttachable Resource.AssignVMToPool VApp.Import VirtualMachine.Config.AddExistingDisk VirtualMachine.Config.AddNewDisk VirtualMachine.Config.AddRemoveDevice VirtualMachine.Config.AdvancedConfig VirtualMachine.Config.Annotation VirtualMachine.Config.CPUCount VirtualMachine.Config.DiskExtend VirtualMachine.Config.DiskLease VirtualMachine.Config.EditDevice VirtualMachine.Config.Memory VirtualMachine.Config.RemoveDisk VirtualMachine.Config.Rename VirtualMachine.Config.ResetGuestInfo VirtualMachine.Config.Resource VirtualMachine.Config.Settings VirtualMachine.Config.UpgradeVirtualHardware VirtualMachine.Interact.GuestControl VirtualMachine.Interact.PowerOff VirtualMachine.Interact.PowerOn VirtualMachine.Interact.Reset VirtualMachine.Inventory.Create VirtualMachine.Inventory.CreateFromExisting VirtualMachine.Inventory.Delete VirtualMachine.Provisioning.Clone VirtualMachine.Provisioning.MarkAsTemplate VirtualMachine.Provisioning.DeployTemplate vSphere vCenter Datacenter If the installation program creates the virtual machine folder. For UPI, VirtualMachine.Inventory.Create and VirtualMachine.Inventory.Delete privileges are optional if your cluster does not use the Machine API. InventoryService.Tagging.ObjectAttachable Resource.AssignVMToPool VApp.Import VirtualMachine.Config.AddExistingDisk VirtualMachine.Config.AddNewDisk VirtualMachine.Config.AddRemoveDevice VirtualMachine.Config.AdvancedConfig VirtualMachine.Config.Annotation VirtualMachine.Config.CPUCount VirtualMachine.Config.DiskExtend VirtualMachine.Config.DiskLease VirtualMachine.Config.EditDevice VirtualMachine.Config.Memory VirtualMachine.Config.RemoveDisk VirtualMachine.Config.Rename VirtualMachine.Config.ResetGuestInfo VirtualMachine.Config.Resource VirtualMachine.Config.Settings VirtualMachine.Config.UpgradeVirtualHardware VirtualMachine.Interact.GuestControl VirtualMachine.Interact.PowerOff VirtualMachine.Interact.PowerOn VirtualMachine.Interact.Reset VirtualMachine.Inventory.Create VirtualMachine.Inventory.CreateFromExisting VirtualMachine.Inventory.Delete VirtualMachine.Provisioning.Clone VirtualMachine.Provisioning.DeployTemplate VirtualMachine.Provisioning.MarkAsTemplate Folder.Create Folder.Delete Example 3.2. 
Roles and privileges required for installation in vCenter graphical user interface (GUI) vSphere object for role When required Required privileges in vCenter GUI vSphere vCenter Always Cns.Searchable "vSphere Tagging"."Assign or Unassign vSphere Tag" "vSphere Tagging"."Create vSphere Tag Category" "vSphere Tagging"."Create vSphere Tag" vSphere Tagging"."Delete vSphere Tag Category" "vSphere Tagging"."Delete vSphere Tag" "vSphere Tagging"."Edit vSphere Tag Category" "vSphere Tagging"."Edit vSphere Tag" Sessions."Validate session" "Profile-driven storage"."Profile-driven storage update" "Profile-driven storage"."Profile-driven storage view" vSphere vCenter Cluster If VMs will be created in the cluster root Host.Configuration."Storage partition configuration" Resource."Assign virtual machine to resource pool" VApp."Assign resource pool" VApp.Import "Virtual machine"."Change Configuration"."Add new disk" vSphere vCenter Resource Pool If an existing resource pool is provided Host.Configuration."Storage partition configuration" Resource."Assign virtual machine to resource pool" VApp."Assign resource pool" VApp.Import "Virtual machine"."Change Configuration"."Add new disk" vSphere Datastore Always Datastore."Allocate space" Datastore."Browse datastore" Datastore."Low level file operations" "vSphere Tagging"."Assign or Unassign vSphere Tag on Object" vSphere Port Group Always Network."Assign network" Virtual Machine Folder Always "vSphere Tagging"."Assign or Unassign vSphere Tag on Object" Resource."Assign virtual machine to resource pool" VApp.Import "Virtual machine"."Change Configuration"."Add existing disk" "Virtual machine"."Change Configuration"."Add new disk" "Virtual machine"."Change Configuration"."Add or remove device" "Virtual machine"."Change Configuration"."Advanced configuration" "Virtual machine"."Change Configuration"."Set annotation" "Virtual machine"."Change Configuration"."Change CPU count" "Virtual machine"."Change Configuration"."Extend virtual disk" "Virtual machine"."Change Configuration"."Acquire disk lease" "Virtual machine"."Change Configuration"."Modify device settings" "Virtual machine"."Change Configuration"."Change Memory" "Virtual machine"."Change Configuration"."Remove disk" "Virtual machine"."Change Configuration".Rename "Virtual machine"."Change Configuration"."Reset guest information" "Virtual machine"."Change Configuration"."Change resource" "Virtual machine"."Change Configuration"."Change Settings" "Virtual machine"."Change Configuration"."Upgrade virtual machine compatibility" "Virtual machine".Interaction."Guest operating system management by VIX API" "Virtual machine".Interaction."Power off" "Virtual machine".Interaction."Power on" "Virtual machine".Interaction.Reset "Virtual machine"."Edit Inventory"."Create new" "Virtual machine"."Edit Inventory"."Create from existing" "Virtual machine"."Edit Inventory"."Remove" "Virtual machine".Provisioning."Clone virtual machine" "Virtual machine".Provisioning."Mark as template" "Virtual machine".Provisioning."Deploy template" vSphere vCenter Datacenter If the installation program creates the virtual machine folder. For UPI, VirtualMachine.Inventory.Create and VirtualMachine.Inventory.Delete privileges are optional if your cluster does not use the Machine API. 
"vSphere Tagging"."Assign or Unassign vSphere Tag on Object" Resource."Assign virtual machine to resource pool" VApp.Import "Virtual machine"."Change Configuration"."Add existing disk" "Virtual machine"."Change Configuration"."Add new disk" "Virtual machine"."Change Configuration"."Add or remove device" "Virtual machine"."Change Configuration"."Advanced configuration" "Virtual machine"."Change Configuration"."Set annotation" "Virtual machine"."Change Configuration"."Change CPU count" "Virtual machine"."Change Configuration"."Extend virtual disk" "Virtual machine"."Change Configuration"."Acquire disk lease" "Virtual machine"."Change Configuration"."Modify device settings" "Virtual machine"."Change Configuration"."Change Memory" "Virtual machine"."Change Configuration"."Remove disk" "Virtual machine"."Change Configuration".Rename "Virtual machine"."Change Configuration"."Reset guest information" "Virtual machine"."Change Configuration"."Change resource" "Virtual machine"."Change Configuration"."Change Settings" "Virtual machine"."Change Configuration"."Upgrade virtual machine compatibility" "Virtual machine".Interaction."Guest operating system management by VIX API" "Virtual machine".Interaction."Power off" "Virtual machine".Interaction."Power on" "Virtual machine".Interaction.Reset "Virtual machine"."Edit Inventory"."Create new" "Virtual machine"."Edit Inventory"."Create from existing" "Virtual machine"."Edit Inventory"."Remove" "Virtual machine".Provisioning."Clone virtual machine" "Virtual machine".Provisioning."Deploy template" "Virtual machine".Provisioning."Mark as template" Folder."Create folder" Folder."Delete folder" Additionally, the user requires some ReadOnly permissions, and some of the roles require permission to propogate the permissions to child objects. These settings vary depending on whether or not you install the cluster into an existing folder. Example 3.3. Required permissions and propagation settings vSphere object When required Propagate to children Permissions required vSphere vCenter Always False Listed required privileges vSphere vCenter Datacenter Existing folder False ReadOnly permission Installation program creates the folder True Listed required privileges vSphere vCenter Cluster Existing resource pool False ReadOnly permission VMs in cluster root True Listed required privileges vSphere vCenter Datastore Always False Listed required privileges vSphere Switch Always False ReadOnly permission vSphere Port Group Always False Listed required privileges vSphere vCenter Virtual Machine Folder Existing folder True Listed required privileges vSphere vCenter Resource Pool Existing resource pool True Listed required privileges For more information about creating an account with only the required privileges, see vSphere Permissions and User Management Tasks in the vSphere documentation. Using OpenShift Container Platform with vMotion If you intend on using vMotion in your vSphere environment, consider the following before installing an OpenShift Container Platform cluster. Using Storage vMotion can cause issues and is not supported. Using VMware compute vMotion to migrate the workloads for both OpenShift Container Platform compute machines and control plane machines is generally supported, where generally implies that you meet all VMware best practices for vMotion. 
To help ensure the uptime of your compute and control plane nodes, ensure that you follow the VMware best practices for vMotion, and use VMware anti-affinity rules to improve the availability of OpenShift Container Platform during maintenance or hardware issues. For more information about vMotion and anti-affinity rules, see the VMware vSphere documentation for vMotion networking requirements and VM anti-affinity rules . If you are using VMware vSphere volumes in your pods, migrating a VM across datastores, either manually or through Storage vMotion, causes invalid references within OpenShift Container Platform persistent volume (PV) objects that can result in data loss. OpenShift Container Platform does not support selective migration of VMDKs across datastores, using datastore clusters for VM provisioning or for dynamic or static provisioning of PVs, or using a datastore that is part of a datastore cluster for dynamic or static provisioning of PVs. Cluster resources When you deploy an OpenShift Container Platform cluster that uses installer-provisioned infrastructure, the installation program must be able to create several resources in your vCenter instance. A standard OpenShift Container Platform installation creates the following vCenter resources: 1 Folder 1 Tag category 1 Tag Virtual machines: 1 template 1 temporary bootstrap node 3 control plane nodes 3 compute machines Although these resources use 856 GB of storage, the bootstrap node is destroyed during the cluster installation process. A minimum of 800 GB of storage is required to use a standard cluster. If you deploy more compute machines, the OpenShift Container Platform cluster will use more storage. Cluster limits Available resources vary between clusters. The number of possible clusters within a vCenter is limited primarily by available storage space and any limitations on the number of required resources. Be sure to consider both limitations to the vCenter resources that the cluster creates and the resources that you require to deploy a cluster, such as IP addresses and networks. Networking requirements You can use Dynamic Host Configuration Protocol (DHCP) for the network and configure the DHCP server to set persistent IP addresses to machines in your cluster. In the DHCP lease, you must configure the DHCP to use the default gateway. Note You do not need to use the DHCP for the network if you want to provision nodes with static IP addresses. If you are installing to a restricted environment, the VM in your restricted network must have access to vCenter so that it can provision and manage nodes, persistent volume claims (PVCs), and other resources. Note Ensure that each OpenShift Container Platform node in the cluster has access to a Network Time Protocol (NTP) server that is discoverable by DHCP. Installation is possible without an NTP server. However, asynchronous server clocks can cause errors, which the NTP server prevents. Additionally, you must create the following networking resources before you install the OpenShift Container Platform cluster: Required IP Addresses An installer-provisioned vSphere installation requires two static IP addresses: The API address is used to access the cluster API. The Ingress address is used for cluster ingress traffic. You must provide these IP addresses to the installation program when you install the OpenShift Container Platform cluster. 
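For reference, these two static addresses are later supplied through the apiVIPs and ingressVIPs fields of the install-config.yaml file, as shown in the sample file in the "Creating the installation configuration file" section; the addresses below are placeholders only:

    platform:
      vsphere:
        apiVIPs:
        - 10.0.0.1      # placeholder: static virtual IP for the cluster API
        ingressVIPs:
        - 10.0.0.2      # placeholder: static virtual IP for cluster ingress traffic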
DNS records You must create DNS records for two static IP addresses in the appropriate DNS server for the vCenter instance that hosts your OpenShift Container Platform cluster. In each record, <cluster_name> is the cluster name and <base_domain> is the cluster base domain that you specify when you install the cluster. A complete DNS record takes the form: <component>.<cluster_name>.<base_domain>. Table 3.6. Required DNS records Component Record Description API VIP api.<cluster_name>.<base_domain>. This DNS A/AAAA or CNAME record must point to the load balancer for the control plane machines. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster. Ingress VIP *.apps.<cluster_name>.<base_domain>. A wildcard DNS A/AAAA or CNAME record that points to the load balancer that targets the machines that run the Ingress router pods, which are the worker nodes by default. This record must be resolvable by both clients external to the cluster and from all the nodes within the cluster. 3.7. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging are required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: $ ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. View the public SSH key: $ cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: $ cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically.
If the ssh-agent process is not already running for your local user, start it as a background task: $ eval "$(ssh-agent -s)" Example output Agent pid 31874 Add your SSH private key to the ssh-agent : $ ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 . Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 3.8. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a machine that runs Linux, for example Red Hat Enterprise Linux 8, with 500 MB of local disk space. Important If you attempt to run the installation program on macOS, a known issue related to the golang compiler causes the installation of the OpenShift Container Platform cluster to fail. For more information about this issue, see the section named "Known Issues" in the OpenShift Container Platform 4.13 release notes document. Procedure Access the Infrastructure Provider page on the OpenShift Cluster Manager site. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Select your infrastructure provider. Navigate to the page for your installation type, download the installation program that corresponds with your host operating system and architecture, and place the file in the directory where you will store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both files are required to delete the cluster. Important Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: $ tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from the Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 3.9. Adding vCenter root CA certificates to your system trust Because the installation program requires access to your vCenter's API, you must add your vCenter's trusted root CA certificates to your system trust before you install an OpenShift Container Platform cluster. Procedure From the vCenter home page, download the vCenter's root CA certificates. Click Download trusted root CA certificates in the vSphere Web Services SDK section. The <vCenter>/certs/download.zip file downloads. Extract the compressed file that contains the vCenter root CA certificates. The contents of the compressed file resemble the following file structure: Add the files for your operating system to the system trust. For example, on a Fedora operating system, run the following command: # cp certs/lin/* /etc/pki/ca-trust/source/anchors Update your system trust.
For example, on a Fedora operating system, run the following command: # update-ca-trust extract 3.10. VMware vSphere region and zone enablement You can deploy an OpenShift Container Platform cluster to multiple vSphere datacenters that run in a single VMware vCenter. Each datacenter can run multiple clusters. This configuration reduces the risk of a hardware failure or network outage that can cause your cluster to fail. To enable regions and zones, you must define multiple failure domains for your OpenShift Container Platform cluster. Important The VMware vSphere region and zone enablement feature requires the vSphere Container Storage Interface (CSI) driver as the default storage driver in the cluster. As a result, the feature is only available on a newly installed cluster. A cluster that was upgraded from an earlier release defaults to using the in-tree vSphere driver, so you must enable CSI automatic migration for the cluster. You can then configure multiple regions and zones for the upgraded cluster. The default installation configuration deploys a cluster to a single vSphere datacenter. If you want to deploy a cluster to multiple vSphere datacenters, you must create an installation configuration file that enables the region and zone feature. The default install-config.yaml file includes vcenters and failureDomains fields, where you can specify multiple vSphere datacenters and clusters for your OpenShift Container Platform cluster. You can leave these fields blank if you want to install an OpenShift Container Platform cluster in a vSphere environment that consists of a single datacenter. The following list describes terms associated with defining zones and regions for your cluster: Failure domain: Establishes the relationships between a region and zone. You define a failure domain by using vCenter objects, such as a datastore object. A failure domain defines the vCenter location for OpenShift Container Platform cluster nodes. Region: Specifies a vCenter datacenter. You define a region by using a tag from the openshift-region tag category. Zone: Specifies a vCenter cluster. You define a zone by using a tag from the openshift-zone tag category. Note If you plan on specifying more than one failure domain in your install-config.yaml file, you must create tag categories, zone tags, and region tags in advance of creating the configuration file. You must create a vCenter tag for each vCenter datacenter, which represents a region. Additionally, you must create a vCenter tag for each cluster that runs in a datacenter, which represents a zone. After you create the tags, you must attach each tag to its respective datacenter or cluster. The following table outlines an example of the relationship among regions, zones, and tags for a configuration with multiple vSphere datacenters running in a single VMware vCenter. Datacenter (region) Cluster (zone) Tags us-east us-east-1 us-east-1a us-east-1b us-east-2 us-east-2a us-east-2b us-west us-west-1 us-west-1a us-west-1b us-west-2 us-west-2a us-west-2b Additional resources Additional VMware vSphere configuration parameters Deprecated VMware vSphere configuration parameters vSphere automatic migration VMware vSphere CSI Driver Operator 3.11. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on VMware vSphere. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Obtain service principal permissions at the subscription level.
Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: $ ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals; therefore, you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Note Always delete the ~/.powervs directory to avoid reusing a stale configuration. Run the following command: $ rm -rf ~/.powervs At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select vsphere as the platform to target. Specify the name of your vCenter instance. Specify the user name and password for the vCenter account that has the required permissions to create the cluster. The installation program connects to your vCenter instance. Select the data center in your vCenter instance to connect to. Note After you create the installation configuration file, you can modify the file to create a multiple vSphere datacenters environment. This means that you can deploy an OpenShift Container Platform cluster to multiple vSphere datacenters that run in a single VMware vCenter. For more information about creating this environment, see the section named VMware vSphere region and zone enablement . Select the default vCenter datastore to use. Select the vCenter cluster to install the OpenShift Container Platform cluster in. The installation program uses the root resource pool of the vSphere cluster as the default resource pool. Select the network in the vCenter instance that contains the virtual IP addresses and DNS records that you configured. Enter the virtual IP address that you configured for control plane API access. Enter the virtual IP address that you configured for cluster ingress. Enter the base domain. This base domain must be the same one that you used in the DNS records that you configured. Enter a descriptive name for your cluster. The cluster name you enter must match the cluster name you specified when configuring the DNS records. Paste the pull secret from the Red Hat OpenShift Cluster Manager . Modify the install-config.yaml file. You can find more information about the available parameters in the "Installation configuration parameters" section. Note If you are installing a three-node cluster, be sure to set the compute.replicas parameter to 0 . This ensures that the cluster's control plane nodes are schedulable. For more information, see "Installing a three-node cluster on vSphere". Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process.
If you want to reuse the file, you must back it up now. 3.11.1. Installation configuration parameters Before you deploy an OpenShift Container Platform cluster, you provide parameter values to describe your account on the cloud platform that hosts your cluster and optionally customize your cluster's platform. When you create the install-config.yaml installation configuration file, you provide values for the required parameters through the command line. If you customize your cluster, you can modify the install-config.yaml file to provide more details about the platform. Note After installation, you cannot modify these parameters in the install-config.yaml file. 3.11.1.1. Required configuration parameters Required installation configuration parameters are described in the following table: Table 3.7. Required parameters Parameter Description Values apiVersion The API version for the install-config.yaml content. The current version is v1 . The installation program may also support older API versions. String baseDomain The base domain of your cloud provider. The base domain is used to create routes to your OpenShift Container Platform cluster components. The full DNS name for your cluster is a combination of the baseDomain and metadata.name parameter values that uses the <metadata.name>.<baseDomain> format. A fully-qualified domain or subdomain name, such as example.com . metadata Kubernetes resource ObjectMeta , from which only the name parameter is consumed. Object metadata.name The name of the cluster. DNS records for the cluster are all subdomains of {{.metadata.name}}.{{.baseDomain}} . String of lowercase letters and hyphens ( - ), such as dev . platform The configuration for the specific platform upon which to perform the installation: alibabacloud , aws , baremetal , azure , gcp , ibmcloud , nutanix , openstack , ovirt , powervs , vsphere , or {} . For additional information about platform.<platform> parameters, consult the table for your specific platform that follows. Object pullSecret Get a pull secret from the Red Hat OpenShift Cluster Manager to authenticate downloading container images for OpenShift Container Platform components from services such as Quay.io. { "auths":{ "cloud.openshift.com":{ "auth":"b3Blb=", "email":"[email protected]" }, "quay.io":{ "auth":"b3Blb=", "email":"[email protected]" } } } 3.11.1.2. Network configuration parameters You can customize your installation configuration based on the requirements of your existing network infrastructure. For example, you can expand the IP address block for the cluster network or provide different IP address blocks than the defaults. If you use the Red Hat OpenShift Networking OVN-Kubernetes network plugin, both IPv4 and IPv6 address families are supported. If you use the Red Hat OpenShift Networking OpenShift SDN network plugin, only the IPv4 address family is supported. Note On VMware vSphere, dual-stack networking must specify IPv4 as the primary address family. The following additional limitations apply to dual-stack networking: Nodes report only their IPv6 IP address in node.status.addresses Nodes with only a single NIC are supported Pods configured for host networking report only their IPv6 addresses in pod.status.IP If you configure your cluster to use both IP address families, review the following requirements: Both IP families must use the same network interface for the default gateway. Both IP families must have the default gateway. 
You must specify IPv4 and IPv6 addresses in the same order for all network configuration parameters. For example, in the following configuration IPv4 addresses are listed before IPv6 addresses. networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd00:10:128::/56 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd00:172:16::/112 Note Globalnet is not supported with Red Hat OpenShift Data Foundation disaster recovery solutions. For regional disaster recovery scenarios, ensure that you use a nonoverlapping range of private IP addresses for the cluster and service networks in each cluster. Table 3.8. Network parameters Parameter Description Values networking The configuration for the cluster network. Object Note You cannot modify parameters specified by the networking object after installation. networking.networkType The Red Hat OpenShift Networking network plugin to install. Either OpenShiftSDN or OVNKubernetes . OpenShiftSDN is a CNI plugin for all-Linux networks. OVNKubernetes is a CNI plugin for Linux networks and hybrid networks that contain both Linux and Windows servers. The default value is OVNKubernetes . networking.clusterNetwork The IP address blocks for pods. The default value is 10.128.0.0/14 with a host prefix of /23 . If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 networking.clusterNetwork.cidr Required if you use networking.clusterNetwork . An IP address block. An IPv4 network. An IP address block in Classless Inter-Domain Routing (CIDR) notation. The prefix length for an IPv4 block is between 0 and 32 . networking.clusterNetwork.hostPrefix The subnet prefix length to assign to each individual node. For example, if hostPrefix is set to 23 then each node is assigned a /23 subnet out of the given cidr . A hostPrefix value of 23 provides 510 (2^(32 - 23) - 2) pod IP addresses. A subnet prefix. The default value is 23 . networking.serviceNetwork The IP address block for services. The default value is 172.30.0.0/16 . The OpenShift SDN and OVN-Kubernetes network plugins support only a single IP address block for the service network. An array with an IP address block in CIDR format. For example: networking: serviceNetwork: - 172.30.0.0/16 networking.machineNetwork The IP address blocks for machines. If you specify multiple IP address blocks, the blocks must not overlap. An array of objects. For example: networking: machineNetwork: - cidr: 10.0.0.0/16 networking.machineNetwork.cidr Required if you use networking.machineNetwork . An IP address block. The default value is 10.0.0.0/16 for all platforms other than libvirt and IBM Power Virtual Server. For libvirt, the default value is 192.168.126.0/24 . For IBM Power Virtual Server, the default value is 192.168.0.0/24 . An IP network block in CIDR notation. For example, 10.0.0.0/16 . Note Set the networking.machineNetwork to match the CIDR that the preferred NIC resides in. 3.11.1.3. Optional configuration parameters Optional installation configuration parameters are described in the following table: Table 3.9. Optional parameters Parameter Description Values additionalTrustBundle A PEM-encoded X.509 certificate bundle that is added to the nodes' trusted certificate store. This trust bundle may also be used when a proxy has been configured. String capabilities Controls the installation of optional core cluster components. 
You can reduce the footprint of your OpenShift Container Platform cluster by disabling optional components. For more information, see the "Cluster capabilities" page in Installing . String array capabilities.baselineCapabilitySet Selects an initial set of optional capabilities to enable. Valid values are None , v4.11 , v4.12 and vCurrent . The default value is vCurrent . String capabilities.additionalEnabledCapabilities Extends the set of optional capabilities beyond what you specify in baselineCapabilitySet . You may specify multiple capabilities in this parameter. String array cpuPartitioningMode Enables workload partitioning, which isolates OpenShift Container Platform services, cluster management workloads, and infrastructure pods to run on a reserved set of CPUs. Workload partitioning can only be enabled during installation and cannot be disabled after installation. While this field enables workload partitioning, it does not configure workloads to use specific CPUs. For more information, see the Workload partitioning page in the Scalability and Performance section. None or AllNodes . None is the default value. compute The configuration for the machines that comprise the compute nodes. Array of MachinePool objects. compute.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String compute.name Required if you use compute . The name of the machine pool. worker compute.platform Required if you use compute . Use this parameter to specify the cloud provider to host the worker machines. This parameter value must match the controlPlane.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , powervs , vsphere , or {} compute.replicas The number of compute machines, which are also known as worker machines, to provision. A positive integer greater than or equal to 2 . The default value is 3 . featureSet Enables the cluster for a feature set. A feature set is a collection of OpenShift Container Platform features that are not enabled by default. For more information about enabling a feature set during installation, see "Enabling features using feature gates". String. The name of the feature set to enable, such as TechPreviewNoUpgrade . controlPlane The configuration for the machines that comprise the control plane. Array of MachinePool objects. controlPlane.architecture Determines the instruction set architecture of the machines in the pool. Currently, clusters with varied architectures are not supported. All pools must specify the same architecture. Valid values are amd64 (the default). String controlPlane.name Required if you use controlPlane . The name of the machine pool. master controlPlane.platform Required if you use controlPlane . Use this parameter to specify the cloud provider that hosts the control plane machines. This parameter value must match the compute.platform parameter value. alibabacloud , aws , azure , gcp , ibmcloud , nutanix , openstack , ovirt , powervs , vsphere , or {} controlPlane.replicas The number of control plane machines to provision. The only supported value is 3 , which is the default value. credentialsMode The Cloud Credential Operator (CCO) mode. If no mode is specified, the CCO dynamically tries to determine the capabilities of the provided credentials, with a preference for mint mode on the platforms where multiple modes are supported. 
Note Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the Cloud Credential Operator entry in the Cluster Operators reference content. Note If your AWS account has service control policies (SCP) enabled, you must configure the credentialsMode parameter to Mint , Passthrough or Manual . Mint , Passthrough , Manual or an empty string ( "" ). imageContentSources Sources and repositories for the release-image content. Array of objects. Includes a source and, optionally, mirrors , as described in the following rows of this table. imageContentSources.source Required if you use imageContentSources . Specify the repository that users refer to, for example, in image pull specifications. String imageContentSources.mirrors Specify one or more repositories that may also contain the same images. Array of strings publish How to publish or expose the user-facing endpoints of your cluster, such as the Kubernetes API, OpenShift routes. Internal or External . The default value is External . Setting this field to Internal is not supported on non-cloud platforms. Important If the value of the field is set to Internal , the cluster will become non-functional. For more information, refer to BZ#1953035 . sshKey The SSH key to authenticate access to your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. For example, sshKey: ssh-ed25519 AAAA.. . Not all CCO modes are supported for all cloud providers. For more information about CCO modes, see the "Managing cloud provider credentials" entry in the Authentication and authorization content. 3.11.1.4. Additional VMware vSphere configuration parameters Additional VMware vSphere configuration parameters are described in the following table. Note The platform.vsphere parameter prefixes each parameter listed in the table. Table 3.10. Additional VMware vSphere cluster parameters Parameter Description Values Describes your account on the cloud platform that hosts your cluster. You can use the parameter to customize the platform. If you provide additional configuration settings for compute and control plane machines in the machine pool, the parameter is not required. You can only specify one vCenter server for your OpenShift Container Platform cluster. A dictionary of vSphere configuration objects Virtual IP (VIP) addresses that you configured for control plane API access. Note This parameter applies only to installer-provisioned infrastructure without an external load balancer configured. You must not specify this parameter in user-provisioned infrastructure. Multiple IP addresses Optional: The disk provisioning method. This value defaults to the vSphere default storage policy if not set. Valid values are thin , thick , or eagerZeroedThick . If you define multiple failure domains for your cluster, you must attach the tag to each vCenter datacenter. To define a region, use a tag from the openshift-region tag category. For a single vSphere datacenter environment, you do not need to attach a tag, but you must enter an alphanumeric value, such as datacenter , for the parameter. String Specifies the fully-qualified hostname or IP address of the VMware vCenter server, so that a client can access failure domain resources. You must apply the server role to the vSphere vCenter server location. 
String If you define multiple failure domains for your cluster, you must attach a tag to each vCenter cluster. To define a zone, use a tag from the openshift-zone tag category. For a single vSphere datacenter environment, you do not need to attach a tag, but you must enter an alphanumeric value, such as cluster , for the parameter. String Lists and defines the datacenters where OpenShift Container Platform virtual machines (VMs) operate. The list of datacenters must match the list of datacenters specified in the vcenters field. String Specifies the path to a vSphere datastore that stores virtual machines files for a failure domain. You must apply the datastore role to the vSphere vCenter datastore location. String Optional: The absolute path of an existing folder where the user creates the virtual machines, for example, /<datacenter_name>/vm/<folder_name>/<subfolder_name> . If you do not provide this value, the installation program creates a top-level folder in the datacenter virtual machine folder that is named with the infrastructure ID. If you are providing the infrastructure for the cluster and you do not want to use the default StorageClass object, named thin , you can omit the folder parameter from the install-config.yaml file. String Lists any network in the vCenter instance that contains the virtual IP addresses and DNS records that you configured. String Optional: The absolute path of an existing resource pool where the installation program creates the virtual machines, for example, /<datacenter_name>/host/<cluster_name>/Resources/<resource_pool_name>/<optional_nested_resource_pool_name> . If you do not specify a value, the installation program installs the resources in the root of the cluster under /<datacenter_name>/host/<cluster_name>/Resources . String Virtual IP (VIP) addresses that you configured for cluster Ingress. Note This parameter applies only to installer-provisioned infrastructure without an external load balancer configured. You must not specify this parameter in user-provisioned infrastructure. Multiple IP addresses Configures the connection details so that services can communicate with a vCenter server. Currently, only a single vCenter server is supported. An array of vCenter configuration objects. Lists and defines the datacenters where OpenShift Container Platform virtual machines (VMs) operate. The list of datacenters must match the list of datacenters specified in the failureDomains field. String The password associated with the vSphere user. String The port number used to communicate with the vCenter server. Integer The fully qualified host name (FQHN) or IP address of the vCenter server. String The username associated with the vSphere user. String 3.11.1.5. Deprecated VMware vSphere configuration parameters In OpenShift Container Platform 4.13, the following vSphere configuration parameters are deprecated. You can continue to use these parameters, but the installation program does not automatically specify these parameters in the install-config.yaml file. The following table lists each deprecated vSphere configuration parameter. Note The platform.vsphere parameter prefixes each parameter listed in the table. Table 3.11. Deprecated VMware vSphere cluster parameters Parameter Description Values The virtual IP (VIP) address that you configured for control plane API access. Note In OpenShift Container Platform 4.12 and later, the apiVIP configuration setting is deprecated. Instead, use a List format to enter a value in the apiVIPs configuration setting. 
An IP address, for example 128.0.0.1 . The vCenter cluster to install the OpenShift Container Platform cluster in. String Defines the datacenter where OpenShift Container Platform virtual machines (VMs) operate. String The name of the default datastore to use for provisioning volumes. String Optional: The absolute path of an existing folder where the installation program creates the virtual machines. If you do not provide this value, the installation program creates a folder that is named with the infrastructure ID in the data center virtual machine folder. String, for example, /<datacenter_name>/vm/<folder_name>/<subfolder_name> . Virtual IP (VIP) addresses that you configured for cluster Ingress. Note In OpenShift Container Platform 4.12 and later, the ingressVIP configuration setting is deprecated. Instead, use a List format to enter a value in the ingressVIPs configuration setting. An IP address, for example 128.0.0.1 . The network in the vCenter instance that contains the virtual IP addresses and DNS records that you configured. String The password for the vCenter user name. String Optional: The absolute path of an existing resource pool where the installation program creates the virtual machines. If you do not specify a value, the installation program installs the resources in the root of the cluster under /<datacenter_name>/host/<cluster_name>/Resources . String, for example, /<datacenter_name>/host/<cluster_name>/Resources/<resource_pool_name>/<optional_nested_resource_pool_name> . The user name to use to connect to the vCenter instance with. This user must have at least the roles and privileges that are required for static or dynamic persistent volume provisioning in vSphere. String The fully-qualified hostname or IP address of a vCenter server. String 3.11.1.6. Optional VMware vSphere machine pool configuration parameters Optional VMware vSphere machine pool configuration parameters are described in the following table. Note The platform.vsphere parameter prefixes each parameter listed in the table. Table 3.12. Optional VMware vSphere machine pool parameters Parameter Description Values clusterOSImage The location from which the installation program downloads the RHCOS image. You must set this parameter to perform an installation in a restricted network. An HTTP or HTTPS URL, optionally with a SHA-256 checksum. For example, https://mirror.openshift.com/images/rhcos-<version>-vmware.<architecture>.ova . osDisk.diskSizeGB The size of the disk in gigabytes. Integer cpus The total number of virtual processor cores to assign a virtual machine. The value of platform.vsphere.cpus must be a multiple of platform.vsphere.coresPerSocket value. Integer coresPerSocket The number of cores per socket in a virtual machine. The number of virtual sockets on the virtual machine is platform.vsphere.cpus / platform.vsphere.coresPerSocket . The default value for control plane nodes and worker nodes is 4 and 2 , respectively. Integer memoryMB The size of a virtual machine's memory in megabytes. Integer 3.11.2. Sample install-config.yaml file for an installer-provisioned VMware vSphere cluster You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. 
apiVersion: v1 baseDomain: example.com 1 compute: 2 - architecture: amd64 name: <worker_node> platform: {} replicas: 3 controlPlane: 3 architecture: amd64 name: <parent_node> platform: {} replicas: 3 metadata: creationTimestamp: null name: test 4 platform: vsphere: 5 apiVIPs: - 10.0.0.1 failureDomains: 6 - name: <failure_domain_name> region: <default_region_name> server: <fully_qualified_domain_name> topology: computeCluster: "/<datacenter>/host/<cluster>" datacenter: <datacenter> datastore: "/<datacenter>/datastore/<datastore>" 7 networks: - <VM_Network_name> resourcePool: "/<datacenter>/host/<cluster>/Resources/<resourcePool>" 8 folder: "/<datacenter_name>/vm/<folder_name>/<subfolder_name>" zone: <default_zone_name> ingressVIPs: - 10.0.0.2 vcenters: - datacenters: - <datacenter> password: <password> port: 443 server: <fully_qualified_domain_name> user: [email protected] diskType: thin 9 fips: false pullSecret: '{"auths": ...}' sshKey: 'ssh-ed25519 AAAA...' 1 The base domain of the cluster. All DNS records must be sub-domains of this base and include the cluster name. 2 3 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Only one control plane pool is used. 4 The cluster name that you specified in your DNS records. 5 Optional: Provides additional configuration for the machine pool parameters for the compute and control plane machines. 6 Establishes the relationships between a region and zone. You define a failure domain by using vCenter objects, such as a datastore object. A failure domain defines the vCenter location for OpenShift Container Platform cluster nodes. 7 The path to the vSphere datastore that holds virtual machine files, templates, and ISO images. Important You can specify the path of any datastore that exists in a datastore cluster. By default, Storage vMotion is automatically enabled for a datastore cluster. Red Hat does not support Storage vMotion, so you must disable Storage vMotion to avoid data loss issues for your OpenShift Container Platform cluster. If you must specify VMs across multiple datastores, use a datastore object to specify a failure domain in your cluster's install-config.yaml configuration file. For more information, see "VMware vSphere region and zone enablement". 8 Optional: Provides an existing resource pool for machine creation. If you do not specify a value, the installation program uses the root resource pool of the vSphere cluster. 9 The vSphere disk provisioning method. Note In OpenShift Container Platform 4.12 and later, the apiVIP and ingressVIP configuration settings are deprecated. Instead, use a list format to enter values in the apiVIPs and ingressVIPs configuration settings. 3.11.3. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. 
You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. You must include vCenter's IP address and the IP range that you use for its machines. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 3.11.4. Configuring regions and zones for a VMware vCenter You can modify the default installation configuration file, so that you can deploy an OpenShift Container Platform cluster to multiple vSphere datacenters that run in a single VMware vCenter. 
The default install-config.yaml file configuration from the previous release of OpenShift Container Platform is deprecated. You can continue to use the deprecated default configuration, but the openshift-installer will prompt you with a warning message that indicates the use of deprecated fields in the configuration file. Important The example uses the govc command. The govc command is an open source command available from VMware; it is not available from Red Hat. The Red Hat support team does not maintain the govc command. Instructions for downloading and installing govc are found on the VMware documentation website. Prerequisites You have an existing install-config.yaml installation configuration file. Important You must specify at least one failure domain for your OpenShift Container Platform cluster, so that you can provision datacenter objects for your VMware vCenter server. Consider specifying multiple failure domains if you need to provision virtual machine nodes in different datacenters, clusters, datastores, and other components. To enable regions and zones, you must define multiple failure domains for your OpenShift Container Platform cluster. Procedure Enter the following govc command-line tool commands to create the openshift-region and openshift-zone vCenter tag categories: Important If you specify different names for the openshift-region and openshift-zone vCenter tag categories, the installation of the OpenShift Container Platform cluster fails. USD govc tags.category.create -d "OpenShift region" openshift-region USD govc tags.category.create -d "OpenShift zone" openshift-zone To create a region tag for each vSphere datacenter where you want to deploy your cluster, enter the following command in your terminal: USD govc tags.create -c <region_tag_category> <region_tag> To create a zone tag for each vSphere cluster where you want to deploy your cluster, enter the following command: USD govc tags.create -c <zone_tag_category> <zone_tag> Attach region tags to each vCenter datacenter object by entering the following command: USD govc tags.attach -c <region_tag_category> <region_tag_1> /<datacenter_1> Attach the zone tags to each vSphere cluster object by entering the following command: USD govc tags.attach -c <zone_tag_category> <zone_tag_1> /<datacenter_1>/host/vcs-mdcnc-workload-1 Change to the directory that contains the installation program and initialize the cluster deployment according to your chosen installation requirements.
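You can also confirm that the tags are attached to the intended vCenter objects before you create the cluster. Assuming that your version of govc provides the tags.attached.ls subcommand, the following commands list the objects that each tag you created is attached to: USD govc tags.attached.ls <region_tag_1> USD govc tags.attached.ls <zone_tag_1> The output is expected to include the datacenter object for the region tag and the vSphere cluster object for the zone tag.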
Sample install-config.yaml file with multiple datacenters defined in a vSphere center --- compute: --- vsphere: zones: - "<machine_pool_zone_1>" - "<machine_pool_zone_2>" --- controlPlane: --- vsphere: zones: - "<machine_pool_zone_1>" - "<machine_pool_zone_2>" --- platform: vsphere: vcenters: --- datacenters: - <datacenter1_name> - <datacenter2_name> failureDomains: - name: <machine_pool_zone_1> region: <region_tag_1> zone: <zone_tag_1> server: <fully_qualified_domain_name> topology: datacenter: <datacenter1> computeCluster: "/<datacenter1>/host/<cluster1>" networks: - <VM_Network1_name> datastore: "/<datacenter1>/datastore/<datastore1>" resourcePool: "/<datacenter1>/host/<cluster1>/Resources/<resourcePool1>" folder: "/<datacenter1>/vm/<folder1>" - name: <machine_pool_zone_2> region: <region_tag_2> zone: <zone_tag_2> server: <fully_qualified_domain_name> topology: datacenter: <datacenter2> computeCluster: "/<datacenter2>/host/<cluster2>" networks: - <VM_Network2_name> datastore: "/<datacenter2>/datastore/<datastore2>" resourcePool: "/<datacenter2>/host/<cluster2>/Resources/<resourcePool2>" folder: "/<datacenter2>/vm/<folder2>" --- 3.12. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites Obtain the OpenShift Container Platform installation program and the pull secret for your cluster. Verify the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Optional: Before you create the cluster, configure an external load balancer in place of the default load balancer. Important You do not need to specify API and Ingress static addresses for your installation program. If you choose this configuration, you must take additional actions to define network targets that accept an IP address from each referenced vSphere subnet. See the section "Configuring an external load balancer". Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. 
If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 3.13. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.13. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.13 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.13 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.13 macOS Client entry and save the file. Note For macOS arm64, choose the OpenShift v4.13 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 3.14. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. 
The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin 3.15. Creating registry storage After you install the cluster, you must create storage for the registry Operator. 3.15.1. Image registry removed during installation On platforms that do not provide shareable object storage, the OpenShift Image Registry Operator bootstraps itself as Removed . This allows openshift-installer to complete installations on these platform types. After installation, you must edit the Image Registry Operator configuration to switch the managementState from Removed to Managed . When this has completed, you must configure storage. 3.15.2. Image registry storage configuration The Image Registry Operator is not initially available for platforms that do not provide default storage. After installation, you must configure your registry to use storage so that the Registry Operator is made available. Instructions are shown for configuring a persistent volume, which is required for production clusters. Where applicable, instructions are shown for configuring an empty directory as the storage location, which is available for only non-production clusters. Additional instructions are provided for allowing the image registry to use block storage types by using the Recreate rollout strategy during upgrades. 3.15.2.1. Configuring registry storage for VMware vSphere As a cluster administrator, following installation you must configure your registry to use storage. Prerequisites Cluster administrator permissions. A cluster on VMware vSphere. Persistent storage provisioned for your cluster, such as Red Hat OpenShift Data Foundation. Important OpenShift Container Platform supports ReadWriteOnce access for image registry storage when you have only one replica. ReadWriteOnce access also requires that the registry uses the Recreate rollout strategy. To deploy an image registry that supports high availability with two or more replicas, ReadWriteMany access is required. Must have "100Gi" capacity. Important Testing shows issues with using the NFS server on RHEL as storage backend for core services. This includes the OpenShift Container Registry and Quay, Prometheus for monitoring storage, and Elasticsearch for logging storage. Therefore, using RHEL NFS to back PVs used by core services is not recommended. Other NFS implementations on the marketplace might not have these issues. Contact the individual NFS implementation vendor for more information on any testing that was possibly completed against these OpenShift Container Platform core components. Procedure To configure your registry to use storage, change the spec.storage.pvc in the configs.imageregistry/cluster resource. Note When you use shared storage, review your security settings to prevent outside access. 
Verify that you do not have a registry pod: USD oc get pod -n openshift-image-registry -l docker-registry=default Example output No resources found in openshift-image-registry namespace Note If you do have a registry pod in your output, you do not need to continue with this procedure. Check the registry configuration: USD oc edit configs.imageregistry.operator.openshift.io Example output storage: pvc: claim: 1 1 Leave the claim field blank to allow the automatic creation of an image-registry-storage persistent volume claim (PVC). The PVC is generated based on the default storage class. However, be aware that the default storage class might provide ReadWriteOnce (RWO) volumes, such as a RADOS Block Device (RBD), which can cause issues when you replicate to more than one replica. Check the clusteroperator status: USD oc get clusteroperator image-registry Example output NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m 3.15.2.2. Configuring block registry storage for VMware vSphere To allow the image registry to use block storage types such as vSphere Virtual Machine Disk (VMDK) during upgrades as a cluster administrator, you can use the Recreate rollout strategy. Important Block storage volumes are supported but not recommended for use with image registry on production clusters. An installation where the registry is configured on block storage is not highly available because the registry cannot have more than one replica. Procedure Enter the following command to set the image registry storage as a block storage type, patch the registry so that it uses the Recreate rollout strategy, and runs with only 1 replica: USD oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{"spec":{"rolloutStrategy":"Recreate","replicas":1}}' Provision the PV for the block storage device, and create a PVC for that volume. The requested block volume uses the ReadWriteOnce (RWO) access mode. Create a pvc.yaml file with the following contents to define a VMware vSphere PersistentVolumeClaim object: kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4 1 A unique name that represents the PersistentVolumeClaim object. 2 The namespace for the PersistentVolumeClaim object, which is openshift-image-registry . 3 The access mode of the persistent volume claim. With ReadWriteOnce , the volume can be mounted with read and write permissions by a single node. 4 The size of the persistent volume claim. Enter the following command to create the PersistentVolumeClaim object from the file: USD oc create -f pvc.yaml -n openshift-image-registry Enter the following command to edit the registry configuration so that it references the correct PVC: USD oc edit config.imageregistry.operator.openshift.io -o yaml Example output storage: pvc: claim: 1 1 By creating a custom PVC, you can leave the claim field blank for the default automatic creation of an image-registry-storage PVC. For instructions about configuring registry storage so that it references the correct PVC, see Configuring the registry for vSphere . 3.16. Backing up VMware vSphere volumes OpenShift Container Platform provisions new volumes as independent persistent disks to freely attach and detach the volume on any node in the cluster. As a consequence, it is not possible to back up volumes that use snapshots, or to restore volumes from snapshots.
See Snapshot Limitations for more information. Procedure To create a backup of persistent volumes: Stop the application that is using the persistent volume. Clone the persistent volume. Restart the application. Create a backup of the cloned volume. Delete the cloned volume. 3.17. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.13, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager Hybrid Cloud Console . After you confirm that your OpenShift Cluster Manager Hybrid Cloud Console inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 3.18. Services for an external load balancer You can configure an OpenShift Container Platform cluster to use an external load balancer in place of the default load balancer. Important Configuring an external load balancer depends on your vendor's load balancer. The information and examples in this section are for guideline purposes only. Consult the vendor documentation for more specific information about the vendor's load balancer. Red Hat supports the following services for an external load balancer: Ingress Controller OpenShift API OpenShift MachineConfig API You can choose whether you want to configure one or all of these services for an external load balancer. Configuring only the Ingress Controller service is a common configuration option. To better understand each service, view the following diagrams: Figure 3.1. Example network workflow that shows an Ingress Controller operating in an OpenShift Container Platform environment Figure 3.2. Example network workflow that shows an OpenShift API operating in an OpenShift Container Platform environment Figure 3.3. Example network workflow that shows an OpenShift MachineConfig API operating in an OpenShift Container Platform environment The following configuration options are supported for external load balancers: Use a node selector to map the Ingress Controller to a specific set of nodes. You must assign a static IP address to each node in this set, or configure each node to receive the same IP address from the Dynamic Host Configuration Protocol (DHCP). Infrastructure nodes commonly receive this type of configuration. Target all IP addresses on a subnet. This configuration can reduce maintenance overhead, because you can create and destroy nodes within those networks without reconfiguring the load balancer targets. If you deploy your ingress pods by using a machine set on a smaller network, such as a /27 or /28 , you can simplify your load balancer targets. Tip You can list all IP addresses that exist in a network by checking the machine config pool's resources. Before you configure an external load balancer for your OpenShift Container Platform cluster, consider the following information: For a front-end IP address, you can use the same IP address for the front-end IP address, the Ingress Controller's load balancer, and API load balancer. Check the vendor's documentation for this capability. 
For a back-end IP address, ensure that an IP address for an OpenShift Container Platform control plane node does not change during the lifetime of the external load balancer. You can achieve this by completing one of the following actions: Assign a static IP address to each control plane node. Configure each node to receive the same IP address from the DHCP every time the node requests a DHCP lease. Depending on the vendor, the DHCP lease might be in the form of an IP reservation or a static DHCP assignment. Manually define each node that runs the Ingress Controller in the external load balancer for the Ingress Controller back-end service. For example, if the Ingress Controller moves to an undefined node, a connection outage can occur. 3.18.1. Configuring an external load balancer You can configure an OpenShift Container Platform cluster to use an external load balancer in place of the default load balancer. Important Before you configure an external load balancer, ensure that you read the "Services for an external load balancer" section. Read the following prerequisites that apply to the service that you want to configure for your external load balancer. Note MetalLB, which runs on a cluster, functions as an external load balancer. OpenShift API prerequisites You defined a front-end IP address. TCP ports 6443 and 22623 are exposed on the front-end IP address of your load balancer. Check the following items: Port 6443 provides access to the OpenShift API service. Port 22623 can provide Ignition startup configurations to nodes. The front-end IP address and port 6443 are reachable by all users of your system with a location external to your OpenShift Container Platform cluster. The front-end IP address and port 22623 are reachable only by OpenShift Container Platform nodes. The load balancer backend can communicate with OpenShift Container Platform control plane nodes on ports 6443 and 22623. Ingress Controller prerequisites You defined a front-end IP address. TCP ports 443 and 80 are exposed on the front-end IP address of your load balancer. The front-end IP address and ports 80 and 443 are reachable by all users of your system with a location external to your OpenShift Container Platform cluster. The front-end IP address and ports 80 and 443 are reachable by all nodes that operate in your OpenShift Container Platform cluster. The load balancer backend can communicate with OpenShift Container Platform nodes that run the Ingress Controller on ports 80, 443, and 1936. Prerequisite for health check URL specifications You can configure most load balancers by setting health check URLs that determine if a service is available or unavailable. OpenShift Container Platform provides these health checks for the OpenShift API, Machine Configuration API, and Ingress Controller backend services.
The following examples demonstrate health check specifications for the previously listed backend services: Example of a Kubernetes API health check specification Path: HTTPS:6443/readyz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10 Example of a Machine Config API health check specification Path: HTTPS:22623/healthz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10 Example of an Ingress Controller health check specification Path: HTTP:1936/healthz/ready Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 5 Interval: 10 Procedure Configure the HAProxy Ingress Controller, so that you can enable access to the cluster from your load balancer on ports 6443, 443, and 80: Example HAProxy configuration #... listen my-cluster-api-6443 bind 192.168.1.100:6443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /readyz http-check expect status 200 server my-cluster-master-2 192.168.1.101:6443 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:6443 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:6443 check inter 10s rise 2 fall 2 listen my-cluster-machine-config-api-22623 bind 192.168.1.100:22623 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz http-check expect status 200 server my-cluster-master-2 192.168.1.101:22623 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:22623 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:22623 check inter 10s rise 2 fall 2 listen my-cluster-apps-443 bind 192.168.1.100:443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:443 check port 1936 inter 10s rise 2 fall 2 listen my-cluster-apps-80 bind 192.168.1.100:80 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:80 check port 1936 inter 10s rise 2 fall 2 # ... 
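After you update the load balancer configuration, you can optionally validate it before you reload the service. The following commands are a minimal sketch that assumes HAProxy is managed by systemd on the load balancer host and reads its configuration from /etc/haproxy/haproxy.cfg ; adjust the configuration path and service name for your environment: USD haproxy -c -f /etc/haproxy/haproxy.cfg USD systemctl reload haproxy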
Use the curl CLI command to verify that the external load balancer and its resources are operational: Verify that the Kubernetes API server resource is accessible, by running the following command and observing the response: USD curl https://<loadbalancer_ip_address>:6443/version --insecure If the configuration is correct, you receive a JSON object in response: { "major": "1", "minor": "11+", "gitVersion": "v1.11.0+ad103ed", "gitCommit": "ad103ed", "gitTreeState": "clean", "buildDate": "2019-01-09T06:44:10Z", "goVersion": "go1.10.3", "compiler": "gc", "platform": "linux/amd64" } Verify that the Machine config server resource is accessible, by running the following command and observing the output: USD curl -v https://<loadbalancer_ip_address>:22623/healthz --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK Content-Length: 0 Verify that the Ingress Controller resource is accessible on port 80, by running the following command and observing the output: USD curl -I -L -H "Host: console-openshift-console.apps.<cluster_name>.<base_domain>" http://<load_balancer_front_end_IP_address> If the configuration is correct, the output from the command shows the following response: HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.ocp4.private.opequon.net/ cache-control: no-cache Verify that the Ingress Controller resource is accessible on port 443, by running the following command and observing the output: USD curl -I -L --insecure --resolve console-openshift-console.apps.<cluster_name>.<base_domain>:443:<Load Balancer Front End IP Address> https://console-openshift-console.apps.<cluster_name>.<base_domain> If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private Configure the DNS records for your cluster to target the front-end IP addresses of the external load balancer. You must update the records on your DNS server for the cluster API and applications over the load balancer. Examples of modified DNS records <load_balancer_ip_address> A api.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End <load_balancer_ip_address> A apps.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End Important DNS propagation might take some time for each DNS record to become available. Ensure that each DNS record propagates before validating each record.
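You can check that the records have propagated from the host where you run the verification commands, for example by using the dig utility if it is installed on that host: USD dig +short api.<cluster_name>.<base_domain> USD dig +short console-openshift-console.apps.<cluster_name>.<base_domain> Each query is expected to return the front-end IP address of the external load balancer.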
Use the curl CLI command to verify that the external load balancer and DNS record configuration are operational: Verify that you can access the cluster API, by running the following command and observing the output: USD curl https://api.<cluster_name>.<base_domain>:6443/version --insecure If the configuration is correct, you receive a JSON object in response: { "major": "1", "minor": "11+", "gitVersion": "v1.11.0+ad103ed", "gitCommit": "ad103ed", "gitTreeState": "clean", "buildDate": "2019-01-09T06:44:10Z", "goVersion": "go1.10.3", "compiler": "gc", "platform": "linux/amd64" } Verify that you can access the cluster machine configuration, by running the following command and observing the output: USD curl -v https://api.<cluster_name>.<base_domain>:22623/healthz --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK Content-Length: 0 Verify that you can access each cluster application on port 80, by running the following command and observing the output: USD curl http://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.<cluster-name>.<base domain>/ cache-control: no-cacheHTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=39HoZgztDnzjJkq/JuLJMeoKNXlfiVv2YgZc09c3TBOBU4NI6kDXaJH1LdicNhN1UsQWzon4Dor9GWGfopaTEQ==; Path=/; Secure x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Tue, 17 Nov 2020 08:42:10 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=9b714eb87e93cf34853e87a92d6894be; path=/; HttpOnly; Secure; SameSite=None cache-control: private Verify that you can access each cluster application on port 443, by running the following command and observing the output: USD curl https://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure If the configuration is correct, the output from the command shows the following response: HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private 3.19. Next steps Customize your cluster. If necessary, you can opt out of remote health reporting. Set up your registry and configure registry storage. Optional: View the events from the vSphere Problem Detector Operator to determine if the cluster has permission or storage configuration issues.
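For example, assuming the vSphere Problem Detector Operator runs in its default location in the openshift-cluster-storage-operator namespace, you can list its events by running: USD oc get events -n openshift-cluster-storage-operator Review the output for events that report permission or storage configuration problems.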
[ "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "certs ├── lin │ ├── 108f4d17.0 │ ├── 108f4d17.r1 │ ├── 7e757f6a.0 │ ├── 8e4f8471.0 │ └── 8e4f8471.r0 ├── mac │ ├── 108f4d17.0 │ ├── 108f4d17.r1 │ ├── 7e757f6a.0 │ ├── 8e4f8471.0 │ └── 8e4f8471.r0 └── win ├── 108f4d17.0.crt ├── 108f4d17.r1.crl ├── 7e757f6a.0.crt ├── 8e4f8471.0.crt └── 8e4f8471.r0.crl 3 directories, 15 files", "cp certs/lin/* /etc/pki/ca-trust/source/anchors", "update-ca-trust extract", "./openshift-install create install-config --dir <installation_directory> 1", "rm -rf ~/.powervs", "{ \"auths\":{ \"cloud.openshift.com\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" }, \"quay.io\":{ \"auth\":\"b3Blb=\", \"email\":\"[email protected]\" } } }", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 - cidr: fd00:10:128::/56 hostPrefix: 64 serviceNetwork: - 172.30.0.0/16 - fd00:172:16::/112", "networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23", "networking: serviceNetwork: - 172.30.0.0/16", "networking: machineNetwork: - cidr: 10.0.0.0/16", "platform: vsphere:", "platform: vsphere: apiVIPs:", "platform: vsphere: diskType:", "platform: vsphere: failureDomains: region:", "platform: vsphere: failureDomains: server:", "platform: vsphere: failureDomains: zone:", "platform: vsphere: failureDomains: topology: datacenter:", "platform: vsphere: failureDomains: topology: datastore:", "platform: vsphere: failureDomains: topology: folder:", "platform: vsphere: failureDomains: topology: networks:", "platform: vsphere: failureDomains: topology: resourcePool:", "platform: vsphere: ingressVIPs:", "platform: vsphere: vcenters:", "platform: vsphere: vcenters: datacenters:", "platform: vsphere: vcenters: password:", "platform: vsphere: vcenters: port:", "platform: vsphere: vcenters: server:", "platform: vsphere: vcenters: user:", "platform: vsphere: apiVIP:", "platform: vsphere: cluster:", "platform: vsphere: datacenter:", "platform: vsphere: defaultDatastore:", "platform: vsphere: folder:", "platform: vsphere: ingressVIP:", "platform: vsphere: network:", "platform: vsphere: password:", "platform: vsphere: resourcePool:", "platform: vsphere: username:", "platform: vsphere: vCenter:", "apiVersion: v1 baseDomain: example.com 1 compute: 2 - architecture: amd64 name: <worker_node> platform: {} replicas: 3 controlPlane: 3 architecture: amd64 name: <parent_node> platform: {} replicas: 3 metadata: creationTimestamp: null name: test 4 platform: vsphere: 5 apiVIPs: - 10.0.0.1 failureDomains: 6 - name: <failure_domain_name> region: <default_region_name> server: <fully_qualified_domain_name> topology: computeCluster: \"/<datacenter>/host/<cluster>\" datacenter: <datacenter> datastore: \"/<datacenter>/datastore/<datastore>\" 7 networks: - <VM_Network_name> resourcePool: \"/<datacenter>/host/<cluster>/Resources/<resourcePool>\" 8 folder: \"/<datacenter_name>/vm/<folder_name>/<subfolder_name>\" zone: <default_zone_name> ingressVIPs: - 10.0.0.2 vcenters: - datacenters: - <datacenter> password: <password> port: 443 server: <fully_qualified_domain_name> user: [email protected] diskType: thin 9 fips: false pullSecret: '{\"auths\": ...}' sshKey: 'ssh-ed25519 AAAA...'", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: 
https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "govc tags.category.create -d \"OpenShift region\" openshift-region", "govc tags.category.create -d \"OpenShift zone\" openshift-zone", "govc tags.create -c <region_tag_category> <region_tag>", "govc tags.create -c <zone_tag_category> <zone_tag>", "govc tags.attach -c <region_tag_category> <region_tag_1> /<datacenter_1>", "govc tags.attach -c <zone_tag_category> <zone_tag_1> /<datacenter_1>/host/vcs-mdcnc-workload-1", "--- compute: --- vsphere: zones: - \"<machine_pool_zone_1>\" - \"<machine_pool_zone_2>\" --- controlPlane: --- vsphere: zones: - \"<machine_pool_zone_1>\" - \"<machine_pool_zone_2>\" --- platform: vsphere: vcenters: --- datacenters: - <datacenter1_name> - <datacenter2_name> failureDomains: - name: <machine_pool_zone_1> region: <region_tag_1> zone: <zone_tag_1> server: <fully_qualified_domain_name> topology: datacenter: <datacenter1> computeCluster: \"/<datacenter1>/host/<cluster1>\" networks: - <VM_Network1_name> datastore: \"/<datacenter1>/datastore/<datastore1>\" resourcePool: \"/<datacenter1>/host/<cluster1>/Resources/<resourcePool1>\" folder: \"/<datacenter1>/vm/<folder1>\" - name: <machine_pool_zone_2> region: <region_tag_2> zone: <zone_tag_2> server: <fully_qualified_domain_name> topology: datacenter: <datacenter2> computeCluster: \"/<datacenter2>/host/<cluster2>\" networks: - <VM_Network2_name> datastore: \"/<datacenter2>/datastore/<datastore2>\" resourcePool: \"/<datacenter2>/host/<cluster2>/Resources/<resourcePool2>\" folder: \"/<datacenter2>/vm/<folder2>\" ---", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc get pod -n openshift-image-registry -l docker-registry=default", "No resourses found in openshift-image-registry namespace", "oc edit configs.imageregistry.operator.openshift.io", "storage: pvc: claim: 1", "oc get clusteroperator image-registry", "NAME VERSION AVAILABLE PROGRESSING DEGRADED SINCE MESSAGE image-registry 4.7 True False False 6h50m", "oc patch config.imageregistry.operator.openshift.io/cluster --type=merge -p '{\"spec\":{\"rolloutStrategy\":\"Recreate\",\"replicas\":1}}'", "kind: PersistentVolumeClaim apiVersion: v1 metadata: name: image-registry-storage 1 namespace: openshift-image-registry 2 spec: accessModes: - ReadWriteOnce 3 resources: requests: storage: 100Gi 4", "oc create -f pvc.yaml -n openshift-image-registry", "oc edit config.imageregistry.operator.openshift.io -o yaml", "storage: pvc: claim: 1", "Path: HTTPS:6443/readyz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10", "Path: HTTPS:22623/healthz Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 10 Interval: 10", "Path: HTTP:1936/healthz/ready Healthy threshold: 2 Unhealthy threshold: 2 Timeout: 5 Interval: 10", "# listen my-cluster-api-6443 bind 192.168.1.100:6443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /readyz http-check expect status 200 server my-cluster-master-2 192.168.1.101:6443 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:6443 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:6443 check inter 10s rise 2 fall 2 listen my-cluster-machine-config-api-22623 bind 192.168.1.100:22623 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz http-check expect status 200 server my-cluster-master-2 192.168.1.101:22623 check inter 10s rise 2 fall 2 server my-cluster-master-0 192.168.1.102:22623 check inter 10s rise 2 fall 2 server my-cluster-master-1 192.168.1.103:22623 check inter 10s rise 2 fall 2 listen my-cluster-apps-443 bind 192.168.1.100:443 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:443 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:443 check port 1936 inter 10s rise 2 fall 2 listen my-cluster-apps-80 bind 192.168.1.100:80 mode tcp balance roundrobin option httpchk http-check connect http-check send meth GET uri /healthz/ready http-check expect status 200 server my-cluster-worker-0 192.168.1.111:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-1 192.168.1.112:80 check port 1936 inter 10s rise 2 fall 2 server my-cluster-worker-2 192.168.1.113:80 check port 1936 inter 10s rise 2 fall 2", "curl https://<loadbalancer_ip_address>:6443/version --insecure", "{ \"major\": \"1\", \"minor\": \"11+\", \"gitVersion\": \"v1.11.0+ad103ed\", 
\"gitCommit\": \"ad103ed\", \"gitTreeState\": \"clean\", \"buildDate\": \"2019-01-09T06:44:10Z\", \"goVersion\": \"go1.10.3\", \"compiler\": \"gc\", \"platform\": \"linux/amd64\" }", "curl -v https://<loadbalancer_ip_address>:22623/healthz --insecure", "HTTP/1.1 200 OK Content-Length: 0", "curl -I -L -H \"Host: console-openshift-console.apps.<cluster_name>.<base_domain>\" http://<load_balancer_front_end_IP_address>", "HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.ocp4.private.opequon.net/ cache-control: no-cache", "curl -I -L --insecure --resolve console-openshift-console.apps.<cluster_name>.<base_domain>:443:<Load Balancer Front End IP Address> https://console-openshift-console.apps.<cluster_name>.<base_domain>", "HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private", "<load_balancer_ip_address> A api.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End", "<load_balancer_ip_address> A apps.<cluster_name>.<base_domain> A record pointing to Load Balancer Front End", "curl https://api.<cluster_name>.<base_domain>:6443/version --insecure", "{ \"major\": \"1\", \"minor\": \"11+\", \"gitVersion\": \"v1.11.0+ad103ed\", \"gitCommit\": \"ad103ed\", \"gitTreeState\": \"clean\", \"buildDate\": \"2019-01-09T06:44:10Z\", \"goVersion\": \"go1.10.3\", \"compiler\": \"gc\", \"platform\": \"linux/amd64\" }", "curl -v https://api.<cluster_name>.<base_domain>:22623/healthz --insecure", "HTTP/1.1 200 OK Content-Length: 0", "curl http://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure", "HTTP/1.1 302 Found content-length: 0 location: https://console-openshift-console.apps.<cluster-name>.<base domain>/ cache-control: no-cacheHTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=39HoZgztDnzjJkq/JuLJMeoKNXlfiVv2YgZc09c3TBOBU4NI6kDXaJH1LdicNhN1UsQWzon4Dor9GWGfopaTEQ==; Path=/; Secure x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Tue, 17 Nov 2020 08:42:10 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=9b714eb87e93cf34853e87a92d6894be; path=/; HttpOnly; Secure; SameSite=None cache-control: private", "curl https://console-openshift-console.apps.<cluster_name>.<base_domain> -I -L --insecure", "HTTP/1.1 200 OK referrer-policy: strict-origin-when-cross-origin set-cookie: csrf-token=UlYWOyQ62LWjw2h003xtYSKlh1a0Py2hhctw0WmV2YEdhJjFyQwWcGBsja261dGLgaYO0nxzVErhiXt6QepA7g==; Path=/; Secure; SameSite=Lax x-content-type-options: nosniff x-dns-prefetch-control: off x-frame-options: DENY x-xss-protection: 1; mode=block date: Wed, 04 Oct 2023 16:29:38 GMT content-type: text/html; charset=utf-8 set-cookie: 1e2670d92730b515ce3a1bb65da45062=1bf5e9573c9a2760c964ed1659cc1673; path=/; HttpOnly; Secure; SameSite=None cache-control: private" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/installing_on_vsphere/installing-vsphere-installer-provisioned-customizations
Chapter 83. Zip File
Chapter 83. Zip File The Zip File Data Format is a message compression and de-compression format. Messages can be marshalled (compressed) to Zip files containing a single entry, and Zip files containing a single entry can be unmarshalled (decompressed) to the original file contents. This data format supports ZIP64, as long as Java 7 or later is being used. 83.1. ZipFile Options The Zip File dataformat supports 4 options, which are listed below. Name Default Java Type Description usingIterator Boolean If the zip file has more than one entry, setting this option to true allows you to work with the splitter EIP to split the data using an iterator in a streaming mode. allowEmptyDirectory Boolean If the zip file has more than one entry, setting this option to true allows you to get the iterator even if the directory is empty. preservePathElements Boolean If the file name contains path elements, setting this option to true allows the path to be maintained in the zip file. maxDecompressedSize Integer Set the maximum decompressed size of a zip file (in bytes). The default value if not specified corresponds to 1 gigabyte. An IOException will be thrown if the decompressed size exceeds this amount. Set to -1 to disable setting a maximum decompressed size. 83.2. Marshal In this example we marshal a regular text/XML payload to a compressed payload using Zip file compression, and send it to an ActiveMQ queue called MY_QUEUE. from("direct:start") .marshal().zipFile() .to("activemq:queue:MY_QUEUE"); The name of the Zip entry inside the created Zip file is based on the incoming CamelFileName message header, which is the standard message header used by the file component. Additionally, the outgoing CamelFileName message header is automatically set to the value of the incoming CamelFileName message header, with the ".zip" suffix. So for example, if the following route finds a file named "test.txt" in the input directory, the output will be a Zip file named "test.txt.zip" containing a single Zip entry named "test.txt": from("file:input/directory?antInclude=*/.txt") .marshal().zipFile() .to("file:output/directory"); If there is no incoming CamelFileName message header (for example, if the file component is not the consumer), then the message ID is used by default, and since the message ID is normally a unique generated ID, you will end up with filenames like ID-MACHINENAME-2443-1211718892437-1-0.zip . If you want to override this behavior, then you can set the value of the CamelFileName header explicitly in your route: from("direct:start") .setHeader(Exchange.FILE_NAME, constant("report.txt")) .marshal().zipFile() .to("file:output/directory"); This route would result in a Zip file named "report.txt.zip" in the output directory, containing a single Zip entry named "report.txt". 83.3. Unmarshal In this example we unmarshal a Zip file payload from an ActiveMQ queue called MY_QUEUE to its original format, and forward it for processing to the UnZippedMessageProcessor . from("activemq:queue:MY_QUEUE") .unmarshal().zipFile() .process(new UnZippedMessageProcessor()); If the zip file has more than one entry, the usingIterator option of ZipFileDataFormat needs to be set to true, and you can use the splitter to do the further work.
ZipFileDataFormat zipFile = new ZipFileDataFormat(); zipFile.setUsingIterator(true); from("file:src/test/resources/org/apache/camel/dataformat/zipfile/?delay=1000&noop=true") .unmarshal(zipFile) .split(body(Iterator.class)).streaming() .process(new UnZippedMessageProcessor()) .end(); Or you can use the ZipSplitter as an expression for the splitter directly, like this: from("file:src/test/resources/org/apache/camel/dataformat/zipfile?delay=1000&noop=true") .split(new ZipSplitter()).streaming() .process(new UnZippedMessageProcessor()) .end(); 83.3.1. Aggregate Note This aggregation strategy requires eager completion check to work properly. In this example we aggregate all text files found in the input directory into a single Zip file that is stored in the output directory. from("file:input/directory?antInclude=*/.txt") .aggregate(constant(true), new ZipAggregationStrategy()) .completionFromBatchConsumer().eagerCheckCompletion() .to("file:output/directory"); The outgoing CamelFileName message header is created using java.io.File.createTempFile, with the ".zip" suffix. If you want to override this behavior, then you can set the value of the CamelFileName header explicitly in your route: from("file:input/directory?antInclude=*/.txt") .aggregate(constant(true), new ZipAggregationStrategy()) .completionFromBatchConsumer().eagerCheckCompletion() .setHeader(Exchange.FILE_NAME, constant("reports.zip")) .to("file:output/directory"); 83.4. Dependencies To use Zip files in your Camel routes you need to add a dependency on camel-zipfile which implements this data format. If you use Maven you can just add the following to your pom.xml , substituting the version number for the latest & greatest release (see the download page for the latest versions). <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-zipfile</artifactId> <version>{CamelSBVersion}</version> <!-- use the same version as your Camel core version --> </dependency> 83.5. Spring Boot Auto-Configuration When using zipfile with Spring Boot make sure to use the following Maven dependency to have support for auto configuration: <dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-zipfile-starter</artifactId> </dependency> The component supports 5 options, which are listed below. Name Description Default Type camel.dataformat.zipfile.allow-empty-directory If the zip file has more than one entry, setting this option to true allows you to get the iterator even if the directory is empty. false Boolean camel.dataformat.zipfile.enabled Whether to enable auto configuration of the zipfile data format. This is enabled by default. Boolean camel.dataformat.zipfile.max-decompressed-size Set the maximum decompressed size of a zip file (in bytes). The default value if not specified corresponds to 1 gigabyte. An IOException will be thrown if the decompressed size exceeds this amount. Set to -1 to disable setting a maximum decompressed size. 1073741824 Long camel.dataformat.zipfile.preserve-path-elements If the file name contains path elements, setting this option to true allows the path to be maintained in the zip file. false Boolean camel.dataformat.zipfile.using-iterator If the zip file has more than one entry, setting this option to true allows you to work with the splitter EIP to split the data using an iterator in a streaming mode. false Boolean
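The maxDecompressedSize option can also be set from the Java DSL. The following fragment is a minimal sketch that assumes the setter follows the same option-to-setter naming convention as setUsingIterator shown above; UnZippedMessageProcessor is the same example processor used earlier. ZipFileDataFormat zipFile = new ZipFileDataFormat(); // Assumed setter name for the maxDecompressedSize option; limits decompression to 256 MB zipFile.setMaxDecompressedSize(268435456); from("file:input/zips?noop=true") .unmarshal(zipFile) .process(new UnZippedMessageProcessor());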
[ "from(\"direct:start\") .marshal().zipFile() .to(\"activemq:queue:MY_QUEUE\");", "from(\"file:input/directory?antInclude=*/.txt\") .marshal().zipFile() .to(\"file:output/directory\");", "from(\"direct:start\") .setHeader(Exchange.FILE_NAME, constant(\"report.txt\")) .marshal().zipFile() .to(\"file:output/directory\");", "from(\"activemq:queue:MY_QUEUE\") .unmarshal().zipFile() .process(new UnZippedMessageProcessor());", "ZipFileDataFormat zipFile = new ZipFileDataFormat(); zipFile.setUsingIterator(true); from(\"file:src/test/resources/org/apache/camel/dataformat/zipfile/?delay=1000&noop=true\") .unmarshal(zipFile) .split(body(Iterator.class)).streaming() .process(new UnZippedMessageProcessor()) .end();", "from(\"file:src/test/resources/org/apache/camel/dataformat/zipfile?delay=1000&noop=true\") .split(new ZipSplitter()).streaming() .process(new UnZippedMessageProcessor()) .end();", "from(\"file:input/directory?antInclude=*/.txt\") .aggregate(constant(true), new ZipAggregationStrategy()) .completionFromBatchConsumer().eagerCheckCompletion() .to(\"file:output/directory\");", "from(\"file:input/directory?antInclude=*/.txt\") .aggregate(constant(true), new ZipAggregationStrategy()) .completionFromBatchConsumer().eagerCheckCompletion() .setHeader(Exchange.FILE_NAME, constant(\"reports.zip\")) .to(\"file:output/directory\");", "<dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-zipfile</artifactId> <version>{CamelSBVersion}</version> <!-- use the same version as your Camel core version --> </dependency>", "<dependency> <groupId>org.apache.camel.springboot</groupId> <artifactId>camel-zipfile-starter</artifactId> </dependency>" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_apache_camel_for_spring_boot/3.20/html/camel_spring_boot_reference/csb-camel-zipfile-dataformat-starter
Providing Feedback on Red Hat Documentation
Providing Feedback on Red Hat Documentation We appreciate your input on our documentation. Please let us know how we could make it better. You can submit feedback by filing a ticket in Bugzilla: Navigate to the Bugzilla website. In the Component field, use Documentation . In the Description field, enter your suggestion for improvement. Include a link to the relevant parts of the documentation. Click Submit Bug .
null
https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/configuring_capsules_with_a_load_balancer/providing-feedback-on-red-hat-documentation_load-balancing
Chapter 24. Authentication and Interoperability
Chapter 24. Authentication and Interoperability The IdM LDAP server no longer becomes unresponsive when resolving an AD user takes a long time When the System Security Services Daemon (SSSD) took a long time to resolve a user from a trusted Active Directory (AD) domain on the Identity Management (IdM) server, the IdM LDAP server sometimes exhausted its own worker threads. Consequently, the IdM LDAP server was unable to respond to further requests from SSSD clients or other LDAP clients. This update adds a new API to SSSD on the IdM server, which enables identity requests to time out. Also, the IdM LDAP extended identity operations plug-in and the Schema Compatibility plug-in now support this API to enable canceling requests that take too long. As a result, the IdM LDAP server can recover from the described situation and keep responding to further requests. (BZ# 1415162 , BZ# 1473571 , BZ# 1473577 ) Application configuration snippets in /etc/krb5.conf.d/ are now automatically read in existing configurations Previously, Kerberos did not automatically add support for the /etc/krb5.conf.d/ directory to existing configurations. Consequently, application configuration snippets in /etc/krb5.conf.d/ were not read unless the user added the include statement for the directory manually. This update modifies existing configurations to include the appropriate includedir line pointing to /etc/krb5.conf.d/ . As a result, applications can rely on their configuration snippets in /etc/krb5.conf.d . Note that if you manually remove the includedir line after this update, successive updates will not add it again. (BZ# 1431198 ) pam_mkhomedir can now create home directories under / Previously, the pam_mkhomedir module was unable to create subdirectories under the / directory. Consequently, when a user with a home directory in a non-existent directory under / attempted to log in, the attempt failed with this error: This update fixes the described problem, and pam_mkhomedir is now able to create home directories in this situation. Note that even after applying this update, SELinux might still prevent pam_mkhomedir from creating the home directory, which is the expected SELinux behavior. To ensure pam_mkhomedir is allowed to create the home directory, modify the SELinux policy using a custom SELinux module, which enables the required paths to be created with the correct SELinux context. (BZ#1509338) Kerberos operations depending on KVNO in the keytab file no longer fail when a RODC is used The adcli utility did not handle the key version number (KVNO) properly when updating Kerberos keys on a read-only domain controller (RODC). Consequently, some operations, such as validating a Kerberos ticket, failed because no key with a matching KVNO was found in the keytab file. With this update, adcli detects if a RODC is used and handles the KVNO accordingly. As a result, the keytab file contains the right KVNO, and all Kerberos operations depending on this behavior work as expected. (BZ# 1471021 ) krb5 properly displays errors about PKINIT misconfiguration in single-realm KDC environments Previously, when Public Key Cryptography for Initial Authentication in Kerberos (PKINIT) was misconfigured, the krb5 package did not report the incorrect configuration to the administrator. For example, this problem occurred when the deprecated pkinit_kdc_ocsp option was specified in the /etc/krb5.conf file. 
With this update, krb5 exposes PKINIT initialization failures when only one realm is specified in the Kerberos key distribution center (KDC). As a result, single-realm KDCs report PKINIT misconfiguration properly. (BZ#1460089) Certificate System no longer incorrectly logs ROLE_ASSUME audit events Previously, Certificate System incorrectly generated the ROLE_ASSUME audit event for certain operations even if no privileged access occurred for a user. Consequently, the event was incorrectly logged. The problem has been fixed and ROLE_ASSUME events are no longer logged in the mentioned scenario. (BZ# 1461524 ) Updated attributes in CERT_STATUS_CHANGE_REQUEST_PROCESSED audit log event Previously, the CERT_STATUS_CHANGE_REQUEST_PROCESSED audit event in log files contained the following attributes: ReqID - The requester ID SubjectID - The subject ID of the certificate For consistency with other audit events, the attributes have been modified and now contain the following information: ReqID - The request ID SubjectID - The requester ID (BZ# 1461217 ) Signed audit log verification now works correctly Previously, due to improper logging system initialization and incorrect signature calculation by the verification tool, signed audit log verification could fail on the first log entry and after log rotation. With this update, the logging system and the verification tool have been fixed. As a result, signed audit log verification now works correctly in the mentioned scenarios. (BZ# 1404794 ) Certificate System now validates the banner file A version of Certificate System introduced a configurable access banner - a custom message to be displayed in the PKI console at the start of every secure session. The contents of this banner were not validated, which could cause a JAXBUnmarshalException error if the message contained invalid UTF-8 characters. With this update, the contents of the banner file are validated both on server startup and on client requests. If the file is found to contain invalid UTF-8 characters on server startup, the server will not start. If invalid characters are found when a client requests the banner, the server will return an error message and not send the banner to the client. (BZ#1446579) The TPS subsystem no longer fails when performing a symmetric key changeover on an HSM Previously, attempting to perform a symmetric key changeover with the master key on a Hardware Security Module (HSM) token failed with an error reported by the Certificate System Token Processing System (TPS) subsystem. This update fixes the way the master key on an HSM is used to calculate the new key set, allowing the TPS to successfully upgrade a token key set when the master resides on an HSM. The fix is currently verified with the G&D SmartCafe 6.0 HSM. (BZ#1465142) Certificate System CAs no longer display an error when handling subject DNs without a CN component Previously, an incoming request missing the Common Name (CN) component caused a NullPointerException on the Certificate Authority (CA) because the implementation expected the CN to be present in the subject Distinguished Name (DN) of the Certificate Management over CMS (CMC). This update allows the CA to handle subject DNs without a CN component, preventing the exception from being thrown. (BZ# 1474658 ) The pki-server-upgrade utility no longer fails if target files are missing A bug in the pki-server-upgrade utility caused it to attempt to locate a non-existent file.
As a consequence, the upgrade process failed to complete, and could possibly leave the PKI deployment in an invalid state. With this update, pki-server-upgrade has been modified to correctly handle cases where target files are missing, and PKI upgrades now work correctly. (BZ# 1479663 ) The Certificate System CA key replication now works correctly An update to one of the key unwrapping functions introduced a requirement for a key usage parameter which was not being supplied at the call site, which caused lightweight Certificate Authority (CA) key replication to fail. This bug has been fixed by modifying the call site so that it supplies the key usage parameter, and lightweight CA key replication now works as expected. (BZ# 1484359 ) Certificate System no longer fails to import PKCS #12 files An earlier change to PKCS #12 password encoding in the Network Security Services (NSS) caused Certificate System to fail to import PKCS #12 files. As a consequence, the Certificate Authority (CA) clone installation could not be completed. With this update, PKI will retry a failed PKCS #12 decryption with a different password encoding, which allows it to import PKCS #12 files produced by both old and new versions of NSS, and CA clone installation succeeds. (BZ# 1486225 ) The TPS user interface now displays the token type and origin fields Previously, the tps-cert-find and tps-cert-show Token Processing System (TPS) user interface utilities did not display the token type and origin fields which were present in the legacy TPS interface. The interface has been updated and now displays the missing information. (BZ#1491052) Certificate System issued certificates with an expiration date later than the expiration date of the CA certificate Previously, when signing a certificate for an external Certificate Authority (CA), Certificate System used the ValidityConstraint plug-in. Consequently, it was possible to issue certificates with a later expiry date than the expiry date of the issuing CA. This update adds the CAValidityConstraint plug-in to the registry so that it becomes available for the enrollment profiles. In addition, the ValidityConstraint plug-in in the caCMCcaCert profile has been replaced with the CAValidityConstraint plug-in which effectively sets the restrictions. As a result, issuing certificates with an expiry date later than the issuing CA is no longer allowed. (BZ# 1518096 ) CA certificates without the SKI extension no longer cause issuance failures An update of Certificate System incorrectly removed a fallback procedure, which generated the Issuer Key Identifier. Consequently, the Certificate Authority (CA) failed to issue certificates if the CA signing certificate did not have the Subject Key Identifier (SKI) extension set. With this update, the missing procedure has been added again. As a result, issuing certificates no longer fails if the CA signing certificate does not contain the SKI extension. (BZ# 1499054 ) Certificate System correctly logs the user name in CMC request audit events Previously, when Certificate System received a Certificate Management over CMS (CMC) request, the server logged an audit event with the SubjectID field set to $NonRoleUser$ . As a result, administrators could not verify who issued the request. This update fixes the problem, and Certificate System now correctly logs the user name in the mentioned scenario.
(BZ# 1506819 ) The Directory Server trivial word check password policy now works as expected Previously, when you set a userPassword attribute to exactly the same value as an attribute restricted by the passwordTokenMin setting with the same length, Directory Server incorrectly allowed the password update operation. With this update, the trivial word check password policy feature now correctly verifies the entire user attribute value as a whole, and the described problem no longer occurs. (BZ# 1517788 ) The pkidestroy utility now fully removes instances that are started by the pki-tomcatd-nuxwdog service Previously, the pkidestroy utility did not remove Certificate System instances that used the pki-tomcatd-nuxwdog service as a starting mechanism. As a consequence, administrators had to migrate pki-tomcatd-nuxwdog to the service without watchdog before using pkidestroy to fully remove an instance. The utility has been updated, and instances are correctly removed in the mentioned scenario. Note that if you manually removed the password file before running pkidestroy , the utility will ask for the password to update the security domain. (BZ# 1498957 ) The Certificate System deployment archive file no longer contains passwords in plain text Previously, when you created a new Certificate System instance by passing a configuration file with a password in the [DEFAULT] section to the pkispawn utility, the password was visible in the archived deployment file. Although this file has world-readable permissions, it is contained within a directory that is only accessible by the Certificate Server instance user, which is pkiuser , by default. With this update, permissions on this file have been restricted to the Certificate Server instance user, and pkispawn now masks the password in the archived deployment file. To restrict access to the password on an existing installation, manually remove the password from the /etc/sysconfig/pki/tomcat/<instance_name>/<subsystem>/deployment.cfg file, and set the file's permissions to 600 . (BZ# 1532759 ) ACIs with the targetfilter keyword work correctly Previously, if an Access Control Instruction (ACI) in Directory Server used the targetfilter keyword, searches containing the get effective rights control returned before the code was executed for template entries. Consequently, the GetEffectiveRights() function could not determine the permissions when creating entries and returned false-negative results when verifying an ACI. With this update, Directory Server creates a template entry based on the provided geteffective attribute and verifies access to this template entry. As a result, ACIs in the mentioned scenario work correctly. (BZ# 1459946 ) Directory Server searches with a scope set to one have been fixed Due to a bug in Directory Server, searches with a scope set to one returned all child entries instead of only the ones that matched the filter. This update fixes the problem. As a result, searches with scope one only return entries that match the filter. (BZ# 1511462 ) Clear error message when sending TLS data to a non-LDAPS port Previously, Directory Server decoded TLS protocol handshakes sent to a port that was configured to use plain text as an LDAPMessage data type. However, decoding failed and the server reported the misleading "BER was 3 bytes, but actually was <greater>" error.
With this update, Directory Server detects if TLS data is sent to a port configured for plain text and returns the following error message to the client: As a result, the new error message indicates that an incorrect client configuration causes the problem. (BZ# 1445188 ) Directory Server no longer logs an error if not running the cleanallruv task After removing a replica server from an existing replication topology without running the cleanallruv task, Directory Server previously logged an error about not being able to replace referral entries. This update adds a check for duplicate referrals and removes them. As a result, the error is no longer logged. (BZ# 1434335 ) Using a large number of CoS templates no longer slows down the virtual attribute processing time Due to a bug, using a large number of Class of Service (CoS) templates in Directory Server increased the virtual attribute processing time. This update improves the structure of the CoS storage. As a result, using a large number of CoS templates no longer increases the virtual attribute processing time. (BZ# 1523183 ) Directory Server now handles binds during an online initialization correctly During an online initialization from one Directory Server master to another, the master receiving the changes is temporarily set into a referral mode. While in this mode, the server only returns referrals. Previously, Directory Server incorrectly generated these bind referrals. As a consequence, the server could terminate unexpectedly in the mentioned scenario. With this update, the server correctly generates bind referrals. As a result, the server now correctly handles binds during an online initialization. (BZ# 1483681 ) The dirsrv@.service meta target is now linked to multi-user.target Previously, the dirsrv@.service meta target had the Wants parameter set to dirsrv.target in its systemd file. When you enabled dirsrv@<instance>.service , this correctly enabled the service to the dirsrv.target , but dirsrv.target was not enabled. Consequently, Directory Server did not start when the system booted. With this update, the dirsrv@.service meta target is now linked to multi-user.target . As a result, when you enable dirsrv@<instance>.service , Directory Server starts automatically when the system boots. (BZ# 1476207 ) The memberOf plug-in now logs all update attempts of the memberOf attribute In certain situations, Directory Server fails to update the memberOf attribute of a user entry. In this case, the memberOf plug-in logs an error message and forces the update. In the previous Directory Server version, the second try was not logged if it was successful. Consequently, the log entries were misleading, because only the failed attempt was logged. With this update, the memberOf plug-in also logs the successful update if the first try failed. As a result, the plug-in now logs the initial failure, and the subsequent successful retry as well. (BZ# 1533571 ) The Directory Server password policies now work correctly Previously, subtree and user password policies did not use the same default values as the global password policy. As a consequence, Directory Server incorrectly skipped certain syntax checks. This bug has been fixed. As a result, the password policy features work the same for the global configuration and the subtree and user policies.
(BZ# 1465600 ) A buffer overflow has been fixed in Directory Server Previously, if you configured an attribute to be indexed and imported an entry that contained a large binary value into this attribute, the server terminated unexpectedly due to a buffer overflow. The buffer has been fixed. As a result, the server works as expected in the mentioned scenario. (BZ# 1498980 ) Directory Server now sends the password expired control during grace logins Previously, Directory Server did not send the expired password control when an expired password had grace logins left. Consequently, clients could not tell the user that the password was expired or how many grace logins were left. The problem has been fixed. As a result, clients can now tell the user if a password is expired and how many grace logins remain. (BZ# 1464505 ) An unnecessary global lock has been removed from Directory Server Previously, when the memberOf plug-in was enabled and users and groups were stored in separate back ends, a deadlock could occur. An unnecessary global lock has been removed and, as a result, the deadlock no longer occurs in the mentioned scenario. (BZ# 1501058 ) Replication now works correctly with TLS client authentication and FIPS mode enabled Previously, if you used TLS client authentication in a Directory Server replication environment with Federal Information Processing Standard (FIPS) mode enabled, the internal Network Security Services (NSS) database token differed from a token on a system with FIPS mode disabled. As a consequence, replication failed. The problem has been fixed, and as a result, replication agreements with TLS client authentication now work correctly if FIPS mode is enabled. (BZ# 1464463 ) Directory Server now correctly sets whether virtual attributes are operational The pwdpolicysubentry subtree password policy attribute in Directory Server is flagged as operational. However, in the previous version of Directory Server, this flag was incorrectly applied to the following virtual attributes that were processed. As a consequence, the search results were not visible to the client. With this update, the server now resets the attribute before processing the virtual attribute and Class of Service (CoS). As a result, the expected virtual attributes and CoS are now returned to the client. (BZ# 1453155 ) Backup now succeeds if replication was enabled and a changelog file existed Previously, if replication was enabled and a changelog file existed, performing a backup on this master server failed. This update sets the internal options for correctly copying a file. As a result, creating a backup now succeeds in the mentioned scenario. (BZ# 1476322 ) Certificate System updates the revocation reason correctly Previously, if a user temporarily lost a smart card token, the administrator of a Certificate System Token Processing System (TPS) in some cases changed the status of the certificate from on hold to permanently lost or damaged . However, the new revocation reason was not reflected on the CA. With this update, it is possible to change a certificate status from on hold directly to revoked . As a result, the revocation reason is updated correctly. (BZ#1500474) A race condition has been fixed in the Certificate System clone installation process In certain situations, a race condition arose between the LDAP replication of security domain session objects and the execution of an authenticated operation against a clone other than the clone where the login occurred.
As a consequence, cloning a Certificate System installation failed. With this update, the clone installation process now waits for the security domain login to finish before it enables the security domain session objects to be replicated to other clones. As a result, the clone installation no longer fails. (BZ#1402280) Certificate System now uses strong ciphers by default With this update, the list of enabled ciphers has been changed. By default, only strong ciphers, which are compliant with the Federal Information Processing Standard (FIPS), are enabled in Certificate System. RSA ciphers enabled by default: TLS_DHE_RSA_WITH_AES_128_CBC_SHA TLS_DHE_RSA_WITH_AES_256_CBC_SHA TLS_DHE_RSA_WITH_AES_128_CBC_SHA256 TLS_DHE_RSA_WITH_AES_256_CBC_SHA256 TLS_DHE_RSA_WITH_AES_128_GCM_SHA256 TLS_RSA_WITH_AES_128_CBC_SHA256 TLS_RSA_WITH_AES_256_CBC_SHA256 TLS_RSA_WITH_AES_128_CBC_SHA TLS_RSA_WITH_AES_256_CBC_SHA Note that the TLS_RSA_WITH_AES_128_CBC_SHA and TLS_RSA_WITH_AES_256_CBC_SHA ciphers need to be enabled to enable the pkispawn utility to connect to the LDAP server during the installation and configuration. ECC ciphers enabled by default: TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA TLS_RSA_WITH_AES_256_CBC_SHA TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA TLS_RSA_WITH_AES_256_CBC_SHA256 TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256 TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256 In addition, the default ranges of the sslVersionRangeStream and sslVersionRangeDatagram parameters in the /var/lib/pki/<instance_name>/conf/server.xml file now use only TLS 1.1 and TLS 1.2 ciphers. (BZ# 1539125 ) The pkispawn utility no longer displays incorrect errors Previously, during a successful installation of Certificate System, the pkispawn utility incorrectly displayed errors related to deleting temporary certificates. The problem has been fixed, and the error messages no longer display if the installation succeeds. (BZ# 1520277 ) The Certificate System profile configuration update method now correctly handles backslashes Previously, a parser in Certificate System removed backslash characters from the configuration when a user updated a profile. As a consequence, affected profile configurations could not be correctly imported, and issuing certificates failed or the system issued incorrect certificates. Certificate System now uses a parser that handles backslashes correctly. As a result, profile configuration updates import the configuration correctly. (BZ# 1541853 )
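To see which TLS version ranges an existing instance is configured with after such an update, the server.xml file mentioned above can be inspected directly. The following is a hedged shell sketch; the instance name pki-tomcat is a placeholder, and only the parameter names quoted in the note above are assumed to appear in the file.

# Replace pki-tomcat with your instance name.
grep -E 'sslVersionRangeStream|sslVersionRangeDatagram' /var/lib/pki/pki-tomcat/conf/server.xml

# List every TLS cipher suite name referenced anywhere in the file.
grep -o 'TLS_[A-Z0-9_]*' /var/lib/pki/pki-tomcat/conf/server.xml | sort -u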
[ "Unable to create and initialize directory '/<directory_path>'.", "Incoming BER Element may be misformed. This may indicate an attempt to use TLS on a plaintext port, IE ldaps://localhost:389. Check your client LDAP_URI settings." ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/7.5_release_notes/bug_fixes_authentication_and_interoperability
Chapter 9. Managing domains
Chapter 9. Managing domains Identity Service (keystone) domains are additional namespaces that you can create in keystone. Note Identity Service includes a built-in domain called Default . It is suggested you reserve this domain only for service accounts, and create a separate domain for user accounts. 9.1. Viewing a list of domains You can view a list of domains with the openstack domain list command: Example command Example output 9.2. Creating a new domain You can create a new domain with the openstack domain create command: Example command Example output 9.3. Viewing the details of a domain You can view the details of a domain with the openstack domain show command: Example command Example output 9.4. Disabling a domain You can disable and enable domains according to your requirements. Procedure Disable a domain using the --disable option: Example command Confirm that the domain has been disabled: Example command Example output Use the --enable option to re-enable the domain, if required: Example command
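A domain can also be created with a description, or created in a disabled state and enabled later, and a domain must be disabled before it can be deleted. The following sketch reuses the TestDomain name from the examples above; the --description, --disable, and --enable options are standard python-openstackclient flags, and the description text is illustrative.

# Create a disabled domain with a description.
openstack domain create --description "Engineering user accounts" --disable TestDomain

# Enable the domain once its projects and users are ready.
openstack domain set TestDomain --enable

# A domain must be disabled before it can be deleted.
openstack domain set TestDomain --disable
openstack domain delete TestDomain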
[ "openstack domain list", "+----------------------------------+------------------+---------+--------------------+ | ID | Name | Enabled | Description | +----------------------------------+------------------+---------+--------------------+ | 3abefa6f32c14db9a9703bf5ce6863e1 | TestDomain | True | | | 69436408fdcb44ab9e111691f8e9216d | corp | True | | | a4f61a8feb8d4253b260054c6aa41adb | federated_domain | True | | | default | Default | True | The default domain | +----------------------------------+------------------+---------+--------------------+", "openstack domain create TestDomain", "+-------------+----------------------------------+ | Field | Value | +-------------+----------------------------------+ | description | | | enabled | True | | id | 3abefa6f32c14db9a9703bf5ce6863e1 | | name | TestDomain | +-------------+----------------------------------+", "openstack domain show TestDomain", "+-------------+----------------------------------+ | Field | Value | +-------------+----------------------------------+ | description | | | enabled | True | | id | 3abefa6f32c14db9a9703bf5ce6863e1 | | name | TestDomain | +-------------+----------------------------------+", "openstack domain set TestDomain --disable", "openstack domain show TestDomain", "+-------------+----------------------------------+ | Field | Value | +-------------+----------------------------------+ | description | | | enabled | False | | id | 3abefa6f32c14db9a9703bf5ce6863e1 | | name | TestDomain | +-------------+----------------------------------+", "openstack domain set TestDomain --enable" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/performing_security_operations/assembly-managing-domains_performing-security-services
Chapter 18. Setting Time and Date in Red Hat Enterprise Linux 7
Chapter 18. Setting Time and Date in Red Hat Enterprise Linux 7 This section describes how to set the time and date in Red Hat Enterprise Linux 7. The system time is always kept in Coordinated Universal Time (UTC) and converted in applications to local time as needed. Local time is the actual time in your current time zone, taking into account daylight saving time (DST). The timedatectl utility is distributed as part of the systemd system and service manager and allows you to review and change the configuration of the system clock. Changing the Current Time Replace HH with an hour, MM with a minute, and SS with a second, all typed in two-digit form. Changing the Current Date Replace YYYY with a four-digit year, MM with a two-digit month, and DD with a two-digit day of the month. The time change is audited by the operating system. For more information, see the Auditing Time Change Events section in the Red Hat Certificate System Planning, Installation, and Deployment Guide .
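The date and time can also be reviewed and set in a single step. The following sketch is illustrative; the timestamp and time zone are arbitrary examples, and the set-ntp step is only needed when automatic synchronization is enabled, because timedatectl refuses manual changes while NTP synchronization is active.

timedatectl                                  # review the current time, time zone, and NTP status
timedatectl set-ntp false                    # temporarily disable automatic synchronization, if it is enabled
timedatectl set-time "2017-06-01 14:30:00"   # set the date and time together (YYYY-MM-DD HH:MM:SS)
timedatectl set-timezone America/New_York    # optionally adjust the time zone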
[ "timedatectl set-time HH:MM:SS", "timedatectl set-time YYYY-MM-DD" ]
https://docs.redhat.com/en/documentation/red_hat_certificate_system/10/html/administration_guide/setting_time_and_date_in_rhel
Chapter 5. Installing a cluster on OpenStack in a restricted network
Chapter 5. Installing a cluster on OpenStack in a restricted network In OpenShift Container Platform 4.16, you can install a cluster on Red Hat OpenStack Platform (RHOSP) in a restricted network by creating an internal mirror of the installation release content. 5.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You verified that OpenShift Container Platform 4.16 is compatible with your RHOSP version by using the Supported platforms for OpenShift clusters section. You can also compare platform support across different versions by viewing the OpenShift Container Platform on RHOSP support matrix . You created a registry on your mirror host and obtained the imageContentSources data for your version of OpenShift Container Platform. Important Because the installation media is on the mirror host, you can use that computer to complete all installation steps. You understand performance and scalability practices for cluster scaling, control plane sizing, and etcd. For more information, see Recommended practices for scaling the cluster . You have the metadata service enabled in RHOSP. 5.2. About installations in restricted networks In OpenShift Container Platform 4.16, you can perform an installation that does not require an active connection to the internet to obtain software components. Restricted network installations can be completed using installer-provisioned infrastructure or user-provisioned infrastructure, depending on the cloud platform to which you are installing the cluster. If you choose to perform a restricted network installation on a cloud platform, you still require access to its cloud APIs. Some cloud functions, like Amazon Web Service's Route 53 DNS and IAM services, require internet access. Depending on your network, you might require less internet access for an installation on bare metal hardware, Nutanix, or on VMware vSphere. To complete a restricted network installation, you must create a registry that mirrors the contents of the OpenShift image registry and contains the installation media. You can create this registry on a mirror host, which can access both the internet and your closed network, or by using other methods that meet your restrictions. 5.2.1. Additional limits Clusters in restricted networks have the following additional limitations and restrictions: The ClusterVersion status includes an Unable to retrieve available updates error. By default, you cannot use the contents of the Developer Catalog because you cannot access the required image stream tags. 5.3. Resource guidelines for installing OpenShift Container Platform on RHOSP To support an OpenShift Container Platform installation, your Red Hat OpenStack Platform (RHOSP) quota must meet the following requirements: Table 5.1. Recommended resources for a default OpenShift Container Platform cluster on RHOSP Resource Value Floating IP addresses 3 Ports 15 Routers 1 Subnets 1 RAM 88 GB vCPUs 22 Volume storage 275 GB Instances 7 Security groups 3 Security group rules 60 Server groups 2 - plus 1 for each additional availability zone in each machine pool A cluster might function with fewer than recommended resources, but its performance is not guaranteed. Important If RHOSP object storage (Swift) is available and operated by a user account with the swiftoperator role, it is used as the default backend for the OpenShift Container Platform image registry. 
In this case, the volume storage requirement is 175 GB. Swift space requirements vary depending on the size of the image registry. Note By default, your security group and security group rule quotas might be low. If you encounter problems, run openstack quota set --secgroups 3 --secgroup-rules 60 <project> as an administrator to increase them. An OpenShift Container Platform deployment comprises control plane machines, compute machines, and a bootstrap machine. 5.3.1. Control plane machines By default, the OpenShift Container Platform installation process creates three control plane machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory and 4 vCPUs At least 100 GB storage space from the RHOSP quota 5.3.2. Compute machines By default, the OpenShift Container Platform installation process creates three compute machines. Each machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 8 GB memory and 2 vCPUs At least 100 GB storage space from the RHOSP quota Tip Compute machines host the applications that you run on OpenShift Container Platform; aim to run as many as you can. 5.3.3. Bootstrap machine During installation, a bootstrap machine is temporarily provisioned to stand up the control plane. After the production control plane is ready, the bootstrap machine is deprovisioned. The bootstrap machine requires: An instance from the RHOSP quota A port from the RHOSP quota A flavor with at least 16 GB memory and 4 vCPUs At least 100 GB storage space from the RHOSP quota 5.4. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.16, you require access to the internet to obtain the images that are necessary to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. 5.5. Enabling Swift on RHOSP Swift is operated by a user account with the swiftoperator role. Add the role to an account before you run the installation program. Important If the Red Hat OpenStack Platform (RHOSP) object storage service , commonly known as Swift, is available, OpenShift Container Platform uses it as the image registry storage. If it is unavailable, the installation program relies on the RHOSP block storage service, commonly known as Cinder. If Swift is present and you want to use it, you must enable access to it. If it is not present, or if you do not want to use it, skip this section. Important RHOSP 17 sets the rgw_max_attr_size parameter of Ceph RGW to 256 characters. This setting causes issues with uploading container images to the OpenShift Container Platform registry. You must set the value of rgw_max_attr_size to at least 1024 characters. Before installation, check if your RHOSP deployment is affected by this problem. If it is, reconfigure Ceph RGW. Prerequisites You have a RHOSP administrator account on the target environment. The Swift service is installed. On Ceph RGW , the account in url option is enabled. 
Procedure To enable Swift on RHOSP: As an administrator in the RHOSP CLI, add the swiftoperator role to the account that will access Swift: USD openstack role add --user <user> --project <project> swiftoperator Your RHOSP deployment can now use Swift for the image registry. 5.6. Defining parameters for the installation program The OpenShift Container Platform installation program relies on a file that is called clouds.yaml . The file describes Red Hat OpenStack Platform (RHOSP) configuration parameters, including the project name, log in information, and authorization service URLs. Procedure Create the clouds.yaml file: If your RHOSP distribution includes the Horizon web UI, generate a clouds.yaml file in it. Important Remember to add a password to the auth field. You can also keep secrets in a separate file from clouds.yaml . If your RHOSP distribution does not include the Horizon web UI, or you do not want to use Horizon, create the file yourself. For detailed information about clouds.yaml , see Config files in the RHOSP documentation. clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: <username> password: <password> user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: <username> password: <password> project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0' If your RHOSP installation uses self-signed certificate authority (CA) certificates for endpoint authentication: Copy the certificate authority file to your machine. Add the cacerts key to the clouds.yaml file. The value must be an absolute, non-root-accessible path to the CA certificate: clouds: shiftstack: ... cacert: "/etc/pki/ca-trust/source/anchors/ca.crt.pem" Tip After you run the installer with a custom CA certificate, you can update the certificate by editing the value of the ca-cert.pem key in the cloud-provider-config keymap. On a command line, run: USD oc edit configmap -n openshift-config cloud-provider-config Place the clouds.yaml file in one of the following locations: The value of the OS_CLIENT_CONFIG_FILE environment variable The current directory A Unix-specific user configuration directory, for example ~/.config/openstack/clouds.yaml A Unix-specific site configuration directory, for example /etc/openstack/clouds.yaml The installation program searches for clouds.yaml in that order. 5.7. Setting OpenStack Cloud Controller Manager options Optionally, you can edit the OpenStack Cloud Controller Manager (CCM) configuration for your cluster. This configuration controls how OpenShift Container Platform interacts with Red Hat OpenStack Platform (RHOSP). For a complete list of configuration parameters, see the "OpenStack Cloud Controller Manager reference guide" page in the "Installing on OpenStack" documentation. Procedure If you have not already generated manifest files for your cluster, generate them by running the following command: USD openshift-install --dir <destination_directory> create manifests In a text editor, open the cloud-provider configuration manifest file. For example: USD vi openshift/manifests/cloud-provider-config.yaml Modify the options according to the CCM reference guide. Configuring Octavia for load balancing is a common case. For example: #... [LoadBalancer] lb-provider = "amphora" 1 floating-network-id="d3deb660-4190-40a3-91f1-37326fe6ec4a" 2 create-monitor = True 3 monitor-delay = 10s 4 monitor-timeout = 10s 5 monitor-max-retries = 1 6 #... 
1 This property sets the Octavia provider that your load balancer uses. It accepts "ovn" or "amphora" as values. If you choose to use OVN, you must also set lb-method to SOURCE_IP_PORT . 2 This property is required if you want to use multiple external networks with your cluster. The cloud provider creates floating IP addresses on the network that is specified here. 3 This property controls whether the cloud provider creates health monitors for Octavia load balancers. Set the value to True to create health monitors. As of RHOSP 16.2, this feature is only available for the Amphora provider. 4 This property sets the frequency with which endpoints are monitored. The value must be in the time.ParseDuration() format. This property is required if the value of the create-monitor property is True . 5 This property sets the time that monitoring requests are open before timing out. The value must be in the time.ParseDuration() format. This property is required if the value of the create-monitor property is True . 6 This property defines how many successful monitoring requests are required before a load balancer is marked as online. The value must be an integer. This property is required if the value of the create-monitor property is True . Important Prior to saving your changes, verify that the file is structured correctly. Clusters might fail if properties are not placed in the appropriate section. Important You must set the value of the create-monitor property to True if you use services that have the value of the .spec.externalTrafficPolicy property set to Local . The OVN Octavia provider in RHOSP 16.2 does not support health monitors. Therefore, services that have ETP parameter values set to Local might not respond when the lb-provider value is set to "ovn" . Save the changes to the file and proceed with installation. Tip You can update your cloud provider configuration after you run the installer. On a command line, run: USD oc edit configmap -n openshift-config cloud-provider-config After you save your changes, your cluster will take some time to reconfigure itself. The process is complete if none of your nodes have a SchedulingDisabled status. 5.8. Creating the RHCOS image for restricted network installations Download the Red Hat Enterprise Linux CoreOS (RHCOS) image to install OpenShift Container Platform on a restricted network Red Hat OpenStack Platform (RHOSP) environment. Prerequisites Obtain the OpenShift Container Platform installation program. For a restricted network installation, the program is on your mirror registry host. Procedure Log in to the Red Hat Customer Portal's Product Downloads page . Under Version , select the most recent release of OpenShift Container Platform 4.16 for RHEL 8. Important The RHCOS images might not change with every release of OpenShift Container Platform. You must download images with the highest version that is less than or equal to the OpenShift Container Platform version that you install. Use the image versions that match your OpenShift Container Platform version if they are available. Download the Red Hat Enterprise Linux CoreOS (RHCOS) - OpenStack Image (QCOW) image. Decompress the image. Note You must decompress the image before the cluster can use it. The name of the downloaded file might not contain a compression extension, like .gz or .tgz . To find out if or how the file is compressed, in a command line, enter: Upload the image that you decompressed to a location that is accessible from the bastion server, like Glance. 
For example: Important Depending on your RHOSP environment, you might be able to upload the image in either .raw or .qcow2 formats . If you use Ceph, you must use the .raw format. Warning If the installation program finds multiple images with the same name, it chooses one of them at random. To avoid this behavior, create unique names for resources in RHOSP. The image is now available for a restricted installation. Note the image name or location for use in OpenShift Container Platform deployment. 5.9. Creating the installation configuration file You can customize the OpenShift Container Platform cluster you install on Red Hat OpenStack Platform (RHOSP). Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. For a restricted network installation, these files are on your mirror host. You have the imageContentSources values that were generated during mirror registry creation. You have obtained the contents of the certificate for your mirror registry. You have retrieved a Red Hat Enterprise Linux CoreOS (RHCOS) image and uploaded it to an accessible location. Procedure Create the install-config.yaml file. Change to the directory that contains the installation program and run the following command: USD ./openshift-install create install-config --dir <installation_directory> 1 1 For <installation_directory> , specify the directory name to store the files that the installation program creates. When specifying the directory: Verify that the directory has the execute permission. This permission is required to run Terraform binaries under the installation directory. Use an empty directory. Some installation assets, such as bootstrap X.509 certificates, have short expiration intervals, therefore you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. At the prompts, provide the configuration details for your cloud: Optional: Select an SSH key to use to access your cluster machines. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. Select openstack as the platform to target. Specify the Red Hat OpenStack Platform (RHOSP) external network name to use for installing the cluster. Specify the floating IP address to use for external access to the OpenShift API. Specify a RHOSP flavor with at least 16 GB RAM to use for control plane nodes and 8 GB RAM for compute nodes. Select the base domain to deploy the cluster to. All DNS records will be sub-domains of this base and will also include the cluster name. Enter a name for your cluster. The name must be 14 or fewer characters long. In the install-config.yaml file, set the value of platform.openstack.clusterOSImage to the image location or name. For example: platform: openstack: clusterOSImage: http://mirror.example.com/images/rhcos-43.81.201912131630.0-openstack.x86_64.qcow2.gz?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d Edit the install-config.yaml file to give the additional information that is required for an installation in a restricted network. 
Update the pullSecret value to contain the authentication information for your registry: pullSecret: '{"auths":{"<mirror_host_name>:5000": {"auth": "<credentials>","email": "[email protected]"}}}' For <mirror_host_name> , specify the registry domain name that you specified in the certificate for your mirror registry, and for <credentials> , specify the base64-encoded user name and password for your mirror registry. Add the additionalTrustBundle parameter and value. additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- The value must be the contents of the certificate file that you used for your mirror registry. The certificate file can be an existing, trusted certificate authority, or the self-signed certificate that you generated for the mirror registry. Add the image content resources, which resemble the following YAML excerpt: imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release For these values, use the imageContentSources that you recorded during mirror registry creation. Optional: Set the publishing strategy to Internal : publish: Internal By setting this option, you create an internal Ingress Controller and a private load balancer. Make any other modifications to the install-config.yaml file that you require. For more information about the parameters, see "Installation configuration parameters". Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the installation process. If you want to reuse the file, you must back it up now. Additional resources Installation configuration parameters for OpenStack 5.9.1. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. 
The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 5.9.2. Sample customized install-config.yaml file for restricted OpenStack installations This sample install-config.yaml demonstrates all of the possible Red Hat OpenStack Platform (RHOSP) customization options. Important This sample file is provided for reference only. You must obtain your install-config.yaml file by using the installation program. apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 networkType: OVNKubernetes platform: openstack: region: region1 cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 fips: false pullSecret: '{"auths": ...}' sshKey: ssh-ed25519 AAAA... additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: - mirrors: - <mirror_registry>/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_registry>/<repo_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev 5.10. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. 
The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. Note You must use a local key, not one that you configured with platform-specific approaches such as AWS key pairs . Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. Note If you plan to install an OpenShift Container Platform cluster that uses the RHEL cryptographic libraries that have been submitted to NIST for FIPS 140-2/140-3 Validation on only the x86_64 , ppc64le , and s390x architectures, do not create a key that uses the ed25519 algorithm. Instead, create a key that uses the rsa or ecdsa algorithm. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Note If your cluster is in FIPS mode, only use FIPS-compliant algorithms to generate the SSH key. The key must be either RSA or ECDSA. Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 5.11. Enabling access to the environment At deployment, all OpenShift Container Platform machines are created in a Red Hat OpenStack Platform (RHOSP)-tenant network. Therefore, they are not accessible directly in most RHOSP deployments. You can configure OpenShift Container Platform API and application access by using floating IP addresses (FIPs) during installation. You can also complete an installation without configuring FIPs, but the installer will not configure a way to reach the API or applications externally.
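Before creating new floating IP addresses as described in the next subsection, it can help to confirm which external networks are available and which floating IPs the project already holds. A short illustrative sketch using standard RHOSP CLI commands:

openstack network list --external    # external networks that can supply floating IPs
openstack floating ip list           # floating IPs already allocated to the project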
5.11.1. Enabling access with floating IP addresses Create floating IP (FIP) addresses for external access to the OpenShift Container Platform API and cluster applications. Procedure Using the Red Hat OpenStack Platform (RHOSP) CLI, create the API FIP: USD openstack floating ip create --description "API <cluster_name>.<base_domain>" <external_network> Using the Red Hat OpenStack Platform (RHOSP) CLI, create the apps, or Ingress, FIP: USD openstack floating ip create --description "Ingress <cluster_name>.<base_domain>" <external_network> Add records that follow these patterns to your DNS server for the API and Ingress FIPs: api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP> Note If you do not control the DNS server, you can access the cluster by adding the cluster domain names such as the following to your /etc/hosts file: <api_floating_ip> api.<cluster_name>.<base_domain> <application_floating_ip> grafana-openshift-monitoring.apps.<cluster_name>.<base_domain> <application_floating_ip> prometheus-k8s-openshift-monitoring.apps.<cluster_name>.<base_domain> <application_floating_ip> oauth-openshift.apps.<cluster_name>.<base_domain> <application_floating_ip> console-openshift-console.apps.<cluster_name>.<base_domain> application_floating_ip integrated-oauth-server-openshift-authentication.apps.<cluster_name>.<base_domain> The cluster domain names in the /etc/hosts file grant access to the web console and the monitoring interface of your cluster locally. You can also use the kubectl or oc . You can access the user applications by using the additional entries pointing to the <application_floating_ip>. This action makes the API and applications accessible to only you, which is not suitable for production deployment, but does allow installation for development and testing. Add the FIPs to the install-config.yaml file as the values of the following parameters: platform.openstack.ingressFloatingIP platform.openstack.apiFloatingIP If you use these values, you must also enter an external network as the value of the platform.openstack.externalNetwork parameter in the install-config.yaml file. Tip You can make OpenShift Container Platform resources available outside of the cluster by assigning a floating IP address and updating your firewall configuration. 5.11.2. Completing installation without floating IP addresses You can install OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) without providing floating IP addresses. In the install-config.yaml file, do not define the following parameters: platform.openstack.ingressFloatingIP platform.openstack.apiFloatingIP If you cannot provide an external network, you can also leave platform.openstack.externalNetwork blank. If you do not provide a value for platform.openstack.externalNetwork , a router is not created for you, and, without additional action, the installer will fail to retrieve an image from Glance. You must configure external connectivity on your own. If you run the installer from a system that cannot reach the cluster API due to a lack of floating IP addresses or name resolution, installation fails. To prevent installation failure in these cases, you can use a proxy network or run the installer from a system that is on the same network as your machines. Note You can enable name resolution by creating DNS records for the API and Ingress ports. For example: api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. 
IN A <ingress_port_IP> If you do not control the DNS server, you can add the record to your /etc/hosts file. This action makes the API accessible to only you, which is not suitable for production deployment but does allow installation for development and testing. 5.12. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 5.13. Verifying cluster status You can verify your OpenShift Container Platform cluster's status during or after installation. Procedure In the cluster environment, export the administrator's kubeconfig file: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. 
View the control plane and compute machines created after a deployment: USD oc get nodes View your cluster's version: USD oc get clusterversion View your Operators' status: USD oc get clusteroperator View all running pods in the cluster: USD oc get pods -A 5.14. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources See Accessing the web console for more details about accessing and understanding the OpenShift Container Platform web console. 5.15. Disabling the default OperatorHub catalog sources Operator catalogs that source content provided by Red Hat and community projects are configured for OperatorHub by default during an OpenShift Container Platform installation. In a restricted network environment, you must disable the default catalogs as a cluster administrator. Procedure Disable the sources for the default catalogs by adding disableAllDefaultSources: true to the OperatorHub object: USD oc patch OperatorHub cluster --type json \ -p '[{"op": "add", "path": "/spec/disableAllDefaultSources", "value": true}]' Tip Alternatively, you can use the web console to manage catalog sources. From the Administration Cluster Settings Configuration OperatorHub page, click the Sources tab, where you can create, update, delete, disable, and enable individual sources. 5.16. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.16, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources See About remote health monitoring for more information about the Telemetry service 5.17. steps Customize your cluster . If the mirror registry that you used to install your cluster has a trusted CA, add it to the cluster by configuring additional trust stores . If necessary, you can opt out of remote health reporting . If necessary, see Registering your disconnected cluster Configure image streams for the Cluster Samples Operator and the must-gather tool. Learn how to use Operator Lifecycle Manager (OLM) on restricted networks . If you did not configure RHOSP to accept application traffic over floating IP addresses, configure RHOSP access with floating IP addresses .
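The verification steps in this chapter can also be combined into a short post-installation check that waits for the cluster to settle before you continue. This is only a sketch, not part of the installation program, and the 30-minute timeout is an arbitrary example value: export KUBECONFIG=<installation_directory>/auth/kubeconfig # Wait until every cluster Operator reports Available, then print a summary oc wait clusteroperators --all --for=condition=Available --timeout=30m oc get clusterversion oc get nodes If an Operator does not become Available within the timeout, inspect it with oc describe clusteroperator <name> before proceeding.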
[ "openstack role add --user <user> --project <project> swiftoperator", "clouds: shiftstack: auth: auth_url: http://10.10.14.42:5000/v3 project_name: shiftstack username: <username> password: <password> user_domain_name: Default project_domain_name: Default dev-env: region_name: RegionOne auth: username: <username> password: <password> project_name: 'devonly' auth_url: 'https://10.10.14.22:5001/v2.0'", "clouds: shiftstack: cacert: \"/etc/pki/ca-trust/source/anchors/ca.crt.pem\"", "oc edit configmap -n openshift-config cloud-provider-config", "openshift-install --dir <destination_directory> create manifests", "vi openshift/manifests/cloud-provider-config.yaml", "# [LoadBalancer] lb-provider = \"amphora\" 1 floating-network-id=\"d3deb660-4190-40a3-91f1-37326fe6ec4a\" 2 create-monitor = True 3 monitor-delay = 10s 4 monitor-timeout = 10s 5 monitor-max-retries = 1 6 #", "oc edit configmap -n openshift-config cloud-provider-config", "file <name_of_downloaded_file>", "openstack image create --file rhcos-44.81.202003110027-0-openstack.x86_64.qcow2 --disk-format qcow2 rhcos-USD{RHCOS_VERSION}", "./openshift-install create install-config --dir <installation_directory> 1", "platform: openstack: clusterOSImage: http://mirror.example.com/images/rhcos-43.81.201912131630.0-openstack.x86_64.qcow2.gz?sha256=ffebbd68e8a1f2a245ca19522c16c86f67f9ac8e4e0c1f0a812b068b16f7265d", "pullSecret: '{\"auths\":{\"<mirror_host_name>:5000\": {\"auth\": \"<credentials>\",\"email\": \"[email protected]\"}}}'", "additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE-----", "imageContentSources: - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_host_name>:5000/<repo_name>/release source: registry.redhat.io/ocp/release", "publish: Internal", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "apiVersion: v1 baseDomain: example.com controlPlane: name: master platform: {} replicas: 3 compute: - name: worker platform: openstack: type: ml.large replicas: 3 metadata: name: example networking: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 machineNetwork: - cidr: 10.0.0.0/16 serviceNetwork: - 172.30.0.0/16 networkType: OVNKubernetes platform: openstack: region: region1 cloud: mycloud externalNetwork: external computeFlavor: m1.xlarge apiFloatingIP: 128.0.0.1 fips: false pullSecret: '{\"auths\": ...}' sshKey: ssh-ed25519 AAAA additionalTrustBundle: | -----BEGIN CERTIFICATE----- ZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZZ -----END CERTIFICATE----- imageContentSources: - mirrors: - <mirror_registry>/<repo_name>/release source: quay.io/openshift-release-dev/ocp-release - mirrors: - <mirror_registry>/<repo_name>/release source: quay.io/openshift-release-dev/ocp-v4.0-art-dev", "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "openstack floating ip create --description \"API 
<cluster_name>.<base_domain>\" <external_network>", "openstack floating ip create --description \"Ingress <cluster_name>.<base_domain>\" <external_network>", "api.<cluster_name>.<base_domain>. IN A <API_FIP> *.apps.<cluster_name>.<base_domain>. IN A <apps_FIP>", "api.<cluster_name>.<base_domain>. IN A <api_port_IP> *.apps.<cluster_name>.<base_domain>. IN A <ingress_port_IP>", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc get nodes", "oc get clusterversion", "oc get clusteroperator", "oc get pods -A", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin", "oc patch OperatorHub cluster --type json -p '[{\"op\": \"add\", \"path\": \"/spec/disableAllDefaultSources\", \"value\": true}]'" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/installing_on_openstack/installing-openstack-installer-restricted
Chapter 8. Managing Content Views
Chapter 8. Managing Content Views Red Hat Satellite uses Content Views to allow your hosts access to a deliberately curated subset of content. To do this, you must define which repositories to use and then apply certain filters to the content. These filters include package filters, package group filters, errata filters, module stream filters, and container image tag filters. You can use Content Views to define which software versions a particular environment uses. For example, a Production environment might use a Content View containing older package versions, while a Development environment might use a Content View containing newer package versions. Alternatively, a Default Organization View is an application-controlled Content View for all content that is synchronized to Satellite. This type is useful if you want to register a host to Satellite and access content using a subscription, without manipulating content views and lifecycle environments. Each Content View creates a set of repositories across each environment, which Satellite Server stores and manages. When you promote a Content View from one environment to the next environment in the application life cycle, the respective repository on Satellite Server updates and publishes the packages. Development Testing Production Content View Version and Contents Version 2 - example_software -1.1-0.noarch.rpm Version 1 - example_software -1.0-0.noarch.rpm Version 1 - example_software -1.0-0.noarch.rpm The repositories for Testing and Production contain the example_software -1.0-0.noarch.rpm package. If you promote Version 2 of the Content View from Development to Testing, the repository for Testing regenerates and then contains the example_software -1.1-0.noarch.rpm package: Development Testing Production Content View Version and Contents Version 2 - example_software -1.1-0.noarch.rpm Version 2 - example_software -1.1-0.noarch.rpm Version 1 - example_software -1.0-0.noarch.rpm This ensures systems are designated to a specific environment but receive updates when that environment uses a new version of the Content View. The general workflow for creating Content Views for filtering and creating snapshots is as follows: Create a Content View. Add one or more repositories that you want to the Content View. Optional: Create one or more filters to refine the content of the Content View. For more information, see Section 8.10, "Content Filter Examples" . Optional: Resolve any package dependencies for a Content View. For more information, see Section 8.8, "Resolving Package Dependencies" . Publish the Content View. Optional: Promote the Content View to another environment. For more information, see Section 8.3, "Promoting a Content View" . Attach the content host to the Content View. If a repository is not associated with the Content View, the file /etc/yum.repos.d/redhat.repo remains empty and systems registered to it cannot receive updates. Hosts can only be associated with a single Content View. To associate a host with multiple Content Views, create a composite Content View. For more information, see Section 8.6, "Creating a Composite Content View" . 8.1. Creating a Content View Use this procedure to create a simple Content View. To use the CLI instead of the Satellite web UI, see the CLI procedure . Prerequisites While you can stipulate whether you want to resolve any package dependencies on a Content View by Content View basis, you might want to change the default Satellite settings to enable or disable package resolution for all Content Views. 
For more information, see Section 8.8, "Resolving Package Dependencies" . Procedure In the Satellite web UI, navigate to Content > Content Views and click Create content view . In the Name field, enter a name for the view. Satellite automatically completes the Label field from the name you enter. In the Description field, enter a description of the view. In the Type field, select a Content view or a Composite content view . Optional: If you want to solve dependencies automatically every time you publish this Content View, select the Solve dependencies check box. Dependency solving slows the publishing time and might ignore any Content View filters you use. This can also cause errors when resolving dependencies for errata. Optional: If you want to designate this Content View for importing from an upstream server, select the Import only check box. Import-only content views cannot be published directly. Click Create content view . Content View Steps Click Create content view to create the Content View. In the Repositories tab, select the repository from the Type list that you want to add to your Content View, select the checkbox to the available repositories you want to add, then click Add repositories . Click Publish new version and in the Description field, enter information about the version to log changes. Optional: You can enable a promotion path by clicking Promote to Select a lifecycle environment from the available promotion paths to promote new version . Click . On the Review page, you can review the environments you are trying to publish. Click Finish . Note Remove and Delete are similar but the Delete option deletes an entire Content View and the versions associated with that lifecycle environment. The Remove option allows you to choose which version you want removed from the lifecycle environment. You can view the Content View in the Content Views window. To view more information about the Content View, click the Content View name. To register a host to your Content View, see Registering Hosts in Managing Hosts . CLI procedure Obtain a list of repository IDs: Create the Content View and add repositories: For the --repository-ids option, you can find the IDs in the output of the hammer repository list command. Publish the view: Optional: To add a repository to an existing Content View, enter the following command: Satellite Server creates the new version of the view and publishes it to the Library environment. 8.2. Viewing Module Streams In Satellite, you can view the module streams of the repositories in your Content Views. Procedure In the Satellite web UI, navigate to a published version of a Content View > Module Streams to view the module streams that are available for the Content Types. Use the Search field to search for specific modules. To view the information about the module, click the module and its corresponding tabs to include Details , Repositories , Profiles , and Artifacts . 8.3. Promoting a Content View Use this procedure to promote Content Views across different lifecycle environments. To use the CLI instead of the Satellite web UI, see the CLI procedure . Permission Requirements for Content View Promotion Non-administrator users require two permissions to promote a Content View to an environment: promote_or_remove_content_views promote_or_remove_content_views_to_environment . The promote_or_remove_content_views permission restricts which Content Views a user can promote. 
The promote_or_remove_content_views_to_environment permission restricts the environments to which a user can promote Content Views. With these permissions you can assign users permissions to promote certain Content Views to certain environments, but not to other environments. For example, you can limit a user so that they are permitted to promote to test environments, but not to production environments. You must assign both permissions to a user to allow them to promote Content Views. Procedure In the Satellite web UI, navigate to Content > Content Views and select the Content View that you want to promote. Select the version that you want to promote, click the vertical ellipsis icon, and click Promote . Select the environment where you want to promote the Content View and click Promote . Now the repository for the Content View appears in all environments. CLI procedure Promote the Content View using the hammer content-view version promote each time: Now the database content is available in all environments. To register a host to your Content View, see Registering Hosts in the Managing Hosts guide. 8.4. Promoting a Content View Across All Life Cycle Environments within an Organization Use this procedure to promote Content Views across all lifecycle environments within an organization. Procedure To promote a selected Content View version from Library across all life cycle environments within an organization, run the following Bash script: ORG=" My_Organization " CVV_ID= 3 for i in USD(hammer --no-headers --csv lifecycle-environment list --organization USDORG | awk -F, {'print USD1'} | sort -n) do hammer content-view version promote --organization USDORG --to-lifecycle-environment-id USDi --id USDCVV_ID done Display information about your Content View version to verify that it is promoted to the required lifecycle environments: 8.5. Composite Content Views Overview A Composite Content View combines the content from several Content Views. For example, you might have separate Content Views to manage an operating system and an application individually. You can use a Composite Content View to merge the contents of both Content Views into a new repository. The repositories for the original Content Views still exist but a new repository also exists for the combined content. If you want to develop an application that supports different database servers. The example_application appears as: example_software Application Database Operating System Example of four separate Content Views: Red Hat Enterprise Linux (Operating System) PostgreSQL (Database) MariaDB (Database) example_software (Application) From the Content Views, you can create two Composite Content Views. Example Composite Content View for a PostgreSQL database: Composite Content View 1 - example_software on PostgreSQL example_software (Application) PostgreSQL (Database) Red Hat Enterprise Linux (Operating System) Example Composite Content View for a MariaDB: Composite Content View 2 - example_software on MariaDB example_software (Application) MariaDB (Database) Red Hat Enterprise Linux (Operating System) Each Content View is then managed and published separately. When you create a version of the application, you publish a new version of the Composite Content Views. You can also select the Auto Publish option when creating a Composite Content View, and then the Composite Content View is automatically republished when a Content View it includes is republished. 
Repository restrictions Docker repositories cannot be included more than once in a Composite Content View. For example, if you attempt to include two Content Views using the same docker repository in a Composite Content View, Satellite Server reports an error. 8.6. Creating a Composite Content View Use this procedure to create a composite Content View. To use the CLI instead of the Satellite web UI, see the CLI procedure . Procedure In the Satellite web UI, navigate to Content > Content Views and click Create content view . In the Create content view window, enter a name for the view in the Name field. Red Hat Satellite automatically completes the Label field from the name you enter. Optional: In the Description field, enter a description of the view. On the Type tab, select Composite content view . Optional: If you want to automatically publish a new version of the Composite Content View when a Content View is republished, select the Auto publish checkbox. Click Create content view . On the Content views tab, select the Content Views that you want to add to the Composite Content View, and then click Add content views . In the Add content views window, select the version of each Content View. Optional: If you want to automatically update the Content View to the latest version, select the Always update to latest version checkbox. Click Add , then click Publish new version . Optional: In the Description field, enter a description of the Content View. In the Publish window, set the Promote switch, then select the lifecycle environment. Click , then click Finish . CLI procedure Before you create the Composite Content Views, list the version IDs for your existing Content Views: Create a new Composite Content View. When the --auto-publish option is set to yes , the Composite Content View is automatically republished when a Content View it includes is republished: Add a Content View to the Composite Content View. You can identify Content View, Content View version, and Organization in the commands by either their ID or their name. To add multiple Content Views to the Composite Content View, repeat this step for every Content View you want to include. If you have the Always update to latest version option enabled for the Content View: If you have the Always update to latest version option disabled for the Content View: Publish the Composite Content View: Promote the Composite Content View across all environments: 8.7. Content Filter Overview Content Views also use filters to include or restrict certain RPM content. Without these filters, a Content View includes everything from the selected repositories. There are two types of content filters: Table 8.1. Filter Types Filter Type Description Include You start with no content, then select which content to add from the selected repositories. Use this filter to combine multiple content items. Exclude You start with all content from selected repositories, then select which content to remove. Use this filter when you want to use most of a particular content repository but exclude certain packages, such as blacklisted packages. The filter uses all content in the repository except for the content you select. Include and Exclude Filter Combinations If using a combination of Include and Exclude filters, publishing a Content View triggers the include filters first, then the exclude filters. In this situation, select which content to include, then which content to exclude from the inclusive subset. 
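For example, you can pair an include filter for a package group with an exclude filter for recent non-security errata on the same Content View. The following sketch is illustrative only: the Content View, organization, and filter names are placeholders, and the package_group options are modeled on the erratum filter commands shown in Section 8.11, "Creating a Content Filter for Yum Content"; check hammer content-view filter create --help on your Satellite for the exact options. hammer content-view filter create --name " Include Base " --type package_group --inclusion true --content-view " Example_Content_View " --organization " My_Organization " hammer content-view filter rule create --content-view " Example_Content_View " --content-view-filter " Include Base " --name "Base" --organization " My_Organization " hammer content-view filter create --name " Exclude Recent Errata " --type erratum --inclusion false --content-view " Example_Content_View " --organization " My_Organization " hammer content-view filter rule create --content-view " Example_Content_View " --content-view-filter " Exclude Recent Errata " --start-date " YYYY-MM-DD " --types enhancement,bugfix --date-type updated --organization " My_Organization " When you publish the Content View, the include filter is applied first and the exclude filter is then applied to that inclusive subset, as described above.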
Content Types There are also five types of content to filter: Table 8.2. Content Types Content Type Description RPM Filter packages based on their name and version number. The RPM option filters non-modular RPM packages and errata. Package Group Filter packages based on package groups. The list of package groups is based on the repositories added to the Content View. Erratum (by ID) Select which specific errata to add to the filter. The list of Errata is based on the repositories added to the Content View. Erratum (by Date and Type) Select a issued or updated date range and errata type (Bugfix, Enhancement, or Security) to add to the filter. Module Streams Select whether to include or exclude specific module streams. The Module Streams option filters modular RPMs and errata, but does not filter non-modular content that is associated with the selected module stream. Container Image Tag Select whether to include or exclude specific container image tags. 8.8. Resolving Package Dependencies Satellite can add dependencies of packages in a Content View to the dependent repository when publishing the Content View. To configure this, you can enable dependency solving . For example, dependency solving is useful when you incrementally add a single package to a Content View version. You might need to enable dependency solving to install that package. However, dependency solving is unnecessary in most situations. For example: When incrementally adding a security errata to a Content View, dependency solving can cause significant delays to Content View publication without major benefits. Packages from a newer erratum might have dependencies that are incompatible with packages from an older Content View version. Incrementally adding the erratum using dependency solving might include unwanted packages. As an alternative, consider updating the Content View. Note Dependency solving only considers packages within the repositories of the Content View. It does not consider packages installed on clients. For more information, see Limitations to Repository Dependency Resolution in Managing Content . Dependency solving can lead to the following problems: Significant delay in Content View publication Satellite examines every repository in a Content View for dependencies. Therefore, publish time increases with more repositories. To mitigate this problem, use multiple Content Views with fewer repositories and combine them into composite Content Views. Ignored Content View filters on dependent packages Satellite prioritizes resolving package dependencies over the rules in your filter. For example, if you create a filter for security purposes but enable dependency solving, Satellite can add packages that you might consider insecure. To mitigate this problem, carefully test filtering rules to determine the required dependencies. If dependency solving includes unwanted packages, manually identify the core basic dependencies that the extra packages and errata need. 8.9. Enabling Dependency Solving for a Content View Use this procedure to enable dependency solving for a Content View. Prerequisite Dependency solving is useful only in limited contexts. Before enabling it, ensure you read and understand Section 8.8, "Resolving Package Dependencies" Procedure In the Satellite web UI, navigate to Content > Content Views . From the list of content views, select the required Content View. On the Details tab, toggle Solve dependencies . 8.10. 
Content Filter Examples Use any of the following examples with the procedure that follows to build custom content filters. Note Filters can significantly increase the time to publish a Content View. For example, if a Content View publish task completes in a few minutes without filters, it can take 30 minutes after adding an exclude or include errata filter. Example 1 Create a repository with the base Red Hat Enterprise Linux packages. This filter requires a Red Hat Enterprise Linux repository added to the Content View. Filter: Inclusion Type: Include Content Type: Package Group Filter: Select only the Base package group Example 2 Create a repository that excludes all errata, except for security updates, after a certain date. This is useful if you want to perform system updates on a regular basis with the exception of critical security updates, which must be applied immediately. This filter requires a Red Hat Enterprise Linux repository added to the Content View. Filter: Inclusion Type: Exclude Content Type: Erratum (by Date and Type) Filter: Select only the Bugfix and Enhancement errata types, and clear the Security errata type. Set the Date Type to Updated On . Set the Start Date to the date you want to restrict errata. Leave the End Date blank to ensure any new non-security errata is filtered. Example 3 A combination of Example 1 and Example 2 where you only require the operating system packages and want to exclude recent bug fix and enhancement errata. This requires two filters attached to the same Content View. The Content View processes the Include filter first, then the Exclude filter. Filter 1: Inclusion Type: Include Content Type: Package Group Filter: Select only the Base package group Filter 2: Inclusion Type: Exclude Content Type: Erratum (by Date and Type) Filter: Select only the Bugfix and Enhancement errata types, and clear the Security errata type. Set the Date Type to Updated On . Set the Start Date to the date you want to restrict errata. Leave the End Date blank to ensure any new non-security errata is filtered. Example 4 Filter a specific module stream in a Content View. Filter 1: Inclusion Type: Include Content Type: Module Stream Filter: Select only the specific module stream that you want for the Content View, for example ant , and click Add Module Stream . Filter 2: Inclusion Type: Exclude Content Type: Package Filter: Add a rule to filter any non-modular packages that you want to exclude from the Content View. If you do not filter the packages, the Content View filter includes all non-modular packages associated with the module stream ant . Add a rule to exclude all * packages, or specify the package names that you want to exclude. For another example of how content filters work, see the following article: "How do content filters work in Satellite 6" . 8.11. Creating a Content Filter for Yum Content You can filter Content Views containing Yum content to include or exclude specific packages, package groups, errata, or module streams. Filters are based on a combination of the name , version , and architecture . To use the CLI instead of the Satellite web UI, see the CLI procedure . For examples of how to build a filter, see Section 8.10, "Content Filter Examples" . Procedure In the Satellite web UI, navigate to Content > Content Views and select a Content View. On the Filters tab, click Create filter . Enter a name. From the Content type list, select a content type. From the Inclusion Type list, select either Include filter or Exclude filter . 
Optional: In the Description field, enter a description for the filter. Click Create filter to create your content filter. Depending on what you enter for Content Type , add rules to create the filter that you want. Select if you want the filter to Apply to subset of repositories or Apply to all repositories . Click Publish New Version to publish the filtered repository. Optional: In the Description field, enter a description of the changes. Click Create filter to publish a new version of the Content View. You can promote this Content View across all environments. CLI procedure Add a filter to the Content View. Use the --inclusion false option to set the filter to an Exclude filter: Add a rule to the filter: Publish the Content View: Promote the view across all environments: 8.12. Deleting A Content View Version Use this procedure to delete a Content View version. Procedure In the Satellite web UI, navigate to Content > Content Views . Select the Content View. On the Versions tab, select the version you want to delete and click on the vertical ellipsis on the right side of the version line. Click Delete to open the deletion wizard that shows any affected environments. Optional: If there are any affected environments, reassign any hosts or activation keys before deletion. Click and review the details of the action. Click Delete .
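Taken together, the CLI procedures in this chapter form a repeatable publish-and-promote workflow. The following sketch strings together commands that appear earlier in this chapter; the repository IDs, Content View name, and lifecycle environment are placeholders for your own values: hammer repository list --organization " My_Organization " hammer content-view create --name " Example_Content_View " --organization " My_Organization " --repository-ids 1,2 hammer content-view publish --name " Example_Content_View " --organization " My_Organization " hammer content-view version promote --content-view " Example_Content_View " --version 1 --to-lifecycle-environment "Development" --organization " My_Organization " After promotion, register hosts to the Content View as described in Registering Hosts in Managing Hosts .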
[ "hammer repository list --organization \" My_Organization \"", "hammer content-view create --description \" My_Content_View \" --name \" My_Content_View \" --organization \" My_Organization \" --repository-ids 1,2", "hammer content-view publish --description \" My_Content_View \" --name \" My_Content_View \" --organization \" My_Organization \"", "# hammer content-view add-repository --name \" My_Content_View \" --organization \" My_Organization \" --repository-id repository_ID", "hammer content-view version promote --content-view \"Database\" --version 1 --to-lifecycle-environment \"Development\" --organization \" My_Organization \" hammer content-view version promote --content-view \"Database\" --version 1 --to-lifecycle-environment \"Testing\" --organization \" My_Organization \" hammer content-view version promote --content-view \"Database\" --version 1 --to-lifecycle-environment \"Production\" --organization \" My_Organization \"", "ORG=\" My_Organization \" CVV_ID= 3 for i in USD(hammer --no-headers --csv lifecycle-environment list --organization USDORG | awk -F, {'print USD1'} | sort -n) do hammer content-view version promote --organization USDORG --to-lifecycle-environment-id USDi --id USDCVV_ID done", "# hammer content-view version info --id 3", "hammer content-view version list --organization \" My_Organization \"", "hammer content-view create --composite --auto-publish yes --name \" Example_Composite_Content_View \" --description \"Example Composite Content View\" --organization \" My_Organization \"", "hammer content-view component add --component-content-view-id Content_View_ID --composite-content-view \" Example_Composite_Content_View \" --latest --organization \" My_Organization \"", "hammer content-view component add --component-content-view-id Content_View_ID --composite-content-view \" Example_Composite_Content_View \" --component-content-view-version-id Content_View_Version_ID --organization \" My_Organization \"", "hammer content-view publish --name \" Example_Composite_Content_View \" --description \"Initial version of Composite Content View\" --organization \" My_Organization \"", "hammer content-view version promote --content-view \" Example_Composite_Content_View \" --version 1 --to-lifecycle-environment \"Development\" --organization \" My_Organization \" hammer content-view version promote --content-view \" Example_Composite_Content_View \" --version 1 --to-lifecycle-environment \"Testing\" --organization \" My_Organization \" hammer content-view version promote --content-view \" Example_Composite_Content_View \" --version 1 --to-lifecycle-environment \"Production\" --organization \" My_Organization \"", "hammer content-view filter create --name \" Errata Filter \" --type erratum --content-view \" Example_Content_View \" --description \" My latest filter \" --inclusion false --organization \" My_Organization \"", "hammer content-view filter rule create --content-view \" Example_Content_View \" --content-view-filter \" Errata Filter \" --start-date \" YYYY-MM-DD \" --types enhancement,bugfix --date-type updated --organization \" My_Organization \"", "hammer content-view publish --name \" Example_Content_View \" --description \"Adding errata filter\" --organization \" My_Organization \"", "hammer content-view version promote --content-view \" Example_Content_View \" --version 1 --to-lifecycle-environment \"Development\" --organization \" My_Organization \" hammer content-view version promote --content-view \" Example_Content_View \" --version 1 
--to-lifecycle-environment \"Testing\" --organization \" My_Organization \" hammer content-view version promote --content-view \" Example_Content_View \" --version 1 --to-lifecycle-environment \"Production\" --organization \" My_Organization \"" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.11/html/managing_content/Managing_Content_Views_content-management
Appendix B. Using Red Hat Maven repositories
Appendix B. Using Red Hat Maven repositories This section describes how to use Red Hat-provided Maven repositories in your software. B.1. Using the online repository Red Hat maintains a central Maven repository for use with your Maven-based projects. For more information, see the repository welcome page . There are two ways to configure Maven to use the Red Hat repository: Add the repository to your Maven settings Add the repository to your POM file Adding the repository to your Maven settings This method of configuration applies to all Maven projects owned by your user, as long as your POM file does not override the repository configuration and the included profile is enabled. Procedure Locate the Maven settings.xml file. It is usually inside the .m2 directory in the user home directory. If the file does not exist, use a text editor to create it. On Linux or UNIX: /home/ <username> /.m2/settings.xml On Windows: C:\Users\<username>\.m2\settings.xml Add a new profile containing the Red Hat repository to the profiles element of the settings.xml file, as in the following example: Example: A Maven settings.xml file containing the Red Hat repository <settings> <profiles> <profile> <id>red-hat</id> <repositories> <repository> <id>red-hat-ga</id> <url>https://maven.repository.redhat.com/ga</url> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>red-hat-ga</id> <url>https://maven.repository.redhat.com/ga</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> </pluginRepositories> </profile> </profiles> <activeProfiles> <activeProfile>red-hat</activeProfile> </activeProfiles> </settings> For more information about Maven configuration, see the Maven settings reference . Adding the repository to your POM file To configure a repository directly in your project, add a new entry to the repositories element of your POM file, as in the following example: Example: A Maven pom.xml file containing the Red Hat repository <project> <modelVersion>4.0.0</modelVersion> <groupId>com.example</groupId> <artifactId>example-app</artifactId> <version>1.0.0</version> <repositories> <repository> <id>red-hat-ga</id> <url>https://maven.repository.redhat.com/ga</url> </repository> </repositories> </project> For more information about POM file configuration, see the Maven POM reference . B.2. Using a local repository Red Hat provides file-based Maven repositories for some of its components. These are delivered as downloadable archives that you can extract to your local filesystem. To configure Maven to use a locally extracted repository, apply the following XML in your Maven settings or POM file: <repository> <id>red-hat-local</id> <url>USD{repository-url}</url> </repository> USD{repository-url} must be a file URL containing the local filesystem path of the extracted repository. Table B.1. Example URLs for local Maven repositories Operating system Filesystem path URL Linux or UNIX /home/alice/maven-repository file:/home/alice/maven-repository Windows C:\repos\red-hat file:C:\repos\red-hat
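After you configure either the online or a local repository, you can confirm that Maven picks it up before you build. One quick check, assuming Maven 3 is on your path and you used the red-hat profile and red-hat-ga repository id from the earlier examples: mvn help:effective-settings | grep -A 2 red-hat-ga If the repository URL appears in the output, dependency resolution against the Red Hat repository is enabled for your user.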
[ "/home/ <username> /.m2/settings.xml", "C:\\Users\\<username>\\.m2\\settings.xml", "<settings> <profiles> <profile> <id>red-hat</id> <repositories> <repository> <id>red-hat-ga</id> <url>https://maven.repository.redhat.com/ga</url> </repository> </repositories> <pluginRepositories> <pluginRepository> <id>red-hat-ga</id> <url>https://maven.repository.redhat.com/ga</url> <releases> <enabled>true</enabled> </releases> <snapshots> <enabled>false</enabled> </snapshots> </pluginRepository> </pluginRepositories> </profile> </profiles> <activeProfiles> <activeProfile>red-hat</activeProfile> </activeProfiles> </settings>", "<project> <modelVersion>4.0.0</modelVersion> <groupId>com.example</groupId> <artifactId>example-app</artifactId> <version>1.0.0</version> <repositories> <repository> <id>red-hat-ga</id> <url>https://maven.repository.redhat.com/ga</url> </repository> </repositories> </project>", "<repository> <id>red-hat-local</id> <url>USD{repository-url}</url> </repository>" ]
https://docs.redhat.com/en/documentation/red_hat_amq_core_protocol_jms/7.11/html/using_amq_core_protocol_jms/using_red_hat_maven_repositories
Chapter 2. Upgrading your broker
Chapter 2. Upgrading your broker 2.1. About upgrades Red Hat releases new versions of AMQ Broker to the Customer Portal . Update your brokers to the newest version to ensure that you have the latest enhancements and fixes. In general, Red Hat releases a new version of AMQ Broker in one of three ways: Major Release A major upgrade or migration is required when an application is transitioned from one major release to the next, for example, from AMQ Broker 6 to AMQ Broker 7. This type of upgrade is not addressed in this guide. Minor Release AMQ Broker periodically provides minor releases, which are updates that include new features, as well as bug and security fixes. If you plan to upgrade from one AMQ Broker minor release to another, for example, from AMQ Broker 7.0 to AMQ Broker 7.1, code changes should not be required for applications that do not use private, unsupported, or tech preview components. Micro Release AMQ Broker also periodically provides micro releases that contain minor enhancements and fixes. Micro releases increment the minor release version by the last digit, for example from 7.0.1 to 7.0.2. A micro release should not require code changes; however, some releases may require configuration changes. 2.2. Upgrading older 7.x versions 2.2.1. Upgrading a broker instance from 7.0.x to 7.0.y The procedure for upgrading AMQ Broker from one version of 7.0 to another is similar to the one for installation: you download an archive from the Customer Portal and then extract it. The following subsections describe how to upgrade a 7.0.x broker for different operating systems. Upgrading from 7.0.x to 7.0.y on Linux Upgrading from 7.0.x to 7.0.y on Windows 2.2.1.1. Upgrading from 7.0.x to 7.0.y on Linux The name of the archive that you download could differ from what is used in the following examples. Prerequisites Before upgrading AMQ Broker, review the release notes for the target release. The release notes describe important enhancements, known issues, and changes to behavior in the target release. For more information, see the AMQ Broker 7.0 Release Notes . Procedure Download the desired archive from the Red Hat Customer Portal by following the instructions provided in Downloading the AMQ Broker archive . Change the owner of the archive to the same user that owns the AMQ Broker installation to be upgraded. Move the archive to the directory created during the original installation of AMQ Broker. In the following example, the directory /opt/redhat is used. As the directory owner, extract the contents of the compressed archive. The archive is kept in a compressed format. In the following example, the user amq-broker extracts the archive by using the unzip command. Stop the broker if it is running. Back up the instance directory of the broker by copying it to the home directory of the current user. (Optional) Note the current version of the broker. After the broker stops, a line similar to the one below is displayed at the end of its log file, which can be found at <broker_instance_dir> /log/artemis.log . Edit the <broker_instance_dir> /etc/artemis.profile configuration file to set the ARTEMIS_HOME property to the new directory created when the archive was extracted. Start the upgraded broker. (Optional) Confirm that the broker is running and that the version has changed. After starting the broker, open the log file <broker_instance_dir> /log/artemis.log and find two lines similar to the ones below. Note the new version number that appears in the log after the broker is live. 2.2.1.2. 
Upgrading from 7.0.x to 7.0.y on Windows Prerequisites Before upgrading AMQ Broker, review the release notes for the target release. The release notes describe important enhancements, known issues, and changes to behavior in the target release. For more information, see the AMQ Broker 7.0 Release Notes . Procedure Download the desired archive from the Red Hat Customer Portal by following the instructions provided in Downloading the AMQ Broker archive . Use a file manager to move the archive to the folder you created during the last installation of AMQ Broker. Extract the contents of the archive. Right-click the .zip file and select Extract All . Stop the broker if it is running by entering the following command. Back up the broker by using a file manager. Right-click the <broker_instance_dir> folder and select Copy . Right-click in the same window and select Paste . (Optional) Note the current version of the broker. After the broker stops, a line similar to the one below is displayed at the end of its log file, which can be found at <broker_instance_dir> \log\artemis.log . Edit the <broker_instance_dir> \etc\artemis.profile configuration file to set the ARTEMIS_HOME property to the new directory created when the archive was extracted. Start the upgraded broker. (Optional) Confirm that the broker is running and that the version has changed. After starting the broker, open the log file <broker_instance_dir> \log\artemis.log and find two lines similar to the ones below. Note the new version number that appears in the log after the broker is live. 2.2.2. Upgrading a broker instance from 7.0.x to 7.1.0 AMQ Broker 7.1.0 includes configuration files and settings that were not included with previous versions. Upgrading a broker instance from 7.0.x to 7.1.0 requires adding these new files and settings to your existing 7.0.x broker instances. The following subsections describe how to upgrade a 7.0.x broker instance to 7.1.0 for different operating systems. Important Starting with AMQ Broker 7.1.0, you can access AMQ Management Console only from the local host by default. To learn about configuring remote access to the console, see Configuring local and remote access to AMQ Management Console . Upgrading from 7.0.x to 7.1.0 on Linux Upgrading from 7.0.x to 7.1.0 on Windows 2.2.2.1. Upgrading from 7.0.x to 7.1.0 on Linux Before you can upgrade a 7.0.x broker, you need to install Red Hat AMQ Broker 7.1.0 and create a temporary broker instance. This will generate the 7.1.0 configuration files required to upgrade a 7.0.x broker. Prerequisites Before upgrading AMQ Broker, review the release notes for the target release. The release notes describe important enhancements, known issues, and changes to behavior in the target release. For more information, see the AMQ Broker 7.1 Release Notes . Before upgrading your 7.0.x brokers, you must first install version 7.1. For steps on installing 7.1 on Linux, see Installing AMQ Broker . Procedure If it is running, stop the 7.0.x broker you want to upgrade: Back up the instance directory of the broker by copying it to the home directory of the current user. Open the file artemis.profile in the <broker_instance_dir> /etc/ directory of the 7.0.x broker. 
Update the ARTEMIS_HOME property so that its value refers to the installation directory for AMQ Broker 7.1.0: On the line below the one you updated, add the property ARTEMIS_INSTANCE_URI and assign it a value that refers to the 7.0.x broker instance directory: Update the JAVA_ARGS property by adding the jolokia.policyLocation parameter and assigning it the following value: Create a 7.1.0 broker instance. The creation procedure generates the configuration files required to upgrade from 7.0.x to 7.1.0. In the following example, note that the instance is created in the directory upgrade_tmp : Copy configuration files from the etc directory of the temporary 7.1.0 instance into the <broker_instance_dir> /etc/ directory of the 7.0.x broker. Copy the management.xml file: Copy the jolokia-access.xml file: Open up the bootstrap.xml file in the <broker_instance_dir> /etc/ directory of the 7.0.x broker. Comment out or delete the following two lines: Add the following to replace the two lines removed in the previous step: Start the broker that you upgraded: Additional Resources For more information about creating an instance of the broker, see Creating a broker instance . 2.2.2.2. Upgrading from 7.0.x to 7.1.0 on Windows Before you can upgrade a 7.0.x broker, you need to install Red Hat AMQ Broker 7.1.0 and create a temporary broker instance. This will generate the 7.1.0 configuration files required to upgrade a 7.0.x broker. Prerequisites Before upgrading AMQ Broker, review the release notes for the target release. The release notes describe important enhancements, known issues, and changes to behavior in the target release. For more information, see the AMQ Broker 7.1 Release Notes . Before upgrading your 7.0.x brokers, you must first install version 7.1. For steps on installing 7.1 on Windows, see Installing AMQ Broker . Procedure If it is running, stop the 7.0.x broker you want to upgrade: Back up the instance directory of the broker by using a file manager. Right-click the <broker_instance_dir> folder and select Copy . Right-click in the same window and select Paste . Open the file artemis.profile in the <broker_instance_dir> /etc/ directory of the 7.0.x broker. Update the ARTEMIS_HOME property so that its value refers to the installation directory for AMQ Broker 7.1.0: On the line below the one you updated, add the property ARTEMIS_INSTANCE_URI and assign it a value that refers to the 7.0.x broker instance directory: Update the JAVA_ARGS property by adding the jolokia.policyLocation parameter and assigning it the following value: Create a 7.1.0 broker instance. The creation procedure generates the configuration files required to upgrade from 7.0.x to 7.1.0. In the following example, note that the instance is created in the directory upgrade_tmp : Copy configuration files from the etc directory of the temporary 7.1.0 instance into the <broker_instance_dir> /etc/ directory of the 7.0.x broker. Copy the management.xml file: Copy the jolokia-access.xml file: Open up the bootstrap.xml file in the <broker_instance_dir> /etc/ directory of the 7.0.x broker. Comment out or delete the following two lines: Add the following to replace the two lines removed in the previous step: Start the broker that you upgraded: Additional Resources For more information about creating an instance of the broker, see Creating a broker instance . 2.2.3. Upgrading a broker instance from 7.1.x to 7.2.0 AMQ Broker 7.2.0 includes configuration files and settings that were not included with 7.0.x versions. 
If you are running 7.0.x instances, you must first upgrade those broker instances from 7.0.x to 7.1.0 before upgrading to 7.2.0. The following subsections describe how to upgrade a 7.1.x broker instance to 7.2.0 for different operating systems. Important Starting with AMQ Broker 7.1.0, you can access AMQ Management Console only from the local host by default. To learn about configuring remote access to the console, see Configuring local and remote access to AMQ Management Console . Upgrading from 7.1.x to 7.2.0 on Linux Upgrading from 7.1.x to 7.2.0 on Windows 2.2.3.1. Upgrading from 7.1.x to 7.2.0 on Linux Note The name of the archive that you download could differ from what is used in the following examples. Procedure Download the desired archive from the Red Hat Customer Portal by following the instructions provided in Downloading the AMQ Broker archive . Change the owner of the archive to the same user that owns the AMQ Broker installation to be upgraded. Move the archive to the directory created during the original installation of AMQ Broker. In the following example, the directory /opt/redhat is used. As the directory owner, extract the contents of the compressed archive. In the following example, the user amq-broker extracts the archive by using the unzip command. Stop the broker if it is running. Back up the instance directory of the broker by copying it to the home directory of the current user. (Optional) Note the current version of the broker. After the broker stops, a line similar to the one below is displayed at the end of its log file, which can be found at <broker_instance_dir> /log/artemis.log . Edit the <broker_instance_dir> /etc/artemis.profile configuration file to set the ARTEMIS_HOME property to the new directory created when the archive was extracted. Start the upgraded broker. (Optional) Confirm that the broker is running and that the version has changed. After starting the broker, open the log file <broker_instance_dir> /log/artemis.log and find two lines similar to the ones below. Note the new version number that appears in the log after the broker is live. Additional Resources For more information about creating an instance of the broker, see Creating a broker instance . You can now store a broker instance's configuration files and data in any custom directory, including locations outside of the broker instance's directory. In the <broker_instance_dir> /etc/artemis.profile file, update the ARTEMIS_INSTANCE_ETC_URI property by specifying the location of the custom directory after creating the broker instance. Previously, these configuration files and data could only be stored in the etc/ and data/ directories within the broker instance's directory. 2.2.3.2. Upgrading from 7.1.x to 7.2.0 on Windows Procedure Download the desired archive from the Red Hat Customer Portal by following the instructions provided in Downloading the AMQ Broker archive . Use a file manager to move the archive to the folder you created during the last installation of AMQ Broker. Extract the contents of the archive. Right-click the .zip file and select Extract All . Stop the broker if it is running by entering the following command. Back up the broker by using a file manager. Right-click the <broker_instance_dir> folder and select Copy . Right-click in the same window and select Paste . (Optional) Note the current version of the broker. 
After the broker stops, a line similar to the one below is displayed at the end of its log file, which can be found at <broker_instance_dir> \log\artemis.log . Edit the <broker_instance_dir> \etc\artemis.profile.cmd and <broker_instance_dir> \bin\artemis-service.xml configuration files to set the ARTEMIS_HOME property to the new directory created when the archive was extracted. Start the upgraded broker. (Optional) Confirm that the broker is running and that the version has changed. After starting the broker, open the log file <broker_instance_dir> \log\artemis.log and find two lines similar to the ones below. Note the new version number that appears in the log after the broker is live. Additional Resources For more information about creating an instance of the broker, see Creating a broker instance . You can now store a broker instance's configuration files and data in any custom directory, including locations outside of the broker instance's directory. In the <broker_instance_dir> \etc\artemis.profile file, update the ARTEMIS_INSTANCE_ETC_URI property by specifying the location of the custom directory after creating the broker instance. Previously, these configuration files and data could only be stored in the \etc and \data directories within the broker instance's directory. 2.2.4. Upgrading a broker instance from 7.2.x to 7.3.0 The following subsections describe how to upgrade a 7.2.x broker instance to 7.3.0 for different operating systems. 2.2.4.1. Resolve exception due to deprecated dispatch console Starting in version 7.3.0, AMQ Broker no longer ships with the Hawtio dispatch console plugin dispatch-hawtio-console.war . Previously, the dispatch console was used to manage AMQ Interconnect. However, AMQ Interconnect now uses its own, standalone web console. This change affects the upgrade procedures in the sections that follow. If you take no further action before upgrading your broker instance to 7.3.0, the upgrade process produces an exception that looks like the following: You can safely ignore the preceding exception without affecting the success of your upgrade. However, if you would prefer not to see this exception during your upgrade, you must first remove a reference to the Hawtio dispatch console plugin in the bootstrap.xml file of your existing broker instance. The bootstrap.xml file is in the <broker_instance_dir> /etc/ directory of your broker instance. The following example shows some of the contents of the bootstrap.xml file for an AMQ Broker 7.2.4 instance: To avoid an exception when upgrading AMQ Broker to version 7.3.0, delete the line <app url="dispatch-hawtio-console" war="dispatch-hawtio-console.war"/> , as shown in the preceding example. Then, save the modified bootstrap file and start the upgrade process, as described in the sections that follow. Important Starting with AMQ Broker 7.1.0, you can access AMQ Management Console only from the local host by default. To learn about configuring remote access to the console, see Configuring local and remote access to AMQ Management Console . Upgrading from 7.2.x to 7.3.0 on Linux Upgrading from 7.2.x to 7.3.0 on Windows 2.2.4.2. Upgrading from 7.2.x to 7.3.0 on Linux Note The name of the archive that you download could differ from what is used in the following examples. Procedure Download the desired archive from the Red Hat Customer Portal by following the instructions provided in Downloading the AMQ Broker archive . Change the owner of the archive to the same user that owns the AMQ Broker installation to be upgraded.
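For example, assuming the archive name and the amq-broker user shown elsewhere in this chapter:

sudo chown amq-broker:amq-broker amq-7.x.x.redhat-1.zip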
Move the archive to the directory created during the original installation of AMQ Broker. In the following example, the directory /opt/redhat is used. As the directory owner, extract the contents of the compressed archive. In the following example, the user amq-broker extracts the archive by using the unzip command. Stop the broker if it is running. Back up the instance directory of the broker by copying it to the home directory of the current user. (Optional) Note the current version of the broker. After the broker stops, a line similar to the one below is displayed at the end of its log file, which can be found at <broker_instance_dir> /log/artemis.log . Edit the <broker_instance_dir> /etc/artemis.profile configuration file to set the ARTEMIS_HOME property to the new directory created when the archive was extracted. Start the upgraded broker. (Optional) Confirm that the broker is running and that the version has changed. After starting the broker, open the log file <broker_instance_dir> /log/artemis.log and find two lines similar to the ones below. Note the new version number that appears in the log after the broker is live. Additional Resources For more information about creating an instance of the broker, see Creating a broker instance . You can now store a broker instance's configuration files and data in any custom directory, including locations outside of the broker instance's directory. In the <broker_instance_dir> /etc/artemis.profile file, update the ARTEMIS_INSTANCE_ETC_URI property by specifying the location of the custom directory after creating the broker instance. Previously, these configuration files and data could only be stored in the etc/ and data/ directories within the broker instance's directory. 2.2.4.3. Upgrading from 7.2.x to 7.3.0 on Windows Procedure Download the desired archive from the Red Hat Customer Portal by following the instructions provided in Downloading the AMQ Broker archive . Use a file manager to move the archive to the folder you created during the last installation of AMQ Broker. Extract the contents of the archive. Right-click the .zip file and select Extract All . Stop the broker if it is running by entering the following command. Back up the broker by using a file manager. Right-click the <broker_instance_dir> folder and select Copy . Right-click in the same window and select Paste . (Optional) Note the current version of the broker. After the broker stops, a line similar to the one below is displayed at the end of its log file, which can be found at <broker_instance_dir> \log\artemis.log . Edit the <broker_instance_dir> \etc\artemis.profile.cmd and <broker_instance_dir> \bin\artemis-service.xml configuration files to set the ARTEMIS_HOME property to the new directory created when the archive was extracted. Edit the <broker_instance_dir> \etc\artemis.profile.cmd configuration file to set the JAVA_ARGS environment variable to reference the correct log manager version. Edit the <broker_instance_dir> \bin\artemis-service.xml configuration file to set the bootstrap class path start argument to reference the correct log manager version. Start the upgraded broker. (Optional) Confirm that the broker is running and that the version has changed. After starting the broker, open the log file <broker_instance_dir> \log\artemis.log and find two lines similar to the ones below. Note the new version number that appears in the log after the broker is live. 
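As a sketch, the two confirmation entries in artemis.log look similar to the following; the version string and node ID are placeholders and will reflect your new installation:

INFO  [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live
INFO  [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version <new_version> [0.0.0.0, nodeID=<node_id>]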
Additional Resources For more information about creating an instance of the broker, see Creating a broker instance . You can now store a broker instance's configuration files and data in any custom directory, including locations outside of the broker instance's directory. In the <broker_instance_dir> \etc\artemis.profile file, update the ARTEMIS_INSTANCE_ETC_URI property by specifying the location of the custom directory after creating the broker instance. Previously, these configuration files and data could only be stored in the \etc and \data directories within the broker instance's directory. 2.2.5. Upgrading a broker instance from 7.3.0 to 7.4.0 The following subsections describe how to upgrade a 7.3.0 broker instance to 7.4.0 for different operating systems. Important Starting with AMQ Broker 7.1.0, you can access AMQ Management Console only from the local host by default. To learn about configuring remote access to the console, see Configuring local and remote access to AMQ Management Console . Upgrading from 7.3.0 to 7.4.0 on Linux Upgrading from 7.3.0 to 7.4.0 on Windows 2.2.5.1. Upgrading from 7.3.0 to 7.4.0 on Linux Note The name of the archive that you download could differ from what is used in the following examples. Procedure Download the desired archive from the Red Hat Customer Portal. Follow the instructions provided in Downloading the AMQ Broker archive . Change the owner of the archive to the same user that owns the AMQ Broker installation to be upgraded. The following example shows a user called amq-broker . Move the archive to the directory created during the original installation of AMQ Broker. The following example uses /opt/redhat . As the directory owner, extract the contents of the compressed archive. In the following example, the user amq-broker extracts the archive using the unzip command. If the broker is running, stop it. Back up the instance directory of the broker by copying it to the home directory of the current user. (Optional) Note the current version of the broker. After the broker stops, you see a line similar to the one below at the end of the <broker_instance_dir> /log/artemis.log file. Edit the <broker_instance_dir> /etc/artemis.profile configuration file. Set the ARTEMIS_HOME property to the new directory created when the archive was extracted. Edit the JAVA_ARGS property. Add the bootstrap class path argument, which references a dependent file for the log manager. Edit the <broker_instance_dir> /etc/bootstrap.xml configuration file. In the <web> configuration element, add a reference to the metrics plugin file for AMQ Broker. Start the upgraded broker. (Optional) Confirm that the broker is running and that the version has changed. After starting the broker, open the <broker_instance_dir> /log/artemis.log file. Find two lines similar to the ones below. Note the new version number that appears in the log when the broker is live. Additional Resources For more information about creating an instance of the broker, see Creating a broker instance . You can now store a broker instance's configuration files and data in any custom directory, including locations outside of the broker instance's directory. In the <broker_instance_dir> /etc/artemis.profile file, update the ARTEMIS_INSTANCE_ETC_URI property by specifying the location of the custom directory after creating the broker instance. Previously, these configuration files and data could only be stored in the etc/ and data/ directories within the broker instance's directory. 2.2.5.2. 
Upgrading from 7.3.0 to 7.4.0 on Windows Procedure Download the desired archive from the Red Hat Customer Portal. Follow the instructions provided in Downloading the AMQ Broker archive . Use a file manager to move the archive to the folder you created during the last installation of AMQ Broker. Extract the contents of the archive. Right-click the .zip file and select Extract All . If the broker is running, stop it. Back up the broker using a file manager. Right-click the <broker_instance_dir> folder and select Copy . Right-click in the same window and select Paste . (Optional) Note the current version of the broker. After the broker stops, you see a line similar to the one below at the end of the <broker_instance_dir> \log\artemis.log file. Edit the <broker_instance_dir> \etc\artemis.profile.cmd and <broker_instance_dir> \bin\artemis-service.xml configuration files. Set the ARTEMIS_HOME property to the new directory created when the archive was extracted. Edit the <broker_instance_dir> \etc\artemis.profile.cmd configuration file. Set the JAVA_ARGS environment variable to reference the correct log manager version and dependent file. Edit the <broker_instance_dir> \bin\artemis-service.xml configuration file. Set the bootstrap class path start argument to reference the correct log manager version and dependent file. Edit the <broker_instance_dir> \etc\bootstrap.xml configuration file. In the <web> configuration element, add a reference to the metrics plugin file for AMQ Broker. Start the upgraded broker. (Optional) Confirm that the broker is running and that the version has changed. After starting the broker, open the <broker_instance_dir> \log\artemis.log file. Find two lines similar to the ones below. Note the new version number that appears in the log when the broker is live. Additional Resources For more information about creating an instance of the broker, see Creating a broker instance . You can now store a broker instance's configuration files and data in any custom directory, including locations outside of the broker instance's directory. In the <broker_instance_dir> \etc\artemis.profile file, update the ARTEMIS_INSTANCE_ETC_URI property by specifying the location of the custom directory after creating the broker instance. Previously, these configuration files and data could only be stored in the \etc and \data directories within the broker instance's directory. 2.3. Upgrading a broker instance from 7.4.0 to 7.4.x Important AMQ Broker 7.4 has been designated as a Long Term Support (LTS) release version. Bug fixes and security advisories will be made available for AMQ Broker 7.4 in a series of micro releases (7.4.1, 7.4.2, and so on) for a period of at least 12 months. This means that you will be able to get recent bug fixes and security advisories for AMQ Broker without having to upgrade to a new minor release. For more information, see Long Term Support for AMQ Broker . Important Starting with AMQ Broker 7.1.0, you can access AMQ Management Console only from the local host by default. To learn about configuring remote access to the console, see Configuring local and remote access to AMQ Management Console . The following subsections describe how to upgrade a 7.4.0 broker instance to 7.4.x for different operating systems. Upgrading from 7.4.0 to 7.4.x on Linux Upgrading from 7.4.0 to 7.4.x on Windows 2.3.1. Upgrading from 7.4.0 to 7.4.x on Linux Note The name of the archive that you download could differ from what is used in the following examples. 
Procedure Download the desired archive from the Red Hat Customer Portal. Follow the instructions provided in Downloading the AMQ Broker archive . Change the owner of the archive to the same user that owns the AMQ Broker installation to be upgraded. The following example shows a user called amq-broker . Move the archive to the directory created during the original installation of AMQ Broker. The following example uses /opt/redhat . As the directory owner, extract the contents of the compressed archive. In the following example, the user amq-broker extracts the archive using the unzip command. If the broker is running, stop it. Back up the instance directory of the broker by copying it to the home directory of the current user. (Optional) Note the current version of the broker. After the broker stops, you see a line similar to the one below at the end of the <broker_instance_dir> /log/artemis.log file. Edit the <broker_instance_dir> /etc/artemis.profile configuration file. Set the ARTEMIS_HOME property to the new directory created when the archive was extracted. Start the upgraded broker. (Optional) Confirm that the broker is running and that the version has changed. After starting the broker, open the <broker_instance_dir> /log/artemis.log file. Find two lines similar to the ones below. Note the new version number that appears in the log when the broker is live. Additional Resources For more information about creating an instance of the broker, see Creating a broker instance . You can now store a broker instance's configuration files and data in any custom directory, including locations outside of the broker instance's directory. In the <broker_instance_dir> /etc/artemis.profile file, update the ARTEMIS_INSTANCE_ETC_URI property by specifying the location of the custom directory after creating the broker instance. Previously, these configuration files and data could only be stored in the etc/ and data/ directories within the broker instance's directory. 2.3.2. Upgrading from 7.4.0 to 7.4.x on Windows Procedure Download the desired archive from the Red Hat Customer Portal. Follow the instructions provided in Downloading the AMQ Broker archive . Use a file manager to move the archive to the folder you created during the last installation of AMQ Broker. Extract the contents of the archive. Right-click the .zip file and select Extract All . If the broker is running, stop it. Back up the broker using a file manager. Right-click the <broker_instance_dir> folder and select Copy . Right-click in the same window and select Paste . (Optional) Note the current version of the broker. After the broker stops, you see a line similar to the one below at the end of the <broker_instance_dir> \log\artemis.log file. Edit the <broker_instance_dir> \etc\artemis.profile.cmd and <broker_instance_dir> \bin\artemis-service.xml configuration files. Set the ARTEMIS_HOME property to the new directory created when the archive was extracted. Start the upgraded broker. (Optional) Confirm that the broker is running and that the version has changed. After starting the broker, open the <broker_instance_dir> \log\artemis.log file. Find two lines similar to the ones below. Note the new version number that appears in the log when the broker is live. Additional Resources For more information about creating an instance of the broker, see Creating a broker instance . You can now store a broker instance's configuration files and data in any custom directory, including locations outside of the broker instance's directory. 
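For example (the version string and node ID shown here are placeholders; yours reflect the 7.4.0 broker you are stopping):

INFO  [org.apache.activemq.artemis.core.server] AMQ221002: Apache ActiveMQ Artemis Message Broker version <current_version> [<node_id>] stopped, uptime 28 minutes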
In the <broker_instance_dir> \etc\artemis.profile file, update the ARTEMIS_INSTANCE_ETC_URI property by specifying the location of the custom directory after creating the broker instance. Previously, these configuration files and data could only be stored in the \etc and \data directories within the broker instance's directory. 2.4. Upgrading a broker instance from 7.4.x to 7.5.0 The following subsections describe how to upgrade a 7.4.x broker instance to 7.5.0 for different operating systems. Important Starting with AMQ Broker 7.1.0, you can access AMQ Management Console only from the local host by default. To learn about configuring remote access to the console, see Configuring local and remote access to AMQ Management Console . Upgrading from 7.4.x to 7.5.0 on Linux Upgrading from 7.4.x to 7.5.0 on Windows 2.4.1. Upgrading from 7.4.x to 7.5.0 on Linux Note The name of the archive that you download could differ from what is used in the following examples. Procedure Download the desired archive from the Red Hat Customer Portal. Follow the instructions provided in Downloading the AMQ Broker archive . Change the owner of the archive to the same user that owns the AMQ Broker installation to be upgraded. The following example shows a user called amq-broker . Move the archive to the directory created during the original installation of AMQ Broker. The following example uses /opt/redhat . As the directory owner, extract the contents of the compressed archive. In the following example, the user amq-broker extracts the archive using the unzip command. If the broker is running, stop it. Back up the instance directory of the broker by copying it to the home directory of the current user. (Optional) Note the current version of the broker. After the broker stops, you see a line similar to the one below at the end of the <broker_instance_dir> /log/artemis.log file. Edit the <broker_instance_dir> /etc/artemis.profile configuration file. Set the ARTEMIS_HOME property to the new directory created when the archive was extracted. Edit the JAVA_ARGS property. Add the bootstrap class path argument, which references a dependent file for the log manager. Start the upgraded broker. (Optional) Confirm that the broker is running and that the version has changed. After starting the broker, open the <broker_instance_dir> /log/artemis.log file. Find two lines similar to the ones below. Note the new version number that appears in the log when the broker is live. Additional Resources For more information about creating an instance of the broker, see Creating a broker instance . You can now store a broker instance's configuration files and data in any custom directory, including locations outside of the broker instance's directory. In the <broker_instance_dir> /etc/artemis.profile file, update the ARTEMIS_INSTANCE_ETC_URI property by specifying the location of the custom directory after creating the broker instance. Previously, these configuration files and data could only be stored in the etc/ and data/ directories within the broker instance's directory. 2.4.2. Upgrading from 7.4.x to 7.5.0 on Windows Procedure Download the desired archive from the Red Hat Customer Portal. Follow the instructions provided in Downloading the AMQ Broker archive . Use a file manager to move the archive to the folder you created during the last installation of AMQ Broker. Extract the contents of the archive. Right-click the .zip file and select Extract All . If the broker is running, stop it. Back up the broker using a file manager. 
Right-click the <broker_instance_dir> folder and select Copy . Right-click in the same window and select Paste . (Optional) Note the current version of the broker. After the broker stops, you see a line similar to the one below at the end of the <broker_instance_dir> \log\artemis.log file. Edit the <broker_instance_dir> \etc\artemis.profile.cmd and <broker_instance_dir> \bin\artemis-service.xml configuration files. Set the ARTEMIS_HOME property to the new directory created when the archive was extracted. Edit the <broker_instance_dir> \etc\artemis.profile.cmd configuration file. Set the JAVA_ARGS environment variable to reference the correct log manager version and dependent file. Edit the <broker_instance_dir> \bin\artemis-service.xml configuration file. Set the bootstrap class path start argument to reference the correct log manager version and dependent file. Start the upgraded broker. (Optional) Confirm that the broker is running and that the version has changed. After starting the broker, open the <broker_instance_dir> \log\artemis.log file. Find two lines similar to the ones below. Note the new version number that appears in the log when the broker is live. Additional Resources For more information about creating an instance of the broker, see Creating a broker instance . You can now store a broker instance's configuration files and data in any custom directory, including locations outside of the broker instance's directory. In the <broker_instance_dir> \etc\artemis.profile file, update the ARTEMIS_INSTANCE_ETC_URI property by specifying the location of the custom directory after creating the broker instance. Previously, these configuration files and data could only be stored in the \etc and \data directories within the broker instance's directory. 2.5. Upgrading a broker instance from 7.5.0 to 7.6.0 The following subsections describe how to upgrade a 7.5.0 broker instance to 7.6.0 for different operating systems. Important Starting with AMQ Broker 7.1.0, you can access AMQ Management Console only from the local host by default. To learn about configuring remote access to the console, see Configuring local and remote access to AMQ Management Console . Upgrading from 7.5.0 to 7.6.0 on Linux Upgrading from 7.5.0 to 7.6.0 on Windows 2.5.1. Upgrading from 7.5.0 to 7.6.0 on Linux Note The name of the archive that you download could differ from what is used in the following examples. Procedure Download the desired archive from the Red Hat Customer Portal. Follow the instructions provided in Downloading the AMQ Broker archive . Change the owner of the archive to the same user that owns the AMQ Broker installation to be upgraded. The following example shows a user called amq-broker . Move the archive to the directory created during the original installation of AMQ Broker. The following example uses /opt/redhat . As the directory owner, extract the contents of the compressed archive. In the following example, the user amq-broker extracts the archive using the unzip command. If the broker is running, stop it. Back up the instance directory of the broker by copying it to the home directory of the current user. (Optional) Note the current version of the broker. After the broker stops, you see a line similar to the one below at the end of the <broker_instance_dir> /log/artemis.log file. Edit the <broker_instance_dir> /etc/artemis.profile configuration file. Set the ARTEMIS_HOME property to the new directory created when the archive was extracted. Edit the JAVA_ARGS property. 
Add the bootstrap class path argument, which references a dependent file for the log manager. Start the upgraded broker. (Optional) Confirm that the broker is running and that the version has changed. After starting the broker, open the <broker_instance_dir> /log/artemis.log file. Find two lines similar to the ones below. Note the new version number that appears in the log when the broker is live. Additional Resources For more information about creating an instance of the broker, see Creating a broker instance . You can now store a broker instance's configuration files and data in any custom directory, including locations outside of the broker instance's directory. In the <broker_instance_dir> /etc/artemis.profile file, update the ARTEMIS_INSTANCE_ETC_URI property by specifying the location of the custom directory after creating the broker instance. Previously, these configuration files and data could only be stored in the etc/ and data/ directories within the broker instance's directory. 2.5.2. Upgrading from 7.5.0 to 7.6.0 on Windows Procedure Download the desired archive from the Red Hat Customer Portal. Follow the instructions provided in Downloading the AMQ Broker archive . Use a file manager to move the archive to the folder you created during the last installation of AMQ Broker. Extract the contents of the archive. Right-click the .zip file and select Extract All . If the broker is running, stop it. Back up the broker using a file manager. Right-click the <broker_instance_dir> folder and select Copy . Right-click in the same window and select Paste . (Optional) Note the current version of the broker. After the broker stops, you see a line similar to the one below at the end of the <broker_instance_dir> \log\artemis.log file. Edit the <broker_instance_dir> \etc\artemis.profile.cmd and <broker_instance_dir> \bin\artemis-service.xml configuration files. Set the ARTEMIS_HOME property to the new directory created when the archive was extracted. Edit the <broker_instance_dir> \etc\artemis.profile.cmd configuration file. Set the JAVA_ARGS environment variable to reference the correct log manager version and dependent file. Edit the <broker_instance_dir> \bin\artemis-service.xml configuration file. Set the bootstrap class path start argument to reference the correct log manager version and dependent file. Start the upgraded broker. (Optional) Confirm that the broker is running and that the version has changed. After starting the broker, open the <broker_instance_dir> \log\artemis.log file. Find two lines similar to the ones below. Note the new version number that appears in the log when the broker is live. Additional Resources For more information about creating an instance of the broker, see Creating a broker instance . You can now store a broker instance's configuration files and data in any custom directory, including locations outside of the broker instance's directory. In the <broker_instance_dir> \etc\artemis.profile file, update the ARTEMIS_INSTANCE_ETC_URI property by specifying the location of the custom directory after creating the broker instance. Previously, these configuration files and data could only be stored in the \etc and \data directories within the broker instance's directory. 2.6. Upgrading a broker instance from 7.6.0 to 7.7.0 The following subsections describe how to upgrade a 7.6.0 broker instance to 7.7.0 for different operating systems. Important Starting with AMQ Broker 7.1.0, you can access AMQ Management Console only from the local host by default. 
To learn about configuring remote access to the console, see Configuring local and remote access to AMQ Management Console . Upgrading from 7.6.0 to 7.7.0 on Linux Upgrading from 7.6.0 to 7.7.0 on Windows 2.6.1. Upgrading from 7.6.0 to 7.7.0 on Linux Note The name of the archive that you download could differ from what is used in the following examples. Procedure Download the desired archive from the Red Hat Customer Portal. Follow the instructions provided in Downloading the AMQ Broker archive . Change the owner of the archive to the same user that owns the AMQ Broker installation to be upgraded. The following example shows a user called amq-broker . Move the archive to the directory created during the original installation of AMQ Broker. The following example uses /opt/redhat . As the directory owner, extract the contents of the compressed archive. In the following example, the user amq-broker extracts the archive using the unzip command. If the broker is running, stop it. Back up the instance directory of the broker by copying it to the home directory of the current user. (Optional) Note the current version of the broker. After the broker stops, you see a line similar to the one below at the end of the <broker_instance_dir> /log/artemis.log file. Edit the <broker_instance_dir> /etc/artemis.profile configuration file. Set the ARTEMIS_HOME property to the new directory created when the archive was extracted. For example: Locate the JAVA_ARGS property. Ensure that the bootstrap class path argument references the required version of a dependent file for the log manager, as shown below. Edit the <broker_instance_dir> /etc/logging.properties configuration file. On the list of additional loggers to be configured, include the org.apache.activemq.audit.resource resource logger that was added in AMQ Broker 7.7.0. loggers=org.eclipse.jetty,org.jboss.logging,org.apache.activemq.artemis.core.server,org.apache.activemq.artemis.utils,org.apache.activemq.artemis.journal,org.apache.activemq.artemis.jms.server,org.apache.activemq.artemis.integration.bootstrap,org.apache.activemq.audit.base,org.apache.activemq.audit.message, org.apache.activemq.audit.resource Before the Console handler configuration section, add a default configuration for the resource logger. .. logger.org.apache.activemq.audit.resource.level=ERROR logger.org.apache.activemq.audit.resource.handlers=AUDIT_FILE logger.org.apache.activemq.audit.resource.useParentHandlers=false # Console handler configuration .. Start the upgraded broker. (Optional) Confirm that the broker is running and that the version has changed. After starting the broker, open the <broker_instance_dir> /log/artemis.log file. Find two lines similar to the ones below. Note the new version number that appears in the log when the broker is live. Additional Resources For more information about creating an instance of the broker, see Creating a broker instance . You can now store a broker instance's configuration files and data in any custom directory, including locations outside of the broker instance's directory. In the <broker_instance_dir> /etc/artemis.profile file, update the ARTEMIS_INSTANCE_ETC_URI property by specifying the location of the custom directory after creating the broker instance. Previously, these configuration files and data could only be stored in the etc/ and data/ directories within the broker instance's directory. 2.6.2. Upgrading from 7.6.0 to 7.7.0 on Windows Procedure Download the desired archive from the Red Hat Customer Portal. 
Follow the instructions provided in Downloading the AMQ Broker archive . Use a file manager to move the archive to the folder you created during the last installation of AMQ Broker. Extract the contents of the archive. Right-click the .zip file and select Extract All . If the broker is running, stop it. Back up the broker using a file manager. Right-click the <broker_instance_dir> folder and select Copy . Right-click in the same window and select Paste . (Optional) Note the current version of the broker. After the broker stops, you see a line similar to the one below at the end of the <broker_instance_dir> \log\artemis.log file. Edit the <broker_instance_dir> \etc\artemis.profile.cmd and <broker_instance_dir> \bin\artemis-service.xml configuration files. Set the ARTEMIS_HOME property to the new directory created when the archive was extracted. Edit the <broker_instance_dir> \etc\artemis.profile.cmd configuration file. Ensure that the JAVA_ARGS environment variable references the correct versions for the log manager and dependent file, as shown below. Edit the <broker_instance_dir> \bin\artemis-service.xml configuration file. Ensure that the bootstrap class path start argument references the correct versions for the log manager and dependent file, as shown below. Edit the <broker_instance_dir> \etc\logging.properties configuration file. On the list of additional loggers to be configured, include the org.apache.activemq.audit.resource resource logger that was added in AMQ Broker 7.7.0. loggers=org.eclipse.jetty,org.jboss.logging,org.apache.activemq.artemis.core.server,org.apache.activemq.artemis.utils,org.apache.activemq.artemis.journal,org.apache.activemq.artemis.jms.server,org.apache.activemq.artemis.integration.bootstrap,org.apache.activemq.audit.base,org.apache.activemq.audit.message, org.apache.activemq.audit.resource Before the Console handler configuration section, add a default configuration for the resource logger. .. logger.org.apache.activemq.audit.resource.level=ERROR logger.org.apache.activemq.audit.resource.handlers=AUDIT_FILE logger.org.apache.activemq.audit.resource.useParentHandlers=false # Console handler configuration .. Start the upgraded broker. (Optional) Confirm that the broker is running and that the version has changed. After starting the broker, open the <broker_instance_dir> \log\artemis.log file. Find two lines similar to the ones below. Note the new version number that appears in the log when the broker is live. Additional Resources For more information about creating an instance of the broker, see Creating a broker instance . You can now store a broker instance's configuration files and data in any custom directory, including locations outside of the broker instance's directory. In the <broker_instance_dir> \etc\artemis.profile file, update the ARTEMIS_INSTANCE_ETC_URI property by specifying the location of the custom directory after creating the broker instance. Previously, these configuration files and data could only be stored in the \etc and \data directories within the broker instance's directory. 2.7. Upgrading a broker instance from 7.7.0 to 7.8.0 The following subsections describe how to upgrade a 7.7.0 broker instance to 7.8.0 for different operating systems. Important Starting with AMQ Broker 7.1.0, you can access AMQ Management Console only from the local host by default. To learn about configuring remote access to the console, see Configuring local and remote access to AMQ Management Console . 
Upgrading from 7.7.0 to 7.8.0 on Linux Upgrading from 7.7.0 to 7.8.0 on Windows 2.7.1. Upgrading from 7.7.0 to 7.8.0 on Linux Note The name of the archive that you download could differ from what is used in the following examples. Procedure Download the desired archive from the Red Hat Customer Portal. Follow the instructions provided in Downloading the AMQ Broker archive . Change the owner of the archive to the same user that owns the AMQ Broker installation to be upgraded. The following example shows a user called amq-broker . Move the archive to the directory created during the original installation of AMQ Broker. The following example uses /opt/redhat . As the directory owner, extract the contents of the compressed archive. In the following example, the user amq-broker extracts the archive using the unzip command. If the broker is running, stop it. Back up the instance directory of the broker by copying it to the home directory of the current user. (Optional) Note the current version of the broker. After the broker stops, you see a line similar to the one below at the end of the <broker_instance_dir> /log/artemis.log file. Edit the <broker_instance_dir> /etc/artemis.profile configuration file. Set the ARTEMIS_HOME property to the new directory created when the archive was extracted. For example: Locate the JAVA_ARGS property. Ensure that the bootstrap class path argument references the required version of a dependent file for the log manager, as shown below. Edit the <broker_instance_dir> /etc/bootstrap.xml configuration file. In the web element, update the name of the .war file required by AMQ Management Console in 7.8. <web bind="http://localhost:8161" path="web"> ... <app url="console" war="hawtio.war"/> ... </web> Start the upgraded broker. (Optional) Confirm that the broker is running and that the version has changed. After starting the broker, open the <broker_instance_dir> /log/artemis.log file. Find two lines similar to the ones below. Note the new version number that appears in the log when the broker is live. Additional Resources For more information about creating an instance of the broker, see Creating a broker instance . You can now store a broker instance's configuration files and data in any custom directory, including locations outside of the broker instance's directory. In the <broker_instance_dir> /etc/artemis.profile file, update the ARTEMIS_INSTANCE_ETC_URI property by specifying the location of the custom directory after creating the broker instance. Previously, these configuration files and data could only be stored in the etc/ and data/ directories within the broker instance's directory. 2.7.2. Upgrading from 7.7.0 to 7.8.0 on Windows Procedure Download the desired archive from the Red Hat Customer Portal. Follow the instructions provided in Downloading the AMQ Broker archive . Use a file manager to move the archive to the folder you created during the last installation of AMQ Broker. Extract the contents of the archive. Right-click the .zip file and select Extract All . If the broker is running, stop it. Back up the broker using a file manager. Right-click the <broker_instance_dir> folder and select Copy . Right-click in the same window and select Paste . (Optional) Note the current version of the broker. After the broker stops, you see a line similar to the one below at the end of the <broker_instance_dir> \log\artemis.log file. Edit the <broker_instance_dir> \etc\artemis.profile.cmd and <broker_instance_dir> \bin\artemis-service.xml configuration files. 
Set the ARTEMIS_HOME property to the new directory created when the archive was extracted. Edit the <broker_instance_dir> \etc\artemis.profile.cmd configuration file. Ensure that the JAVA_ARGS environment variable references the correct versions for the log manager and dependent file, as shown below. Edit the <broker_instance_dir> \bin\artemis-service.xml configuration file. Ensure that the bootstrap class path start argument references the correct versions for the log manager and dependent file, as shown below. Edit the <broker_instance_dir> \etc\bootstrap.xml configuration file. In the web element, update the name of the .war file required by AMQ Management Console in 7.8. <web bind="http://localhost:8161" path="web"> ... <app url="console" war="hawtio.war"/> ... </web> Start the upgraded broker. (Optional) Confirm that the broker is running and that the version has changed. After starting the broker, open the <broker_instance_dir> \log\artemis.log file. Find two lines similar to the ones below. Note the new version number that appears in the log when the broker is live. Additional Resources For more information about creating an instance of the broker, see Creating a broker instance . You can now store a broker instance's configuration files and data in any custom directory, including locations outside of the broker instance's directory. In the <broker_instance_dir> \etc\artemis.profile file, update the ARTEMIS_INSTANCE_ETC_URI property by specifying the location of the custom directory after creating the broker instance. Previously, these configuration files and data could only be stored in the \etc and \data directories within the broker instance's directory. 2.8. Upgrading a broker instance from 7.8.x to 7.9.x The following subsections describe how to upgrade a 7.8.x broker instance to 7.9.x for different operating systems. Important Starting with AMQ Broker 7.1.0, you can access AMQ Management Console only from the local host by default. To learn about configuring remote access to the console, see Configuring local and remote access to AMQ Management Console . Upgrading from 7.8.x to 7.9.x on Linux Upgrading from 7.8.x to 7.9.x on Windows 2.8.1. Upgrading from 7.8.x to 7.9.x on Linux Note The name of the archive that you download could differ from what is used in the following examples. Procedure Download the desired archive from the Red Hat Customer Portal. Follow the instructions provided in Downloading the AMQ Broker archive . Change the owner of the archive to the same user that owns the AMQ Broker installation to be upgraded. The following example shows a user called amq-broker . Move the archive to the directory created during the original installation of AMQ Broker. The following example uses /opt/redhat . As the directory owner, extract the contents of the compressed archive. In the following example, the user amq-broker extracts the archive using the unzip command. If the broker is running, stop it. Back up the instance directory of the broker by copying it to the home directory of the current user. (Optional) Note the current version of the broker. After the broker stops, you see a line similar to the one below at the end of the <broker_instance_dir> /log/artemis.log file. Edit the <broker_instance_dir> /etc/artemis.profile configuration file. Set the ARTEMIS_HOME property to the new directory created when the archive was extracted. For example: Locate the JAVA_ARGS property. 
Ensure that the bootstrap class path argument references the required version of a dependent file for the log manager, as shown below. Edit the <broker_instance_dir> /etc/bootstrap.xml configuration file. In the web element, update the name of the .war file required by AMQ Management Console in 7.9. <web bind="http://localhost:8161" path="web"> ... <app url="console" war="hawtio.war"/> ... </web> Start the upgraded broker. (Optional) Confirm that the broker is running and that the version has changed. After starting the broker, open the <broker_instance_dir> /log/artemis.log file. Find two lines similar to the ones below. Note the new version number that appears in the log when the broker is live. Additional Resources For more information about creating an instance of the broker, see Creating a broker instance . You can now store a broker instance's configuration files and data in any custom directory, including locations outside of the broker instance's directory. In the <broker_instance_dir> /etc/artemis.profile file, update the ARTEMIS_INSTANCE_ETC_URI property by specifying the location of the custom directory after creating the broker instance. Previously, these configuration files and data could only be stored in the etc/ and data/ directories within the broker instance's directory. 2.8.2. Upgrading from 7.8.x to 7.9.x on Windows Procedure Download the desired archive from the Red Hat Customer Portal. Follow the instructions provided in Downloading the AMQ Broker archive . Use a file manager to move the archive to the folder you created during the last installation of AMQ Broker. Extract the contents of the archive. Right-click the .zip file and select Extract All . If the broker is running, stop it. Back up the broker using a file manager. Right-click the <broker_instance_dir> folder and select Copy . Right-click in the same window and select Paste . (Optional) Note the current version of the broker. After the broker stops, you see a line similar to the one below at the end of the <broker_instance_dir> \log\artemis.log file. Edit the <broker_instance_dir> \etc\artemis.profile.cmd and <broker_instance_dir> \bin\artemis-service.xml configuration files. Set the ARTEMIS_HOME property to the new directory created when the archive was extracted. Edit the <broker_instance_dir> \etc\artemis.profile.cmd configuration file. Ensure that the JAVA_ARGS environment variable references the correct versions for the log manager and dependent file, as shown below. Edit the <broker_instance_dir> \bin\artemis-service.xml configuration file. Ensure that the bootstrap class path start argument references the correct versions for the log manager and dependent file, as shown below. Edit the <broker_instance_dir> \etc\bootstrap.xml configuration file. In the web element, update the name of the .war file required by AMQ Management Console in 7.9. <web bind="http://localhost:8161" path="web"> ... <app url="console" war="hawtio.war"/> ... </web> Start the upgraded broker. (Optional) Confirm that the broker is running and that the version has changed. After starting the broker, open the <broker_instance_dir> \log\artemis.log file. Find two lines similar to the ones below. Note the new version number that appears in the log when the broker is live. Additional Resources For more information about creating an instance of the broker, see Creating a broker instance .
You can now store a broker instance's configuration files and data in any custom directory, including locations outside of the broker instance's directory. In the <broker_instance_dir> \etc\artemis.profile file, update the ARTEMIS_INSTANCE_ETC_URI property by specifying the location of the custom directory after creating the broker instance. Previously, these configuration files and data could only be stored in the \etc and \data directories within the broker instance's directory. 2.9. Upgrading a broker instance from 7.9.x to 7.10.x The following subsections describe how to upgrade a 7.9.x broker instance to 7.10.x for different operating systems. Important Starting with AMQ Broker 7.1.0, you can access AMQ Management Console only from the local host by default. To learn about configuring remote access to the console, see Configuring local and remote access to AMQ Management Console . Upgrading from 7.9.x to 7.10.x on Linux Upgrading from 7.9.x to 7.10.x on Windows 2.9.1. Upgrading from 7.9.x to 7.10.x on Linux Note The name of the archive that you download could differ from what is used in the following examples. Prerequisites At a minimum, AMQ Broker 7.11 requires Java version 11 to run. Ensure that each AMQ Broker host is running Java version 11 or higher. For more information on supported configurations, see Red Hat AMQ Broker 7 Supported Configurations . If AMQ Broker 7.9 is configured to persist message data in a database, the data type of the HOLDER_EXPIRATION_TIME column is timestamp in the node manager database table. In AMQ Broker 7.11, the data type of the column changed to number . Before you upgrade to AMQ Broker 7.11, you must drop the node manager table, that is, remove it from the database. After you drop the table, it is recreated with the new schema when you restart the upgraded broker. In a shared store high availability (HA) configuration, the node manager table is shared between brokers. Therefore, you must ensure that all brokers that share the table are stopped before you drop the table. The following example drops a node manager table called NODE_MANAGER_TABLE : Procedure Download the desired archive from the Red Hat Customer Portal. Follow the instructions provided in Downloading the AMQ Broker archive . Change the owner of the archive to the same user that owns the AMQ Broker installation to be upgraded. The following example shows a user called amq-broker . Move the archive to the directory created during the original installation of AMQ Broker. The following example uses /opt/redhat . As the directory owner, extract the contents of the compressed archive. In the following example, the user amq-broker extracts the archive using the unzip command. If the broker is running, stop it. Back up the instance directory of the broker by copying it to the home directory of the current user. (Optional) Note the current version of the broker. After the broker stops, you see a line similar to the one below at the end of the <broker_instance_dir> /log/artemis.log file. Edit the <broker_instance_dir> /etc/artemis.profile configuration file. Set the ARTEMIS_HOME property to the new directory created when the archive was extracted. For example: Edit the <broker_instance_dir> /etc/bootstrap.xml configuration file. In the web element, update the name of the .war file required by AMQ Management Console in 7.10. <web path="web"> <binding uri="https://localhost:8161" ... <app url="console" war="hawtio.war"/> ... 
</web> + In the broker xmlns element, change the schema value from "http://activemq.org/schema" to "http://activemq.apache.org/schema" . + <broker xmlns="http://activemq.apache.org/schema"> Edit the <broker_instance_dir> /etc/management.xml file. In the management-context xmlns element, change the schema value from "http://activemq.org/schema" to "http://activemq.apache.org/schema" . <management-context xmlns="http://activemq.apache.org/schema"> Start the upgraded broker. (Optional) Confirm that the broker is running and that the version has changed. After starting the broker, open the <broker_instance_dir> /log/artemis.log file. Find two lines similar to the ones below. Note the new version number that appears in the log when the broker is live. Additional Resources For more information about creating an instance of the broker, see Creating a broker instance . You can now store a broker instance's configuration files and data in any custom directory, including locations outside of the broker instance's directory. In the <broker_instance_dir> /etc/artemis.profile file, update the ARTEMIS_INSTANCE_ETC_URI property by specifying the location of the custom directory after creating the broker instance. Previously, these configuration files and data could only be stored in the etc/ and data/ directories within the broker instance's directory. 2.9.2. Upgrading from 7.9.x to 7.10.x on Windows Prerequisites At a minimum, AMQ Broker 7.11 requires Java version 11 to run. Ensure that each AMQ Broker host is running Java version 11 or higher. For more information on supported configurations, see Red Hat AMQ Broker 7 Supported Configurations . If AMQ Broker 7.9 is configured to persist message data in a database, the data type of the HOLDER_EXPIRATION_TIME column is timestamp in the node manager database table. In AMQ Broker 7.11, the data type of the column changed to number . Before you upgrade to AMQ Broker 7.11, you must drop the node manager table, that is, remove it from the database. After you drop the table, it is recreated with the new schema when you restart the upgraded broker. In a shared store high availability (HA) configuration, the node manager table is shared between brokers. Therefore, you must ensure that all brokers that share the table are stopped before you drop the table. The following example drops a node manager table called NODE_MANAGER_TABLE : Procedure Download the desired archive from the Red Hat Customer Portal. Follow the instructions provided in Downloading the AMQ Broker archive . Use a file manager to move the archive to the folder you created during the last installation of AMQ Broker. Extract the contents of the archive. Right-click the .zip file and select Extract All . If the broker is running, stop it. Back up the broker using a file manager. Right-click the <broker_instance_dir> folder and select Copy . Right-click in the same window and select Paste . (Optional) Note the current version of the broker. After the broker stops, you see a line similar to the one below at the end of the <broker_instance_dir> \log\artemis.log file. Edit the <broker_instance_dir> \etc\artemis.profile.cmd and <broker_instance_dir> \bin\artemis-service.xml configuration files. Set the ARTEMIS_HOME property to the new directory created when the archive was extracted. Edit the <broker_instance_dir> \etc\artemis.profile.cmd configuration file. Ensure that the JAVA_ARGS environment variable references the correct versions for the log manager and dependent file, as shown below. 
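The exact entry depends on the jar files shipped in your 7.10 installation; as a sketch only, with placeholder version numbers that you must confirm against the lib directory of the new installation, the JAVA_ARGS reference in artemis.profile.cmd looks something like this:

set JAVA_ARGS=... -Xbootclasspath/a:%ARTEMIS_HOME%\lib\jboss-logmanager-<version>.jar;%ARTEMIS_HOME%\lib\wildfly-common-<version>.jar -Djava.util.logging.manager=org.jboss.logmanager.LogManager ...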
Edit the <broker_instance_dir> \bin\artemis-service.xml configuration file. Ensure that the bootstrap class path start argument references the correct versions for the log manager and dependent file, as shown below. Edit the <broker_instance_dir> \etc\bootstrap.xml configuration file. In the web element, update the name of the .war file required by AMQ Management Console in 7.10. <web path="web"> <binding uri="https://localhost:8161" ... <app url="console" war="hawtio.war"/> ... </web> In the broker xmlns element, change the schema value from "http://activemq.org/schema" to "http://activemq.apache.org/schema" . <broker xmlns="http://activemq.apache.org/schema"> Edit the <broker_instance_dir> /etc/management.xml file. In the management-context xmlns element, change the schema value from "http://activemq.org/schema" to "http://activemq.apache.org/schema" . <management-context xmlns="http://activemq.apache.org/schema"> Start the upgraded broker. (Optional) Confirm that the broker is running and that the version has changed. After starting the broker, open the <broker_instance_dir> \log\artemis.log file. Find two lines similar to the ones below. Note the new version number that appears in the log when the broker is live. Additional Resources For more information about creating an instance of the broker, see Creating a broker instance . You can now store a broker instance's configuration files and data in any custom directory, including locations outside of the broker instance's directory. In the <broker_instance_dir> \etc\artemis.profile file, update the ARTEMIS_INSTANCE_ETC_URI property by specifying the location of the custom directory after creating the broker instance. Previously, these configuration files and data could only be stored in the \etc and \data directories within the broker instance's directory. 2.10. Upgrading a broker instance from 7.10.x to 7.11.x The following subsections describe how to upgrade a 7.10.x broker instance to 7.11.x for different operating systems. Important Starting with AMQ Broker 7.1.0, you can access AMQ Management Console only from the local host by default. To learn about configuring remote access to the console, see Configuring local and remote access to AMQ Management Console . Upgrading from 7.10.x to 7.11.x on Linux Upgrading from 7.10.x to 7.11.x on Windows 2.10.1. Upgrading from 7.10.x to 7.11.x on Linux Note The name of the archive that you download could differ from what is used in the following examples. Prerequisites At a minimum, AMQ Broker 7.11 requires Java version 11 to run. Ensure that each AMQ Broker host is running Java version 11 or higher. For more information on supported configurations, see Red Hat AMQ Broker 7 Supported Configurations . Procedure Download the desired archive from the Red Hat Customer Portal. Follow the instructions provided in Downloading the AMQ Broker archive . Change the owner of the archive that you downloaded to the same user that owns the AMQ Broker installation to be upgraded. The following example shows a user called amq-broker . Move the archive to the directory created during the original installation of AMQ Broker. The following example uses /opt/redhat . As the directory owner, extract the contents of the compressed archive. In the following example, the user amq-broker extracts the archive using the unzip command. Note The contents of the archive are extracted to a directory called apache-artemis-2.28.0.redhat-00019 in your current directory. If the broker is running, stop it. 
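For example, using the standard stop command shown earlier in this chapter:

<broker_instance_dir>/bin/artemis stop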
(Optional) Note the current version of the broker. After the broker stops, you see a line similar to the one below at the end of the <broker_instance_dir> /log/artemis.log file. Back up the instance directory of the broker by copying it to the home directory of the current user. Change to the directory to which you extracted the contents of the compressed archive. Run the artemis upgrade command to upgrade your existing broker. The following example upgrades a broker instance in the /var/opt/amq-broker/mybroker directory. The artemis upgrade command completes the following steps to upgrade the broker. Makes a backup of each file it modifies in an old-config-bkp.< n > subdirectory of the broker instance directory for the broker that you are upgrading. Sets the ARTEMIS_HOME property in the <broker_instance_dir> /etc/artemis.profile file to the new directory created when you extracted the archive. Updates the <broker_instance_dir> /bin/artemis script to use the Apache Log4j 2 logging utility, which is bundled with AMQ Broker 7.11, instead of the JBoss Logging framework used in previous versions. Deletes the existing <broker_instance_dir> /etc/logging.properties file used by JBoss and creates a new <broker_instance_dir> /etc/log4j2.properties file for the Apache Log4j 2 logging utility. If the Prometheus metrics plugin included with AMQ Broker is enabled in 7.10.x, change the class name of the plugin from org.apache.activemq.artemis.core.server.metrics.plugins.ArtemisPrometheusMetricsPlugin to com.redhat.amq.broker.core.server.metrics.plugins.ArtemisPrometheusMetricsPlugin , which is the new class name of the plugin in AMQ Broker 7.11. Open the <broker_instance_dir> /etc/broker.xml configuration file. In the <plugin> sub-element of the <metrics> element, update the plugin class name to com.redhat.amq.broker.core.server.metrics.plugins.ArtemisPrometheusMetricsPlugin . <metrics> <plugin class-name="com.redhat.amq.broker.core.server.metrics.plugins.ArtemisPrometheusMetricsPlugin"/> </metrics> Save the broker.xml configuration file. Start the upgraded broker. (Optional) Confirm that the broker is running and that the version has changed. After starting the broker, open the <broker_instance_dir> /log/artemis.log file. Find lines similar to the ones below. Note the new version number that appears in the log after the broker starts. Additional Resources For more information about creating an instance of the broker, see Creating a broker instance . You can now store a broker instance's configuration files and data in any custom directory, including locations outside of the broker instance's directory. In the <broker_instance_dir> /etc/artemis.profile file, update the ARTEMIS_INSTANCE_ETC_URI property by specifying the location of the custom directory after creating the broker instance. Previously, these configuration files and data could only be stored in the etc/ and data/ directories within the broker instance's directory. 2.10.2. Upgrading from 7.10.x to 7.11.x on Windows Prerequisites At a minimum, AMQ Broker 7.11 requires Java version 11 to run. Ensure that each AMQ Broker host is running Java version 11 or higher. For more information on supported configurations, see Red Hat AMQ Broker 7 Supported Configurations . Procedure Follow the instructions provided in Downloading the AMQ Broker archive to download the AMQ Broker archive. Use a file manager to move the archive to the folder you created during the last installation of AMQ Broker. Extract the contents of the archive.
Right-click the .zip file and select Extract All . Note The contents of the archive are extracted to a folder called apache-artemis-2.28.0.redhat-00019 in the current folder. If the broker is running, stop it. (Optional) Note the current version of the broker. After the broker stops, you see a line similar to the one below at the end of the <broker_instance_dir> \log\artemis.log file. Back up the broker using a file manager. Right-click the <broker_instance_dir> folder and select Copy . Right-click in the same window and select Paste . Change to the directory to which you extracted the contents of the compressed archive. For example: Run the artemis upgrade command to upgrade your existing broker. The following example upgrades the broker instance in the C:\redhat\amq-broker\mybroker directory. The artemis upgrade command completes the following steps to upgrade the broker. Makes a backup of each file it modifies in an old-config-bkp.< n > subdirectory of the broker instance directory for the broker that you are upgrading. Sets the ARTEMIS_HOME property in the <broker_instance_dir> \etc\artemis.cmd.profile file to the new directory created when you extracted the archive. Updates the <broker_instance_dir> \bin\artemis.cmd script to use the Apache Log4j 2 logging utility, which is bundled with AMQ Broker 7.11, instead of the JBoss Logging framework used in previous versions. Deletes the existing <broker_instance_dir> \etc\logging.properties file used by JBoss and creates a new <broker_instance_dir> \etc\log4j2.properties file for the Apache Log4j 2 logging utility. If the Prometheus metrics plugin included with AMQ Broker was enabled in 7.10.x, change the class name of the plugin from org.apache.activemq.artemis.core.server.metrics.plugins.ArtemisPrometheusMetricsPlugin to com.redhat.amq.broker.core.server.metrics.plugins.ArtemisPrometheusMetricsPlugin , which is the new class name of the plugin in 7.11. Open the <broker_instance_dir> \etc\broker.xml configuration file. In the <plugin> sub-element of the <metrics> element, update the plugin class name to com.redhat.amq.broker.core.server.metrics.plugins.ArtemisPrometheusMetricsPlugin . <metrics> <plugin class-name="com.redhat.amq.broker.core.server.metrics.plugins.ArtemisPrometheusMetricsPlugin"/> </metrics> Save the broker.xml configuration file. Start the upgraded broker. (Optional) Confirm that the broker is running and that the version has changed. After starting the broker, open the <broker_instance_dir> \log\artemis.log file. Find two lines similar to the ones below. Note the new version number that appears in the log when the broker is live. Additional Resources For more information about creating an instance of the broker, see Creating a broker instance . You can now store a broker instance's configuration files and data in any custom directory, including locations outside of the broker instance's directory. In the <broker_instance_dir> \etc\artemis.profile file, update the ARTEMIS_INSTANCE_ETC_URI property by specifying the location of the custom directory after creating the broker instance. Previously, these configuration files and data could only be stored in the \etc and \data directories within the broker instance's directory.
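As an illustration of the custom configuration directory noted above, on Linux the corresponding entry in <broker_instance_dir>/etc/artemis.profile takes a file URI; the custom directory shown here is a hypothetical example, not a value from this procedure:

ARTEMIS_INSTANCE_ETC_URI='file:/opt/custom-broker-config/etc/'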
[ "sudo chown amq-broker:amq-broker jboss-amq-7.x.x.redhat-1.zip", "sudo mv jboss-amq-7.x.x.redhat-1.zip /opt/redhat", "su - amq-broker cd /opt/redhat unzip jboss-amq-7.x.x.redhat-1.zip", "<broker_instance_dir> /bin/artemis stop", "cp -r <broker_instance_dir> ~/", "INFO [org.apache.activemq.artemis.core.server] AMQ221002: Apache ActiveMQ Artemis Message Broker version 2.0.0.amq-700005-redhat-1 [4782d50d-47a2-11e7-a160-9801a793ea45] stopped, uptime 28 minutes", "ARTEMIS_HOME='/opt/redhat/jboss-amq-7.x.x-redhat-1'", "<broker_instance_dir> /bin/artemis run", "INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.1.0.amq-700005-redhat-1 [0.0.0.0, nodeID=4782d50d-47a2-11e7-a160-9801a793ea45]", "<broker_instance_dir> \\bin\\artemis-service.exe stop", "INFO [org.apache.activemq.artemis.core.server] AMQ221002: Apache ActiveMQ Artemis Message Broker version 2.0.0.amq-700005-redhat-1 [4782d50d-47a2-11e7-a160-9801a793ea45] stopped, uptime 28 minutes", "ARTEMIS_HOME= <install_dir>", "<broker_instance_dir> \\bin\\artemis-service.exe start", "INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.1.0.amq-700005-redhat-1 [0.0.0.0, nodeID=4782d50d-47a2-11e7-a160-9801a793ea45]", "<broker_instance_dir> /bin/artemis stop", "cp -r <broker_instance_dir> ~/", "ARTEMIS_HOME=\" <7.1.0_install_dir> \"", "ARTEMIS_INSTANCE_URI=\"file:// <7.0.x_broker_instance_dir> \"", "-Djolokia.policyLocation=USD{ARTEMIS_INSTANCE_URI}/etc/jolokia-access.xml", "<7.1.0_install_dir> /bin/artemis create --allow-anonymous --user admin --password admin upgrade_tmp", "cp <temporary_7.1.0_broker_instance_dir> /etc/management.xml <7.0_broker_instance_dir> /etc/", "cp <temporary_7.1.0_broker_instance_dir> /etc/jolokia-access.xml <7.0_broker_instance_dir> /etc/", "<app url=\"jolokia\" war=\"jolokia.war\"/> <app url=\"hawtio\" war=\"hawtio-no-slf4j.war\"/>", "<app url=\"console\" war=\"console.war\"/>", "<broker_instance_dir> /bin/artemis run", "> <broker_instance_dir> \\bin\\artemis-service.exe stop", "ARTEMIS_HOME=\" <7.1.0_install_dir> \"", "ARTEMIS_INSTANCE_URI=\"file:// <7.0.x_broker_instance_dir> \"", "-Djolokia.policyLocation=USD{ARTEMIS_INSTANCE_URI}/etc/jolokia-access.xml", "> <7.1.0_install_dir> /bin/artemis create --allow-anonymous --user admin --password admin upgrade_tmp", "> cp <temporary_7.1.0_broker_instance_dir> /etc/management.xml <7.0_broker_instance_dir> /etc/", "> cp <temporary_7.1.0_broker_instance_dir> /etc/jolokia-access.xml <7.0_broker_instance_dir> /etc/", "<app url=\"jolokia\" war=\"jolokia.war\"/> <app url=\"hawtio\" war=\"hawtio-no-slf4j.war\"/>", "<app url=\"console\" war=\"console.war\"/>", "> <broker_instance_dir> \\bin\\artemis-service.exe start", "sudo chown amq-broker:amq-broker amq-7.x.x.redhat-1.zip", "sudo mv amq-7.x.x.redhat-1.zip /opt/redhat", "su - amq-broker cd /opt/redhat unzip jboss-amq-7.x.x.redhat-1.zip", "<broker_instance_dir> /bin/artemis stop", "cp -r <broker_instance_dir> ~/", "INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.5.0.amq-720001-redhat-1 [0.0.0.0, nodeID=554cce00-63d9-11e8-9808-54ee759954c4]", "ARTEMIS_HOME='/opt/redhat/amq-7.x.x-redhat-1'", "<broker_instance_dir> /bin/artemis run", "INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live INFO 
[org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.5.0.amq-720001-redhat-1 [0.0.0.0, nodeID=554cce00-63d9-11e8-9808-54ee759954c4]", "<broker_instance_dir> \\bin\\artemis-service.exe stop", "INFO [org.apache.activemq.artemis.core.server] AMQ221002: Apache ActiveMQ Artemis Message Broker version 2.0.0.amq-700005-redhat-1 [4782d50d-47a2-11e7-a160-9801a793ea45] stopped, uptime 28 minutes", "ARTEMIS_HOME= <install_dir>", "<broker_instance_dir> \\bin\\artemis-service.exe start", "INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.5.0.amq-720001-redhat-1 [0.0.0.0, nodeID=554cce00-63d9-11e8-9808-54ee759954c4]", "2019-04-11 18:00:41,334 WARN [org.eclipse.jetty.webapp.WebAppContext] Failed startup of context o.e.j.w.WebAppContext@1ef3efa8{/dispatch-hawtio-console,null,null}{/opt/amqbroker/amq-broker-7.3.0/web/dispatch-hawtio-console.war}: java.io.FileNotFoundException: /opt/amqbroker/amq-broker-7.3.0/web/dispatch-hawtio-console.war.", "<broker xmlns=\"http://activemq.org/schema\"> . <!-- The web server is only bound to localhost by default --> <web bind=\"http://localhost:8161\" path=\"web\"> <app url=\"redhat-branding\" war=\"redhat-branding.war\"/> <app url=\"artemis-plugin\" war=\"artemis-plugin.war\"/> <app url=\"dispatch-hawtio-console\" war=\"dispatch-hawtio-console.war\"/> <app url=\"console\" war=\"console.war\"/> </web> </broker>", "sudo chown amq-broker:amq-broker amq-7.x.x.redhat-1.zip", "sudo mv amq-7.x.x.redhat-1.zip /opt/redhat", "su - amq-broker cd /opt/redhat unzip jboss-amq-7.x.x.redhat-1.zip", "<broker_instance_dir> /bin/artemis stop", "cp -r <broker_instance_dir> ~/", "INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.6.3.amq-720001-redhat-1 [0.0.0.0, nodeID=554cce00-63d9-11e8-9808-54ee759954c4]", "ARTEMIS_HOME='/opt/redhat/amq-7.x.x-redhat-1'", "<broker_instance_dir> /bin/artemis run", "INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.7.0.redhat-00054 [0.0.0.0, nodeID=554cce00-63d9-11e8-9808-54ee759954c4]", "<broker_instance_dir> \\bin\\artemis-service.exe stop", "INFO [org.apache.activemq.artemis.core.server] AMQ221002: Apache ActiveMQ Artemis Message Broker version 2.6.3.amq-720001-redhat-1 [4782d50d-47a2-11e7-a160-9801a793ea45] stopped, uptime 28 minutes", "ARTEMIS_HOME= <install_dir>", "JAVA_ARGS= <install_dir> \\lib\\jboss-logmanager-2.0.3.Final-redhat-1.jar", "<startargument>Xbootclasspath/a:%ARTEMIS_HOME%\\lib\\jboss-logmanager-2.0.3.Final-redhat-1.jar</startargument>", "<broker_instance_dir> \\bin\\artemis-service.exe start", "INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.7.0.redhat-00054 [0.0.0.0, nodeID=554cce00-63d9-11e8-9808-54ee759954c4]", "sudo chown amq-broker:amq-broker amq-broker-7.x.x.redhat-1.zip", "sudo mv amq-broker-7.x.x.redhat-1.zip /opt/redhat", "su - amq-broker cd /opt/redhat unzip amq-broker-7.x.x.redhat-1.zip", "<broker_instance_dir> /bin/artemis stop", "cp -r <broker_instance_dir> ~/", "INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.7.0.redhat-00054 [0.0.0.0, 
nodeID=554cce00-63d9-11e8-9808-54ee759954c4]", "ARTEMIS_HOME='/opt/redhat/amq-broker-7.x.x-redhat-1'", "-Xbootclasspath/a:USDARTEMIS_HOME/lib/wildfly-common-1.5.1.Final-redhat-00001.jar", "<app url=\"metrics\" war=\"metrics.war\"/>", "<broker_instance_dir> /bin/artemis run", "INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.9.0.redhat-00001 [0.0.0.0, nodeID=554cce00-63d9-11e8-9808-54ee759954c4]", "<broker_instance_dir> \\bin\\artemis-service.exe stop", "INFO [org.apache.activemq.artemis.core.server] AMQ221002: Apache ActiveMQ Artemis Message Broker version 2.7.0.redhat-00054 [4782d50d-47a2-11e7-a160-9801a793ea45] stopped, uptime 28 minutes", "ARTEMIS_HOME= <install_dir>", "JAVA_ARGS= -Xbootclasspath/%ARTEMIS_HOME%\\lib\\jboss-logmanager-2.1.10.Final-redhat-00001.jar;%ARTEMIS_HOME%\\lib\\wildfly-common-1.5.1.Final-redhat-00001.jar", "<startargument>-Xbootclasspath/a:%ARTEMIS_HOME%\\lib\\jboss-logmanager-2.1.10.Final-redhat-00001.jar;%ARTEMIS_HOME%\\lib\\wildfly-common-1.5.1.Final-redhat-00001.jar</startargument>", "<app url=\"metrics\" war=\"metrics.war\"/>", "<broker_instance_dir> \\bin\\artemis-service.exe start", "INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.9.0.redhat-00001 [0.0.0.0, nodeID=554cce00-63d9-11e8-9808-54ee759954c4]", "sudo chown amq-broker:amq-broker amq-broker-7.4.x.redhat-1.zip", "sudo mv amq-broker-7.4.x.redhat-1.zip /opt/redhat", "su - amq-broker cd /opt/redhat unzip amq-broker-7.4.x.redhat-1.zip", "<broker_instance_dir> /bin/artemis stop", "cp -r <broker_instance_dir> ~/", "INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.7.0.redhat-00054 [0.0.0.0, nodeID=554cce00-63d9-11e8-9808-54ee759954c4]", "ARTEMIS_HOME='/opt/redhat/amq-broker-7.4.x-redhat-1'", "<broker_instance_dir> /bin/artemis run", "INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.9.0.redhat-00001 [0.0.0.0, nodeID=554cce00-63d9-11e8-9808-54ee759954c4]", "<broker_instance_dir> \\bin\\artemis-service.exe stop", "INFO [org.apache.activemq.artemis.core.server] AMQ221002: Apache ActiveMQ Artemis Message Broker version 2.7.0.redhat-00054 [4782d50d-47a2-11e7-a160-9801a793ea45] stopped, uptime 28 minutes", "ARTEMIS_HOME= <install_dir>", "<broker_instance_dir> \\bin\\artemis-service.exe start", "INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.9.0.redhat-00001 [0.0.0.0, nodeID=554cce00-63d9-11e8-9808-54ee759954c4]", "sudo chown amq-broker:amq-broker amq-broker-7.5.0.redhat-1.zip", "sudo mv amq-broker-7.5.0.redhat-1.zip /opt/redhat", "su - amq-broker cd /opt/redhat unzip amq-broker-7.5.0.redhat-1.zip", "<broker_instance_dir> /bin/artemis stop", "cp -r <broker_instance_dir> ~/", "INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.7.0.redhat-00054 [0.0.0.0, nodeID=554cce00-63d9-11e8-9808-54ee759954c4]", "ARTEMIS_HOME='/opt/redhat/amq-broker-7.5.0-redhat-1'", "-Xbootclasspath/a:USDARTEMIS_HOME/lib/wildfly-common-1.5.2.Final-redhat-00001.jar", "<broker_instance_dir> /bin/artemis 
run", "INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.9.0.redhat-00001 [0.0.0.0, nodeID=554cce00-63d9-11e8-9808-54ee759954c4]", "<broker_instance_dir> \\bin\\artemis-service.exe stop", "INFO [org.apache.activemq.artemis.core.server] AMQ221002: Apache ActiveMQ Artemis Message Broker version 2.7.0.redhat-00054 [4782d50d-47a2-11e7-a160-9801a793ea45] stopped, uptime 28 minutes", "ARTEMIS_HOME= <install_dir>", "JAVA_ARGS=-Xbootclasspath/%ARTEMIS_HOME%\\lib\\jboss-logmanager-2.1.10.Final-redhat-00001.jar;%ARTEMIS_HOME%\\lib\\wildfly-common-1.5.2.Final-redhat-00001.jar", "<startargument>-Xbootclasspath/a:%ARTEMIS_HOME%\\lib\\jboss-logmanager-2.1.10.Final-redhat-00001.jar;%ARTEMIS_HOME%\\lib\\wildfly-common-1.5.2.Final-redhat-00001.jar</startargument>", "<broker_instance_dir> \\bin\\artemis-service.exe start", "INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.9.0.redhat-00001 [0.0.0.0, nodeID=554cce00-63d9-11e8-9808-54ee759954c4]", "sudo chown amq-broker:amq-broker amq-broker-7.6.0.redhat-1.zip", "sudo mv amq-broker-7.6.0.redhat-1.zip /opt/redhat", "su - amq-broker cd /opt/redhat unzip amq-broker-7.6.0.redhat-1.zip", "<broker_instance_dir> /bin/artemis stop", "cp -r <broker_instance_dir> ~/", "INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.9.0.redhat-00054 [0.0.0.0, nodeID=554cce00-63d9-11e8-9808-54ee759954c4]", "ARTEMIS_HOME='/opt/redhat/amq-broker-7.6.0-redhat-1'", "-Xbootclasspath/a:USDARTEMIS_HOME/lib/wildfly-common-1.5.2.Final-redhat-00002.jar", "<broker_instance_dir> /bin/artemis run", "INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.11.0.redhat-00001 [0.0.0.0, nodeID=554cce00-63d9-11e8-9808-54ee759954c4]", "<broker_instance_dir> \\bin\\artemis-service.exe stop", "INFO [org.apache.activemq.artemis.core.server] AMQ221002: Apache ActiveMQ Artemis Message Broker version 2.9.0.redhat-00054 [4782d50d-47a2-11e7-a160-9801a793ea45] stopped, uptime 28 minutes", "ARTEMIS_HOME= <install_dir>", "JAVA_ARGS=-Xbootclasspath/%ARTEMIS_HOME%\\lib\\jboss-logmanager-2.1.10.Final-redhat-00001.jar;%ARTEMIS_HOME%\\lib\\wildfly-common-1.5.2.Final-redhat-00002.jar", "<startargument>-Xbootclasspath/a:%ARTEMIS_HOME%\\lib\\jboss-logmanager-2.1.10.Final-redhat-00001.jar;%ARTEMIS_HOME%\\lib\\wildfly-common-1.5.2.Final-redhat-00002.jar</startargument>", "<broker_instance_dir> \\bin\\artemis-service.exe start", "INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.11.0.redhat-00001 [0.0.0.0, nodeID=554cce00-63d9-11e8-9808-54ee759954c4]", "sudo chown amq-broker:amq-broker amq-broker-7.7.0.redhat-1.zip", "sudo mv amq-broker-7.7.0.redhat-1.zip /opt/redhat", "su - amq-broker cd /opt/redhat unzip amq-broker-7.7.0.redhat-1.zip", "<broker_instance_dir> /bin/artemis stop", "cp -r <broker_instance_dir> ~/", "INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.11.0.redhat-00001 [0.0.0.0, nodeID=554cce00-63d9-11e8-9808-54ee759954c4]", "ARTEMIS_HOME='/opt/redhat/amq-broker-7.7.0-redhat-1'", 
"-Xbootclasspath/a:USDARTEMIS_HOME/lib/wildfly-common-1.5.2.Final-redhat-00002.jar", "loggers=org.eclipse.jetty,org.jboss.logging,org.apache.activemq.artemis.core.server,org.apache.activemq.artemis.utils,org.apache.activemq.artemis.journal,org.apache.activemq.artemis.jms.server,org.apache.activemq.artemis.integration.bootstrap,org.apache.activemq.audit.base,org.apache.activemq.audit.message, org.apache.activemq.audit.resource", ".. logger.org.apache.activemq.audit.resource.level=ERROR logger.org.apache.activemq.audit.resource.handlers=AUDIT_FILE logger.org.apache.activemq.audit.resource.useParentHandlers=false Console handler configuration ..", "<broker_instance_dir> /bin/artemis run", "INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Mesq.audit.resource.handlers=AUDIT_FILE logger.org.apache.activemq.audit.resource.useParentHandlers=false sage Broker version 2.13.0.redhat-00003 [0.0.0.0, nodeID=554cce00-63d9-11e8-9808-54ee759954c4]", "<broker_instance_dir> \\bin\\artemis-service.exe stop", "INFO [org.apache.activemq.artemis.core.server] AMQ221002: Apache ActiveMQ Artemis Message Broker version 2.11.0.redhat-00001 [4782d50d-47a2-11e7-a160-9801a793ea45] stopped, uptime 28 minutes", "ARTEMIS_HOME= <install_dir>", "JAVA_ARGS=-Xbootclasspath/%ARTEMIS_HOME%\\lib\\jboss-logmanager-2.1.10.Final-redhat-00001.jar;%ARTEMIS_HOME%\\lib\\wildfly-common-1.5.2.Final-redhat-00002.jar", "<startargument>-Xbootclasspath/a:%ARTEMIS_HOME%\\lib\\jboss-logmanager-2.1.10.Final-redhat-00001.jar;%ARTEMIS_HOME%\\lib\\wildfly-common-1.5.2.Final-redhat-00002.jar</startargument>", "loggers=org.eclipse.jetty,org.jboss.logging,org.apache.activemq.artemis.core.server,org.apache.activemq.artemis.utils,org.apache.activemq.artemis.journal,org.apache.activemq.artemis.jms.server,org.apache.activemq.artemis.integration.bootstrap,org.apache.activemq.audit.base,org.apache.activemq.audit.message, org.apache.activemq.audit.resource", ".. 
logger.org.apache.activemq.audit.resource.level=ERROR logger.org.apache.activemq.audit.resource.handlers=AUDIT_FILE logger.org.apache.activemq.audit.resource.useParentHandlers=false Console handler configuration ..", "<broker_instance_dir> \\bin\\artemis-service.exe start", "INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.13.0.redhat-00003 [0.0.0.0, nodeID=554cce00-63d9-11e8-9808-54ee759954c4]", "sudo chown amq-broker:amq-broker amq-broker-7.8.0.redhat-1.zip", "sudo mv amq-broker-7.8.0.redhat-1.zip /opt/redhat", "su - amq-broker cd /opt/redhat unzip amq-broker-7.8.0.redhat-1.zip", "<broker_instance_dir> /bin/artemis stop", "cp -r <broker_instance_dir> ~/", "INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.13.0.redhat-00003 [0.0.0.0, nodeID=554cce00-63d9-11e8-9808-54ee759954c4]", "ARTEMIS_HOME='/opt/redhat/amq-broker-7.8.0-redhat-1'", "-Xbootclasspath/a:USDARTEMIS_HOME/lib/wildfly-common-1.5.2.Final-redhat-00002.jar", "<web bind=\"http://localhost:8161\" path=\"web\"> <app url=\"console\" war=\"hawtio.war\"/> </web>", "<broker_instance_dir> /bin/artemis run", "INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Mesq.audit.resource.handlers=AUDIT_FILE logger.org.apache.activemq.audit.resource.useParentHandlers=false sage Broker version 2.16.0.redhat-00007 [0.0.0.0, nodeID=554cce00-63d9-11e8-9808-54ee759954c4]", "<broker_instance_dir> \\bin\\artemis-service.exe stop", "INFO [org.apache.activemq.artemis.core.server] AMQ221002: Apache ActiveMQ Artemis Message Broker version 2.13.0.redhat-00003 [4782d50d-47a2-11e7-a160-9801a793ea45] stopped, uptime 28 minutes", "ARTEMIS_HOME= <install_dir>", "JAVA_ARGS=-Xbootclasspath/%ARTEMIS_HOME%\\lib\\jboss-logmanager-2.1.10.Final-redhat-00001.jar;%ARTEMIS_HOME%\\lib\\wildfly-common-1.5.2.Final-redhat-00002.jar", "<startargument>-Xbootclasspath/a:%ARTEMIS_HOME%\\lib\\jboss-logmanager-2.1.10.Final-redhat-00001.jar;%ARTEMIS_HOME%\\lib\\wildfly-common-1.5.2.Final-redhat-00002.jar</startargument>", "<web bind=\"http://localhost:8161\" path=\"web\"> <app url=\"console\" war=\"hawtio.war\"/> </web>", "<broker_instance_dir> \\bin\\artemis-service.exe start", "INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.16.0.redhat-00007 [0.0.0.0, nodeID=554cce00-63d9-11e8-9808-54ee759954c4]", "sudo chown amq-broker:amq-broker amq-broker-7.x.x-bin.zip", "sudo mv amq-broker-7.x.x-bin.zip /opt/redhat", "su - amq-broker cd /opt/redhat unzip amq-broker-7.x.x-bin.zip", "<broker_instance_dir> /bin/artemis stop", "cp -r <broker_instance_dir> ~/", "INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.13.0.redhat-00003 [0.0.0.0, nodeID=554cce00-63d9-11e8-9808-54ee759954c4]", "ARTEMIS_HOME='/opt/redhat/amq-broker-7.x.x-bin'", "-Xbootclasspath/a:USDARTEMIS_HOME/lib/wildfly-common-1.5.2.Final-redhat-00002.jar", "<web bind=\"http://localhost:8161\" path=\"web\"> <app url=\"console\" war=\"hawtio.war\"/> </web>", "<broker_instance_dir> /bin/artemis run", "INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Mes INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now 
live sage Broker version 2.18.0.redhat-00010 [0.0.0.0, nodeID=554cce00-63d9-11e8-9808-54ee759954c4]", "<broker_instance_dir> \\bin\\artemis-service.exe stop", "INFO [org.apache.activemq.artemis.core.server] AMQ221002: Apache ActiveMQ Artemis Message Broker version 2.13.0.redhat-00003 [4782d50d-47a2-11e7-a160-9801a793ea45] stopped, uptime 28 minutes", "ARTEMIS_HOME= <install_dir>", "JAVA_ARGS=-Xbootclasspath/%ARTEMIS_HOME%\\lib\\jboss-logmanager-2.1.10.Final-redhat-00001.jar;%ARTEMIS_HOME%\\lib\\wildfly-common-1.5.2.Final-redhat-00002.jar", "<startargument>-Xbootclasspath/a:%ARTEMIS_HOME%\\lib\\jboss-logmanager-2.1.10.Final-redhat-00001.jar;%ARTEMIS_HOME%\\lib\\wildfly-common-1.5.2.Final-redhat-00002.jar</startargument>", "<web bind=\"http://localhost:8161\" path=\"web\"> <app url=\"console\" war=\"hawtio.war\"/> </web>", "<broker_instance_dir> \\bin\\artemis-service.exe start", "INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.18.0.redhat-00010 [0.0.0.0, nodeID=554cce00-63d9-11e8-9808-54ee759954c4]", "DROP TABLE NODE_MANAGER_TABLE", "sudo chown amq-broker:amq-broker amq-broker-7.x.x-bin.zip", "sudo mv amq-broker-7.x.x-bin.zip /opt/redhat", "su - amq-broker cd /opt/redhat unzip amq-broker-7.x.x-bin.zip", "<broker_instance_dir> /bin/artemis stop", "cp -r <broker_instance_dir> ~/", "INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.18.0.redhat-00010 [0.0.0.0, nodeID=554cce00-63d9-11e8-9808-54ee759954c4]", "ARTEMIS_HOME='/opt/redhat/amq-broker-7.x.x-bin'", "<web path=\"web\"> <binding uri=\"https://localhost:8161\" <app url=\"console\" war=\"hawtio.war\"/> </web>", "<broker xmlns=\"http://activemq.apache.org/schema\">", "<management-context xmlns=\"http://activemq.apache.org/schema\">", "<broker_instance_dir> /bin/artemis run", "INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Mes INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live sage Broker version 2.21.0.redhat-00025 [0.0.0.0, nodeID=554cce00-63d9-11e8-9808-54ee759954c4]", "DROP TABLE NODE_MANAGER_TABLE", "<broker_instance_dir> \\bin\\artemis-service.exe stop", "INFO [org.apache.activemq.artemis.core.server] AMQ221002: Apache ActiveMQ Artemis Message Broker version 2.18.0.redhat-00010[4782d50d-47a2-11e7-a160-9801a793ea45] stopped, uptime 28 minutes", "ARTEMIS_HOME= <install_dir>", "JAVA_ARGS=-Xbootclasspath/%ARTEMIS_HOME%\\lib\\jboss-logmanager-2.1.10.Final-redhat-00001.jar;%ARTEMIS_HOME%\\lib\\wildfly-common-1.5.2.Final-redhat-00002.jar", "<startargument>-Xbootclasspath/a:%ARTEMIS_HOME%\\lib\\jboss-logmanager-2.1.10.Final-redhat-00001.jar;%ARTEMIS_HOME%\\lib\\wildfly-common-1.5.2.Final-redhat-00002.jar</startargument>", "<web path=\"web\"> <binding uri=\"https://localhost:8161\" <app url=\"console\" war=\"hawtio.war\"/> </web>", "<broker xmlns=\"http://activemq.apache.org/schema\">", "<management-context xmlns=\"http://activemq.apache.org/schema\">", "<broker_instance_dir> \\bin\\artemis-service.exe start", "INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.21.0.redhat-00025 [0.0.0.0, nodeID=554cce00-63d9-11e8-9808-54ee759954c4]", "sudo chown amq-broker:amq-broker amq-broker-7.x.x-bin.zip", "sudo mv amq-broker-7.x.x-bin.zip /opt/redhat", "su - amq-broker cd 
/opt/redhat unzip amq-broker-7.x.x-bin.zip", "<broker_instance_dir> /bin/artemis stop", "INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.18.0.redhat-00010 [0.0.0.0, nodeID=554cce00-63d9-11e8-9808-54ee759954c4]", "cp -r <broker_instance_dir> ~/", "cd /opt/redhat/apache-artemis-2.28.0.redhat-00019/bin", "./artemis upgrade /var/opt/amq-broker/mybroker", "<metrics> <plugin class-name=\"com.redhat.amq.broker.core.server.metrics.plugins.ArtemisPrometheusMetricsPlugin\"/> </metrics>", "<broker_instance_dir> /bin/artemis run", "2023-02-08 20:53:50,128 INFO [org.apache.activemq.artemis.integration.bootstrap] AMQ101000: Starting ActiveMQ Artemis Server version {upstreamversion}.redhat-{build} 2023-02-08 20:53:51,077 INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version {upstreamversion}.redhat-{build} [0.0.0.0, nodeID=be02a2b2-3e42-11ec-9b8a-4c796e887ecb]", "<broker_instance_dir> \\bin\\artemis-service.exe stop", "INFO [org.apache.activemq.artemis.core.server] AMQ221002: Apache ActiveMQ Artemis Message Broker version 2.18.0.redhat-00010[4782d50d-47a2-11e7-a160-9801a793ea45] stopped, uptime 28 minutes", "cd \\redhat\\amq-broker\\apache-artemis-2.28.0.redhat-00019\\bin", "artemis upgrade C:\\redhat\\amq-broker\\mybroker", "<metrics> <plugin class-name=\"com.redhat.amq.broker.core.server.metrics.plugins.ArtemisPrometheusMetricsPlugin\"/> </metrics>", "<broker_instance_dir> \\bin\\artemis-service.exe start", "2023-02-08 20:53:50,128 INFO [org.apache.activemq.artemis.integration.bootstrap] AMQ101000: Starting ActiveMQ Artemis Server version {upstreamversion}.redhat-{build} 2023-02-08 20:53:51,077 INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version {upstreamversion}.redhat-{build} [0.0.0.0, nodeID=be02a2b2-3e42-11ec-9b8a-4c796e887ecb]" ]
https://docs.redhat.com/en/documentation/red_hat_amq_broker/7.11/html/managing_amq_broker/patching
7.174. pcre
7.174. pcre 7.174.1. RHBA-2012:1240 - pcre bug fix release Updated pcre packages that fix four bugs are now available for Red Hat Enterprise Linux 6. The pcre packages provide the Perl-compatible regular expression (PCRE) library. Bug Fixes BZ# 756105 Prior to this update, patterns with a repeated forward reference failed to match if the first character was not repeated at the start of the matching text. This update modifies the matching algorithm not to expect the first character again. Now, patterns with repeated forward references match as expected. BZ# 759475 Prior to this update, case-less patterns in UTF-8 mode did not match characters at the end of the input text whose encoding length was shorter than the encoding length of the character in the pattern, for example "/a/8i". This update modifies the pcre library to count the length of matched characters correctly. Now, case-less patterns match characters with different encoding lengths correctly, even at the end of an input string. BZ# 799003 Prior to this update, manual pages for the pcre library contained misprints. This update corrects the manual pages. BZ# 842000 Prior to this update, applications that were compiled with the libpcrecpp library from pcre version 6 could not be executed against the libpcrecpp library from pcre version 7 because the application binary interfaces (ABIs) did not match. This update adds the pcre version 6 compat RE::Init() function to the pcre version 7 libpcrecpp library. Applications that were compiled on Red Hat Enterprise Linux 5 and use the RE::Init function can now be executed on Red Hat Enterprise Linux 6. All users of pcre are advised to upgrade to these updated packages, which fix these bugs.
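The "/a/8i" notation above is the syntax used by the pcretest utility distributed with PCRE: a pattern between slashes followed by the 8 (UTF-8 mode) and i (case-less) modifiers. The following interactive session is a hypothetical sketch of such a match; the banner and output format depend on the installed PCRE version:

$ pcretest
PCRE version 7.8 2008-09-05

  re> /a/8i
data> GAMMA
 0: A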
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/pcre
Chapter 3. Release information
Chapter 3. Release information These release notes highlight technology preview items, recommended practices, known issues, and deprecated functionality that you should consider when you deploy this release of Red Hat OpenStack Platform. Notes for updates released during the support lifecycle of this Red Hat OpenStack Platform release appear in the advisory text associated with each update. 3.1. Red Hat OpenStack Platform 17.0 GA - September 21, 2022 These release notes highlight technology preview items, recommended practices, known issues, and deprecated functionality to be taken into consideration when deploying this release of Red Hat OpenStack Platform. 3.1.1. Advisory list This release includes the following advisories: RHEA-2022:6543 Release of components for Red Hat OpenStack Platform 17.0 (Wallaby) RHEA-2022:6544 Release of containers for Red Hat OpenStack Platform 17.0 (Wallaby) RHEA-2022:6545 Red Hat OpenStack Platform 17.0 RHEL 9 deployment images (qcow2 tarballs) RHEA-2022:6546 Red Hat OpenStack Platform 17.0 (Wallaby) RHEL 9 deployment images (RPMs) 3.1.2. Bug Fix These bugs were fixed in this release of Red Hat OpenStack Platform: BZ# 1374002 Before this update, a misconfiguration of communication parameters between the DNS service (designate) worker and deployed BIND instances caused Red Hat OpenStack Platform (RHOSP) 17.0 Beta deployments that have more than one Controller node to fail. With this update, this issue has been resolved, and you can now use the DNS service in a deployment with more than one Controller node. BZ# 1801931 Before this update, the help text for the max_disk_devices_to_attach parameter did not state that 0 is an invalid value. Also, when the max_disk_devices_to_attach parameter was set to 0 , the nova-compute service started when it should have failed. With this update, the max_disk_devices_to_attach parameter help option text states that a value of 0 is invalid, and if max_disk_devices_to_attach is set to 0 , the nova-compute service will now log an error and fail to start. BZ# 1883326 Before this update, an issue existed with PowerFlex storage-assisted volume migration when volume migration was performed without conversion of volume type in cases where it should have been converted to thin from thick provisioned. With this update, this issue is fixed. BZ# 1888069 Before this update, Supermicro servers in UEFI mode would reboot from the network instead of from the local hard disk, causing a failed boot. With this update, Ironic sends the correct raw IPMI commands that request UEFI "boot from hard disk." Booting Supermicro nodes in UEFI mode with IPMI now works as expected. BZ# 1944586 This update fixes a bug that incorrectly redirected registered non-stdout callback output from various Ansible processes to the validations logging directory. Output of other processes is no longer stored in validations logging directory. VF callbacks no longer receive information about plays, unless requested. BZ# 1984556 The collectd smart plugin requires the CAP_SYS_RAWIO capability. CAP_SYS_RAWIO is not present by default in the configuration, and before this update, you could not add it. With this update, you can use the CollectdContainerAdditionalCapAdd parameter to add CAP_SYS_RAWIO. Enter the following parameter value assignment in an environment file. Example BZ# 1991657 Before this update, baremetal node introspection failed with an error and did not retry, when the node had a transient lock on it. 
With this update, you can perform introspection even when the node has a lock. BZ# 2050773 Before this update, if an operator defined a custom value for the volume:accept_transfer policy that referred to the project_id of the user making the volume transfer accept request, the request would fail. This update removes a duplicate policy check that incorrectly compared the project_id of the requestor to the project_id associated with the volume before transfer. The check done at the Block Storage API layer will now function as expected. BZ# 2064019 Before this update, network interruptions caused a bare metal node's power state to become None , and enter the maintenance state. This is due to Ironic's connection cache of Redfish node sessions entering a stale state and not being retried. This state cannot be recovered without restarting the Ironic service. With this update, the underlying REST client has been enhanced to return specific error messages. These error messages are used by Ironic to invalidate cached sessions. BZ# 2101937 With this fix, traffic is distributed on VLAN provider networks in ML2/OVN deployments. Previously, traffic on VLAN provider networks was centralized even with the Distributed Virtual Router (DVR) feature enabled. BZ# 2121098 Before this update in Red Hat OpenStack Platform (RHOSP) 17.0 Beta, Networking service (neutron) requests could fail with a 504 Gateway Time-out if they occurred when the Networking service reconnected to ovsdb-server . These reconnections could happen during failovers or through ovsdb-server leader transfers during database compaction. If neutron debugging was enabled, the Networking service rapidly logged a large number of OVSDB transaction returned TRY_AGAIN" DEBUG messages, until the transaction timed out with an exception. With this update, the reconnection behavior is fixed to handle this condition, with a single retry of the transaction until a successful reconnection. 3.1.3. Enhancements This release of Red Hat OpenStack Platform features the following enhancements: BZ# 1689706 This enhancement includes OpenStack CLI (OSC) support for Block Storage service (cinder) API 3.42. This allows OSC to extend an online volume. BZ# 1699454 With this update, you can restore snapshots with the CephFS Native and CephFS with NFS backends of the Shared File Systems service (manila) by creating a new share from a snapshot. BZ# 1752776 In Red Hat OpenStack Platform (RHOSP) 17.0 GA, non-admin users have access to new parameters when they run the openstack server list command: --availability-zone <az_name> --config-drive --key-name <key_name> --power-state <state> --task-state <state> --vm-state <state> --progress <percent_value> --user <name_or_ID> For more information, see server list . BZ# 1758161 With this update, Red Hat OpenStack Platform director deployed Ceph includes the RGW daemon, replacing the Object Storage service (swift) for object storage. To keep the Object Storage service, use the cephadm-rbd-only.yaml file instead of cephadm.yaml . BZ# 1813560 With this update, the Red Hat OpenStack Platform (RHOSP) 17 Octavia amphora image now includes HAProxy 2.4.x as distributed in Red Hat Enterprise Linux (RHEL) 9. This improves the performance of Octavia load balancers; including load balancers using flavors with more than one vCPU core. BZ# 1839169 With this update, cephadm and orchestrator replace ceph-ansible. 
You can use director with cephadm to deploy the ceph cluster and additional daemons, and use a new `tripleo-ansible`role to configure and enable the Ceph backend. BZ# 1848153 With this update, you can now use Red Hat OpenStack Platform director to configure the etcd service to use TLS endpoints when deploying TLS-everywhere. BZ# 1903610 This enhancement adds the MemcachedMaxConnections parameter. You can use MemcachedMaxConnections to control the maximum number of memcache connections. BZ# 1904086 With this enhancement, you can view a volume Encryption Key ID using the cinder client command 'cinder --os-volume-api-version 3.64 volume show <volume_name>'. You must specify microversion 3.64 to view the value. BZ# 1944872 This enhancement adds the '--limit' argument to the 'openstack tripleo validator show history' command. You can use this argument to show only a specified number of the most recent validations. BZ# 1946956 This enhancement changes the default machine type for each host architecture to Q35 ( pc-q35-rhel9.0.0 ) for new Red Hat OpenStack Platform 17.0 deployments. The Q35 machine type provides several benefits and improvements, including live migration of instances between different RHEL 9.x minor releases, and the native PCIe hotplug that is faster than the ACPI hotplug used by the i440fx machine type. BZ# 1946978 With this update, the default machine type is RHEL9.0-based Q35 pc-q35-rhel9.0.0 , with the following enhancements: Live migration across RHEL minor releases. Native PCIe hotplug. This is also ACPI-based like the i440fx machine type. Intel input-output memory management unit (IOMMU) emulation helps protect guest memory from untrusted devices that are directly assigned to the guest. Faster SATA emulation. Secure boot. BZ# 1954103 With this enhancement you can use the PluginInstanceFormat parameter for collectd to specify more than one value. BZ# 1954274 This enhancement improves the operating performance of the Bare Metal Provisioning service (ironic) to optimize the performance of large workloads. BZ# 1959707 In Red Hat OpenStack Platform (RHOSP) 17.0 GA, the openstack tripleo validator show command has a new parameter, --limit <number> , that enables you to limit the number of validations that TripleO displays. The default value is to display the last 15 validations. For more information, see tripleo validator show history . BZ# 1971607 With this update, the Validation Framework provides a configuration file in which you can set parameters for particular use. You can find an example of this file at the root of the code source or in the default location: /etc/validation.cfg . You can use the default file in /etc/ or use your own file and provide it to the CLI with the argument --config . When you use a configuration file there is an order for the variables precedence. The following order is the order of variable precedence: User's CLI arguments Configuration file Default interval values BZ# 1973356 This security enhancement reduces the user privilege level required by the OpenStack Shared File System service (manila). You no longer need permissions to create and manipulate Ceph users, because the Shared File Systems service now uses the APIs exposed by the Ceph Manager service for this purpose. BZ# 2041429 You can now pre-provision bare metal nodes in your application by using the overcloud node [un]provision command. 3.1.4. Technology Preview The items listed in this section are provided as Technology Previews. 
For further information on the scope of Technology Preview status, and the associated support implications, refer to https://access.redhat.com/support/offerings/techpreview/ . BZ# 1884782 In Red Hat OpenStack Platform (RHOSP) 17.0 GA, a technology preview is available for integration between the RHOSP Networking service (neutron) ML2/OVN and the RHOSP DNS service (designate). As a result, the DNS service does not automatically add DNS entries for newly created VMs. BZ# 1896551 In Red Hat OpenStack Platform (RHOSP) 17.0, a technology preview is available for Border Gateway Protocol (BGP) to route the control plane, floating IPs, and workloads in provider networks. By using BGP advertisements, you do not need to configure static routes in the fabric, and RHOSP can be deployed in a pure Layer 3 data center. RHOSP uses Free Range Routing (FRR) as the dynamic routing solution to advertise and withdraw routes to control plane endpoints as well as to VMs in provider networks and Floating IPs. BZ# 1901686 In Red Hat OpenStack Platform 17.0, secure role-based access control (RBAC) is available for the Load-balancing service (octavia) as a technology preview. BZ# 1901687 In Red Hat OpenStack Platform 17.0, Secure RBAC is available for the DNS service (designate) as a technology preview. BZ# 2008274 In Red Hat OpenStack Platform 17.0, a technology preview is available for integrating the DNS service (designate) with a pre-existing DNS infrastructure that uses BIND 9. For more information, see Deploying the DNS service with pre-existing BIND 9 servers BZ# 2120392 In Red Hat OpenStack Platform 17.0, a technology preview is available for creating single NUMA node instances that have both pinned and floating CPUs. BZ# 2120407 In Red Hat OpenStack Platform 17.0, a technology preview is available for live migrating, unshelving and evacuating an instance that uses a port that has resource requests, such as a guaranteed minimum bandwidth QoS policy. BZ# 2120410 In Red Hat OpenStack Platform 17.0, a technology preview is available for Compute service scheduling based on routed networks. Network segments are reported to the Placement service as host aggregates. The Compute service includes the network segment information in the Placement service query to ensure that the selected host is connected to the correct network segment. This feature enables more accurate scheduling through better tracking of IP availability and locality, and more accurate instance migration, resizing, or unshelving through awareness of the routed network IP subnets. BZ# 2120743 In Red Hat OpenStack Platform 17.0, a technology preview is available for rescuing an instance booted from a volume. BZ# 2120746 In Red Hat OpenStack Platform 17.0, a technology preview is available to define custom inventories and traits in a declarative provider.yaml configuration file. Cloud operators can model the availability of physical host features by using custom traits, such as CUSTOM_DIESEL_BACKUP_POWER , CUSTOM_FIPS_COMPLIANT , and CUSTOM_HPC_OPTIMIZED . They can also model the availability of consumable resources by using resource class inventories, such as CUSTOM_DISK_IOPS , and CUSTOM_POWER_WATTS . Cloud operators can use the ability to report specific host information to define custom flavors that optimize instance scheduling, particularly when used in collaboration with reserving hosts by using isolated aggregates. Defining a custom inventory prevents oversubscription of Power IOPS and other custom resources that an instance consumes. 
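As a rough illustration of this declarative format, a provider.yaml file based on the upstream Nova provider configuration schema might look like the following sketch; the resource provider name and the inventory figures are hypothetical, while the CUSTOM_DISK_IOPS resource class and CUSTOM_FIPS_COMPLIANT trait are the example names mentioned above:

meta:
  schema_version: '1.0'
providers:
  - identification:
      name: 'example-compute-0.localdomain'
    inventories:
      additional:
        - CUSTOM_DISK_IOPS:
            total: 25000
            reserved: 0
    traits:
      additional:
        - 'CUSTOM_FIPS_COMPLIANT'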
BZ# 2120756 In Red Hat OpenStack Platform 17.0, a technology preview is available to configure counting of quota usage of cores and ram by querying placement for resource usage and instances from instance mappings in the API database, instead of counting resources from separate cell databases. This makes quota usage counting resilient to temporary cell outages or poor cell performance in a multi-cell environment. Set the following configuration option to count quota usage from placement: BZ# 2120757 In Red Hat OpenStack Platform 17.0, a technology preview is available for requesting that images are pre-cached on Compute nodes in a host aggregate, when using microversion 2.81. To reduce boot time, you can request that a group of hosts within an aggregate fetch and cache a list of images. BZ# 2120761 In Red Hat OpenStack Platform 17.0, a technology preview is available to use traits and the Placement service to prefilter hosts by using the supported device model traits declared by the virt drivers. BZ# 2128042 In Red Hat OpenStack Platform 17.0, a technology preview is available for Compute node support of multiple NVIDIA vGPU types for each physical GPU. BZ# 2128056 In Red Hat OpenStack Platform 17.0, a technology preview is available for cold migrating and resizing instances that have vGPUs. For a known issue affecting the vGPU Technology Preview, see https://bugzilla.redhat.com/show_bug.cgi?id=2116979 . BZ# 2128070 In Red Hat OpenStack Platform 17.0, a technology preview is available for creating an instance with a VirtIO data path acceleration (VDPA) interface. 3.1.5. Release Notes This section outlines important details about the release, including recommended practices and notable changes to Red Hat OpenStack Platform. You must take this information into account to ensure the best possible outcomes for your deployment. BZ# 1767084 With this update, the CephFS drivers in the OpenStack Shared File Systems service (manila) are updated so that you can manage provisioning and storage lifecycle operations by using the Ceph Manager API. When you create new file shares, the shares are created in a new format that is quicker for creating, deleting and operations. This transition does not affect pre-existing file shares. BZ# 1813573 This enhancement includes Octavia support for object tags. This allows users to add metadata to load balancer resources and filter query results based on tags. BZ# 2013120 With this update, you can supply a new argument --skiplist to the validation run command. Use this command with a yaml file containing services to skip when running validations. BZ# 2090813 The data collection service (Ceilometer) is supported for collection of Red Hat OpenStack Platform (RHOSP) telemetry and events. Ceilometer is also supported for the transport of those data points to the metrics storage service (gnocchi) for the purposes of autoscaling, and delivery of metrics and events to Service Telemetry Framework (STF) for RHOSP monitoring. BZ# 2111015 In an ML2/OVS deployment, Open vSwitch (OVS) does not support offloading OpenFlow rules that have the skb_priority , skb_mark , or output queue fields set. Those fields are needed to provide quality-of-service (QoS) support for virtio ports. If you set a minimum bandwidth rule for a virtio port, the Neutron Open vSwitch agent marks the traffic of this port with a Packet Mark Field. As a result, this traffic cannot be offloaded, and it affects the traffic in other ports. 
If you set a bandwidth limit rule, all traffic is marked with the default 0 queue, which means no traffic can be offloaded. As a workaround, if your environment includes OVS hardware offload ports, disable the packet marking in the nodes that require hardware offloading. After you disable the packet marking, it will not be possible to set rate limiting rules for virtio ports. However, differentiated services code point (DSCP) marking rules will still be available. In the configuration file, set the disable_packet_marking flag to true . After you edit the configuration file, you must restart the neutron_ovs_agent container. For example: BZ# 2111527 In RHOSP 17.0 you must use Ceph containers based on RHCSv5.2 GA content. BZ# 2117229 Previously, the collectd processes plugin was enabled by default, without a list of processes to watch. This would cause messages in collectd logs like "procs_running not found". With this update, the collectd processes plugin is removed from the list of collectd plugins that are installed and enabled by default. You can enable the plugin by adding it to the configuration. 3.1.6. Known Issues These known issues exist in Red Hat OpenStack Platform at this time: BZ# 2126476 NFV is not supported in RHOSP 17.0. Do not deploy NFV use cases in RHOSP 17.0. BZ# 1966157 There is a limitation when using ML2/OVN with provider:network_type geneve with a Mellanox adapter on a Compute node that has more than one instance on the geneve network. The floating IP of only one of the instances will be reachable. You can track the progress of the resolution on this Bugzilla ticket. BZ# 2085583 There is currently a known issue wherein long-running operations can cause the ovsdb connection to time out causing reconnects. These time outs can then cause the nova-compute agent to become unresponsive. Workaround: You can use the command-line client instead of the default native python bindings. Use the following parameters in your heat templates to use the command-line client: BZ# 2091076 Before this update, the health check status script failed because it relied on the podman log content that was no longer available. Now the health check script uses the podman socket instead of the podman log. BZ# 2105291 There is currently a known issue where 'undercloud-heat-purge-deleted' validation fails. This is because it is not compatible with Red Hat OpenStack Platform 17. Workaround: Skip 'undercloud-heat-purge-deleted' with '--skip-list' to skip this validation. BZ# 2104979 A known issue in RHOSP 17.0 prevents the default mechanism for selecting the hypervisor fully qualified domain name (FQDN) from being set properly if the resource_provider_hypervisors heat parameter is not set. This causes the SRIOV or OVS agent to fail to start. Workaround: Specify the hypervisor FQDN explicitly in the heat template. The following is an example of setting this parameter for the SRIOV agent: ExtraConfig: neutron::agents::ml2::sriov::resource_provider_hypervisors: "enp7s0f3:%{hiera('fqdn_canonical')},enp5s0f0:%{hiera('fqdn_canonical')}". BZ# 2107896 There is currently a known issue that causes tuned kernel configurations to not be applied after initial provisioning. Workaround: You can use the following custom playbook to ensure that the tuned kernel command line arguments are applied. 
Save the following playbook as /usr/share/ansible/tripleo-playbooks/cli-overcloud-node-reset-blscfg.yaml on the undercloud node: Configure the role in the node definition file, overcloud-baremetal-deploy.yaml , to run the cli-overcloud-node-reset-blscfg.yaml playbook before the playbook that sets the kernelargs : BZ# 2109597 There is a hardware (HW) limitation with CX-5. Every network traffic flow has a direction in HW, either transmit (TX) or receive (RX). If the source port of the flow is a virtual function (VF), then it is also TX flow in HW. CX-5 cannot pop VLAN on TX path, which prevents offloading the flow with pop_vlan to the HW. BZ# 2112988 There is currently a known issue where the Swift API does not work and returns a 401 error when multiple Controller nodes are deployed and Ceph is enabled. A workaround is available at https://access.redhat.com/solutions/6970061 . BZ# 2116529 Live migration fails when executing the QEMU command migrate-set-capabilities . This is because the post-copy feature that is enabled by default is not supported. Choose one of the following workaround options: Workaround Option 1: Set vm.unprivileged_userfaultfd = 1 on Compute nodes to enable post-copy on the containerized libvirt: Make a new file: USD touch /etc/sysctl.d/50-userfault.conf . Add vm.unprivileged_userfaultfd = 1 to /etc/sysctl.d/50-userfault.conf . Load the file: USD sysctl -p /etc/sysctl.d/50-userfault.conf . Workaround Option 2: Set the sysctl flag through director, by setting the ExtraSysctlSettings parameter. Workaround Option 3: Disable the post-copy feature completely, by setting the NovaLiveMigrationPermitPostCopy parameter to false . BZ# 2116979 When using the Technology Preview vGPU support features, a known issue prevents mdev devices from being freed when stopping, moving, or deleting vGPU instances in RHOSP 17. Eventually, all mdev devices become consumed, and additional instances with vGPUs cannot be created on the compute host. BZ# 2116980 If you launch a vGPU instance in RHOSP 17 you cannot delete it, stop it, or move it. When an instance with a vGPU is deleted, migrated off its compute host, or stopped, the vGPU's underlying mdev device is not cleaned up. If this happens to enough instances, all available mdev devices will be consumed, and no further instances with vGPUs can be created on that compute host. BZ# 2120383 There is currently a known issue when creating instances that have an emulated Trusted Platform Module (TPM) device. Workaround: Disable Security-Enhanced Linux (SELinux). BZ# 2120398 There is currently a known issue with deploying multi-cell and multi-stack overclouds on RHOSP 17. This is a regression with no workaround, therefore the multi-cell and multi-stack overcloud features are not available in RHOSP 17.0. BZ# 2120766 There is currently a known issue with the RHEL firmware definition file missing from some machine types, which causes the booting of instances with an image firmware of UEFI to fail with a UEFINotSupported exception. This issue is being addressed by https://bugzilla.redhat.com/show_bug.cgi?id=2109644 . There is also a known issue when mem_encryption=on in the kernel args of an AMD SEV Compute node, that results in the Compute node kernel hanging after a reboot and not restarting. There is no workaround for these issues, therefore the AMD SEV feature is not available in RHOSP 17.0. BZ# 2120773 There is currently a known issue with shutting down and restarting instances after a Compute node reboot on RHOSP 17. 
When a Compute node is rebooted, the automated process for gracefully shutting down the instance fails, which causes the instance to have less time to shut down before the system forces them to stop. The results of the forced stop may vary. Ensure you have fresh backups for all critical workloads before rebooting Compute nodes. BZ# 2121752 Because of a performance issue with the new socket NUMA affinity policy for PCI passthrough devices and SR-IOV interfaces, the socket NUMA affinity policy is not supported in RHOSP 17.0. BZ# 2124294 Sensubility does not have permission to access /run/podman/podman.sock , which causes the container health check to fail to send the service container status data to Service Telemetry Framework (STF). Workaround: Run the following command on all overcloud nodes after deployment: sudo podman exec -it collectd setfacl -R -m u:collectd:rwx /run/podman Result: User collectd gets access to /run/podman path recursively allowing sensubility to connect to podman. BZ# 2125159 In Red Hat OpenStack Platform (RHOSP) 17.0 GA, there is a known issue where ML2/OVN deployments fail to automatically create DNS records with the RHOSP DNS service (designate). The cause for this problem is that the required Networking service (neutron) extension, dns_domain_ports , is not present. Workaround: currently there is no workaround, but the fix has been targeted for a future RHOSP release. BZ# 2126810 In Red Hat OpenStack Platform (RHOSP) 17.0, the DNS service (designate) and the Load-balancing service (octavia) are misconfigured for high availability. The RHOSP Orchestration service (heat) templates for these services use the non-Pacemaker version of the Redis template. Workaround: include environments/ha-redis.yaml in the overcloud deploy command after the enable-designate.yaml and octavia.yaml environment files. BZ# 2127965 In Red Hat OpenStack Platform (RHOSP) 17.0 GA, there is a known issue where the Free Range Router (FRR) container does not start after the host on which it resides is rebooted. This issue is caused by a missing file in the BGP configuration. Workaround: create the file, /etc/tmpfiles.d/run-frr.conf , and add the following line: After you make this change, tmpfiles recreates /run/frr after each reboot and the FRR container can start. BZ# 2128928 Integration with Red Hat Satellite is not supported in RHOSP 17.0. Only Red Hat CDN is supported as a package repository and container registry. Satellite support will resume in a future release. BZ# 2120377 You cannot use the UEFI Secure Boot feature because there is currently a known issue with UEFI boot for instances. This is due to an underlying RHEL issue. BZ# 2120384 You cannot create Windows Server 2022 instances on RHOSP because they require vTPM support, which is not currently available. BZ# 2152218 There is currently a known issue when attaching a volume to an instance, or detaching a volume from an instance, when the instance is in the process of booting up or shutting down. You must wait until the instance is fully operational, or fully stopped, before attaching or detaching a volume. BZ# 2153815 There is currently a known issue with creating instances when the instance flavor includes resource usage extra specs, quota:cpu_* . On RHOSP 17.0, attempts to create an instance with a flavor that limits the CPU quotas encounter the following error: "Requested CPU control policy not supported by host". 
This error is raised on RHOSP 17.0 on RHEL 9 because the Compute service assumes that the host is running cgroups instead of cgroups-v2 , therefore it incorrectly detects that the host does not support resource usage extra specs. BZ# 2162242 There is currently a known issue with CPU pinning on RHEL 9 kernels older than kernel-5.14.0-70.43.1.el9_0 that causes soft and hard CPU affinity on all existing cgroups to be reset when a new cgroup is created. This issue is being addressed in https://bugzilla.redhat.com/show_bug.cgi?id=2143767 . To use CPU pinning, update your kernel to kernel-5.14.0-70.43.1.el9_0 or newer and reboot the host. 3.1.7. Deprecated Functionality The items in this section are either no longer supported, or will no longer be supported in a future release. BZ# 1874778 In Red Hat OpenStack Platform 17.0, the iscsi deployment interface has been deprecated. The default deployment interface is now direct . Bug fixes and support are provided while the feature is deprecated but Red Hat will not implement new feature enhancements. In a future release, the interface will be removed. BZ# 1946898 In Red Hat OpenStack Platform 17.0, the QEMU i440fx machine type has been deprecated. The default machine type is now Q35, pc-q35-rhel9.0.0 . While the pc-i440fx-* machine types are still available, do not use these machine types for new workloads. Ensure that you convert all workloads that use the QEMU i440fx machine type to the Q35 machine type before you upgrade to RHOSP 18.0, which requires VM downtime. Bug fixes and support are provided while the feature is deprecated, but Red Hat will not implement new feature enhancements. BZ# 2084206 The use of the QPID Dispatch Router (QDR) for transport of RHOSP telemetry towards Service Telemetry Framework (STF) is deprecated in RHOSP 17.0. BZ# 2090811 The metrics data storage service (gnocchi) has been deprecated since RHOSP 15. Gnocchi is fully supported for storage of metrics when used with the autoscaling use case. For a supported monitoring solution for RHOSP, see Service Telemetry Framework (STF) . Use of gnocchi for telemetry storage as a general monitoring solution is not supported. BZ# 2090812 The Alarming service (aodh) has been deprecated since Red Hat OpenStack Platform(RHOSP) 15. The Alarming service is fully supported for delivery of alarms when you use it with the autoscaling use case. For delivery of metrics-based alarms for RHOSP, see Service Telemetry Framework (STF). Use of the Alarming service as part of a general monitoring solution is not supported. BZ# 2100222 The snmp service was introduced to allow the data collection service (Ceilometer) on the undercloud to gather metrics via the snmpd daemon deployed to the overcloud nodes. Telemetry services were previously removed from the undercloud, so the snmp service is no longer necessary or usable in the current state. BZ# 2103869 The Derived Parameters feature is deprecated. It will be removed in a future release. The Derived Parameters feature is configured using the --plan-environment-file option of the openstack overcloud deploy command. Workaround / Migration Instructions HCI overclouds require system tuning. There are many different options for system tuning. The Derived Parameters functionality tuned systems with director by using hardware inspection data and set tuning parameters using the --plan-environment-file option of the openstack overcloud deploy command. The Derived Parameters functionality is deprecated in Release 17.0 and is removed in 17.1. 
The following parameters were tuned by this functionality: IsolCpusList KernelArgs NeutronPhysnetNUMANodesMapping NeutronTunnelNUMANodes NovaCPUAllocationRatio NovaComputeCpuDedicatedSet NovaComputeCpuSharedSet NovaReservedHostMemory OvsDpdkCoreList OvsDpdkSocketMemory OvsPmdCoreList To set and tune these parameters starting in 17.0, observe their values using the available command line tools and set them using a standard heat template. BZ# 2128697 The ML2/OVS mechanism driver is deprecated in RHOSP 17.0. Over several releases, Red Hat is replacing ML2/OVS with ML2/OVN. For instance, starting with RHOSP 15, ML2/OVN became the default mechanism driver. Support is available for the deprecated ML2/OVS mechanism driver through the RHOSP 17 releases. During this time, the ML2/OVS driver remains in maintenance mode, receiving bug fixes and normal support, and most new feature development happens in the ML2/OVN mechanism driver. In RHOSP 18.0, Red Hat plans to completely remove the ML2/OVS mechanism driver and stop supporting it. If your existing Red Hat OpenStack Platform (RHOSP) deployment uses the ML2/OVS mechanism driver, start now to evaluate a plan to migrate to the mechanism driver. Migration is supported in RHOSP 16.2 and will be supported in RHOSP 17.1. Migration tools are available in RHOSP 17.0 for test purposes only. Red Hat requires that you file a proactive support case before attempting a migration from ML2/OVS to ML2/OVN. Red Hat does not support migrations without the proactive support case. See How to submit a Proactive Case. 3.1.8. Removed Functionality BZ# 1918403 Technology preview support was added in RHOSP 16.1 for configuring NVDIMM Compute nodes to provide persistent memory for instances. Red Hat has removed support for persistent memory from RHOSP 17.0 and future releases in response to the announcement by the Intel Corporation on July 28, 2022 that they are discontinuing investment in their Intel(R) OptaneTM business: Intel(R) OptaneTM Business Update: What Does This Mean for Warranty and Support Intel(R) Product Change Notification #119311-00 Cloud operators must ensure that no instances use the vPMEM feature before upgrading to 17.1. BZ# 1966898 In Red Hat OpenStack Platform 17.0, panko and its API were removed from the distribution. BZ# 1984889 In this release, Block Storage service (cinder) backup support for Google Cloud Services (GCS) has been removed due to a reliance on libraries that are not FIPS compliant. BZ# 2022714 In Red Hat OpenStack Platform 17.0, the collectd-write_redis plugin was removed. BZ# 2023893 In Red Hat OpenStack Platform 17.0, a dependency has been removed from the distribution so that the subpackage collectd-memcachec cannot be built anymore. The collectd- memcached plugin provides similar functionality to that of collectd-memcachec . BZ# 2065540 In Red Hat OpenStack Platform 17.0, the ability to deliver metrics from collectd to gnocchi was removed. BZ# 2094409 In Red Hat OpenStack Platform 17.0, the deprecated dbi and notify_email collectd plugins were removed. BZ# 2101948 In Red Hat OpenStack Platform 17.0, the collectd processes plugin has been removed from the default list of plugins. Loading the collectd processes plugin can cause logs to flood with messages, such as "procs_running not found". BZ# 2127184 In Red Hat OpenStack Platform 17.0, support for POWER (ppc64le) architectures has been removed. Only the x86_64 architecture is supported. 3.2. 
Red Hat OpenStack Platform 17.0.1 Maintenance Release - January 25, 2023 These release notes highlight technology preview items, recommended practices, known issues, and deprecated functionality to be taken into consideration when deploying this release of Red Hat OpenStack Platform. 3.2.1. Advisory list This release includes the following advisories: RHBA-2023:0271 Red Hat OpenStack Platform 17.0.1 bug fix and enhancement advisory RHBA-2023:0277 Red Hat OpenStack Platform 17.0.1 director images RHBA-2023:0278 Red Hat OpenStack Platform 17.0.1 director image RPMs RHBA-2023:0279 Updated Red Hat OpenStack Platform 17.0.1 container images RHSA-2023:0274 Moderate: Red Hat OpenStack Platform 17.0 (python-XStatic-Angular) security update RHSA-2023:0275 Moderate: Red Hat OpenStack Platform 17.0 (openstack-neutron) security update RHSA-2023:0276 Moderate: Red Hat OpenStack Platform 17.0 (python-scciclient) security update 3.2.2. Bug Fix These bugs were fixed in this release of Red Hat OpenStack Platform: BZ# 2085583 Before this update, ovsdb connection time-outs caused the nova-compute agent to become unresponsive. With this update, the issue has been fixed. BZ# 2091076 Before this update, unavailability of the Podman log content caused the health check status script to fail. With this update, an update to the health check status script resolves the issue by using the Podman socket instead of the Podman log. As a result, API health checks, provided through sensubility for Service Telemetry Framework, are now operational. BZ# 2106763 Before this update, an underlying RHEL issue caused a known issue with UEFI boot for instances. With this update, the underlying RHEL issue has now been fixed and the UEFI Secure Boot feature for instances is now available. BZ# 2121098 Before this update, in Red Hat OpenStack Platform (RHOSP) 17.0, Networking service (neutron) requests sometimes failed with a 504 Gateway Time-out if the request was made when the Networking service reconnected to ovsdb-server . These reconnections sometimes happened during failovers or through ovsdb-server leader transfers during database compaction. If neutron debugging was enabled, the Networking service rapidly logged a large number of OVSDB transaction-returned "TRY_AGAIN" DEBUG messages, until the transaction timed out with an exception. With this update, the reconnection behavior is fixed to handle this condition, with a single retry of the transaction until a successful reconnection. BZ# 2121634 Before this update, the Red Hat OpenStack Platform (RHOSP) DNS service (designate) was unable to start its central process when TLS-everywhere was enabled. This was caused by an inability to connect to Redis over TLS. With this update in RHOSP 17.0.1, this issue has been resolved. BZ# 2122926 Before this update, adding a member without subnet information when the subnet of the member is different than the subnet of the load balancer Virtual IP (VIP) caused the ovn-octavia provider to wrongly use the VIP subnet for the subnet_id , which resulted in no error but no connectivity to the member. With this update, a check that the actual IP of the member belongs to the same CIDR that the VIP belongs to when there is no subnet information resolves the issue. If the two IP addresses do not match, the action is rejected, asking for the subnet_id . BZ# 2133029 Before this update, the Alarming service (aodh) used a deprecated gnocchi API to aggregate metrics. This resulted in incorrect metric measures of CPU use in the gnocchi results. 
With this update, use of dynamic aggregation in gnocchi, which supports the ability to make reaggregations of existing metrics and the ability to make and transform metrics as required, resolves the issue. CPU use in gnocchi is computed correctly. BZ# 2135549 Before this update, deploying RHEL 8.6 images in UEFI mode caused a failure when using the ironic-python-agent service because the ironic-python-agent service did not understand the RHEL 8.6 UEFI boot loader hint file. With this update, you can now deploy RHEL 8.6 in UEFI mode. BZ# 2138046 Before this update, when you used the whole disk image overcloud-hardened-uefi-full to boot overcloud nodes, nodes that used the Legacy BIOS boot mode failed to boot because the lvmid of the root volume was different to the lvmid referenced in grub.cfg . With this update, the virt-sysprep task to reset the lvmid has been disabled, and nodes with Legacy BIOS boot mode can now be booted with the whole disk image. BZ# 2140881 Before this update, the network_config schema in the bare-metal provisioning definition did not allow setting the num_dpdk_interface_rx_queues parameter, which caused a schema validation error that blocked the bare-metal node provisioning process. With this update, the schema validation error no longer occurs when the 'num_dpdk_interface_rx_queues' parameter is used. 3.2.3. Known Issues These known issues exist in Red Hat OpenStack Platform at this time: BZ# 2058518 There is currently a known issue when the Object Storage service (swift) client blocks a Telemetry service (ceilometer) user from fetching object details under the condition of the Telemetry service user having inadequate privileges to poll objects from the Object Storage service. Workaround: Associate the ResellerAdmin role with the Telemetry service user by using the command openstack role add --user ceilometer --project service ResellerAdmin . BZ# 2104979 A known issue in RHOSP 17.0 prevents the default mechanism for selecting the hypervisor fully qualified domain name (FQDN) from being set properly if the resource_provider_hypervisors heat parameter is not set. This causes the single root I/O virtualization (SR-IOV) or Open vSwitch (OVS) agent to fail to start. Workaround: Specify the hypervisor FQDN explicitly in the heat template. The following is an example of setting this parameter for the SRIOV agent: ExtraConfig: neutron::agents::ml2::sriov::resource_provider_hypervisors: "enp7s0f3:%{hiera('fqdn_canonical')},enp5s0f0:%{hiera('fqdn_canonical')}". BZ# 2105312 There is currently a known issue where the ovn/ovsdb_probe_interval value is not configured in the file ml2_conf.ini with the value specified by OVNOvsdbProbeInterval because a patch required to configure the neutron server based on OVNOvsdbProbeInterval is not included in 17.0.1. Workaround: Deployments that use OVNOvsdbProbeInterval must use ExtraConfig hooks in the following manner to configure the neutron server: BZ# 2107896 There is currently a known issue that causes tuned kernel configurations to not be applied after initial provisioning. Workaround: You can use the following custom playbook to ensure that the tuned kernel command line arguments are applied. 
Save the following playbook as /usr/share/ansible/tripleo-playbooks/cli-overcloud-node-reset-blscfg.yaml on the undercloud node: Configure the role in the node definition file, overcloud-baremetal-deploy.yaml , to run the cli-overcloud-node-reset-blscfg.yaml playbook before the playbook that sets the kernelargs : BZ# 2125159 There is currently a known issue in RHOSP 17.0 where ML2/OVN deployments fail to automatically create DNS records with the RHOSP DNS service (designate) because the required Networking service (neutron) extension, dns_domain_ports , is not present. There is currently no workaround. A fix is planned for a future RHOSP release. BZ# 2127965 There is currently a known issue in RHOSP 17.0 where the Free Range Router (FRR) container does not start after the host on which it resides is rebooted. This issue is caused by a missing file in the BGP configuration. Workaround: Create the file, /etc/tmpfiles.d/run-frr.conf , and add the following line: After you make this change, tmpfiles recreates /run/frr after each reboot and the FRR container can start.
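The following is an illustrative sketch of the BZ# 2127965 workaround described above; the documented workaround only requires creating the file, and the systemd-tmpfiles invocation is an added assumption so that the directory is recreated immediately rather than on the next reboot:
# echo 'd /run/frr 0750 root root - -' > /etc/tmpfiles.d/run-frr.conf
# systemd-tmpfiles --create /etc/tmpfiles.d/run-frr.conf
The tmpfiles entry tells systemd to recreate the /run/frr directory with mode 0750 and root ownership at every boot, which is the condition the FRR container needs in order to start.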
[ "parameter_defaults: CollectdExtraPlugins: - smart CollectdContainerAdditionalCapAdd: \"CAP_SYS_RAWIO\"", "parameter_defaults: ControllerExtraConfig: nova::config::nova_config: quota/count_usage_from_placement: value: 'True'", "cat `/var/lib/config-data/puppet-generated/neutron/etc/neutron/plugins/ml2/openvswitch_agent.ini` [ovs] disable_packet_marking=True", "parameter_defaults: ComputeExtraConfig: nova:os_vif_ovs:ovsdb_interface => 'vsctl'", "- name: Reset BLSCFG of compute node(s) meant for NFV deployments hosts: allovercloud any_errors_fatal: true gather_facts: true pre_tasks: - name: Wait for provisioned nodes to boot wait_for_connection: timeout: 600 delay: 10 tasks: - name: Reset BLSCFG flag in grub file, if it is enabled become: true lineinfile: path: /etc/default/grub line: \"GRUB_ENABLE_BLSCFG=false\" regexp: \"^GRUB_ENABLE_BLSCFG=.*\" insertafter: '^GRUB_DISABLE_RECOVERY.*'", "- name: ComputeOvsDpdkSriov count: 2 hostname_format: computeovsdpdksriov-%index% defaults: networks: - network: internal_api subnet: internal_api_subnet - network: tenant subnet: tenant_subnet - network: storage subnet: storage_subnet network_config: template: /home/stack/osp17_ref/nic-configs/computeovsdpdksriov.j2 config_drive: cloud_config: ssh_pwauth: true disable_root: false chpasswd: list: |- root:12345678 expire: False ansible_playbooks: - playbook: /usr/share/ansible/tripleo-playbooks/cli-overcloud-node-reset-blscfg.yaml - playbook: /usr/share/ansible/tripleo-playbooks/cli-overcloud-node-kernelargs.yaml extra_vars: reboot_wait_timeout: 600 kernel_args: 'default_hugepagesz=1GB hugepagesz=1G hugepages=32 iommu=pt intel_iommu=on isolcpus=1-11,13-23' tuned_profile: 'cpu-partitioning' tuned_isolated_cores: '1-11,13-23' - playbook: /usr/share/ansible/tripleo-playbooks/cli-overcloud-openvswitch-dpdk.yaml extra_vars: memory_channels: '4' lcore: '0,12' pmd: '1,13,2,14,3,15' socket_mem: '4096' disable_emc: false enable_tso: false revalidator: '' handler: '' pmd_auto_lb: false pmd_load_threshold: '' pmd_improvement_threshold: '' pmd_rebal_interval: '' nova_postcopy: true", "d /run/frr 0750 root root - -", "parameter_defaults: OVNOvsdbProbeInterval: <probe interval in milliseconds> ControllerExtraConfig: neutron::config::plugin_ml2_config: ovn/ovsdb_probe_interval: value: <probe interval in milliseconds>", "- name: Reset BLSCFG of compute node(s) meant for NFV deployments hosts: allovercloud any_errors_fatal: true gather_facts: true pre_tasks: - name: Wait for provisioned nodes to boot wait_for_connection: timeout: 600 delay: 10 tasks: - name: Reset BLSCFG flag in grub file, if it is enabled become: true lineinfile: path: /etc/default/grub line: \"GRUB_ENABLE_BLSCFG=false\" regexp: \"^GRUB_ENABLE_BLSCFG=.*\" insertafter: '^GRUB_DISABLE_RECOVERY.*'", "- name: ComputeOvsDpdkSriov count: 2 hostname_format: computeovsdpdksriov-%index% defaults: networks: - network: internal_api subnet: internal_api_subnet - network: tenant subnet: tenant_subnet - network: storage subnet: storage_subnet network_config: template: /home/stack/osp17_ref/nic-configs/computeovsdpdksriov.j2 config_drive: cloud_config: ssh_pwauth: true disable_root: false chpasswd: list: |- root:12345678 expire: False ansible_playbooks: - playbook: /usr/share/ansible/tripleo-playbooks/cli-overcloud-node-reset-blscfg.yaml - playbook: /usr/share/ansible/tripleo-playbooks/cli-overcloud-node-kernelargs.yaml extra_vars: reboot_wait_timeout: 600 kernel_args: 'default_hugepagesz=1GB hugepagesz=1G hugepages=32 iommu=pt intel_iommu=on isolcpus=1-11,13-23' 
tuned_profile: 'cpu-partitioning' tuned_isolated_cores: '1-11,13-23' - playbook: /usr/share/ansible/tripleo-playbooks/cli-overcloud-openvswitch-dpdk.yaml extra_vars: memory_channels: '4' lcore: '0,12' pmd: '1,13,2,14,3,15' socket_mem: '4096' disable_emc: false enable_tso: false revalidator: '' handler: '' pmd_auto_lb: false pmd_load_threshold: '' pmd_improvement_threshold: '' pmd_rebal_interval: '' nova_postcopy: true", "d /run/frr 0750 root root - -" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.0/html/release_notes/chap-release-info_rhosp-relnotes
2.4. The Clustered Logical Volume Manager (CLVM)
2.4. The Clustered Logical Volume Manager (CLVM) The Clustered Logical Volume Manager (CLVM) is a set of clustering extensions to LVM. These extensions allow a cluster of computers to manage shared storage (for example, on a SAN) using LVM. CLVM is part of the Resilient Storage Add-On. Whether you should use CLVM depends on your system requirements: If only one node of your system requires access to the storage you are configuring as logical volumes, then you can use LVM without the CLVM extensions and the logical volumes created with that node are all local to the node. If you are using a clustered system for failover where only a single node that accesses the storage is active at any one time, you should use High Availability Logical Volume Management agents (HA-LVM). If more than one node of your cluster will require access to your storage which is then shared among the active nodes, then you must use CLVM. CLVM allows a user to configure logical volumes on shared storage by locking access to physical storage while a logical volume is being configured, and uses clustered locking services to manage the shared storage. In order to use CLVM, the High Availability Add-On and Resilient Storage Add-On software, including the clvmd daemon, must be running. The clvmd daemon is the key clustering extension to LVM. The clvmd daemon runs in each cluster computer and distributes LVM metadata updates in a cluster, presenting each cluster computer with the same view of the logical volumes. For information on installing and administering the High Availability Add-On see Cluster Administration . To ensure that clvmd is started at boot time, you can execute a chkconfig ... on command on the clvmd service, as follows: If the clvmd daemon has not been started, you can execute a service ... start command on the clvmd service, as follows: Creating LVM logical volumes in a cluster environment is identical to creating LVM logical volumes on a single node. There is no difference in the LVM commands themselves, or in the LVM graphical user interface, as described in Chapter 5, LVM Administration with CLI Commands and Chapter 8, LVM Administration with the LVM GUI . In order to enable the LVM volumes you are creating in a cluster, the cluster infrastructure must be running and the cluster must be quorate. By default, logical volumes created with CLVM on shared storage are visible to all systems that have access to the shared storage. It is possible to create volume groups in which all of the storage devices are visible to only one node in the cluster. It is also possible to change the status of a volume group from a local volume group to a clustered volume group. For information, see Section 5.3.3, "Creating Volume Groups in a Cluster" and Section 5.3.8, "Changing the Parameters of a Volume Group" . Warning When you create volume groups with CLVM on shared storage, you must ensure that all nodes in the cluster have access to the physical volumes that constitute the volume group. Asymmetric cluster configurations in which some nodes have access to the storage and others do not are not supported. Figure 2.2, "CLVM Overview" shows a CLVM overview in a cluster. Figure 2.2. CLVM Overview Note CLVM requires changes to the lvm.conf file for cluster-wide locking. Information on configuring the lvm.conf file to support clustered locking is provided within the lvm.conf file itself. For information about the lvm.conf file, see Appendix B, The LVM Configuration Files .
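Note The following is an illustrative sketch only and is not a substitute for the guidance provided in the lvm.conf file itself. On Red Hat Enterprise Linux 6, cluster-wide locking is typically selected by setting the locking type in the global section of /etc/lvm/lvm.conf :
global {
    # 3 = built-in clustered locking, serviced by the clvmd daemon
    locking_type = 3
    # do not fall back silently to local locking if clvmd is unavailable
    fallback_to_local_locking = 0
}
Where the lvmconf script is available, running lvmconf --enable-cluster makes an equivalent change; verify the resulting values against the comments in your own lvm.conf before relying on them.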
[ "chkconfig clvmd on", "service clvmd start" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/logical_volume_manager_administration/LVM_Cluster_Overview
1.4. Red Hat Documentation Site
1.4. Red Hat Documentation Site Red Hat's official documentation site is available at https://access.redhat.com/site/documentation/ . There you will find the latest version of every book, including this one.
null
https://docs.redhat.com/en/documentation/red_hat_jboss_data_virtualization/6.4/html/user_guide_volume_2_modeshape_tools/red_hat_documentation_site
API overview
API overview OpenShift Container Platform 4.17 Overview content for the OpenShift Container Platform API Red Hat OpenShift Documentation Team
[ "oc debug node/<node>", "chroot /host", "systemctl cat kubelet", "/etc/systemd/system/kubelet.service.d/20-logging.conf [Service] Environment=\"KUBELET_LOG_LEVEL=2\"", "echo -e \"[Service]\\nEnvironment=\\\"KUBELET_LOG_LEVEL=8\\\"\" > /etc/systemd/system/kubelet.service.d/30-logging.conf", "systemctl daemon-reload", "systemctl restart kubelet", "rm -f /etc/systemd/system/kubelet.service.d/30-logging.conf", "systemctl daemon-reload", "systemctl restart kubelet", "apiVersion: machineconfiguration.openshift.io/v1 kind: MachineConfig metadata: labels: machineconfiguration.openshift.io/role: master name: 99-master-kubelet-loglevel spec: config: ignition: version: 3.2.0 systemd: units: - name: kubelet.service enabled: true dropins: - name: 30-logging.conf contents: | [Service] Environment=\"KUBELET_LOG_LEVEL=2\"", "oc adm node-logs --role master -u kubelet", "oc adm node-logs --role worker -u kubelet", "journalctl -b -f -u kubelet.service", "sudo tail -f /var/log/containers/*", "- for n in USD(oc get node --no-headers | awk '{print USD1}'); do oc adm node-logs USDn | gzip > USDn.log.gz; done" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html-single/api_overview/index
8.4.3. Using Yum Variables
8.4.3. Using Yum Variables You can use and reference the following built-in variables in yum commands and in all Yum configuration files (that is, /etc/yum.conf and all .repo files in the /etc/yum.repos.d/ directory): $releasever You can use this variable to reference the release version of Red Hat Enterprise Linux. Yum obtains the value of $releasever from the distroverpkg= value line in the /etc/yum.conf configuration file. If there is no such line in /etc/yum.conf , then yum infers the correct value by deriving the version number from the redhat-release-server package. The value of $releasever typically consists of the major release number and the variant of Red Hat Enterprise Linux, for example 6Client , or 6Server . $arch You can use this variable to refer to the system's CPU architecture as returned when calling Python's os.uname() function. Valid values for $arch include i686 and x86_64 . $basearch You can use $basearch to reference the base architecture of the system. For example, i686 machines have a base architecture of i386 , and AMD64 and Intel 64 machines have a base architecture of x86_64 . $YUM0-9 These ten variables are each replaced with the value of any shell environment variables with the same name. If one of these variables is referenced (in /etc/yum.conf for example) and a shell environment variable with the same name does not exist, then the configuration file variable is not replaced. To define a custom variable or to override the value of an existing one, create a file with the same name as the variable (without the " $ " sign) in the /etc/yum/vars/ directory, and add the desired value on its first line. For example, repository descriptions often include the operating system name. To define a new variable called $osname , create a new file with " Red Hat Enterprise Linux " on the first line and save it as /etc/yum/vars/osname : Instead of " Red Hat Enterprise Linux 6 " , you can now use the following in the .repo files:
[ "~]# echo \"Red Hat Enterprise Linux\" > /etc/yum/vars/osname", "name=USDosname USDreleasever" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/sec-Using_Yum_Variables
Appendix A. Broker configuration parameters
Appendix A. Broker configuration parameters zookeeper.connect Type: string Importance: high Dynamic update: read-only Specifies the ZooKeeper connection string in the form hostname:port where host and port are the host and port of a ZooKeeper server. To allow connecting through other ZooKeeper nodes when that ZooKeeper machine is down you can also specify multiple hosts in the form hostname1:port1,hostname2:port2,hostname3:port3 . The server can also have a ZooKeeper chroot path as part of its ZooKeeper connection string which puts its data under some path in the global ZooKeeper namespace. For example to give a chroot path of /chroot/path you would give the connection string as hostname1:port1,hostname2:port2,hostname3:port3/chroot/path . advertised.host.name Type: string Default: null Importance: high Dynamic update: read-only DEPRECATED: only used when advertised.listeners or listeners are not set. Use advertised.listeners instead. Hostname to publish to ZooKeeper for clients to use. In IaaS environments, this may need to be different from the interface to which the broker binds. If this is not set, it will use the value for host.name if configured. Otherwise it will use the value returned from java.net.InetAddress.getCanonicalHostName(). advertised.listeners Type: string Default: null Importance: high Dynamic update: per-broker Listeners to publish to ZooKeeper for clients to use, if different than the listeners config property. In IaaS environments, this may need to be different from the interface to which the broker binds. If this is not set, the value for listeners will be used. Unlike listeners it is not valid to advertise the 0.0.0.0 meta-address. advertised.port Type: int Default: null Importance: high Dynamic update: read-only DEPRECATED: only used when advertised.listeners or listeners are not set. Use advertised.listeners instead. The port to publish to ZooKeeper for clients to use. In IaaS environments, this may need to be different from the port to which the broker binds. If this is not set, it will publish the same port that the broker binds to. auto.create.topics.enable Type: boolean Default: true Importance: high Dynamic update: read-only Enable auto creation of topic on the server. auto.leader.rebalance.enable Type: boolean Default: true Importance: high Dynamic update: read-only Enables auto leader balancing. A background thread checks the distribution of partition leaders at regular intervals, configurable by leader.imbalance.check.interval.seconds . If the leader imbalance exceeds leader.imbalance.per.broker.percentage , leader rebalance to the preferred leader for partitions is triggered. background.threads Type: int Default: 10 Valid Values: [1,... ] Importance: high Dynamic update: cluster-wide The number of threads to use for various background processing tasks. broker.id Type: int Default: -1 Importance: high Dynamic update: read-only The broker id for this server. If unset, a unique broker id will be generated.To avoid conflicts between zookeeper generated broker id's and user configured broker id's, generated broker ids start from reserved.broker.max.id + 1. compression.type Type: string Default: producer Importance: high Dynamic update: cluster-wide Specify the final compression type for a given topic. This configuration accepts the standard compression codecs ('gzip', 'snappy', 'lz4', 'zstd'). It additionally accepts 'uncompressed' which is equivalent to no compression; and 'producer' which means retain the original compression codec set by the producer. 
control.plane.listener.name Type: string Default: null Importance: high Dynamic update: read-only Name of listener used for communication between controller and brokers. Broker will use the control.plane.listener.name to locate the endpoint in listeners list, to listen for connections from the controller. For example, if a broker's config is : listeners = INTERNAL://192.1.1.8:9092, EXTERNAL://10.1.1.5:9093, CONTROLLER://192.1.1.8:9094 listener.security.protocol.map = INTERNAL:PLAINTEXT, EXTERNAL:SSL, CONTROLLER:SSL control.plane.listener.name = CONTROLLER On startup, the broker will start listening on "192.1.1.8:9094" with security protocol "SSL". On controller side, when it discovers a broker's published endpoints through zookeeper, it will use the control.plane.listener.name to find the endpoint, which it will use to establish connection to the broker. For example, if the broker's published endpoints on zookeeper are : "endpoints" : ["INTERNAL://broker1.example.com:9092","EXTERNAL://broker1.example.com:9093","CONTROLLER://broker1.example.com:9094"] and the controller's config is : listener.security.protocol.map = INTERNAL:PLAINTEXT, EXTERNAL:SSL, CONTROLLER:SSL control.plane.listener.name = CONTROLLER then controller will use "broker1.example.com:9094" with security protocol "SSL" to connect to the broker. If not explicitly configured, the default value will be null and there will be no dedicated endpoints for controller connections. delete.topic.enable Type: boolean Default: true Importance: high Dynamic update: read-only Enables delete topic. Delete topic through the admin tool will have no effect if this config is turned off. host.name Type: string Default: "" Importance: high Dynamic update: read-only DEPRECATED: only used when listeners is not set. Use listeners instead. hostname of broker. If this is set, it will only bind to this address. If this is not set, it will bind to all interfaces. leader.imbalance.check.interval.seconds Type: long Default: 300 Importance: high Dynamic update: read-only The frequency with which the partition rebalance check is triggered by the controller. leader.imbalance.per.broker.percentage Type: int Default: 10 Importance: high Dynamic update: read-only The ratio of leader imbalance allowed per broker. The controller would trigger a leader balance if it goes above this value per broker. The value is specified in percentage. listeners Type: string Default: null Importance: high Dynamic update: per-broker Listener List - Comma-separated list of URIs we will listen on and the listener names. If the listener name is not a security protocol, listener.security.protocol.map must also be set. Specify hostname as 0.0.0.0 to bind to all interfaces. Leave hostname empty to bind to default interface. Examples of legal listener lists: PLAINTEXT://myhost:9092,SSL://:9091 CLIENT://0.0.0.0:9092,REPLICATION://localhost:9093. log.dir Type: string Default: /tmp/kafka-logs Importance: high Dynamic update: read-only The directory in which the log data is kept (supplemental for log.dirs property). log.dirs Type: string Default: null Importance: high Dynamic update: read-only The directories in which the log data is kept. If not set, the value in log.dir is used. log.flush.interval.messages Type: long Default: 9223372036854775807 Valid Values: [1,... ] Importance: high Dynamic update: cluster-wide The number of messages accumulated on a log partition before messages are flushed to disk. 
log.flush.interval.ms Type: long Default: null Importance: high Dynamic update: cluster-wide The maximum time in ms that a message in any topic is kept in memory before flushed to disk. If not set, the value in log.flush.scheduler.interval.ms is used. log.flush.offset.checkpoint.interval.ms Type: int Default: 60000 (1 minute) Valid Values: [0,... ] Importance: high Dynamic update: read-only The frequency with which we update the persistent record of the last flush which acts as the log recovery point. log.flush.scheduler.interval.ms Type: long Default: 9223372036854775807 Importance: high Dynamic update: read-only The frequency in ms that the log flusher checks whether any log needs to be flushed to disk. log.flush.start.offset.checkpoint.interval.ms Type: int Default: 60000 (1 minute) Valid Values: [0,... ] Importance: high Dynamic update: read-only The frequency with which we update the persistent record of log start offset. log.retention.bytes Type: long Default: -1 Importance: high Dynamic update: cluster-wide The maximum size of the log before deleting it. log.retention.hours Type: int Default: 168 Importance: high Dynamic update: read-only The number of hours to keep a log file before deleting it (in hours), tertiary to log.retention.ms property. log.retention.minutes Type: int Default: null Importance: high Dynamic update: read-only The number of minutes to keep a log file before deleting it (in minutes), secondary to log.retention.ms property. If not set, the value in log.retention.hours is used. log.retention.ms Type: long Default: null Importance: high Dynamic update: cluster-wide The number of milliseconds to keep a log file before deleting it (in milliseconds), If not set, the value in log.retention.minutes is used. If set to -1, no time limit is applied. log.roll.hours Type: int Default: 168 Valid Values: [1,... ] Importance: high Dynamic update: read-only The maximum time before a new log segment is rolled out (in hours), secondary to log.roll.ms property. log.roll.jitter.hours Type: int Default: 0 Valid Values: [0,... ] Importance: high Dynamic update: read-only The maximum jitter to subtract from logRollTimeMillis (in hours), secondary to log.roll.jitter.ms property. log.roll.jitter.ms Type: long Default: null Importance: high Dynamic update: cluster-wide The maximum jitter to subtract from logRollTimeMillis (in milliseconds). If not set, the value in log.roll.jitter.hours is used. log.roll.ms Type: long Default: null Importance: high Dynamic update: cluster-wide The maximum time before a new log segment is rolled out (in milliseconds). If not set, the value in log.roll.hours is used. log.segment.bytes Type: int Default: 1073741824 (1 gibibyte) Valid Values: [14,... ] Importance: high Dynamic update: cluster-wide The maximum size of a single log file. log.segment.delete.delay.ms Type: long Default: 60000 (1 minute) Valid Values: [0,... ] Importance: high Dynamic update: cluster-wide The amount of time to wait before deleting a file from the filesystem. message.max.bytes Type: int Default: 1048588 Valid Values: [0,... ] Importance: high Dynamic update: cluster-wide The largest record batch size allowed by Kafka (after compression if compression is enabled). If this is increased and there are consumers older than 0.10.2, the consumers' fetch size must also be increased so that they can fetch record batches this large. In the latest message format version, records are always grouped into batches for efficiency. 
In message format versions, uncompressed records are not grouped into batches and this limit only applies to a single record in that case.This can be set per topic with the topic level max.message.bytes config. min.insync.replicas Type: int Default: 1 Valid Values: [1,... ] Importance: high Dynamic update: cluster-wide When a producer sets acks to "all" (or "-1"), min.insync.replicas specifies the minimum number of replicas that must acknowledge a write for the write to be considered successful. If this minimum cannot be met, then the producer will raise an exception (either NotEnoughReplicas or NotEnoughReplicasAfterAppend). When used together, min.insync.replicas and acks allow you to enforce greater durability guarantees. A typical scenario would be to create a topic with a replication factor of 3, set min.insync.replicas to 2, and produce with acks of "all". This will ensure that the producer raises an exception if a majority of replicas do not receive a write. num.io.threads Type: int Default: 8 Valid Values: [1,... ] Importance: high Dynamic update: cluster-wide The number of threads that the server uses for processing requests, which may include disk I/O. num.network.threads Type: int Default: 3 Valid Values: [1,... ] Importance: high Dynamic update: cluster-wide The number of threads that the server uses for receiving requests from the network and sending responses to the network. num.recovery.threads.per.data.dir Type: int Default: 1 Valid Values: [1,... ] Importance: high Dynamic update: cluster-wide The number of threads per data directory to be used for log recovery at startup and flushing at shutdown. num.replica.alter.log.dirs.threads Type: int Default: null Importance: high Dynamic update: read-only The number of threads that can move replicas between log directories, which may include disk I/O. num.replica.fetchers Type: int Default: 1 Importance: high Dynamic update: cluster-wide Number of fetcher threads used to replicate messages from a source broker. Increasing this value can increase the degree of I/O parallelism in the follower broker. offset.metadata.max.bytes Type: int Default: 4096 (4 kibibytes) Importance: high Dynamic update: read-only The maximum size for a metadata entry associated with an offset commit. offsets.commit.required.acks Type: short Default: -1 Importance: high Dynamic update: read-only The required acks before the commit can be accepted. In general, the default (-1) should not be overridden. offsets.commit.timeout.ms Type: int Default: 5000 (5 seconds) Valid Values: [1,... ] Importance: high Dynamic update: read-only Offset commit will be delayed until all replicas for the offsets topic receive the commit or this timeout is reached. This is similar to the producer request timeout. offsets.load.buffer.size Type: int Default: 5242880 Valid Values: [1,... ] Importance: high Dynamic update: read-only Batch size for reading from the offsets segments when loading offsets into the cache (soft-limit, overridden if records are too large). offsets.retention.check.interval.ms Type: long Default: 600000 (10 minutes) Valid Values: [1,... ] Importance: high Dynamic update: read-only Frequency at which to check for stale offsets. offsets.retention.minutes Type: int Default: 10080 Valid Values: [1,... ] Importance: high Dynamic update: read-only After a consumer group loses all its consumers (i.e. becomes empty) its offsets will be kept for this retention period before getting discarded. 
For standalone consumers (using manual assignment), offsets will be expired after the time of last commit plus this retention period. offsets.topic.compression.codec Type: int Default: 0 Importance: high Dynamic update: read-only Compression codec for the offsets topic - compression may be used to achieve "atomic" commits. offsets.topic.num.partitions Type: int Default: 50 Valid Values: [1,... ] Importance: high Dynamic update: read-only The number of partitions for the offset commit topic (should not change after deployment). offsets.topic.replication.factor Type: short Default: 3 Valid Values: [1,... ] Importance: high Dynamic update: read-only The replication factor for the offsets topic (set higher to ensure availability). Internal topic creation will fail until the cluster size meets this replication factor requirement. offsets.topic.segment.bytes Type: int Default: 104857600 (100 mebibytes) Valid Values: [1,... ] Importance: high Dynamic update: read-only The offsets topic segment bytes should be kept relatively small in order to facilitate faster log compaction and cache loads. port Type: int Default: 9092 Importance: high Dynamic update: read-only DEPRECATED: only used when listeners is not set. Use listeners instead. the port to listen and accept connections on. queued.max.requests Type: int Default: 500 Valid Values: [1,... ] Importance: high Dynamic update: read-only The number of queued requests allowed for data-plane, before blocking the network threads. quota.consumer.default Type: long Default: 9223372036854775807 Valid Values: [1,... ] Importance: high Dynamic update: read-only DEPRECATED: Used only when dynamic default quotas are not configured for <user, <client-id> or <user, client-id> in Zookeeper. Any consumer distinguished by clientId/consumer group will get throttled if it fetches more bytes than this value per-second. quota.producer.default Type: long Default: 9223372036854775807 Valid Values: [1,... ] Importance: high Dynamic update: read-only DEPRECATED: Used only when dynamic default quotas are not configured for <user>, <client-id> or <user, client-id> in Zookeeper. Any producer distinguished by clientId will get throttled if it produces more bytes than this value per-second. replica.fetch.min.bytes Type: int Default: 1 Importance: high Dynamic update: read-only Minimum bytes expected for each fetch response. If not enough bytes, wait up to replicaMaxWaitTimeMs. replica.fetch.wait.max.ms Type: int Default: 500 Importance: high Dynamic update: read-only max wait time for each fetcher request issued by follower replicas. This value should always be less than the replica.lag.time.max.ms at all times to prevent frequent shrinking of ISR for low throughput topics. replica.high.watermark.checkpoint.interval.ms Type: long Default: 5000 (5 seconds) Importance: high Dynamic update: read-only The frequency with which the high watermark is saved out to disk. replica.lag.time.max.ms Type: long Default: 30000 (30 seconds) Importance: high Dynamic update: read-only If a follower hasn't sent any fetch requests or hasn't consumed up to the leaders log end offset for at least this time, the leader will remove the follower from isr. replica.socket.receive.buffer.bytes Type: int Default: 65536 (64 kibibytes) Importance: high Dynamic update: read-only The socket receive buffer for network requests. replica.socket.timeout.ms Type: int Default: 30000 (30 seconds) Importance: high Dynamic update: read-only The socket timeout for network requests. 
Its value should be at least replica.fetch.wait.max.ms. request.timeout.ms Type: int Default: 30000 (30 seconds) Importance: high Dynamic update: read-only The configuration controls the maximum amount of time the client will wait for the response of a request. If the response is not received before the timeout elapses the client will resend the request if necessary or fail the request if retries are exhausted. socket.receive.buffer.bytes Type: int Default: 102400 (100 kibibytes) Importance: high Dynamic update: read-only The SO_RCVBUF buffer of the socket server sockets. If the value is -1, the OS default will be used. socket.request.max.bytes Type: int Default: 104857600 (100 mebibytes) Valid Values: [1,... ] Importance: high Dynamic update: read-only The maximum number of bytes in a socket request. socket.send.buffer.bytes Type: int Default: 102400 (100 kibibytes) Importance: high Dynamic update: read-only The SO_SNDBUF buffer of the socket server sockets. If the value is -1, the OS default will be used. transaction.max.timeout.ms Type: int Default: 900000 (15 minutes) Valid Values: [1,... ] Importance: high Dynamic update: read-only The maximum allowed timeout for transactions. If a client's requested transaction time exceed this, then the broker will return an error in InitProducerIdRequest. This prevents a client from too large of a timeout, which can stall consumers reading from topics included in the transaction. transaction.state.log.load.buffer.size Type: int Default: 5242880 Valid Values: [1,... ] Importance: high Dynamic update: read-only Batch size for reading from the transaction log segments when loading producer ids and transactions into the cache (soft-limit, overridden if records are too large). transaction.state.log.min.isr Type: int Default: 2 Valid Values: [1,... ] Importance: high Dynamic update: read-only Overridden min.insync.replicas config for the transaction topic. transaction.state.log.num.partitions Type: int Default: 50 Valid Values: [1,... ] Importance: high Dynamic update: read-only The number of partitions for the transaction topic (should not change after deployment). transaction.state.log.replication.factor Type: short Default: 3 Valid Values: [1,... ] Importance: high Dynamic update: read-only The replication factor for the transaction topic (set higher to ensure availability). Internal topic creation will fail until the cluster size meets this replication factor requirement. transaction.state.log.segment.bytes Type: int Default: 104857600 (100 mebibytes) Valid Values: [1,... ] Importance: high Dynamic update: read-only The transaction topic segment bytes should be kept relatively small in order to facilitate faster log compaction and cache loads. transactional.id.expiration.ms Type: int Default: 604800000 (7 days) Valid Values: [1,... ] Importance: high Dynamic update: read-only The time in ms that the transaction coordinator will wait without receiving any transaction status updates for the current transaction before expiring its transactional id. This setting also influences producer id expiration - producer ids are expired once this time has elapsed after the last write with the given producer id. Note that producer ids may expire sooner if the last write from the producer id is deleted due to the topic's retention settings. 
unclean.leader.election.enable Type: boolean Default: false Importance: high Dynamic update: cluster-wide Indicates whether to enable replicas not in the ISR set to be elected as leader as a last resort, even though doing so may result in data loss. zookeeper.connection.timeout.ms Type: int Default: null Importance: high Dynamic update: read-only The max time that the client waits to establish a connection to zookeeper. If not set, the value in zookeeper.session.timeout.ms is used. zookeeper.max.in.flight.requests Type: int Default: 10 Valid Values: [1,... ] Importance: high Dynamic update: read-only The maximum number of unacknowledged requests the client will send to Zookeeper before blocking. zookeeper.session.timeout.ms Type: int Default: 18000 (18 seconds) Importance: high Dynamic update: read-only Zookeeper session timeout. zookeeper.set.acl Type: boolean Default: false Importance: high Dynamic update: read-only Set client to use secure ACLs. broker.id.generation.enable Type: boolean Default: true Importance: medium Dynamic update: read-only Enable automatic broker id generation on the server. When enabled the value configured for reserved.broker.max.id should be reviewed. broker.rack Type: string Default: null Importance: medium Dynamic update: read-only Rack of the broker. This will be used in rack aware replication assignment for fault tolerance. Examples: RACK1 , us-east-1d . connections.max.idle.ms Type: long Default: 600000 (10 minutes) Importance: medium Dynamic update: read-only Idle connections timeout: the server socket processor threads close the connections that idle more than this. connections.max.reauth.ms Type: long Default: 0 Importance: medium Dynamic update: read-only When explicitly set to a positive number (the default is 0, not a positive number), a session lifetime that will not exceed the configured value will be communicated to v2.2.0 or later clients when they authenticate. The broker will disconnect any such connection that is not re-authenticated within the session lifetime and that is then subsequently used for any purpose other than re-authentication. Configuration names can optionally be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.oauthbearer.connections.max.reauth.ms=3600000. controlled.shutdown.enable Type: boolean Default: true Importance: medium Dynamic update: read-only Enable controlled shutdown of the server. controlled.shutdown.max.retries Type: int Default: 3 Importance: medium Dynamic update: read-only Controlled shutdown can fail for multiple reasons. This determines the number of retries when such failure happens. controlled.shutdown.retry.backoff.ms Type: long Default: 5000 (5 seconds) Importance: medium Dynamic update: read-only Before each retry, the system needs time to recover from the state that caused the failure (Controller fail over, replica lag etc). This config determines the amount of time to wait before retrying. controller.socket.timeout.ms Type: int Default: 30000 (30 seconds) Importance: medium Dynamic update: read-only The socket timeout for controller-to-broker channels. default.replication.factor Type: int Default: 1 Importance: medium Dynamic update: read-only default replication factors for automatically created topics. delegation.token.expiry.time.ms Type: long Default: 86400000 (1 day) Valid Values: [1,... ] Importance: medium Dynamic update: read-only The token validity time in miliseconds before the token needs to be renewed. Default value 1 day. 
delegation.token.master.key Type: password Default: null Importance: medium Dynamic update: read-only Master/secret key to generate and verify delegation tokens. Same key must be configured across all the brokers. If the key is not set or set to empty string, brokers will disable the delegation token support. delegation.token.max.lifetime.ms Type: long Default: 604800000 (7 days) Valid Values: [1,... ] Importance: medium Dynamic update: read-only The token has a maximum lifetime beyond which it cannot be renewed anymore. Default value 7 days. delete.records.purgatory.purge.interval.requests Type: int Default: 1 Importance: medium Dynamic update: read-only The purge interval (in number of requests) of the delete records request purgatory. fetch.max.bytes Type: int Default: 57671680 (55 mebibytes) Valid Values: [1024,... ] Importance: medium Dynamic update: read-only The maximum number of bytes we will return for a fetch request. Must be at least 1024. fetch.purgatory.purge.interval.requests Type: int Default: 1000 Importance: medium Dynamic update: read-only The purge interval (in number of requests) of the fetch request purgatory. group.initial.rebalance.delay.ms Type: int Default: 3000 (3 seconds) Importance: medium Dynamic update: read-only The amount of time the group coordinator will wait for more consumers to join a new group before performing the first rebalance. A longer delay means potentially fewer rebalances, but increases the time until processing begins. group.max.session.timeout.ms Type: int Default: 1800000 (30 minutes) Importance: medium Dynamic update: read-only The maximum allowed session timeout for registered consumers. Longer timeouts give consumers more time to process messages in between heartbeats at the cost of a longer time to detect failures. group.max.size Type: int Default: 2147483647 Valid Values: [1,... ] Importance: medium Dynamic update: read-only The maximum number of consumers that a single consumer group can accommodate. group.min.session.timeout.ms Type: int Default: 6000 (6 seconds) Importance: medium Dynamic update: read-only The minimum allowed session timeout for registered consumers. Shorter timeouts result in quicker failure detection at the cost of more frequent consumer heartbeating, which can overwhelm broker resources. inter.broker.listener.name Type: string Default: null Importance: medium Dynamic update: read-only Name of listener used for communication between brokers. If this is unset, the listener name is defined by security.inter.broker.protocol. It is an error to set this and security.inter.broker.protocol properties at the same time. inter.broker.protocol.version Type: string Default: 2.6-IV0 Valid Values: [0.8.0, 0.8.1, 0.8.2, 0.9.0, 0.10.0-IV0, 0.10.0-IV1, 0.10.1-IV0, 0.10.1-IV1, 0.10.1-IV2, 0.10.2-IV0, 0.11.0-IV0, 0.11.0-IV1, 0.11.0-IV2, 1.0-IV0, 1.1-IV0, 2.0-IV0, 2.0-IV1, 2.1-IV0, 2.1-IV1, 2.1-IV2, 2.2-IV0, 2.2-IV1, 2.3-IV0, 2.3-IV1, 2.4-IV0, 2.4-IV1, 2.5-IV0, 2.6-IV0] Importance: medium Dynamic update: read-only Specify which version of the inter-broker protocol will be used. This is typically bumped after all brokers were upgraded to a new version. Example of some valid values are: 0.8.0, 0.8.1, 0.8.1.1, 0.8.2, 0.8.2.0, 0.8.2.1, 0.9.0.0, 0.9.0.1 Check ApiVersion for the full list. log.cleaner.backoff.ms Type: long Default: 15000 (15 seconds) Valid Values: [0,... ] Importance: medium Dynamic update: cluster-wide The amount of time to sleep when there are no logs to clean. 
log.cleaner.dedupe.buffer.size Type: long Default: 134217728 Importance: medium Dynamic update: cluster-wide The total memory used for log deduplication across all cleaner threads. log.cleaner.delete.retention.ms Type: long Default: 86400000 (1 day) Importance: medium Dynamic update: cluster-wide How long are delete records retained? log.cleaner.enable Type: boolean Default: true Importance: medium Dynamic update: read-only Enable the log cleaner process to run on the server. Should be enabled if using any topics with a cleanup.policy=compact including the internal offsets topic. If disabled those topics will not be compacted and continually grow in size. log.cleaner.io.buffer.load.factor Type: double Default: 0.9 Importance: medium Dynamic update: cluster-wide Log cleaner dedupe buffer load factor. The percentage full the dedupe buffer can become. A higher value will allow more log to be cleaned at once but will lead to more hash collisions. log.cleaner.io.buffer.size Type: int Default: 524288 Valid Values: [0,... ] Importance: medium Dynamic update: cluster-wide The total memory used for log cleaner I/O buffers across all cleaner threads. log.cleaner.io.max.bytes.per.second Type: double Default: 1.7976931348623157E308 Importance: medium Dynamic update: cluster-wide The log cleaner will be throttled so that the sum of its read and write i/o will be less than this value on average. log.cleaner.max.compaction.lag.ms Type: long Default: 9223372036854775807 Importance: medium Dynamic update: cluster-wide The maximum time a message will remain ineligible for compaction in the log. Only applicable for logs that are being compacted. log.cleaner.min.cleanable.ratio Type: double Default: 0.5 Importance: medium Dynamic update: cluster-wide The minimum ratio of dirty log to total log for a log to eligible for cleaning. If the log.cleaner.max.compaction.lag.ms or the log.cleaner.min.compaction.lag.ms configurations are also specified, then the log compactor considers the log eligible for compaction as soon as either: (i) the dirty ratio threshold has been met and the log has had dirty (uncompacted) records for at least the log.cleaner.min.compaction.lag.ms duration, or (ii) if the log has had dirty (uncompacted) records for at most the log.cleaner.max.compaction.lag.ms period. log.cleaner.min.compaction.lag.ms Type: long Default: 0 Importance: medium Dynamic update: cluster-wide The minimum time a message will remain uncompacted in the log. Only applicable for logs that are being compacted. log.cleaner.threads Type: int Default: 1 Valid Values: [0,... ] Importance: medium Dynamic update: cluster-wide The number of background threads to use for log cleaning. log.cleanup.policy Type: list Default: delete Valid Values: [compact, delete] Importance: medium Dynamic update: cluster-wide The default cleanup policy for segments beyond the retention window. A comma separated list of valid policies. Valid policies are: "delete" and "compact". log.index.interval.bytes Type: int Default: 4096 (4 kibibytes) Valid Values: [0,... ] Importance: medium Dynamic update: cluster-wide The interval with which we add an entry to the offset index. log.index.size.max.bytes Type: int Default: 10485760 (10 mebibytes) Valid Values: [4,... ] Importance: medium Dynamic update: cluster-wide The maximum size in bytes of the offset index. 
log.message.format.version Type: string Default: 2.6-IV0 Valid Values: [0.8.0, 0.8.1, 0.8.2, 0.9.0, 0.10.0-IV0, 0.10.0-IV1, 0.10.1-IV0, 0.10.1-IV1, 0.10.1-IV2, 0.10.2-IV0, 0.11.0-IV0, 0.11.0-IV1, 0.11.0-IV2, 1.0-IV0, 1.1-IV0, 2.0-IV0, 2.0-IV1, 2.1-IV0, 2.1-IV1, 2.1-IV2, 2.2-IV0, 2.2-IV1, 2.3-IV0, 2.3-IV1, 2.4-IV0, 2.4-IV1, 2.5-IV0, 2.6-IV0] Importance: medium Dynamic update: read-only Specify the message format version the broker will use to append messages to the logs. The value should be a valid ApiVersion. Some examples are: 0.8.2, 0.9.0.0, 0.10.0, check ApiVersion for more details. By setting a particular message format version, the user is certifying that all the existing messages on disk are smaller than or equal to the specified version. Setting this value incorrectly will cause consumers with older versions to break as they will receive messages with a format that they don't understand. log.message.timestamp.difference.max.ms Type: long Default: 9223372036854775807 Importance: medium Dynamic update: cluster-wide The maximum difference allowed between the timestamp when a broker receives a message and the timestamp specified in the message. If log.message.timestamp.type=CreateTime, a message will be rejected if the difference in timestamp exceeds this threshold. This configuration is ignored if log.message.timestamp.type=LogAppendTime. The maximum timestamp difference allowed should be no greater than log.retention.ms to avoid unnecessarily frequent log rolling. log.message.timestamp.type Type: string Default: CreateTime Valid Values: [CreateTime, LogAppendTime] Importance: medium Dynamic update: cluster-wide Define whether the timestamp in the message is message create time or log append time. The value should be either CreateTime or LogAppendTime . log.preallocate Type: boolean Default: false Importance: medium Dynamic update: cluster-wide Should the file be pre-allocated when creating a new segment? If you are using Kafka on Windows, you probably need to set it to true. log.retention.check.interval.ms Type: long Default: 300000 (5 minutes) Valid Values: [1,... ] Importance: medium Dynamic update: read-only The frequency in milliseconds that the log cleaner checks whether any log is eligible for deletion. max.connections Type: int Default: 2147483647 Valid Values: [0,... ] Importance: medium Dynamic update: cluster-wide The maximum number of connections we allow in the broker at any time. This limit is applied in addition to any per-ip limits configured using max.connections.per.ip. Listener-level limits may also be configured by prefixing the config name with the listener prefix, for example, listener.name.internal.max.connections . Broker-wide limit should be configured based on broker capacity while listener limits should be configured based on application requirements. New connections are blocked if either the listener or broker limit is reached. Connections on the inter-broker listener are permitted even if broker-wide limit is reached. The least recently used connection on another listener will be closed in this case. max.connections.per.ip Type: int Default: 2147483647 Valid Values: [0,... ] Importance: medium Dynamic update: cluster-wide The maximum number of connections we allow from each ip address. This can be set to 0 if there are overrides configured using max.connections.per.ip.overrides property. New connections from the ip address are dropped if the limit is reached.
max.connections.per.ip.overrides Type: string Default: "" Importance: medium Dynamic update: cluster-wide A comma-separated list of per-ip or hostname overrides to the default maximum number of connections. An example value is "hostName:100,127.0.0.1:200". max.incremental.fetch.session.cache.slots Type: int Default: 1000 Valid Values: [0,... ] Importance: medium Dynamic update: read-only The maximum number of incremental fetch sessions that we will maintain. num.partitions Type: int Default: 1 Valid Values: [1,... ] Importance: medium Dynamic update: read-only The default number of log partitions per topic. password.encoder.old.secret Type: password Default: null Importance: medium Dynamic update: read-only The old secret that was used for encoding dynamically configured passwords. This is required only when the secret is updated. If specified, all dynamically encoded passwords are decoded using this old secret and re-encoded using password.encoder.secret when broker starts up. password.encoder.secret Type: password Default: null Importance: medium Dynamic update: read-only The secret used for encoding dynamically configured passwords for this broker. principal.builder.class Type: class Default: null Importance: medium Dynamic update: per-broker The fully qualified name of a class that implements the KafkaPrincipalBuilder interface, which is used to build the KafkaPrincipal object used during authorization. This config also supports the deprecated PrincipalBuilder interface which was previously used for client authentication over SSL. If no principal builder is defined, the default behavior depends on the security protocol in use. For SSL authentication, the principal will be derived using the rules defined by ssl.principal.mapping.rules applied on the distinguished name from the client certificate if one is provided; otherwise, if client authentication is not required, the principal name will be ANONYMOUS. For SASL authentication, the principal will be derived using the rules defined by sasl.kerberos.principal.to.local.rules if GSSAPI is in use, and the SASL authentication ID for other mechanisms. For PLAINTEXT, the principal will be ANONYMOUS. producer.purgatory.purge.interval.requests Type: int Default: 1000 Importance: medium Dynamic update: read-only The purge interval (in number of requests) of the producer request purgatory. queued.max.request.bytes Type: long Default: -1 Importance: medium Dynamic update: read-only The number of queued bytes allowed before no more requests are read. replica.fetch.backoff.ms Type: int Default: 1000 (1 second) Valid Values: [0,... ] Importance: medium Dynamic update: read-only The amount of time to sleep when fetch partition error occurs. replica.fetch.max.bytes Type: int Default: 1048576 (1 mebibyte) Valid Values: [0,... ] Importance: medium Dynamic update: read-only The number of bytes of messages to attempt to fetch for each partition. This is not an absolute maximum, if the first record batch in the first non-empty partition of the fetch is larger than this value, the record batch will still be returned to ensure that progress can be made. The maximum record batch size accepted by the broker is defined via message.max.bytes (broker config) or max.message.bytes (topic config). replica.fetch.response.max.bytes Type: int Default: 10485760 (10 mebibytes) Valid Values: [0,... ] Importance: medium Dynamic update: read-only Maximum bytes expected for the entire fetch response. 
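A hedged example of the per-IP connection overrides described above; the host name is hypothetical and the limits are arbitrary illustrations:
max.connections.per.ip=1000
# allow a load balancer (hypothetical host name) and localhost to open more connections
max.connections.per.ip.overrides=lb.example.com:5000,127.0.0.1:200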
Records are fetched in batches, and if the first record batch in the first non-empty partition of the fetch is larger than this value, the record batch will still be returned to ensure that progress can be made. As such, this is not an absolute maximum. The maximum record batch size accepted by the broker is defined via message.max.bytes (broker config) or max.message.bytes (topic config). replica.selector.class Type: string Default: null Importance: medium Dynamic update: read-only The fully qualified class name that implements ReplicaSelector. This is used by the broker to find the preferred read replica. By default, we use an implementation that returns the leader. reserved.broker.max.id Type: int Default: 1000 Valid Values: [0,... ] Importance: medium Dynamic update: read-only Max number that can be used for a broker.id. sasl.client.callback.handler.class Type: class Default: null Importance: medium Dynamic update: read-only The fully qualified name of a SASL client callback handler class that implements the AuthenticateCallbackHandler interface. sasl.enabled.mechanisms Type: list Default: GSSAPI Importance: medium Dynamic update: per-broker The list of SASL mechanisms enabled in the Kafka server. The list may contain any mechanism for which a security provider is available. Only GSSAPI is enabled by default. sasl.jaas.config Type: password Default: null Importance: medium Dynamic update: per-broker JAAS login context parameters for SASL connections in the format used by JAAS configuration files. JAAS configuration file format is described here . The format for the value is: 'loginModuleClass controlFlag (optionName=optionValue)*;'. For brokers, the config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.jaas.config=com.example.ScramLoginModule required;. sasl.kerberos.kinit.cmd Type: string Default: /usr/bin/kinit Importance: medium Dynamic update: per-broker Kerberos kinit command path. sasl.kerberos.min.time.before.relogin Type: long Default: 60000 Importance: medium Dynamic update: per-broker Login thread sleep time between refresh attempts. sasl.kerberos.principal.to.local.rules Type: list Default: DEFAULT Importance: medium Dynamic update: per-broker A list of rules for mapping from principal names to short names (typically operating system usernames). The rules are evaluated in order and the first rule that matches a principal name is used to map it to a short name. Any later rules in the list are ignored. By default, principal names of the form {username}/{hostname}@{REALM} are mapped to {username}. For more details on the format please see security authorization and acls . Note that this configuration is ignored if an extension of KafkaPrincipalBuilder is provided by the principal.builder.class configuration. sasl.kerberos.service.name Type: string Default: null Importance: medium Dynamic update: per-broker The Kerberos principal name that Kafka runs as. This can be defined either in Kafka's JAAS config or in Kafka's config. sasl.kerberos.ticket.renew.jitter Type: double Default: 0.05 Importance: medium Dynamic update: per-broker Percentage of random jitter added to the renewal time. sasl.kerberos.ticket.renew.window.factor Type: double Default: 0.8 Importance: medium Dynamic update: per-broker Login thread will sleep until the specified window factor of time from last refresh to ticket's expiry has been reached, at which time it will try to renew the ticket. 
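To make the listener-prefix convention for sasl.jaas.config concrete, a minimal sketch assuming a listener named SASL_SSL and SCRAM-SHA-256 credentials already created for the brokers; the module class shown is the standard Kafka SCRAM login module:
sasl.enabled.mechanisms=SCRAM-SHA-256
listener.name.sasl_ssl.scram-sha-256.sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required;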
sasl.login.callback.handler.class Type: class Default: null Importance: medium Dynamic update: read-only The fully qualified name of a SASL login callback handler class that implements the AuthenticateCallbackHandler interface. For brokers, login callback handler config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.callback.handler.class=com.example.CustomScramLoginCallbackHandler. sasl.login.class Type: class Default: null Importance: medium Dynamic update: read-only The fully qualified name of a class that implements the Login interface. For brokers, login config must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.scram-sha-256.sasl.login.class=com.example.CustomScramLogin. sasl.login.refresh.buffer.seconds Type: short Default: 300 Importance: medium Dynamic update: per-broker The amount of buffer time before credential expiration to maintain when refreshing a credential, in seconds. If a refresh would otherwise occur closer to expiration than the number of buffer seconds then the refresh will be moved up to maintain as much of the buffer time as possible. Legal values are between 0 and 3600 (1 hour); a default value of 300 (5 minutes) is used if no value is specified. This value and sasl.login.refresh.min.period.seconds are both ignored if their sum exceeds the remaining lifetime of a credential. Currently applies only to OAUTHBEARER. sasl.login.refresh.min.period.seconds Type: short Default: 60 Importance: medium Dynamic update: per-broker The desired minimum time for the login refresh thread to wait before refreshing a credential, in seconds. Legal values are between 0 and 900 (15 minutes); a default value of 60 (1 minute) is used if no value is specified. This value and sasl.login.refresh.buffer.seconds are both ignored if their sum exceeds the remaining lifetime of a credential. Currently applies only to OAUTHBEARER. sasl.login.refresh.window.factor Type: double Default: 0.8 Importance: medium Dynamic update: per-broker Login refresh thread will sleep until the specified window factor relative to the credential's lifetime has been reached, at which time it will try to refresh the credential. Legal values are between 0.5 (50%) and 1.0 (100%) inclusive; a default value of 0.8 (80%) is used if no value is specified. Currently applies only to OAUTHBEARER. sasl.login.refresh.window.jitter Type: double Default: 0.05 Importance: medium Dynamic update: per-broker The maximum amount of random jitter relative to the credential's lifetime that is added to the login refresh thread's sleep time. Legal values are between 0 and 0.25 (25%) inclusive; a default value of 0.05 (5%) is used if no value is specified. Currently applies only to OAUTHBEARER. sasl.mechanism.inter.broker.protocol Type: string Default: GSSAPI Importance: medium Dynamic update: per-broker SASL mechanism used for inter-broker communication. Default is GSSAPI. sasl.server.callback.handler.class Type: class Default: null Importance: medium Dynamic update: read-only The fully qualified name of a SASL server callback handler class that implements the AuthenticateCallbackHandler interface. Server callback handlers must be prefixed with listener prefix and SASL mechanism name in lower-case. For example, listener.name.sasl_ssl.plain.sasl.server.callback.handler.class=com.example.CustomPlainCallbackHandler. 
security.inter.broker.protocol Type: string Default: PLAINTEXT Importance: medium Dynamic update: read-only Security protocol used to communicate between brokers. Valid values are: PLAINTEXT, SSL, SASL_PLAINTEXT, SASL_SSL. It is an error to set this and inter.broker.listener.name properties at the same time. ssl.cipher.suites Type: list Default: "" Importance: medium Dynamic update: per-broker A list of cipher suites. This is a named combination of authentication, encryption, MAC and key exchange algorithm used to negotiate the security settings for a network connection using TLS or SSL network protocol. By default all the available cipher suites are supported. ssl.client.auth Type: string Default: none Valid Values: [required, requested, none] Importance: medium Dynamic update: per-broker Configures the Kafka broker to request client authentication. The following settings are common: ssl.client.auth=required If set to required, client authentication is required. ssl.client.auth=requested This means client authentication is optional. Unlike required , if this option is set the client can choose not to provide authentication information about itself. ssl.client.auth=none This means client authentication is not needed. ssl.enabled.protocols Type: list Default: TLSv1.2,TLSv1.3 Importance: medium Dynamic update: per-broker The list of protocols enabled for SSL connections. The default is 'TLSv1.2,TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise. With the default value for Java 11, clients and servers will prefer TLSv1.3 if both support it and fallback to TLSv1.2 otherwise (assuming both support at least TLSv1.2). This default should be fine for most cases. Also see the config documentation for ssl.protocol . ssl.key.password Type: password Default: null Importance: medium Dynamic update: per-broker The password of the private key in the key store file. This is optional for client. ssl.keymanager.algorithm Type: string Default: SunX509 Importance: medium Dynamic update: per-broker The algorithm used by key manager factory for SSL connections. Default value is the key manager factory algorithm configured for the Java Virtual Machine. ssl.keystore.location Type: string Default: null Importance: medium Dynamic update: per-broker The location of the key store file. This is optional for client and can be used for two-way authentication for client. ssl.keystore.password Type: password Default: null Importance: medium Dynamic update: per-broker The store password for the key store file. This is optional for client and only needed if ssl.keystore.location is configured. ssl.keystore.type Type: string Default: JKS Importance: medium Dynamic update: per-broker The file format of the key store file. This is optional for client. ssl.protocol Type: string Default: TLSv1.3 Importance: medium Dynamic update: per-broker The SSL protocol used to generate the SSLContext. The default is 'TLSv1.3' when running with Java 11 or newer, 'TLSv1.2' otherwise. This value should be fine for most use cases. Allowed values in recent JVMs are 'TLSv1.2' and 'TLSv1.3'. 'TLS', 'TLSv1.1', 'SSL', 'SSLv2' and 'SSLv3' may be supported in older JVMs, but their usage is discouraged due to known security vulnerabilities. With the default value for this config and 'ssl.enabled.protocols', clients will downgrade to 'TLSv1.2' if the server does not support 'TLSv1.3'. If this config is set to 'TLSv1.2', clients will not use 'TLSv1.3' even if it is one of the values in ssl.enabled.protocols and the server only supports 'TLSv1.3'.
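A sketch of a TLS-enabled broker using the keystore and truststore settings above; the file paths and passwords are placeholders, not defaults:
ssl.keystore.location=/var/private/ssl/broker.keystore.jks
ssl.keystore.password=<keystore-password>
ssl.key.password=<key-password>
ssl.truststore.location=/var/private/ssl/broker.truststore.jks
ssl.truststore.password=<truststore-password>
# require mutual TLS from clients
ssl.client.auth=required
ssl.enabled.protocols=TLSv1.2,TLSv1.3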
ssl.provider Type: string Default: null Importance: medium Dynamic update: per-broker The name of the security provider used for SSL connections. Default value is the default security provider of the JVM. ssl.trustmanager.algorithm Type: string Default: PKIX Importance: medium Dynamic update: per-broker The algorithm used by trust manager factory for SSL connections. Default value is the trust manager factory algorithm configured for the Java Virtual Machine. ssl.truststore.location Type: string Default: null Importance: medium Dynamic update: per-broker The location of the trust store file. ssl.truststore.password Type: password Default: null Importance: medium Dynamic update: per-broker The password for the trust store file. If a password is not set access to the truststore is still available, but integrity checking is disabled. ssl.truststore.type Type: string Default: JKS Importance: medium Dynamic update: per-broker The file format of the trust store file. zookeeper.clientCnxnSocket Type: string Default: null Importance: medium Dynamic update: read-only Typically set to org.apache.zookeeper.ClientCnxnSocketNetty when using TLS connectivity to ZooKeeper. Overrides any explicit value set via the same-named zookeeper.clientCnxnSocket system property. zookeeper.ssl.client.enable Type: boolean Default: false Importance: medium Dynamic update: read-only Set client to use TLS when connecting to ZooKeeper. An explicit value overrides any value set via the zookeeper.client.secure system property (note the different name). Defaults to false if neither is set; when true, zookeeper.clientCnxnSocket must be set (typically to org.apache.zookeeper.ClientCnxnSocketNetty ); other values to set may include zookeeper.ssl.cipher.suites , zookeeper.ssl.crl.enable , zookeeper.ssl.enabled.protocols , zookeeper.ssl.endpoint.identification.algorithm , zookeeper.ssl.keystore.location , zookeeper.ssl.keystore.password , zookeeper.ssl.keystore.type , zookeeper.ssl.ocsp.enable , zookeeper.ssl.protocol , zookeeper.ssl.truststore.location , zookeeper.ssl.truststore.password , zookeeper.ssl.truststore.type . zookeeper.ssl.keystore.location Type: string Default: null Importance: medium Dynamic update: read-only Keystore location when using a client-side certificate with TLS connectivity to ZooKeeper. Overrides any explicit value set via the zookeeper.ssl.keyStore.location system property (note the camelCase). zookeeper.ssl.keystore.password Type: password Default: null Importance: medium Dynamic update: read-only Keystore password when using a client-side certificate with TLS connectivity to ZooKeeper. Overrides any explicit value set via the zookeeper.ssl.keyStore.password system property (note the camelCase). Note that ZooKeeper does not support a key password different from the keystore password, so be sure to set the key password in the keystore to be identical to the keystore password; otherwise the connection attempt to Zookeeper will fail. zookeeper.ssl.keystore.type Type: string Default: null Importance: medium Dynamic update: read-only Keystore type when using a client-side certificate with TLS connectivity to ZooKeeper. Overrides any explicit value set via the zookeeper.ssl.keyStore.type system property (note the camelCase). The default value of null means the type will be auto-detected based on the filename extension of the keystore. zookeeper.ssl.truststore.location Type: string Default: null Importance: medium Dynamic update: read-only Truststore location when using TLS connectivity to ZooKeeper. 
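A minimal sketch for enabling TLS to ZooKeeper with the properties above, assuming the ZooKeeper ensemble itself is already TLS-enabled; the truststore path and password are placeholders:
zookeeper.ssl.client.enable=true
zookeeper.clientCnxnSocket=org.apache.zookeeper.ClientCnxnSocketNetty
zookeeper.ssl.truststore.location=/var/private/ssl/zookeeper.truststore.jks
zookeeper.ssl.truststore.password=<truststore-password>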
Overrides any explicit value set via the zookeeper.ssl.trustStore.location system property (note the camelCase). zookeeper.ssl.truststore.password Type: password Default: null Importance: medium Dynamic update: read-only Truststore password when using TLS connectivity to ZooKeeper. Overrides any explicit value set via the zookeeper.ssl.trustStore.password system property (note the camelCase). zookeeper.ssl.truststore.type Type: string Default: null Importance: medium Dynamic update: read-only Truststore type when using TLS connectivity to ZooKeeper. Overrides any explicit value set via the zookeeper.ssl.trustStore.type system property (note the camelCase). The default value of null means the type will be auto-detected based on the filename extension of the truststore. alter.config.policy.class.name Type: class Default: null Importance: low Dynamic update: read-only The alter configs policy class that should be used for validation. The class should implement the org.apache.kafka.server.policy.AlterConfigPolicy interface. alter.log.dirs.replication.quota.window.num Type: int Default: 11 Valid Values: [1,... ] Importance: low Dynamic update: read-only The number of samples to retain in memory for alter log dirs replication quotas. alter.log.dirs.replication.quota.window.size.seconds Type: int Default: 1 Valid Values: [1,... ] Importance: low Dynamic update: read-only The time span of each sample for alter log dirs replication quotas. authorizer.class.name Type: string Default: "" Importance: low Dynamic update: read-only The fully qualified name of a class that implements the org.apache.kafka.server.authorizer.Authorizer interface, which is used by the broker for authorization. This config also supports authorizers that implement the deprecated kafka.security.auth.Authorizer trait which was previously used for authorization. client.quota.callback.class Type: class Default: null Importance: low Dynamic update: read-only The fully qualified name of a class that implements the ClientQuotaCallback interface, which is used to determine quota limits applied to client requests. By default, <user, client-id>, <user> or <client-id> quotas stored in ZooKeeper are applied. For any given request, the most specific quota that matches the user principal of the session and the client-id of the request is applied. connection.failed.authentication.delay.ms Type: int Default: 100 Valid Values: [0,... ] Importance: low Dynamic update: read-only Connection close delay on failed authentication: this is the time (in milliseconds) by which connection close will be delayed on authentication failure. This must be configured to be less than connections.max.idle.ms to prevent connection timeout. create.topic.policy.class.name Type: class Default: null Importance: low Dynamic update: read-only The create topic policy class that should be used for validation. The class should implement the org.apache.kafka.server.policy.CreateTopicPolicy interface. delegation.token.expiry.check.interval.ms Type: long Default: 3600000 (1 hour) Valid Values: [1,... ] Importance: low Dynamic update: read-only Scan interval to remove expired delegation tokens. kafka.metrics.polling.interval.secs Type: int Default: 10 Valid Values: [1,... ] Importance: low Dynamic update: read-only The metrics polling interval (in seconds) which can be used in kafka.metrics.reporters implementations. kafka.metrics.reporters Type: list Default: "" Importance: low Dynamic update: read-only A list of classes to use as Yammer metrics custom reporters.
The reporters should implement kafka.metrics.KafkaMetricsReporter trait. If a client wants to expose JMX operations on a custom reporter, the custom reporter needs to additionally implement an MBean trait that extends kafka.metrics.KafkaMetricsReporterMBean trait so that the registered MBean is compliant with the standard MBean convention. listener.security.protocol.map Type: string Default: PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL Importance: low Dynamic update: per-broker Map between listener names and security protocols. This must be defined for the same security protocol to be usable in more than one port or IP. For example, internal and external traffic can be separated even if SSL is required for both. Concretely, the user could define listeners with names INTERNAL and EXTERNAL and this property as: INTERNAL:SSL,EXTERNAL:SSL . As shown, key and value are separated by a colon and map entries are separated by commas. Each listener name should only appear once in the map. Different security (SSL and SASL) settings can be configured for each listener by adding a normalised prefix (the listener name is lowercased) to the config name. For example, to set a different keystore for the INTERNAL listener, a config with name listener.name.internal.ssl.keystore.location would be set. If the config for the listener name is not set, the config will fall back to the generic config (i.e. ssl.keystore.location ). log.message.downconversion.enable Type: boolean Default: true Importance: low Dynamic update: cluster-wide This configuration controls whether down-conversion of message formats is enabled to satisfy consume requests. When set to false , the broker will not perform down-conversion for consumers expecting an older message format. The broker responds with UNSUPPORTED_VERSION error for consume requests from such older clients. This configuration does not apply to any message format conversion that might be required for replication to followers. metric.reporters Type: list Default: "" Importance: low Dynamic update: cluster-wide A list of classes to use as metrics reporters. Implementing the org.apache.kafka.common.metrics.MetricsReporter interface allows plugging in classes that will be notified of new metric creation. The JmxReporter is always included to register JMX statistics. metrics.num.samples Type: int Default: 2 Valid Values: [1,... ] Importance: low Dynamic update: read-only The number of samples maintained to compute metrics. metrics.recording.level Type: string Default: INFO Importance: low Dynamic update: read-only The highest recording level for metrics. metrics.sample.window.ms Type: long Default: 30000 (30 seconds) Valid Values: [1,... ] Importance: low Dynamic update: read-only The window of time a metrics sample is computed over. password.encoder.cipher.algorithm Type: string Default: AES/CBC/PKCS5Padding Importance: low Dynamic update: read-only The Cipher algorithm used for encoding dynamically configured passwords. password.encoder.iterations Type: int Default: 4096 Valid Values: [1024,... ] Importance: low Dynamic update: read-only The iteration count used for encoding dynamically configured passwords. password.encoder.key.length Type: int Default: 128 Valid Values: [8,... ] Importance: low Dynamic update: read-only The key length used for encoding dynamically configured passwords.
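A sketch of the listener-name mapping and per-listener override pattern described above; the listener names, ports and keystore path are illustrative assumptions, and the fragment assumes a generic ssl.keystore.location is configured elsewhere:
listeners=INTERNAL://:9093,EXTERNAL://:9094
listener.security.protocol.map=INTERNAL:SSL,EXTERNAL:SSL
inter.broker.listener.name=INTERNAL
# override the keystore only for the INTERNAL listener; EXTERNAL falls back to ssl.keystore.location
listener.name.internal.ssl.keystore.location=/var/private/ssl/internal.keystore.jks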
password.encoder.keyfactory.algorithm Type: string Default: null Importance: low Dynamic update: read-only The SecretKeyFactory algorithm used for encoding dynamically configured passwords. Default is PBKDF2WithHmacSHA512 if available and PBKDF2WithHmacSHA1 otherwise. quota.window.num Type: int Default: 11 Valid Values: [1,... ] Importance: low Dynamic update: read-only The number of samples to retain in memory for client quotas. quota.window.size.seconds Type: int Default: 1 Valid Values: [1,... ] Importance: low Dynamic update: read-only The time span of each sample for client quotas. replication.quota.window.num Type: int Default: 11 Valid Values: [1,... ] Importance: low Dynamic update: read-only The number of samples to retain in memory for replication quotas. replication.quota.window.size.seconds Type: int Default: 1 Valid Values: [1,... ] Importance: low Dynamic update: read-only The time span of each sample for replication quotas. security.providers Type: string Default: null Importance: low Dynamic update: read-only A list of configurable creator classes each returning a provider implementing security algorithms. These classes should implement the org.apache.kafka.common.security.auth.SecurityProviderCreator interface. ssl.endpoint.identification.algorithm Type: string Default: https Importance: low Dynamic update: per-broker The endpoint identification algorithm to validate server hostname using server certificate. ssl.engine.factory.class Type: class Default: null Importance: low Dynamic update: per-broker The class of type org.apache.kafka.common.security.auth.SslEngineFactory to provide SSLEngine objects. Default value is org.apache.kafka.common.security.ssl.DefaultSslEngineFactory. ssl.principal.mapping.rules Type: string Default: DEFAULT Importance: low Dynamic update: read-only A list of rules for mapping from distinguished name from the client certificate to short name. The rules are evaluated in order and the first rule that matches a principal name is used to map it to a short name. Any later rules in the list are ignored. By default, distinguished name of the X.500 certificate will be the principal. For more details on the format please see security authorization and acls . Note that this configuration is ignored if an extension of KafkaPrincipalBuilder is provided by the principal.builder.class configuration. ssl.secure.random.implementation Type: string Default: null Importance: low Dynamic update: per-broker The SecureRandom PRNG implementation to use for SSL cryptography operations. transaction.abort.timed.out.transaction.cleanup.interval.ms Type: int Default: 10000 (10 seconds) Valid Values: [1,... ] Importance: low Dynamic update: read-only The interval at which to rollback transactions that have timed out. transaction.remove.expired.transaction.cleanup.interval.ms Type: int Default: 3600000 (1 hour) Valid Values: [1,... ] Importance: low Dynamic update: read-only The interval at which to remove transactions that have expired due to transactional.id.expiration.ms passing. zookeeper.ssl.cipher.suites Type: list Default: null Importance: low Dynamic update: read-only Specifies the enabled cipher suites to be used in ZooKeeper TLS negotiation (csv). Overrides any explicit value set via the zookeeper.ssl.ciphersuites system property (note the single word "ciphersuites"). The default value of null means the list of enabled cipher suites is determined by the Java runtime being used. 
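As a hedged illustration of ssl.principal.mapping.rules, a rule of the form commonly shown in Kafka's documentation, which extracts the CN from a matching distinguished name and otherwise falls back to the default behaviour; the OU value is an example:
ssl.principal.mapping.rules=RULE:^CN=(.*?),OU=ServiceUsers.*$/$1/,DEFAULT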
zookeeper.ssl.crl.enable Type: boolean Default: false Importance: low Dynamic update: read-only Specifies whether to enable Certificate Revocation List in the ZooKeeper TLS protocols. Overrides any explicit value set via the zookeeper.ssl.crl system property (note the shorter name). zookeeper.ssl.enabled.protocols Type: list Default: null Importance: low Dynamic update: read-only Specifies the enabled protocol(s) in ZooKeeper TLS negotiation (csv). Overrides any explicit value set via the zookeeper.ssl.enabledProtocols system property (note the camelCase). The default value of null means the enabled protocol will be the value of the zookeeper.ssl.protocol configuration property. zookeeper.ssl.endpoint.identification.algorithm Type: string Default: HTTPS Importance: low Dynamic update: read-only Specifies whether to enable hostname verification in the ZooKeeper TLS negotiation process, with (case-insensitively) "https" meaning ZooKeeper hostname verification is enabled and an explicit blank value meaning it is disabled (disabling it is only recommended for testing purposes). An explicit value overrides any "true" or "false" value set via the zookeeper.ssl.hostnameVerification system property (note the different name and values; true implies https and false implies blank). zookeeper.ssl.ocsp.enable Type: boolean Default: false Importance: low Dynamic update: read-only Specifies whether to enable Online Certificate Status Protocol in the ZooKeeper TLS protocols. Overrides any explicit value set via the zookeeper.ssl.ocsp system property (note the shorter name). zookeeper.ssl.protocol Type: string Default: TLSv1.2 Importance: low Dynamic update: read-only Specifies the protocol to be used in ZooKeeper TLS negotiation. An explicit value overrides any value set via the same-named zookeeper.ssl.protocol system property. zookeeper.sync.time.ms Type: int Default: 2000 (2 seconds) Importance: low Dynamic update: read-only How far a ZK follower can be behind a ZK leader.
null
https://docs.redhat.com/en/documentation/red_hat_amq/2020.q4/html/using_amq_streams_on_rhel/broker-configuration-parameters-str
22.4. JBoss Operations Network Agent
22.4. JBoss Operations Network Agent The JBoss Operations Network Agent is a standalone Java application. Only one agent is required per machine, regardless of how many resources you require the agent to manage. The JBoss Operations Network Agent does not ship fully configured. Once the agent has been installed and configured it can be run as a Windows service from a console, or run as a daemon or init.d script in a UNIX environment. A JBoss Operations Network Agent must be installed on each of the machines being monitored in order to collect data. The JBoss Operations Network Agent is typically installed on the same machine on which Red Hat JBoss Data Grid is running, however where there are multiple machines an agent must be installed on each machine. Note For more detailed information about configuring JBoss Operations Network agents, see the JBoss Operations Network Installation Guide .
null
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/administration_and_configuration_guide/About_the_JBoss_Operations_Network_Agent
2.3. Clustered expiration events
2.3. Clustered expiration events JDG 6.6 features support for listeners to view clustered, lifespan-based expiration events. This enables you to implement custom logic triggered by expiration of an entry when its lifespan has elapsed. This feature is available in both Library and Client-Server modes.
null
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/6.6.0_release_notes/clustered_expiration_events
Part VII. Designing a decision service using guided rule templates
Part VII. Designing a decision service using guided rule templates As a business analyst or business rules developer, you can define business rule templates using the guided rule templates designer in Business Central. These guided rule templates provide a reusable rule structure for multiple rules that are compiled into Drools Rule Language (DRL) and form the core of the decision service for your project. Note You can also design your decision service using Decision Model and Notation (DMN) models instead of rule-based or table-based assets. For information about DMN support in Red Hat Decision Manager 7.13, see the following resources: Getting started with decision services (step-by-step tutorial with a DMN decision service example) Designing a decision service using DMN models (overview of DMN support and capabilities in Red Hat Decision Manager) Prerequisites The space and project for the guided rule templates have been created in Business Central. Each asset is associated with a project assigned to a space. For details, see Getting started with decision services .
null
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/developing_decision_services_in_red_hat_decision_manager/assembly-guided-rule-templates
Chapter 2. Requirements
Chapter 2. Requirements 2.1. Red Hat Virtualization Manager Requirements 2.1.1. Hardware Requirements The minimum and recommended hardware requirements outlined here are based on a typical small to medium-sized installation. The exact requirements vary between deployments based on sizing and load. Hardware certification for Red Hat Virtualization is covered by the hardware certification for Red Hat Enterprise Linux. For more information, see Does Red Hat Virtualization also have hardware certification? . To confirm whether specific hardware items are certified for use with Red Hat Enterprise Linux, see Red Hat certified hardware . Table 2.1. Red Hat Virtualization Manager Hardware Requirements Resource Minimum Recommended CPU A dual core x86_64 CPU. A quad core x86_64 CPU or multiple dual core x86_64 CPUs. Memory 4 GB of available system RAM if Data Warehouse is not installed and if memory is not being consumed by existing processes. 16 GB of system RAM. Hard Disk 25 GB of locally accessible, writable disk space. 50 GB of locally accessible, writable disk space. You can use the RHV Manager History Database Size Calculator to calculate the appropriate disk space for the Manager history database size. Network Interface 1 Network Interface Card (NIC) with bandwidth of at least 1 Gbps. 1 Network Interface Card (NIC) with bandwidth of at least 1 Gbps. 2.1.2. Browser Requirements The following browser versions and operating systems can be used to access the Administration Portal and the VM Portal. Browser support is divided into tiers: Tier 1: Browser and operating system combinations that are fully tested and fully supported. Red Hat Engineering is committed to fixing issues with browsers on this tier. Tier 2: Browser and operating system combinations that are partially tested, and are likely to work. Limited support is provided for this tier. Red Hat Engineering will attempt to fix issues with browsers on this tier. Tier 3: Browser and operating system combinations that are not tested, but may work. Minimal support is provided for this tier. Red Hat Engineering will attempt to fix only minor issues with browsers on this tier. Table 2.2. Browser Requirements Support Tier Operating System Family Browser Tier 1 Red Hat Enterprise Linux Mozilla Firefox Extended Support Release (ESR) version Any Most recent version of Google Chrome, Mozilla Firefox, or Microsoft Edge Tier 2 Tier 3 Any Earlier versions of Google Chrome or Mozilla Firefox Any Other browsers 2.1.3. Client Requirements Virtual machine consoles can only be accessed using supported Remote Viewer ( virt-viewer ) clients on Red Hat Enterprise Linux and Windows. To install virt-viewer , see Installing Supporting Components on Client Machines in the Virtual Machine Management Guide . Installing virt-viewer requires Administrator privileges. You can access virtual machine consoles using the SPICE, VNC, or RDP (Windows only) protocols. You can install the QXLDOD graphical driver in the guest operating system to improve the functionality of SPICE. SPICE currently supports a maximum resolution of 2560x1600 pixels. Client Operating System SPICE Support Supported QXLDOD drivers are available on Red Hat Enterprise Linux 7.2 and later, and Windows 10. Note SPICE may work with Windows 8 or 8.1 using QXLDOD drivers, but it is neither certified nor tested. 2.1.4. Operating System Requirements The Red Hat Virtualization Manager must be installed on a base installation of Red Hat Enterprise Linux 8.6. 
Do not install any additional packages after the base installation, as they may cause dependency issues when attempting to install the packages required by the Manager. Do not enable additional repositories other than those required for the Manager installation. 2.2. Host Requirements Hardware certification for Red Hat Virtualization is covered by the hardware certification for Red Hat Enterprise Linux. For more information, see Does Red Hat Virtualization also have hardware certification? . To confirm whether specific hardware items are certified for use with Red Hat Enterprise Linux, see Find a certified solution . For more information on the requirements and limitations that apply to guests see Red Hat Enterprise Linux Technology Capabilities and Limits and Supported Limits for Red Hat Virtualization . 2.2.1. CPU Requirements All CPUs must have support for the Intel(R) 64 or AMD64 CPU extensions, and the AMD-V(TM) or Intel VT(R) hardware virtualization extensions enabled. Support for the No eXecute flag (NX) is also required. The following CPU models are supported: AMD Opteron G4 Opteron G5 EPYC Intel Nehalem Westmere SandyBridge IvyBridge Haswell Broadwell Skylake Client Skylake Server Cascadelake Server IBM POWER8 POWER9 For each CPU model with security updates, the CPU Type lists a basic type and a secure type. For example: Intel Cascadelake Server Family Secure Intel Cascadelake Server Family The Secure CPU type contains the latest updates. For details, see BZ# 1731395 2.2.1.1. Checking if a Processor Supports the Required Flags You must enable virtualization in the BIOS. Power off and reboot the host after this change to ensure that the change is applied. Procedure At the Red Hat Enterprise Linux or Red Hat Virtualization Host boot screen, press any key and select the Boot or Boot with serial console entry from the list. Press Tab to edit the kernel parameters for the selected option. Ensure there is a space after the last kernel parameter listed, and append the parameter rescue . Press Enter to boot into rescue mode. At the prompt, determine that your processor has the required extensions and that they are enabled by running this command: If any output is shown, the processor is hardware virtualization capable. If no output is shown, your processor may still support hardware virtualization; in some circumstances manufacturers disable the virtualization extensions in the BIOS. If you believe this to be the case, consult the system's BIOS and the motherboard manual provided by the manufacturer. 2.2.2. Memory Requirements The minimum required RAM is 2 GB. For cluster levels 4.2 to 4.5, the maximum supported RAM per VM in Red Hat Virtualization Host is 6 TB. For cluster levels 4.6 to 4.7, the maximum supported RAM per VM in Red Hat Virtualization Host is 16 TB. However, the amount of RAM required varies depending on guest operating system requirements, guest application requirements, and guest memory activity and usage. KVM can also overcommit physical RAM for virtualized guests, allowing you to provision guests with RAM requirements greater than what is physically present, on the assumption that the guests are not all working concurrently at peak load. KVM does this by only allocating RAM for guests as required and shifting underutilized guests into swap. 2.2.3. Storage Requirements Hosts require storage to store configuration, logs, kernel dumps, and for use as swap space. Storage can be local or network-based.
Red Hat Virtualization Host (RHVH) can boot with one, some, or all of its default allocations in network storage. Booting from network storage can result in a freeze if there is a network disconnect. Adding a drop-in multipath configuration file can help address losses in network connectivity. If RHVH boots from SAN storage and loses connectivity, the files become read-only until network connectivity is restored. Using network storage might result in a performance downgrade. The minimum storage requirements of RHVH are documented in this section. The storage requirements for Red Hat Enterprise Linux hosts vary based on the amount of disk space used by their existing configuration but are expected to be greater than those of RHVH. The minimum storage requirements for host installation are listed below. However, use the default allocations, which use more storage space. / (root) - 6 GB /home - 1 GB /tmp - 1 GB /boot - 1 GB /var - 5 GB /var/crash - 10 GB /var/log - 8 GB /var/log/audit - 2 GB /var/tmp - 10 GB swap - 1 GB. See What is the recommended swap size for Red Hat platforms? for details. Anaconda reserves 20% of the thin pool size within the volume group for future metadata expansion. This is to prevent an out-of-the-box configuration from running out of space under normal usage conditions. Overprovisioning of thin pools during installation is also not supported. Minimum Total - 64 GiB If you are also installing the RHV-M Appliance for self-hosted engine installation, /var/tmp must be at least 10 GB. If you plan to use memory overcommitment, add enough swap space to provide virtual memory for all of the virtual machines. See Memory Optimization . 2.2.4. PCI Device Requirements Hosts must have at least one network interface with a minimum bandwidth of 1 Gbps. Each host should have two network interfaces, with one dedicated to supporting network-intensive activities, such as virtual machine migration. The performance of such operations is limited by the bandwidth available. For information about how to use PCI Express and conventional PCI devices with Intel Q35-based virtual machines, see Using PCI Express and Conventional PCI Devices with the Q35 Virtual Machine . 2.2.5. Device Assignment Requirements If you plan to implement device assignment and PCI passthrough so that a virtual machine can use a specific PCIe device from a host, ensure the following requirements are met: CPU must support IOMMU (for example, VT-d or AMD-Vi). IBM POWER8 supports IOMMU by default. Firmware must support IOMMU. CPU root ports used must support ACS or ACS-equivalent capability. PCIe devices must support ACS or ACS-equivalent capability. All PCIe switches and bridges between the PCIe device and the root port should support ACS. For example, if a switch does not support ACS, all devices behind that switch share the same IOMMU group, and can only be assigned to the same virtual machine. For GPU support, Red Hat Enterprise Linux 8 supports PCI device assignment of PCIe-based NVIDIA K-Series Quadro (model 2000 series or higher), GRID, and Tesla as non-VGA graphics devices. Currently up to two GPUs may be attached to a virtual machine in addition to one of the standard, emulated VGA interfaces. The emulated VGA is used for pre-boot and installation and the NVIDIA GPU takes over when the NVIDIA graphics drivers are loaded. Note that the NVIDIA Quadro 2000 is not supported, nor is the Quadro K420 card. Check vendor specification and datasheets to confirm that your hardware meets these requirements.
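If you want a quick, non-authoritative way to check the device assignment prerequisites above from a running host, the following shell sketch shows two common checks; exact output varies by platform and firmware:
# confirm the kernel sees an IOMMU (Intel VT-d reports DMAR, AMD-Vi reports AMD-Vi/IOMMU)
dmesg | grep -i -e DMAR -e IOMMU
# list IOMMU groups; devices that share a group can only be assigned together
find /sys/kernel/iommu_groups/ -type l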
The lspci -v command can be used to print information for PCI devices already installed on a system. 2.2.6. vGPU Requirements A host must meet the following requirements in order for virtual machines on that host to use a vGPU: vGPU-compatible GPU GPU-enabled host kernel Installed GPU with correct drivers Select a vGPU type and the number of instances that you would like to use with this virtual machine using the Manage vGPU dialog in the Administration Portal Host Devices tab of the virtual machine. vGPU-capable drivers installed on each host in the cluster vGPU-supported virtual machine operating system with vGPU drivers installed 2.3. Networking requirements 2.3.1. General requirements Red Hat Virtualization requires IPv6 to remain enabled on the physical or virtual machine running the Manager. Do not disable IPv6 on the Manager machine, even if your systems do not use it. 2.3.2. Firewall Requirements for DNS, NTP, and IPMI Fencing The firewall requirements for all of the following topics are special cases that require individual consideration. DNS and NTP Red Hat Virtualization does not create a DNS or NTP server, so the firewall does not need to have open ports for incoming traffic. By default, Red Hat Enterprise Linux allows outbound traffic to DNS and NTP on any destination address. If you disable outgoing traffic, define exceptions for requests that are sent to DNS and NTP servers. Important The Red Hat Virtualization Manager and all hosts (Red Hat Virtualization Host and Red Hat Enterprise Linux host) must have a fully qualified domain name and full, perfectly-aligned forward and reverse name resolution. Running a DNS service as a virtual machine in the Red Hat Virtualization environment is not supported. All DNS services the Red Hat Virtualization environment uses must be hosted outside of the environment. Use DNS instead of the /etc/hosts file for name resolution. Using a hosts file typically requires more work and has a greater chance for errors. IPMI and Other Fencing Mechanisms (optional) For IPMI (Intelligent Platform Management Interface) and other fencing mechanisms, the firewall does not need to have open ports for incoming traffic. By default, Red Hat Enterprise Linux allows outbound IPMI traffic to ports on any destination address. If you disable outgoing traffic, make exceptions for requests being sent to your IPMI or fencing servers. Each Red Hat Virtualization Host and Red Hat Enterprise Linux host in the cluster must be able to connect to the fencing devices of all other hosts in the cluster. If the cluster hosts are experiencing an error (network error, storage error... ) and cannot function as hosts, they must be able to connect to other hosts in the data center. The specific port number depends on the type of the fence agent you are using and how it is configured. The firewall requirement tables in the following sections do not represent this option. 2.3.3. Red Hat Virtualization Manager Firewall Requirements The Red Hat Virtualization Manager requires that a number of ports be opened to allow network traffic through the system's firewall. The engine-setup script can configure the firewall automatically. The firewall configuration documented here assumes a default configuration. Note A diagram of these firewall requirements is available at https://access.redhat.com/articles/3932211 . You can use the IDs in the table to look up connections in the diagram. Table 2.3. 
Red Hat Virtualization Manager Firewall Requirements ID Port(s) Protocol Source Destination Purpose Encrypted by default M1 - ICMP Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Red Hat Virtualization Manager Optional. May help in diagnosis. No M2 22 TCP System(s) used for maintenance of the Manager including backend configuration, and software upgrades. Red Hat Virtualization Manager Secure Shell (SSH) access. Optional. Yes M3 2222 TCP Clients accessing virtual machine serial consoles. Red Hat Virtualization Manager Secure Shell (SSH) access to enable connection to virtual machine serial consoles. Yes M4 80, 443 TCP Administration Portal clients VM Portal clients Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts REST API clients Red Hat Virtualization Manager Provides HTTP (port 80, not encrypted) and HTTPS (port 443, encrypted) access to the Manager. HTTP redirects connections to HTTPS. Yes M5 6100 TCP Administration Portal clients VM Portal clients Red Hat Virtualization Manager Provides websocket proxy access for a web-based console client, noVNC , when the websocket proxy is running on the Manager. No M6 7410 UDP Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Red Hat Virtualization Manager If Kdump is enabled on the hosts, open this port for the fence_kdump listener on the Manager. See fence_kdump Advanced Configuration . fence_kdump doesn't provide a way to encrypt the connection. However, you can manually configure this port to block access from hosts that are not eligible. No M7 54323 TCP Administration Portal clients Red Hat Virtualization Manager ( ovirt-imageio service) Required for communication with the ovirt-imageio service. Yes M8 6642 TCP Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Open Virtual Network (OVN) southbound database Connect to Open Virtual Network (OVN) database Yes M9 9696 TCP Clients of external network provider for OVN External network provider for OVN OpenStack Networking API Yes, with configuration generated by engine-setup. M10 35357 TCP Clients of external network provider for OVN External network provider for OVN OpenStack Identity API Yes, with configuration generated by engine-setup. M11 53 TCP, UDP Red Hat Virtualization Manager DNS Server DNS lookup requests from ports above 1023 to port 53, and responses. Open by default. No M12 123 UDP Red Hat Virtualization Manager NTP Server NTP requests from ports above 1023 to port 123, and responses. Open by default. No Note A port for the OVN northbound database (6641) is not listed because, in the default configuration, the only client for the OVN northbound database (6641) is ovirt-provider-ovn . Because they both run on the same host, their communication is not visible to the network. By default, Red Hat Enterprise Linux allows outbound traffic to DNS and NTP on any destination address. If you disable outgoing traffic, make exceptions for the Manager to send requests to DNS and NTP servers. Other nodes may also require DNS and NTP. In that case, consult the requirements for those nodes and configure the firewall accordingly. 2.3.4. Host Firewall Requirements Red Hat Enterprise Linux hosts and Red Hat Virtualization Hosts (RHVH) require a number of ports to be opened to allow network traffic through the system's firewall. The firewall rules are automatically configured by default when adding a new host to the Manager, overwriting any pre-existing firewall configuration.
To disable automatic firewall configuration when adding a new host, clear the Automatically configure host firewall check box under Advanced Parameters . To customize the host firewall rules, see RHV: How to customize the Host's firewall rules? . Note A diagram of these firewall requirements is available at Red Hat Virtualization: Firewall Requirements Diagram . You can use the IDs in the table to look up connections in the diagram. Table 2.4. Virtualization Host Firewall Requirements ID Port(s) Protocol Source Destination Purpose Encrypted by default H1 22 TCP Red Hat Virtualization Manager Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Secure Shell (SSH) access. Optional. Yes H2 2223 TCP Red Hat Virtualization Manager Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Secure Shell (SSH) access to enable connection to virtual machine serial consoles. Yes H3 161 UDP Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Red Hat Virtualization Manager Simple network management protocol (SNMP). Only required if you want Simple Network Management Protocol traps sent from the host to one or more external SNMP managers. Optional. No H4 111 TCP NFS storage server Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts NFS connections. Optional. No H5 5900 - 6923 TCP Administration Portal clients VM Portal clients Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Remote guest console access via VNC and SPICE. These ports must be open to facilitate client access to virtual machines. Yes (optional) H6 5989 TCP, UDP Common Information Model Object Manager (CIMOM) Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Used by Common Information Model Object Managers (CIMOM) to monitor virtual machines running on the host. Only required if you want to use a CIMOM to monitor the virtual machines in your virtualization environment. Optional. No H7 9090 TCP Red Hat Virtualization Manager Client machines Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Required to access the Cockpit web interface, if installed. Yes H8 16514 TCP Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Virtual machine migration using libvirt . Yes H9 49152 - 49215 TCP Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Virtual machine migration and fencing using VDSM. These ports must be open to facilitate both automated and manual migration of virtual machines. Yes. Depending on agent for fencing, migration is done through libvirt. H10 54321 TCP Red Hat Virtualization Manager Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts VDSM communications with the Manager and other virtualization hosts. Yes H11 54322 TCP Red Hat Virtualization Manager ovirt-imageio service Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Required for communication with the ovirt-imageio service. Yes H12 6081 UDP Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts Required, when Open Virtual Network (OVN) is used as a network provider, to allow OVN to create tunnels between hosts. No H13 53 TCP, UDP Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts DNS Server DNS lookup requests from ports above 1023 to port 53, and responses. This port is required and open by default.
No H14 123 UDP Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts NTP Server NTP requests from ports above 1023 to port 123, and responses. This port is required and open by default. H15 4500 TCP, UDP Red Hat Virtualization Hosts Red Hat Virtualization Hosts Internet Security Protocol (IPSec) Yes H16 500 UDP Red Hat Virtualization Hosts Red Hat Virtualization Hosts Internet Security Protocol (IPSec) Yes H17 - AH, ESP Red Hat Virtualization Hosts Red Hat Virtualization Hosts Internet Security Protocol (IPSec) Yes Note By default, Red Hat Enterprise Linux allows outbound traffic to DNS and NTP on any destination address. If you disable outgoing traffic, make exceptions for the Red Hat Virtualization Hosts Red Hat Enterprise Linux hosts to send requests to DNS and NTP servers. Other nodes may also require DNS and NTP. In that case, consult the requirements for those nodes and configure the firewall accordingly. 2.3.5. Database Server Firewall Requirements Red Hat Virtualization supports the use of a remote database server for the Manager database ( engine ) and the Data Warehouse database ( ovirt-engine-history ). If you plan to use a remote database server, it must allow connections from the Manager and the Data Warehouse service (which can be separate from the Manager). Similarly, if you plan to access a local or remote Data Warehouse database from an external system, the database must allow connections from that system. Important Accessing the Manager database from external systems is not supported. Note A diagram of these firewall requirements is available at https://access.redhat.com/articles/3932211 . You can use the IDs in the table to look up connections in the diagram. Table 2.5. Database Server Firewall Requirements ID Port(s) Protocol Source Destination Purpose Encrypted by default D1 5432 TCP, UDP Red Hat Virtualization Manager Data Warehouse service Manager ( engine ) database server Data Warehouse ( ovirt-engine-history ) database server Default port for PostgreSQL database connections. No, but can be enabled . D2 5432 TCP, UDP External systems Data Warehouse ( ovirt-engine-history ) database server Default port for PostgreSQL database connections. Disabled by default. No, but can be enabled . 2.3.6. Maximum Transmission Unit Requirements The recommended Maximum Transmission Units (MTU) setting for Hosts during deployment is 1500. It is possible to update this setting after the environment is set up to a different MTU. For more information on changing the MTU setting, see How to change the Hosted Engine VM network MTU .
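If you disable automatic firewall configuration when adding a host, you can manage the listed ports manually with firewalld. The following is a minimal sketch only, assuming firewalld is in use and that the chosen ports (the VDSM port 54321/tcp from row H10 and the OVN tunnel port 6081/udp from row H12) and the host name manager.example.com apply to your deployment; substitute the rows and names that match your environment.
# Check which ports are currently open in the active zone
firewall-cmd --list-ports
# Open the VDSM management port (H10) and the OVN tunnel port (H12)
firewall-cmd --permanent --add-port=54321/tcp
firewall-cmd --permanent --add-port=6081/udp
# Reload firewalld so the permanent rules take effect
firewall-cmd --reload
# Verify that a remote port is reachable from another machine (nc is assumed to be installed)
nc -zv manager.example.com 443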
[ "grep -E 'svm|vmx' /proc/cpuinfo | grep nx" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/installing_red_hat_virtualization_as_a_standalone_manager_with_local_databases/RHV_requirements
Appendix A. Using your subscription
Appendix A. Using your subscription AMQ is provided through a software subscription. To manage your subscriptions, access your account at the Red Hat Customer Portal. A.1. Accessing your account Procedure Go to access.redhat.com . If you do not already have an account, create one. Log in to your account. A.2. Activating a subscription Procedure Go to access.redhat.com . Navigate to My Subscriptions . Navigate to Activate a subscription and enter your 16-digit activation number. A.3. Downloading release files To access .zip, .tar.gz, and other release files, use the customer portal to find the relevant files for download. If you are using RPM packages or the Red Hat Maven repository, this step is not required. Procedure Open a browser and log in to the Red Hat Customer Portal Product Downloads page at access.redhat.com/downloads . Locate the Red Hat AMQ entries in the INTEGRATION AND AUTOMATION category. Select the desired AMQ product. The Software Downloads page opens. Click the Download link for your component. A.4. Registering your system for packages To install RPM packages for this product on Red Hat Enterprise Linux, your system must be registered. If you are using downloaded release files, this step is not required. Procedure Go to access.redhat.com . Navigate to Registration Assistant . Select your OS version and continue to the next page. Use the listed command in your system terminal to complete the registration. For more information about registering your system, see one of the following resources: Red Hat Enterprise Linux 7 - Registering the system and managing subscriptions Red Hat Enterprise Linux 8 - Registering the system and managing subscriptions
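As an illustration only, the registration command produced by the Registration Assistant typically resembles the following sketch; the user name placeholder and the use of auto-attach are assumptions, not values taken from this guide.
# Register the system with Red Hat Subscription Management (prompts for a password)
sudo subscription-manager register --username <your-portal-username>
# Attach a suitable subscription automatically, if your account does not use Simple Content Access
sudo subscription-manager attach --auto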
null
https://docs.redhat.com/en/documentation/red_hat_amq/2021.q1/html/using_the_amq_openwire_jms_client/using_your_subscription
probe::netdev.hard_transmit
probe::netdev.hard_transmit Name probe::netdev.hard_transmit - Called when the device is going to TX (hard) Synopsis Values protocol The protocol used in the transmission dev_name The device scheduled to transmit length The length of the transmit buffer. truesize The size of the data to be transmitted.
[ "netdev.hard_transmit" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/systemtap_tapset_reference/api-netdev-hard-transmit
Chapter 17. @timestamp
Chapter 17. @timestamp A UTC value that marks when the log payload was created or, if the creation time is not known, when the log payload was first collected. The "@" prefix denotes a field that is reserved for a particular use. By default, most tools look for "@timestamp" with ElasticSearch. Data type date Example value 2015-01-24 14:06:05.071000000 Z
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/logging/timestamp
7.3. Creating a Template
7.3. Creating a Template Create a template from an existing virtual machine to use as a blueprint for creating additional virtual machines. Note In RHV 4.4, to seal a RHEL 8 virtual machine for a template, its cluster level must be 4.4 and all hosts in the cluster must be based on RHEL 8. You cannot seal a RHEL 8 virtual machine if you have set its cluster level to 4.3 so it can run on RHEL 7 hosts. When you create a template, you specify the format of the disk to be raw or QCOW2: QCOW2 disks are thin provisioned. Raw disks on file storage are thin provisioned. Raw disks on block storage are preallocated. Creating a Template Click Compute Virtual Machines and select the source virtual machine. Ensure the virtual machine is powered down and has a status of Down . Click More Actions , then click Make Template . For more details on all fields in the New Template window, see Explanation of Settings in the New Template and Edit Template Windows . Enter a Name , Description , and Comment for the template. Select the cluster with which to associate the template from the Cluster drop-down list. By default, this is the same as that of the source virtual machine. Optionally, select a CPU profile for the template from the CPU Profile drop-down list. Optionally, select the Create as a Template Sub-Version check box, select a Root Template , and enter a Sub-Version Name to create the new template as a sub-template of an existing template. In the Disks Allocation section, enter an alias for the disk in the Alias text field. Select the disk format in the Format drop-down, the storage domain on which to store the disk from the Target drop-down, and the disk profile in the Disk Profile drop-down. By default, these are the same as those of the source virtual machine. Select the Allow all users to access this Template check box to make the template public. Select the Copy VM permissions check box to copy the permissions of the source virtual machine to the template. Select the Seal Template check box (Linux only) to seal the template. Note Sealing, which uses the virt-sysprep command, removes system-specific details from a virtual machine before creating a template based on that virtual machine. This prevents the original virtual machine's details from appearing in subsequent virtual machines that are created using the same template. It also ensures the functionality of other features, such as predictable vNIC order. See virt-sysprep operations for more information. Click OK . The virtual machine displays a status of Image Locked while the template is being created. The process of creating a template may take up to an hour depending on the size of the virtual disk and the capabilities of your storage hardware. When complete, the template is added to the Templates tab. You can now create new virtual machines based on the template. Note When a template is made, the virtual machine is copied so that both the existing virtual machine and its template are usable after template creation.
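Selecting the Seal Template check box runs the sealing step for you. The command below is only a hedged sketch of what an equivalent manual invocation of virt-sysprep looks like on a host with libguestfs tools installed; the domain name is a placeholder, not a value from this procedure.
# Remove system-specific details (SSH host keys, machine ID, log files, and so on)
# from a powered-down guest before using it as a template source
virt-sysprep -d rhel8-template-source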
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/virtual_machine_management_guide/Creating_a_template_from_an_existing_virtual_machine
Chapter 11. Using hub templates in PolicyGenerator or PolicyGenTemplate CRs
Chapter 11. Using hub templates in PolicyGenerator or PolicyGentemplate CRs Topology Aware Lifecycle Manager supports Red Hat Advanced Cluster Management (RHACM) hub cluster template functions in configuration policies used with GitOps Zero Touch Provisioning (ZTP). Hub-side cluster templates allow you to define configuration policies that can be dynamically customized to the target clusters. This reduces the need to create separate policies for many clusters with similar configurations but with different values. Important Policy templates are restricted to the same namespace as the namespace where the policy is defined. This means you must create the objects referenced in the hub template in the same namespace where the policy is created. Important Using PolicyGenTemplate CRs to manage and deploy policies to managed clusters will be deprecated in an upcoming OpenShift Container Platform release. Equivalent and improved functionality is available using Red Hat Advanced Cluster Management (RHACM) and PolicyGenerator CRs. For more information about PolicyGenerator resources, see the RHACM Policy Generator documentation. Additional resources Configuring managed cluster policies by using PolicyGenerator resources Comparing RHACM PolicyGenerator and PolicyGenTemplate resource patching RHACM support for template processing in configuration policies 11.1. Specifying group and site configurations in group PolicyGenerator or PolicyGentemplate CRs You can manage the configuration of fleets of clusters with ConfigMap CRs by using hub templates to populate the group and site values in the generated policies that get applied to the managed clusters. Using hub templates in site PolicyGenerator or PolicyGentemplate CRs means that you do not need to create a policy CR for each site. You can group the clusters in a fleet in various categories, depending on the use case, for example hardware type or region. Each cluster should have a label corresponding to the group or groups that the cluster is in. If you manage the configuration values for each group in different ConfigMap CRs, then you require only one group policy CR to apply the changes to all the clusters in the group by using hub templates. The following example shows you how to use three ConfigMap CRs and one PolicyGenerator CR to apply both site and group configuration to clusters grouped by hardware type and region. Note There is a 1 MiB size limit (Kubernetes documentation) for ConfigMap CRs. The effective size for the ConfigMap CRs is further limited by the last-applied-configuration annotation. To avoid the last-applied-configuration limitation, add the following annotation to the template ConfigMap : argocd.argoproj.io/sync-options: Replace=true Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the hub cluster as a user with cluster-admin privileges. You have created a Git repository where you manage your custom site configuration data. The repository must be accessible from the hub cluster and be defined as a source repository for the GitOps ZTP ArgoCD application. Procedure Create three ConfigMap CRs that contain the group and site configuration: Create a ConfigMap CR named group-hardware-types-configmap to hold the hardware-specific configuration.
For example: apiVersion: v1 kind: ConfigMap metadata: name: group-hardware-types-configmap namespace: ztp-group annotations: argocd.argoproj.io/sync-options: Replace=true 1 data: # SriovNetworkNodePolicy.yaml hardware-type-1-sriov-node-policy-pfNames-1: "[\"ens5f0\"]" hardware-type-1-sriov-node-policy-pfNames-2: "[\"ens7f0\"]" # PerformanceProfile.yaml hardware-type-1-cpu-isolated: "2-31,34-63" hardware-type-1-cpu-reserved: "0-1,32-33" hardware-type-1-hugepages-default: "1G" hardware-type-1-hugepages-size: "1G" hardware-type-1-hugepages-count: "32" 1 The argocd.argoproj.io/sync-options annotation is required only if the ConfigMap is larger than 1 MiB in size. Create a ConfigMap CR named group-zones-configmap to hold the regional configuration. For example: apiVersion: v1 kind: ConfigMap metadata: name: group-zones-configmap namespace: ztp-group data: # ClusterLogForwarder.yaml zone-1-cluster-log-fwd-outputs: "[{\"type\":\"kafka\", \"name\":\"kafka-open\", \"url\":\"tcp://10.46.55.190:9092/test\"}]" zone-1-cluster-log-fwd-pipelines: "[{\"inputRefs\":[\"audit\", \"infrastructure\"], \"labels\": {\"label1\": \"test1\", \"label2\": \"test2\", \"label3\": \"test3\", \"label4\": \"test4\"}, \"name\": \"all-to-default\", \"outputRefs\": [\"kafka-open\"]}]" Create a ConfigMap CR named site-data-configmap to hold the site-specific configuration. For example: apiVersion: v1 kind: ConfigMap metadata: name: site-data-configmap namespace: ztp-group data: # SriovNetwork.yaml du-sno-1-zone-1-sriov-network-vlan-1: "140" du-sno-1-zone-1-sriov-network-vlan-2: "150" Note Each ConfigMap CR must be in the same namespace as the policy to be generated from the group PolicyGenerator CR. Commit the ConfigMap CRs in Git, and then push to the Git repository being monitored by the Argo CD application. Apply the hardware type and region labels to the clusters. The following command applies to a single cluster named du-sno-1-zone-1 and the labels chosen are "hardware-type": "hardware-type-1" and "group-du-sno-zone": "zone-1" : USD oc patch managedclusters.cluster.open-cluster-management.io/du-sno-1-zone-1 --type merge -p '{"metadata":{"labels":{"hardware-type": "hardware-type-1", "group-du-sno-zone": "zone-1"}}}' Depending on your requirements, Create a group PolicyGenerator or PolicyGentemplate CR that uses hub templates to obtain the required data from the ConfigMap objects: Create a group PolicyGenerator CR. 
This example PolicyGenerator CR configures logging, VLAN IDs, NICs and Performance Profile for the clusters that match the labels listed the under policyDefaults.placement field: --- apiVersion: policy.open-cluster-management.io/v1 kind: PolicyGenerator metadata: name: group-du-sno-pgt placementBindingDefaults: name: group-du-sno-pgt-placement-binding policyDefaults: placement: labelSelector: matchExpressions: - key: group-du-sno-zone operator: In values: - zone-1 - key: hardware-type operator: In values: - hardware-type-1 remediationAction: inform severity: low namespaceSelector: exclude: - kube-* include: - '*' evaluationInterval: compliant: 10m noncompliant: 10s policies: - name: group-du-sno-pgt-group-du-sno-cfg-policy policyAnnotations: ran.openshift.io/ztp-deploy-wave: "10" manifests: - path: source-crs/ClusterLogForwarder.yaml patches: - spec: outputs: '{{hub fromConfigMap "" "group-zones-configmap" (printf "%s-cluster-log-fwd-outputs" (index .ManagedClusterLabels "group-du-sno-zone")) | toLiteral hub}}' pipelines: '{{hub fromConfigMap "" "group-zones-configmap" (printf "%s-cluster-log-fwd-pipelines" (index .ManagedClusterLabels "group-du-sno-zone")) | toLiteral hub}}' - path: source-crs/PerformanceProfile-MCP-master.yaml patches: - metadata: name: openshift-node-performance-profile spec: additionalKernelArgs: - rcupdate.rcu_normal_after_boot=0 - vfio_pci.enable_sriov=1 - vfio_pci.disable_idle_d3=1 - efi=runtime cpu: isolated: '{{hub fromConfigMap "" "group-hardware-types-configmap" (printf "%s-cpu-isolated" (index .ManagedClusterLabels "hardware-type")) hub}}' reserved: '{{hub fromConfigMap "" "group-hardware-types-configmap" (printf "%s-cpu-reserved" (index .ManagedClusterLabels "hardware-type")) hub}}' hugepages: defaultHugepagesSize: '{{hub fromConfigMap "" "group-hardware-types-configmap" (printf "%s-hugepages-default" (index .ManagedClusterLabels "hardware-type")) hub}}' pages: - count: '{{hub fromConfigMap "" "group-hardware-types-configmap" (printf "%s-hugepages-count" (index .ManagedClusterLabels "hardware-type")) | toInt hub}}' size: '{{hub fromConfigMap "" "group-hardware-types-configmap" (printf "%s-hugepages-size" (index .ManagedClusterLabels "hardware-type")) hub}}' realTimeKernel: enabled: true - name: group-du-sno-pgt-group-du-sno-sriov-policy policyAnnotations: ran.openshift.io/ztp-deploy-wave: "100" manifests: - path: source-crs/SriovNetwork.yaml patches: - metadata: name: sriov-nw-du-fh spec: resourceName: du_fh vlan: '{{hub fromConfigMap "" "site-data-configmap" (printf "%s-sriov-network-vlan-1" .ManagedClusterName) | toInt hub}}' - path: source-crs/SriovNetworkNodePolicy-MCP-master.yaml patches: - metadata: name: sriov-nnp-du-fh spec: deviceType: netdevice isRdma: false nicSelector: pfNames: '{{hub fromConfigMap "" "group-hardware-types-configmap" (printf "%s-sriov-node-policy-pfNames-1" (index .ManagedClusterLabels "hardware-type")) | toLiteral hub}}' numVfs: 8 priority: 10 resourceName: du_fh - path: source-crs/SriovNetwork.yaml patches: - metadata: name: sriov-nw-du-mh spec: resourceName: du_mh vlan: '{{hub fromConfigMap "" "site-data-configmap" (printf "%s-sriov-network-vlan-2" .ManagedClusterName) | toInt hub}}' - path: source-crs/SriovNetworkNodePolicy-MCP-master.yaml patches: - metadata: name: sriov-nw-du-fh spec: deviceType: netdevice isRdma: false nicSelector: pfNames: '{{hub fromConfigMap "" "group-hardware-types-configmap" (printf "%s-sriov-node-policy-pfNames-2" (index .ManagedClusterLabels "hardware-type")) | toLiteral hub}}' numVfs: 8 priority: 10 
resourceName: du_fh Create a group PolicyGenTemplate CR. This example PolicyGenTemplate CR configures logging, VLAN IDs, NICs and Performance Profile for the clusters that match the labels listed under spec.bindingRules : apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: group-du-sno-pgt namespace: ztp-group spec: bindingRules: # These policies will correspond to all clusters with these labels group-du-sno-zone: "zone-1" hardware-type: "hardware-type-1" mcp: "master" sourceFiles: - fileName: ClusterLogForwarder.yaml # wave 10 policyName: "group-du-sno-cfg-policy" spec: outputs: '{{hub fromConfigMap "" "group-zones-configmap" (printf "%s-cluster-log-fwd-outputs" (index .ManagedClusterLabels "group-du-sno-zone")) | toLiteral hub}}' pipelines: '{{hub fromConfigMap "" "group-zones-configmap" (printf "%s-cluster-log-fwd-pipelines" (index .ManagedClusterLabels "group-du-sno-zone")) | toLiteral hub}}' - fileName: PerformanceProfile.yaml # wave 10 policyName: "group-du-sno-cfg-policy" metadata: name: openshift-node-performance-profile spec: additionalKernelArgs: - rcupdate.rcu_normal_after_boot=0 - vfio_pci.enable_sriov=1 - vfio_pci.disable_idle_d3=1 - efi=runtime cpu: isolated: '{{hub fromConfigMap "" "group-hardware-types-configmap" (printf "%s-cpu-isolated" (index .ManagedClusterLabels "hardware-type")) hub}}' reserved: '{{hub fromConfigMap "" "group-hardware-types-configmap" (printf "%s-cpu-reserved" (index .ManagedClusterLabels "hardware-type")) hub}}' hugepages: defaultHugepagesSize: '{{hub fromConfigMap "" "group-hardware-types-configmap" (printf "%s-hugepages-default" (index .ManagedClusterLabels "hardware-type")) hub}}' pages: - size: '{{hub fromConfigMap "" "group-hardware-types-configmap" (printf "%s-hugepages-size" (index .ManagedClusterLabels "hardware-type")) hub}}' count: '{{hub fromConfigMap "" "group-hardware-types-configmap" (printf "%s-hugepages-count" (index .ManagedClusterLabels "hardware-type")) | toInt hub}}' realTimeKernel: enabled: true - fileName: SriovNetwork.yaml # wave 100 policyName: "group-du-sno-sriov-policy" metadata: name: sriov-nw-du-fh spec: resourceName: du_fh vlan: '{{hub fromConfigMap "" "site-data-configmap" (printf "%s-sriov-network-vlan-1" .ManagedClusterName) | toInt hub}}' - fileName: SriovNetworkNodePolicy.yaml # wave 100 policyName: "group-du-sno-sriov-policy" metadata: name: sriov-nnp-du-fh spec: deviceType: netdevice isRdma: false nicSelector: pfNames: '{{hub fromConfigMap "" "group-hardware-types-configmap" (printf "%s-sriov-node-policy-pfNames-1" (index .ManagedClusterLabels "hardware-type")) | toLiteral hub}}' numVfs: 8 priority: 10 resourceName: du_fh - fileName: SriovNetwork.yaml # wave 100 policyName: "group-du-sno-sriov-policy" metadata: name: sriov-nw-du-mh spec: resourceName: du_mh vlan: '{{hub fromConfigMap "" "site-data-configmap" (printf "%s-sriov-network-vlan-2" .ManagedClusterName) | toInt hub}}' - fileName: SriovNetworkNodePolicy.yaml # wave 100 policyName: "group-du-sno-sriov-policy" metadata: name: sriov-nw-du-fh spec: deviceType: netdevice isRdma: false nicSelector: pfNames: '{{hub fromConfigMap "" "group-hardware-types-configmap" (printf "%s-sriov-node-policy-pfNames-2" (index .ManagedClusterLabels "hardware-type")) | toLiteral hub}}' numVfs: 8 priority: 10 resourceName: du_fh Note To retrieve site-specific configuration values, use the .ManagedClusterName field. This is a template context value set to the name of the target managed cluster. 
To retrieve group-specific configuration, use the .ManagedClusterLabels field. This is a template context value set to the value of the managed cluster's labels. Commit the site PolicyGenerator or PolicyGentemplate CR in Git and push to the Git repository that is monitored by the ArgoCD application. Note Subsequent changes to the referenced ConfigMap CR are not automatically synced to the applied policies. You need to manually sync the new ConfigMap changes to update existing PolicyGenerator CRs. See "Syncing new ConfigMap changes to existing PolicyGenerator or PolicyGenTemplate CRs". You can use the same PolicyGenerator or PolicyGentemplate CR for multiple clusters. If there is a configuration change, then the only modifications you need to make are to the ConfigMap objects that hold the configuration for each cluster and the labels of the managed clusters. 11.2. Syncing new ConfigMap changes to existing PolicyGenerator or PolicyGentemplate CRs Prerequisites You have installed the OpenShift CLI ( oc ). You have logged in to the hub cluster as a user with cluster-admin privileges. You have created a PolicyGenerator or PolicyGentemplate CR that pulls information from a ConfigMap CR using hub cluster templates. Procedure Update the contents of your ConfigMap CR, and apply the changes in the hub cluster. To sync the contents of the updated ConfigMap CR to the deployed policy, do either of the following: Option 1: Delete the existing policy. ArgoCD uses the PolicyGenerator or PolicyGentemplate CR to immediately recreate the deleted policy. For example, run the following command: USD oc delete policy <policy_name> -n <policy_namespace> Option 2: Apply a special annotation policy.open-cluster-management.io/trigger-update to the policy with a different value every time when you update the ConfigMap . For example: USD oc annotate policy <policy_name> -n <policy_namespace> policy.open-cluster-management.io/trigger-update="1" Note You must apply the updated policy for the changes to take effect. For more information, see Special annotation for reprocessing . Optional: If it exists, delete the ClusterGroupUpdate CR that contains the policy. For example: USD oc delete clustergroupupgrade <cgu_name> -n <cgu_namespace> Create a new ClusterGroupUpdate CR that includes the policy to apply with the updated ConfigMap changes. For example, add the following YAML to the file cgr-example.yaml : apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: <cgr_name> namespace: <policy_namespace> spec: managedPolicies: - <managed_policy> enable: true clusters: - <managed_cluster_1> - <managed_cluster_2> remediationStrategy: maxConcurrency: 2 timeout: 240 Apply the updated policy: USD oc apply -f cgr-example.yaml
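When hub template rendering does not produce the expected values, it can help to confirm that the labels and ConfigMap keys referenced by the templates actually exist on the hub cluster. The following is a minimal verification sketch using the example cluster name and namespace from this chapter; substitute your own names.
# Confirm the managed cluster carries the labels referenced by .ManagedClusterLabels
oc get managedcluster du-sno-1-zone-1 --show-labels
# Inspect the ConfigMap keys that the fromConfigMap template function reads
oc get configmap group-hardware-types-configmap -n ztp-group -o yaml
oc get configmap site-data-configmap -n ztp-group -o yaml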
[ "argocd.argoproj.io/sync-options: Replace=true", "apiVersion: v1 kind: ConfigMap metadata: name: group-hardware-types-configmap namespace: ztp-group annotations: argocd.argoproj.io/sync-options: Replace=true 1 data: # SriovNetworkNodePolicy.yaml hardware-type-1-sriov-node-policy-pfNames-1: \"[\\\"ens5f0\\\"]\" hardware-type-1-sriov-node-policy-pfNames-2: \"[\\\"ens7f0\\\"]\" # PerformanceProfile.yaml hardware-type-1-cpu-isolated: \"2-31,34-63\" hardware-type-1-cpu-reserved: \"0-1,32-33\" hardware-type-1-hugepages-default: \"1G\" hardware-type-1-hugepages-size: \"1G\" hardware-type-1-hugepages-count: \"32\"", "apiVersion: v1 kind: ConfigMap metadata: name: group-zones-configmap namespace: ztp-group data: # ClusterLogForwarder.yaml zone-1-cluster-log-fwd-outputs: \"[{\\\"type\\\":\\\"kafka\\\", \\\"name\\\":\\\"kafka-open\\\", \\\"url\\\":\\\"tcp://10.46.55.190:9092/test\\\"}]\" zone-1-cluster-log-fwd-pipelines: \"[{\\\"inputRefs\\\":[\\\"audit\\\", \\\"infrastructure\\\"], \\\"labels\\\": {\\\"label1\\\": \\\"test1\\\", \\\"label2\\\": \\\"test2\\\", \\\"label3\\\": \\\"test3\\\", \\\"label4\\\": \\\"test4\\\"}, \\\"name\\\": \\\"all-to-default\\\", \\\"outputRefs\\\": [\\\"kafka-open\\\"]}]\"", "apiVersion: v1 kind: ConfigMap metadata: name: site-data-configmap namespace: ztp-group data: # SriovNetwork.yaml du-sno-1-zone-1-sriov-network-vlan-1: \"140\" du-sno-1-zone-1-sriov-network-vlan-2: \"150\"", "oc patch managedclusters.cluster.open-cluster-management.io/du-sno-1-zone-1 --type merge -p '{\"metadata\":{\"labels\":{\"hardware-type\": \"hardware-type-1\", \"group-du-sno-zone\": \"zone-1\"}}}'", "--- apiVersion: policy.open-cluster-management.io/v1 kind: PolicyGenerator metadata: name: group-du-sno-pgt placementBindingDefaults: name: group-du-sno-pgt-placement-binding policyDefaults: placement: labelSelector: matchExpressions: - key: group-du-sno-zone operator: In values: - zone-1 - key: hardware-type operator: In values: - hardware-type-1 remediationAction: inform severity: low namespaceSelector: exclude: - kube-* include: - '*' evaluationInterval: compliant: 10m noncompliant: 10s policies: - name: group-du-sno-pgt-group-du-sno-cfg-policy policyAnnotations: ran.openshift.io/ztp-deploy-wave: \"10\" manifests: - path: source-crs/ClusterLogForwarder.yaml patches: - spec: outputs: '{{hub fromConfigMap \"\" \"group-zones-configmap\" (printf \"%s-cluster-log-fwd-outputs\" (index .ManagedClusterLabels \"group-du-sno-zone\")) | toLiteral hub}}' pipelines: '{{hub fromConfigMap \"\" \"group-zones-configmap\" (printf \"%s-cluster-log-fwd-pipelines\" (index .ManagedClusterLabels \"group-du-sno-zone\")) | toLiteral hub}}' - path: source-crs/PerformanceProfile-MCP-master.yaml patches: - metadata: name: openshift-node-performance-profile spec: additionalKernelArgs: - rcupdate.rcu_normal_after_boot=0 - vfio_pci.enable_sriov=1 - vfio_pci.disable_idle_d3=1 - efi=runtime cpu: isolated: '{{hub fromConfigMap \"\" \"group-hardware-types-configmap\" (printf \"%s-cpu-isolated\" (index .ManagedClusterLabels \"hardware-type\")) hub}}' reserved: '{{hub fromConfigMap \"\" \"group-hardware-types-configmap\" (printf \"%s-cpu-reserved\" (index .ManagedClusterLabels \"hardware-type\")) hub}}' hugepages: defaultHugepagesSize: '{{hub fromConfigMap \"\" \"group-hardware-types-configmap\" (printf \"%s-hugepages-default\" (index .ManagedClusterLabels \"hardware-type\")) hub}}' pages: - count: '{{hub fromConfigMap \"\" \"group-hardware-types-configmap\" (printf \"%s-hugepages-count\" (index .ManagedClusterLabels 
\"hardware-type\")) | toInt hub}}' size: '{{hub fromConfigMap \"\" \"group-hardware-types-configmap\" (printf \"%s-hugepages-size\" (index .ManagedClusterLabels \"hardware-type\")) hub}}' realTimeKernel: enabled: true - name: group-du-sno-pgt-group-du-sno-sriov-policy policyAnnotations: ran.openshift.io/ztp-deploy-wave: \"100\" manifests: - path: source-crs/SriovNetwork.yaml patches: - metadata: name: sriov-nw-du-fh spec: resourceName: du_fh vlan: '{{hub fromConfigMap \"\" \"site-data-configmap\" (printf \"%s-sriov-network-vlan-1\" .ManagedClusterName) | toInt hub}}' - path: source-crs/SriovNetworkNodePolicy-MCP-master.yaml patches: - metadata: name: sriov-nnp-du-fh spec: deviceType: netdevice isRdma: false nicSelector: pfNames: '{{hub fromConfigMap \"\" \"group-hardware-types-configmap\" (printf \"%s-sriov-node-policy-pfNames-1\" (index .ManagedClusterLabels \"hardware-type\")) | toLiteral hub}}' numVfs: 8 priority: 10 resourceName: du_fh - path: source-crs/SriovNetwork.yaml patches: - metadata: name: sriov-nw-du-mh spec: resourceName: du_mh vlan: '{{hub fromConfigMap \"\" \"site-data-configmap\" (printf \"%s-sriov-network-vlan-2\" .ManagedClusterName) | toInt hub}}' - path: source-crs/SriovNetworkNodePolicy-MCP-master.yaml patches: - metadata: name: sriov-nw-du-fh spec: deviceType: netdevice isRdma: false nicSelector: pfNames: '{{hub fromConfigMap \"\" \"group-hardware-types-configmap\" (printf \"%s-sriov-node-policy-pfNames-2\" (index .ManagedClusterLabels \"hardware-type\")) | toLiteral hub}}' numVfs: 8 priority: 10 resourceName: du_fh", "apiVersion: ran.openshift.io/v1 kind: PolicyGenTemplate metadata: name: group-du-sno-pgt namespace: ztp-group spec: bindingRules: # These policies will correspond to all clusters with these labels group-du-sno-zone: \"zone-1\" hardware-type: \"hardware-type-1\" mcp: \"master\" sourceFiles: - fileName: ClusterLogForwarder.yaml # wave 10 policyName: \"group-du-sno-cfg-policy\" spec: outputs: '{{hub fromConfigMap \"\" \"group-zones-configmap\" (printf \"%s-cluster-log-fwd-outputs\" (index .ManagedClusterLabels \"group-du-sno-zone\")) | toLiteral hub}}' pipelines: '{{hub fromConfigMap \"\" \"group-zones-configmap\" (printf \"%s-cluster-log-fwd-pipelines\" (index .ManagedClusterLabels \"group-du-sno-zone\")) | toLiteral hub}}' - fileName: PerformanceProfile.yaml # wave 10 policyName: \"group-du-sno-cfg-policy\" metadata: name: openshift-node-performance-profile spec: additionalKernelArgs: - rcupdate.rcu_normal_after_boot=0 - vfio_pci.enable_sriov=1 - vfio_pci.disable_idle_d3=1 - efi=runtime cpu: isolated: '{{hub fromConfigMap \"\" \"group-hardware-types-configmap\" (printf \"%s-cpu-isolated\" (index .ManagedClusterLabels \"hardware-type\")) hub}}' reserved: '{{hub fromConfigMap \"\" \"group-hardware-types-configmap\" (printf \"%s-cpu-reserved\" (index .ManagedClusterLabels \"hardware-type\")) hub}}' hugepages: defaultHugepagesSize: '{{hub fromConfigMap \"\" \"group-hardware-types-configmap\" (printf \"%s-hugepages-default\" (index .ManagedClusterLabels \"hardware-type\")) hub}}' pages: - size: '{{hub fromConfigMap \"\" \"group-hardware-types-configmap\" (printf \"%s-hugepages-size\" (index .ManagedClusterLabels \"hardware-type\")) hub}}' count: '{{hub fromConfigMap \"\" \"group-hardware-types-configmap\" (printf \"%s-hugepages-count\" (index .ManagedClusterLabels \"hardware-type\")) | toInt hub}}' realTimeKernel: enabled: true - fileName: SriovNetwork.yaml # wave 100 policyName: \"group-du-sno-sriov-policy\" metadata: name: sriov-nw-du-fh spec: 
resourceName: du_fh vlan: '{{hub fromConfigMap \"\" \"site-data-configmap\" (printf \"%s-sriov-network-vlan-1\" .ManagedClusterName) | toInt hub}}' - fileName: SriovNetworkNodePolicy.yaml # wave 100 policyName: \"group-du-sno-sriov-policy\" metadata: name: sriov-nnp-du-fh spec: deviceType: netdevice isRdma: false nicSelector: pfNames: '{{hub fromConfigMap \"\" \"group-hardware-types-configmap\" (printf \"%s-sriov-node-policy-pfNames-1\" (index .ManagedClusterLabels \"hardware-type\")) | toLiteral hub}}' numVfs: 8 priority: 10 resourceName: du_fh - fileName: SriovNetwork.yaml # wave 100 policyName: \"group-du-sno-sriov-policy\" metadata: name: sriov-nw-du-mh spec: resourceName: du_mh vlan: '{{hub fromConfigMap \"\" \"site-data-configmap\" (printf \"%s-sriov-network-vlan-2\" .ManagedClusterName) | toInt hub}}' - fileName: SriovNetworkNodePolicy.yaml # wave 100 policyName: \"group-du-sno-sriov-policy\" metadata: name: sriov-nw-du-fh spec: deviceType: netdevice isRdma: false nicSelector: pfNames: '{{hub fromConfigMap \"\" \"group-hardware-types-configmap\" (printf \"%s-sriov-node-policy-pfNames-2\" (index .ManagedClusterLabels \"hardware-type\")) | toLiteral hub}}' numVfs: 8 priority: 10 resourceName: du_fh", "oc delete policy <policy_name> -n <policy_namespace>", "oc annotate policy <policy_name> -n <policy_namespace> policy.open-cluster-management.io/trigger-update=\"1\"", "oc delete clustergroupupgrade <cgu_name> -n <cgu_namespace>", "apiVersion: ran.openshift.io/v1alpha1 kind: ClusterGroupUpgrade metadata: name: <cgr_name> namespace: <policy_namespace> spec: managedPolicies: - <managed_policy> enable: true clusters: - <managed_cluster_1> - <managed_cluster_2> remediationStrategy: maxConcurrency: 2 timeout: 240", "oc apply -f cgr-example.yaml" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/edge_computing/ztp-using-hub-cluster-templates-pgt
Chapter 27. Relax-and-Recover (ReaR)
Chapter 27. Relax-and-Recover (ReaR) When a software or hardware failure breaks the system, the system administrator faces three tasks to restore it to the fully functioning state on a new hardware environment: booting a rescue system on the new hardware replicating the original storage layout restoring user and system files Most backup software solves only the third problem. To solve the first and second problems, use Relax-and-Recover (ReaR) , a disaster recovery and system migration utility. Backup software creates backups. ReaR complements backup software by creating a rescue system . Booting the rescue system on a new hardware allows you to issue the rear recover command, which starts the recovery process. During this process, ReaR replicates the partition layout and filesystems, prompts for restoring user and system files from the backup created by backup software, and finally installs the boot loader. By default, the rescue system created by ReaR only restores the storage layout and the boot loader, but not the actual user and system files. This chapter describes how to use ReaR. 27.1. Basic ReaR Usage 27.1.1. Installing ReaR Install the rear package by running the following command as root: 27.1.2. Configuring ReaR ReaR is configured in the /etc/rear/local.conf file. Specify the rescue system configuration by adding these lines: Substitute output format with rescue system format, for example, ISO for an ISO disk image or USB for a bootable USB. Substitute output location with where it will be put, for example, file:///mnt/rescue_system/ for a local filesystem directory or sftp://backup:[email protected]/ for an SFTP directory. Example 27.1. Configuring Rescue System Format and Location To configure ReaR to output the rescue system as an ISO image into the /mnt/rescue_system/ directory, add these lines to the /etc/rear/local.conf file: See section "Rescue Image Configuration" of the rear(8) man page for a list of all options. ISO-specific Configuration Using the configuration in Example 27.1, "Configuring Rescue System Format and Location" results into two equivalent output files in two locations: /var/lib/rear/output/ - rear 's default output location /mnt/rescue_system/ HOSTNAME /rear-localhost.iso - output location specified in OUTPUT_URL However, usually you need only one ISO image. To make ReaR create an ISO image only in the directory specified by a user, add these lines to /etc/rear/local.conf : Substitute output location with the desired location for the output. 27.1.3. Creating a Rescue System The following example shows how to create a rescue system with verbose output: With the configuration from Example 27.1, "Configuring Rescue System Format and Location" , ReaR prints the above output. The last two lines confirm that the rescue system has been successfully created and copied to the configured backup location /mnt/rescue_system/ . Because the system's host name is rhel7 , the backup location now contains directory rhel7/ with the rescue system and auxiliary files: Transfer the rescue system to an external medium to not lose it in case of a disaster. 27.1.4. Scheduling ReaR The /etc/cron.d/rear crontab file provided by the rear package runs the rear mkrescue command automatically daily at 1:30 AM to schedule the Relax-and-Recover (ReaR) utility for regularly creating a rescue system. The command only creates a rescue system and not the backup of the data. You still need to schedule a periodic backup of data by yourself. 
For example: You can add another crontab that will schedule the rear mkbackuponly command. You can also change the existing crontab to run the rear mkbackup command instead of the default /usr/sbin/rear checklayout || /usr/sbin/rear mkrescue command. You can schedule an external backup, if an external backup method is in use. The details depend on the backup method that you are using in ReaR. For more details, see Integrating ReaR with Backup Software . Note The /etc/cron.d/rear crontab file provided in the rear package is deprecated because, by default, it is not sufficient to perform a backup. For details, see the corresponding Deprecated functionality shells and command line tools . 27.1.5. Performing a System Rescue To perform a restore or migration: Boot the rescue system on the new hardware. For example, burn the ISO image to a DVD and boot from the DVD. In the console interface, select the "Recover" option: Figure 27.1. Rescue system: menu You are taken to the prompt: Figure 27.2. Rescue system: prompt Warning Once you have started recovery in the next step, it probably cannot be undone and you may lose anything stored on the physical disks of the system. Run the rear recover command to perform the restore or migration. The rescue system then recreates the partition layout and filesystems: Figure 27.3. Rescue system: running "rear recover" Restore user and system files from the backup into the /mnt/local/ directory. Example 27.2. Restoring User and System Files In this example, the backup file is a tar archive created per instructions in Section 27.2.1.1, "Configuring the Internal Backup Method" . First, copy the archive from its storage, then unpack the files into /mnt/local/ , then delete the archive: The new storage has to have enough space both for the archive and the extracted files. Verify that the files have been restored: Figure 27.4. Rescue system: restoring user and system files from the backup Ensure that SELinux relabels the files on the next boot: Otherwise you may be unable to log in to the system, because the /etc/passwd file may have the incorrect SELinux context. Finish the recovery by entering exit . ReaR will then reinstall the boot loader. After that, reboot the system: Figure 27.5. Rescue system: finishing recovery Upon reboot, SELinux will relabel the whole filesystem. Then you will be able to log in to the recovered system. 27.2. Integrating ReaR with Backup Software The main purpose of ReaR is to produce a rescue system, but it can also be integrated with backup software. What integration means is different for the built-in, supported, and unsupported backup methods. 27.2.1. The Built-in Backup Method ReaR includes a built-in, or internal, backup method. This method is fully integrated with ReaR, which has these advantages: a rescue system and a full-system backup can be created using a single rear mkbackup command the rescue system restores files from the backup automatically As a result, ReaR can cover the whole process of creating both the rescue system and the full-system backup. 27.2.1.1. Configuring the Internal Backup Method To make ReaR use its internal backup method, add these lines to /etc/rear/local.conf : These lines configure ReaR to create an archive with a full-system backup using the tar command. Substitute backup location with one of the options from the "Backup Software Integration" section of the rear(8) man page. Make sure that the backup location has enough space. Example 27.3.
Adding tar Backups To expand the example in Section 27.1, "Basic ReaR Usage" , configure ReaR to also output a tar full-system backup into the /srv/backup/ directory: The internal backup method allows further configuration. To keep old backup archives when new ones are created, add this line: By default, ReaR creates a full backup on each run. To make the backups incremental, meaning that only the changed files are backed up on each run, add this line: This automatically sets NETFS_KEEP_OLD_BACKUP_COPY to y . To ensure that a full backup is done regularly in addition to incremental backups, add this line: Substitute "Day" with one of the "Mon", "Tue", "Wed", "Thu". "Fri", "Sat", "Sun". ReaR can also include both the rescue system and the backup in the ISO image. To achieve this, set the BACKUP_URL directive to iso:///backup/ : This is the simplest method of full-system backup, because the rescue system does not need the user to fetch the backup during recovery. However, it needs more storage. Also, single-ISO backups cannot be incremental. Example 27.4. Configuring Single-ISO Rescue System and Backups This configuration creates a rescue system and a backup file as a single ISO image and puts it into the /srv/backup/ directory: Note The ISO image might be large in this scenario. Therefore, Red Hat recommends creating only one ISO image, not two. For details, see the section called "ISO-specific Configuration" . To use rsync instead of tar , add this line: Note that incremental backups are only supported when using tar . 27.2.1.2. Creating a Backup Using the Internal Backup Method With BACKUP=NETFS set, ReaR can create either a rescue system, a backup file, or both. To create a rescue system only , run: To create a backup only , run: To create a rescue system and a backup , run: Note that triggering backup with ReaR is only possible if using the NETFS method. ReaR cannot trigger other backup methods. Note When restoring, the rescue system created with the BACKUP=NETFS setting expects the backup to be present before executing rear recover . Hence, once the rescue system boots, copy the backup file into the directory specified in BACKUP_URL , unless using a single ISO image. Only then run rear recover . To avoid recreating the rescue system unnecessarily, you can check whether storage layout has changed since the last rescue system was created using these commands: Non-zero status indicates a change in disk layout. Non-zero status is also returned if ReaR configuration has changed. Important The rear checklayout command does not check whether a rescue system is currently present in the output location, and can return 0 even if it is not there. So it does not guarantee that a rescue system is available, only that the layout has not changed since the last rescue system has been created. Example 27.5. Using rear checklayout To create a rescue system, but only if the layout has changed, use this command: 27.2.2. Supported Backup Methods In addition to the NETFS internal backup method, ReaR supports several external backup methods. This means that the rescue system restores files from the backup automatically, but the backup creation cannot be triggered using ReaR. For a list and configuration options of the supported external backup methods, see the "Backup Software Integration" section of the rear(8) man page. 27.2.3. Unsupported Backup Methods With unsupported backup methods, there are two options: The rescue system prompts the user to manually restore the files. 
This scenario is the one described in "Basic ReaR Usage", except for the backup file format, which may take a different form than a tar archive. ReaR executes the custom commands provided by the user. To configure this, set the BACKUP directive to EXTERNAL . Then specify the commands to be run during backing up and restoration using the EXTERNAL_BACKUP and EXTERNAL_RESTORE directives. Optionally, also specify the EXTERNAL_IGNORE_ERRORS and EXTERNAL_CHECK directives. See /usr/share/rear/conf/default.conf for an example configuration. 27.2.4. Creating Multiple Backups With the version 2.00, ReaR supports creation of multiple backups. Backup methods that support this feature are: BACKUP=NETFS (internal method) BACKUP=BORG (external method) You can specify individual backups with the -C option of the rear command. The argument is a basename of the additional backup configuration file in the /etc/rear/ directory. The method, destination, and the options for each specific backup are defined in the specific configuration file, not in the main configuration file. To perform the basic recovery of the system: Basic recovery of the system Create the ReaR recovery system ISO image together with a backup of the files of the basic system: Back the files up in the /home directories: Note that the specified configuration file should contain the directories needed for a basic recovery of the system, such as /boot , /root , and /usr . Recovery of the system in the rear recovery shell To recover the system in the rear recovery shell, use the following sequence of commands:
[ "~]# yum install rear", "OUTPUT= output format OUTPUT_URL= output location", "OUTPUT=ISO OUTPUT_URL=file:///mnt/rescue_system/", "OUTPUT=ISO BACKUP=NETFS OUTPUT_URL=null BACKUP_URL=\"iso:///backup\" ISO_DIR=\" output location \"", "~]# rear -v mkrescue Relax-and-Recover 1.17.2 / Git Using log file: /var/log/rear/rear-rhel7.log mkdir: created directory '/var/lib/rear/output' Creating disk layout Creating root filesystem layout TIP: To login as root via ssh you need to set up /root/.ssh/authorized_keys or SSH_ROOT_PASSWORD in your configuration file Copying files and directories Copying binaries and libraries Copying kernel modules Creating initramfs Making ISO image Wrote ISO image: /var/lib/rear/output/rear-rhel7.iso (124M) Copying resulting files to file location", "~]# ls -lh /mnt/rescue_system/rhel7/ total 124M -rw-------. 1 root root 202 Jun 10 15:27 README -rw-------. 1 root root 166K Jun 10 15:27 rear.log -rw-------. 1 root root 124M Jun 10 15:27 rear-rhel7.iso -rw-------. 1 root root 274 Jun 10 15:27 VERSION", "~]# scp [email protected]:/srv/backup/rhel7/backup.tar.gz /mnt/local/ ~]# tar xf /mnt/local/backup.tar.gz -C /mnt/local/ ~]# rm -f /mnt/local/backup.tar.gz", "~]# ls /mnt/local/", "~]# touch /mnt/local/.autorelabel", "BACKUP=NETFS BACKUP_URL= backup location", "OUTPUT=ISO OUTPUT_URL=file:///mnt/rescue_system/ BACKUP=NETFS BACKUP_URL=file:///srv/backup/", "NETFS_KEEP_OLD_BACKUP_COPY=y", "BACKUP_TYPE=incremental", "FULLBACKUPDAY= \"Day\"", "BACKUP_URL=iso:///backup/", "OUTPUT=ISO OUTPUT_URL=file:///srv/backup/ BACKUP=NETFS BACKUP_URL=iso:///backup/", "BACKUP_PROG=rsync", "rear mkrescue", "rear mkbackuponly", "rear mkbackup", "~]# rear checklayout ~]# echo USD?", "~]# rear checklayout || rear mkrescue", "~]# rear -C basic_system mkbackup", "~]# rear -C home_backup mkbackuponly", "~]# rear -C basic_system recover", "~]# rear -C home_backup restoreonly" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/system_administrators_guide/ch-Relax-and-Recover_ReaR
2.2. Indexing Modes
2.2. Indexing Modes 2.2.1. Managing Indexes In Red Hat JBoss Data Grid's Query Module there are two options for storing indexes: Each node can maintain an individual copy of the global index. The index can be shared across all nodes. When the indexes are stored locally, by setting indexLocalOnly to true , each write to cache must be forwarded to all other nodes so that they can update their indexes. If the index is shared, by setting indexLocalOnly to false , only the node where the write originates is required to update the shared index. Lucene provides an abstraction of the directory structure called directory provider , which is used to store the index. The index can be stored, for example, as in-memory, on filesystem, or in distributed cache. 2.2.2. Managing the Index in Local Mode In local mode, any Lucene Directory implementation may be used. The indexLocalOnly option is meaningless in local mode. 2.2.3. Managing the Index in Replicated Mode In replication mode, each node can store its own local copy of the index. To store indexes locally on each node, set indexLocalOnly to false , so that each node will apply the required updates it receives from other nodes in addition to the updates started locally. Any Directory implementation can be used. When a new node is started it must receive an up to date copy of the index. Usually this can be done via resync, however being an external operation, this may result in a slightly out of sync index, particularly where updates are frequent. Alternatively, if a shared storage for indexes is used (see Section 2.3.3, "Infinispan Directory Provider" ), indexLocalOnly must be set to true so that each node will only apply the changes originated locally. While there is no risk of having an out of sync index, this causes contention on the node used for updating the index. The following diagram demonstrates a replicated deployment where each node has a local index. Figure 2.1. Replicated Cache Querying 2.2.4. Managing the Index in Distribution Mode In both Distribution modes, the shared index must be used, with the indexLocalOnly set to true . The following diagram shows a deployment with a shared index. Figure 2.2. Querying with a Shared Index 2.2.5. Managing the Index in Invalidation Mode Indexing and searching of elements in Invalidation mode is not supported.
null
https://docs.redhat.com/en/documentation/red_hat_data_grid/6.6/html/infinispan_query_guide/sect-indexing_modes
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/17.1/html/deploying_a_distributed_compute_node_dcn_architecture/making-open-source-more-inclusive
Chapter 1. Preparing to install on Azure Stack Hub
Chapter 1. Preparing to install on Azure Stack Hub 1.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You have installed Azure Stack Hub version 2008 or later. 1.2. Requirements for installing OpenShift Container Platform on Azure Stack Hub Before installing OpenShift Container Platform on Microsoft Azure Stack Hub, you must configure an Azure account. See Configuring an Azure Stack Hub account for details about account configuration, account limits, DNS zone configuration, required roles, and creating service principals. 1.3. Choosing a method to install OpenShift Container Platform on Azure Stack Hub You can install OpenShift Container Platform on installer-provisioned or user-provisioned infrastructure. The default installation type uses installer-provisioned infrastructure, where the installation program provisions the underlying infrastructure for the cluster. You can also install OpenShift Container Platform on infrastructure that you provision. If you do not use infrastructure that the installation program provisions, you must manage and maintain the cluster resources yourself. See Installation process for more information about installer-provisioned and user-provisioned installation processes. 1.3.1. Installing a cluster on installer-provisioned infrastructure You can install a cluster on Azure Stack Hub infrastructure that is provisioned by the OpenShift Container Platform installation program, by using the following method: Installing a cluster on Azure Stack Hub with an installer-provisioned infrastructure : You can install OpenShift Container Platform on Azure Stack Hub infrastructure that is provisioned by the OpenShift Container Platform installation program. 1.3.2. Installing a cluster on user-provisioned infrastructure You can install a cluster on Azure Stack Hub infrastructure that you provision, by using the following method: Installing a cluster on Azure Stack Hub using ARM templates : You can install OpenShift Container Platform on Azure Stack Hub by using infrastructure that you provide. You can use the provided Azure Resource Manager (ARM) templates to assist with an installation. 1.4. Next steps Configuring an Azure Stack Hub account
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/installing_on_azure_stack_hub/preparing-to-install-on-azure-stack-hub
40.4. Saving Data
40.4. Saving Data Sometimes it is useful to save samples at a specific time. For example, when profiling an executable, it may be useful to gather different samples based on different input data sets. If the number of events to be monitored exceeds the number of counters available for the processor, multiple runs of OProfile can be used to collect data, saving the sample data to different files each time. To save the current set of sample files, execute the following command, replacing <name> with a unique descriptive name for the current session. The directory /var/lib/oprofile/samples/ name / is created and the current sample files are copied to it.
[ "opcontrol --save= <name>" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/system_administration_guide/oprofile-saving_data
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code, documentation, and web properties. We are beginning with these four terms: master, slave, blacklist, and whitelist. Because of the enormity of this endeavor, these changes will be implemented gradually over several upcoming releases. For more details, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_openshift_data_foundation/4.13/html/deploying_openshift_data_foundation_using_amazon_web_services/making-open-source-more-inclusive
Chapter 108. AclRuleTopicResource schema reference
Chapter 108. AclRuleTopicResource schema reference Used in: AclRule The type property is a discriminator that distinguishes use of the AclRuleTopicResource type from AclRuleGroupResource , AclRuleClusterResource , AclRuleTransactionalIdResource . It must have the value topic for the type AclRuleTopicResource . Property Description type Must be topic . string name Name of resource for which given ACL rule applies. Can be combined with patternType field to use prefix pattern. string patternType Describes the pattern used in the resource field. The supported types are literal and prefix . With literal pattern type, the resource field will be used as a definition of a full topic name. With prefix pattern type, the resource name will be used only as a prefix. Default value is literal . string (one of [prefix, literal])
null
https://docs.redhat.com/en/documentation/red_hat_streams_for_apache_kafka/2.5/html/amq_streams_api_reference/type-aclruletopicresource-reference
3.9. Renaming Client Machines
3.9. Renaming Client Machines This section explains how to rename an IdM client. The process involves: the section called "Identifying Current Service and Keytab Configuration" . the section called "Removing the Client Machine from the IdM Domain" . the section called "Re-enrolling the Client with a New Host Name" . Warning Renaming a client is a manual procedure. Red Hat does not recommend it unless changing the host name is absolutely required. Identifying Current Service and Keytab Configuration Before uninstalling the current client, make note of certain settings for the client. You will apply this configuration after re-enrolling the machine with a new host name. Identify which services are running on the machine: Use the ipa service-find command, and identify services with certificates in the output: In addition, each host has a default host service which does not appear in the ipa service-find output. The service principal for the host service, also called a host principal , is host/client.example.com . Identify all host groups to which the machine belongs. For all service principals displayed by ipa service-find client.example.com , determine the location of the corresponding keytabs on client.example.com . Each service on the client system has a Kerberos principal in the form service_name/hostname@REALM , such as ldap/[email protected] . Removing the Client Machine from the IdM Domain Unenroll the client machine from the IdM domain. See Section 3.7, "Uninstalling a Client" . For each identified keytab other than /etc/krb5.keytab , remove the old principals: See Section 29.4, "Removing Keytabs" . On an IdM server, remove the host entry. This removes all services and revokes all certificates issued for that host: At this point, the host is completely removed from IdM. Re-enrolling the Client with a New Host Name Rename the machine as required. Re-enroll the machine as an IdM client. See Section 3.8, "Re-enrolling a Client into the IdM Domain" . On an IdM server, add a new keytab for every service identified in the section called "Identifying Current Service and Keytab Configuration" . Generate certificates for services that had a certificate assigned in the section called "Identifying Current Service and Keytab Configuration" . You can do this: Using the IdM administration tools. See Chapter 24, Managing Certificates for Users, Hosts, and Services . Using the certmonger utility. See Working with certmonger in the System-Level Authentication Guide or the certmonger (8) man page. Re-add the client to the host groups identified in the section called "Identifying Current Service and Keytab Configuration" . See Section 13.3, "Adding and Removing User or Host Group Members" .
[ "ipa service-find client.example.com", "ipa hostgroup-find client.example.com", "ipa-rmkeytab -k /path/to/keytab -r EXAMPLE.COM", "ipa host-del client.example.com", "ipa service-add service_name/new_host_name" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/linux_domain_identity_authentication_and_policy_guide/client-renaming-machines
Chapter 3. Mirroring images for a disconnected installation
Chapter 3. Mirroring images for a disconnected installation You can use the procedures in this section to ensure your clusters only use container images that satisfy your organizational controls on external content. Before you install a cluster on infrastructure that you provision in a restricted network, you must mirror the required container images into that environment. To mirror container images, you must have a registry for mirroring. Important You must have access to the internet to obtain the necessary container images. In this procedure, you place your mirror registry on a mirror host that has access to both your network and the Internet. If you do not have access to a mirror host, use the Mirroring Operator catalogs for use with disconnected clusters procedure to copy images to a device you can move across network boundaries with. 3.1. Prerequisites You must have a container image registry that supports Docker v2-2 in the location that will host the OpenShift Container Platform cluster, such as one of the following registries: Red Hat Quay JFrog Artifactory Sonatype Nexus Repository Harbor If you have an entitlement to Red Hat Quay, see the documentation on deploying Red Hat Quay for proof-of-concept purposes or by using the Quay Operator . If you need additional assistance selecting and installing a registry, contact your sales representative or Red Hat support. If you do not already have an existing solution for a container image registry, subscribers of OpenShift Container Platform are provided a mirror registry for Red Hat OpenShift . The mirror registry for Red Hat OpenShift is included with your subscription and is a small-scale container registry that can be used to mirror the required container images of OpenShift Container Platform in disconnected installations. 3.2. About the mirror registry You can mirror the images that are required for OpenShift Container Platform installation and subsequent product updates to a container mirror registry such as Red Hat Quay, JFrog Artifactory, Sonatype Nexus Repository, or Harbor. If you do not have access to a large-scale container registry, you can use the mirror registry for Red Hat OpenShift , a small-scale container registry included with OpenShift Container Platform subscriptions. You can use any container registry that supports Docker v2-2 , such as Red Hat Quay, the mirror registry for Red Hat OpenShift , Artifactory, Sonatype Nexus Repository, or Harbor. Regardless of your chosen registry, the procedure to mirror content from Red Hat hosted sites on the internet to an isolated image registry is the same. After you mirror the content, you configure each cluster to retrieve this content from your mirror registry. Important The internal registry of the OpenShift Container Platform cluster cannot be used as the target registry because it does not support pushing without a tag, which is required during the mirroring process. If choosing a container registry that is not the mirror registry for Red Hat OpenShift , it must be reachable by every machine in the clusters that you provision. If the registry is unreachable, installation, updating, or normal operations such as workload relocation might fail. For that reason, you must run mirror registries in a highly available way, and the mirror registries must at least match the production availability of your OpenShift Container Platform clusters. When you populate your mirror registry with OpenShift Container Platform images, you can follow two scenarios. 
If you have a host that can access both the internet and your mirror registry, but not your cluster nodes, you can directly mirror the content from that machine. This process is referred to as connected mirroring . If you have no such host, you must mirror the images to a file system and then bring that host or removable media into your restricted environment. This process is referred to as disconnected mirroring . For mirrored registries, to view the source of pulled images, you must review the Trying to access log entry in the CRI-O logs. Other methods to view the image pull source, such as using the crictl images command on a node, show the non-mirrored image name, even though the image is pulled from the mirrored location. Note Red Hat does not test third party registries with OpenShift Container Platform. Additional information For information on viewing the CRI-O logs to view the image source, see Viewing the image pull source . 3.3. Preparing your mirror host Before you perform the mirror procedure, you must prepare the host to retrieve content and push it to the remote location. 3.3.1. Installing the OpenShift CLI by downloading the binary You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.9. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.9 Linux Client entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.9 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version in the Version drop-down menu. Click Download Now to the OpenShift v4.9 MacOSX Client entry and save the file. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH After you install the OpenShift CLI, it is available using the oc command: USD oc <command> 3.4. 
Configuring credentials that allow images to be mirrored Create a container image registry credentials file that allows mirroring images from Red Hat to your mirror. Warning Do not use this image registry credentials file as the pull secret when you install a cluster. If you provide this file when you install cluster, all of the machines in the cluster will have write access to your mirror registry. Warning This process requires that you have write access to a container image registry on the mirror registry and adds the credentials to a registry pull secret. Prerequisites You configured a mirror registry to use in your disconnected environment. You identified an image repository location on your mirror registry to mirror images into. You provisioned a mirror registry account that allows images to be uploaded to that image repository. Procedure Complete the following steps on the installation host: Download your registry.redhat.io pull secret from the Red Hat OpenShift Cluster Manager and save it to a .json file. Generate the base64-encoded user name and password or token for your mirror registry: USD echo -n '<user_name>:<password>' | base64 -w0 1 BGVtbYk3ZHAtqXs= 1 For <user_name> and <password> , specify the user name and password that you configured for your registry. Make a copy of your pull secret in JSON format: USD cat ./pull-secret.text | jq . > <path>/<pull_secret_file_in_json> 1 1 Specify the path to the folder to store the pull secret in and a name for the JSON file that you create. Save the file either as ~/.docker/config.json or USDXDG_RUNTIME_DIR/containers/auth.json . The contents of the file resemble the following example: { "auths": { "cloud.openshift.com": { "auth": "b3BlbnNo...", "email": "[email protected]" }, "quay.io": { "auth": "b3BlbnNo...", "email": "[email protected]" }, "registry.connect.redhat.com": { "auth": "NTE3Njg5Nj...", "email": "[email protected]" }, "registry.redhat.io": { "auth": "NTE3Njg5Nj...", "email": "[email protected]" } } } Edit the new file and add a section that describes your registry to it: "auths": { "<mirror_registry>": { 1 "auth": "<credentials>", 2 "email": "[email protected]" } }, 1 For <mirror_registry> , specify the registry domain name, and optionally the port, that your mirror registry uses to serve content. For example, registry.example.com or registry.example.com:8443 2 For <credentials> , specify the base64-encoded user name and password for the mirror registry. The file resembles the following example: { "auths": { "registry.example.com": { "auth": "BGVtbYk3ZHAtqXs=", "email": "[email protected]" }, "cloud.openshift.com": { "auth": "b3BlbnNo...", "email": "[email protected]" }, "quay.io": { "auth": "b3BlbnNo...", "email": "[email protected]" }, "registry.connect.redhat.com": { "auth": "NTE3Njg5Nj...", "email": "[email protected]" }, "registry.redhat.io": { "auth": "NTE3Njg5Nj...", "email": "[email protected]" } } } 3.5. Mirror registry for Red Hat OpenShift The mirror registry for Red Hat OpenShift is a small and streamlined container registry that you can use as a target for mirroring the required container images of OpenShift Container Platform for disconnected installations. If you already have a container image registry, such as Red Hat Quay, you can skip these steps and go straight to Mirroring the OpenShift Container Platform image repository . Prerequisites An OpenShift Container Platform subscription. Red Hat Enterprise Linux (RHEL) 8 with Podman 3.3 and OpenSSL installed. 
Fully qualified domain name for the Red Hat Quay service, which must resolve through a DNS server. Passwordless sudo access on the target host. Key-based SSH connectivity on the target host. SSH keys are automatically generated for local installs. For remote hosts, you must generate your own SSH keys. 2 or more vCPUs. 8 GB of RAM. About 9.6 GB for OpenShift Container Platform 4.9 Release images, or about 444 GB for OpenShift Container Platform 4.9 Release images and OpenShift Container Platform 4.9 Red Hat Operator images. Up to 1 TB per stream or more is suggested. Important These requirements are based on local testing results with only Release images and Operator images tested. Storage requirements can vary based on your organization's needs. Some users might require more space, for example, when they mirror multiple z-streams. You can use standard Red Hat Quay functionality to remove unnecessary images and free up space. 3.5.1. Mirror registry for Red Hat OpenShift introduction For disconnected deployments of OpenShift Container Platform, a container registry is required to carry out the installation of the clusters. To run a production-grade registry service on such a cluster, you must create a separate registry deployment to install the first cluster. The mirror registry for Red Hat OpenShift addresses this need and is included in every OpenShift subscription. It is available for download on the OpenShift console Downloads page. The mirror registry for Red Hat OpenShift allows users to install a small-scale version of Red Hat Quay and its required components using the mirror-registry command line interface (CLI) tool. The mirror registry for Red Hat OpenShift is deployed automatically with pre-configured local storage and a local database. It also includes auto-generated user credentials and access permissions with a single set of inputs and no additional configuration choices to get started. The mirror registry for Red Hat OpenShift provides a pre-determined network configuration and reports deployed component credentials and access URLs upon success. A limited set of optional configuration inputs like fully qualified domain name (FQDN) services, superuser name and password, and custom TLS certificates are also provided. This provides users with a container registry so that they can easily create an offline mirror of all OpenShift Container Platform release content when running OpenShift Container Platform in restricted network environments. The mirror registry for Red Hat OpenShift is limited to hosting images that are required to install a disconnected OpenShift Container Platform cluster, such as Release images or Red Hat Operator images. It uses local storage on your Red Hat Enterprise Linux (RHEL) machine, and storage supported by RHEL is supported by the mirror registry for Red Hat OpenShift . Content built by customers should not be hosted by the mirror registry for Red Hat OpenShift . Unlike Red Hat Quay, the mirror registry for Red Hat OpenShift is not a highly-available registry and only local file system storage is supported. Using the mirror registry for Red Hat OpenShift with more than one cluster is discouraged, because multiple clusters can create a single point of failure when updating your cluster fleet. It is advised to leverage the mirror registry for Red Hat OpenShift to install a cluster that can host a production-grade, highly-available registry such as Red Hat Quay, which can serve OpenShift Container Platform content to other clusters. 
Use of the mirror registry for Red Hat OpenShift is optional if another container registry is already available in the install environment. 3.5.2. Mirroring on a local host with mirror registry for Red Hat OpenShift This procedure explains how to install the mirror registry for Red Hat OpenShift on a local host using the mirror-registry installer tool. By doing so, users can create a local host registry running on port 443 for the purpose of storing a mirror of OpenShift Container Platform images. Note Installing the mirror registry for Red Hat OpenShift using the mirror-registry CLI tool makes several changes to your machine. After installation, a /etc/quay-install directory is created, which has installation files, local storage, and the configuration bundle. Trusted SSH keys are generated in case the deployment target is the local host, and systemd files on the host machine are set up to ensure that container runtimes are persistent. Additionally, an initial user named init is created with an automatically generated password. All access credentials are printed at the end of the install routine. Procedure Download the mirror-registry.tar.gz package for the latest version of the mirror registry for Red Hat OpenShift found on the OpenShift console Downloads page. Install the mirror registry for Red Hat OpenShift on your local host with your current user account by using the mirror-registry tool. For a full list of available flags, see "mirror registry for Red Hat OpenShift flags". USD sudo ./mirror-registry install \ --quayHostname <host_example_com> \ --quayRoot <example_directory_name> Use the user name and password generated during installation to log into the registry by running the following command: USD podman login --authfile pull-secret.txt \ -u init \ -p <password> \ <host_example_com>:8443> \ --tls-verify=false 1 1 You can avoid running --tls-verify=false by configuring your system to trust the generated rootCA certificates. See "Using SSL to protect connections to Red Hat Quay" and "Configuring the system to trust the certificate authority" for more information. Note You can also log in by accessing the UI at https://<host.example.com>:8443 after installation. You can mirror OpenShift Container Platform images after logging in. Depending on your needs, see either the "Mirroring the OpenShift Container Platform image repository" or the "Mirroring Operator catalogs for use with disconnected clusters" sections of this document. Note If there are issues with images stored by the mirror registry for Red Hat OpenShift due to storage layer problems, you can remirror the OpenShift Container Platform images, or reinstall mirror registry on more stable storage. 3.5.3. Mirroring on a remote host with mirror registry for Red Hat OpenShift This procedure explains how to install the mirror registry for Red Hat OpenShift on a remote host using the mirror-registry tool. By doing so, users can create a registry to hold a mirror of OpenShift Container Platform images. Note Installing the mirror registry for Red Hat OpenShift using the mirror-registry CLI tool makes several changes to your machine. After installation, a /etc/quay-install directory is created, which has installation files, local storage, and the configuration bundle. Trusted SSH keys are generated in case the deployment target is the local host, and systemd files on the host machine are set up to ensure that container runtimes are persistent. 
Additionally, an initial user named init is created with an automatically generated password. All access credentials are printed at the end of the install routine. Procedure Download the mirror-registry.tar.gz package for the latest version of the mirror registry for Red Hat OpenShift found on the OpenShift console Downloads page. Install the mirror registry for Red Hat OpenShift on your local host with your current user account by using the mirror-registry tool. For a full list of available flags, see "mirror registry for Red Hat OpenShift flags". USD sudo ./mirror-registry install -v \ --targetHostname <host_example_com> \ --targetUsername <example_user> \ -k ~/.ssh/my_ssh_key \ --quayHostname <host_example_com> \ --quayRoot <example_directory_name> Use the user name and password generated during installation to log into the mirror registry by running the following command: USD podman login --authfile pull-secret.txt \ -u init \ -p <password> \ <host_example_com>:8443> \ --tls-verify=false 1 1 You can avoid running --tls-verify=false by configuring your system to trust the generated rootCA certificates. See "Using SSL to protect connections to Red Hat Quay" and "Configuring the system to trust the certificate authority" for more information. Note You can also log in by accessing the UI at https://<host.example.com>:8443 after installation. You can mirror OpenShift Container Platform images after logging in. Depending on your needs, see either the "Mirroring the OpenShift Container Platform image repository" or the "Mirroring Operator catalogs for use with disconnected clusters" sections of this document. Note If there are issues with images stored by the mirror registry for Red Hat OpenShift due to storage layer problems, you can remirror the OpenShift Container Platform images, or reinstall mirror registry on more stable storage. 3.6. Upgrading the mirror registry for Red Hat OpenShift You can upgrade the mirror registry for Red Hat OpenShift from your local host by running the following command: USD sudo ./mirror-registry upgrade Note Users who upgrade the mirror registry for Red Hat OpenShift with the ./mirror-registry upgrade flag must include the same credentials used when creating their mirror registry. For example, if you installed the mirror registry for Red Hat OpenShift with --quayHostname <host_example_com> and --quayRoot <example_directory_name> , you must include that string to properly upgrade the mirror registry. 3.6.1. Uninstalling the mirror registry for Red Hat OpenShift You can uninstall the mirror registry for Red Hat OpenShift from your local host by running the following command: USD sudo ./mirror-registry uninstall -v \ --quayRoot <example_directory_name> Note Deleting the mirror registry for Red Hat OpenShift will prompt the user before deletion. You can use --autoApprove to skip this prompt. Users who install the mirror registry for Red Hat OpenShift with the --quayRoot flag must include the --quayRoot flag when uninstalling. For example, if you installed the mirror registry for Red Hat OpenShift with --quayRoot example_directory_name , you must include that string to properly uninstall the mirror registry. 3.6.2. Mirror registry for Red Hat OpenShift flags The following flags are available for the mirror registry for Red Hat OpenShift : Flags Description --autoApprove A boolean value that disables interactive prompts. If set to true , the quayRoot directory is automatically deleted when uninstalling the mirror registry. Defaults to false if left unspecified. 
--initPassword The password of the init user created during Quay installation. Must be at least eight characters and contain no whitespace. --initUser string Shows the username of the initial user. Defaults to init if left unspecified. --quayHostname The fully-qualified domain name of the mirror registry that clients will use to contact the registry. Equivalent to SERVER_HOSTNAME in the Quay config.yaml . Must resolve by DNS. Defaults to <targetHostname>:8443 if left unspecified. [1] --quayRoot , -r The directory where container image layer and configuration data is saved, including rootCA.key , rootCA.pem , and rootCA.srl certificates. Requires about 9.6 GB for OpenShift Container Platform 4.9 Release images, or about 444 GB for OpenShift Container Platform 4.9 Release images and OpenShift Container Platform 4.9 Red Hat Operator images. Defaults to /etc/quay-install if left unspecified. --ssh-key , -k The path of your SSH identity key. Defaults to ~/.ssh/quay_installer if left unspecified. --sslCert The path to the SSL/TLS public key / certificate. Defaults to {quayRoot}/quay-config and is auto-generated if left unspecified. --sslCheckSkip Skips the check for the certificate hostname against the SERVER_HOSTNAME in the config.yaml file. [2] --sslKey The path to the SSL/TLS private key used for HTTPS communication. Defaults to {quayRoot}/quay-config and is auto-generated if left unspecified. --targetHostname , -H The hostname of the target you want to install Quay to. Defaults to USDHOST , for example, a local host, if left unspecified. --targetUsername , -u The user on the target host which will be used for SSH. Defaults to USDUSER , for example, the current user if left unspecified. --verbose , -v Shows debug logs and Ansible playbook outputs. --version Shows the version for the mirror registry for Red Hat OpenShift . --quayHostname must be modified if the public DNS name of your system is different from the local hostname. --sslCheckSkip is used in cases when the mirror registry is set behind a proxy and the exposed hostname is different from the internal Quay hostname. It can also be used when users do not want the certificates to be validated against the provided Quay hostname during installation. Additional resources Using SSL to protect connections to Red Hat Quay Configuring the system to trust the certificate authority Mirroring the OpenShift Container Platform image repository Mirroring Operator catalogs for use with disconnected clusters 3.7. Mirroring the OpenShift Container Platform image repository Mirror the OpenShift Container Platform image repository to your registry to use during cluster installation or upgrade. Prerequisites Your mirror host has access to the internet. You configured a mirror registry to use in your restricted network and can access the certificate and credentials that you configured. You downloaded the pull secret from the Red Hat OpenShift Cluster Manager and modified it to include authentication to your mirror repository. If you use self-signed certificates that do not set a Subject Alternative Name, you must precede the oc commands in this procedure with GODEBUG=x509ignoreCN=0 . 
If you do not set this variable, the oc commands will fail with the following error: x509: certificate relies on legacy Common Name field, use SANs or temporarily enable Common Name matching with GODEBUG=x509ignoreCN=0 Procedure Complete the following steps on the mirror host: Review the OpenShift Container Platform downloads page to determine the version of OpenShift Container Platform that you want to install and determine the corresponding tag on the Repository Tags page. Set the required environment variables: Export the release version: USD OCP_RELEASE=<release_version> For <release_version> , specify the tag that corresponds to the version of OpenShift Container Platform to install, such as 4.5.4 . Export the local registry name and host port: USD LOCAL_REGISTRY='<local_registry_host_name>:<local_registry_host_port>' For <local_registry_host_name> , specify the registry domain name for your mirror repository, and for <local_registry_host_port> , specify the port that it serves content on. Export the local repository name: USD LOCAL_REPOSITORY='<local_repository_name>' For <local_repository_name> , specify the name of the repository to create in your registry, such as ocp4/openshift4 . Export the name of the repository to mirror: USD PRODUCT_REPO='openshift-release-dev' For a production release, you must specify openshift-release-dev . Export the path to your registry pull secret: USD LOCAL_SECRET_JSON='<path_to_pull_secret>' For <path_to_pull_secret> , specify the absolute path to and file name of the pull secret for your mirror registry that you created. Export the release mirror: USD RELEASE_NAME="ocp-release" For a production release, you must specify ocp-release . Export the type of architecture for your server, such as x86_64 : USD ARCHITECTURE=<server_architecture> Export the path to the directory to host the mirrored images: USD REMOVABLE_MEDIA_PATH=<path> 1 1 Specify the full path, including the initial forward slash (/) character. Mirror the version images to the mirror registry: If your mirror host does not have internet access, take the following actions: Connect the removable media to a system that is connected to the internet. Review the images and configuration manifests to mirror: USD oc adm release mirror -a USD{LOCAL_SECRET_JSON} \ --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} \ --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} \ --to-release-image=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --dry-run Record the entire imageContentSources section from the output of the command. The information about your mirrors is unique to your mirrored repository, and you must add the imageContentSources section to the install-config.yaml file during installation. Mirror the images to a directory on the removable media: USD oc adm release mirror -a USD{LOCAL_SECRET_JSON} --to-dir=USD{REMOVABLE_MEDIA_PATH}/mirror quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} Take the media to the restricted network environment and upload the images to the local container registry. USD oc image mirror -a USD{LOCAL_SECRET_JSON} --from-dir=USD{REMOVABLE_MEDIA_PATH}/mirror "file://openshift/release:USD{OCP_RELEASE}*" USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} 1 1 For REMOVABLE_MEDIA_PATH , you must use the same path that you specified when you mirrored the images. 
If the local container registry is connected to the mirror host, take the following actions: Directly push the release images to the local registry by using following command: USD oc adm release mirror -a USD{LOCAL_SECRET_JSON} \ --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} \ --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} \ --to-release-image=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE} This command pulls the release information as a digest, and its output includes the imageContentSources data that you require when you install your cluster. Record the entire imageContentSources section from the output of the command. The information about your mirrors is unique to your mirrored repository, and you must add the imageContentSources section to the install-config.yaml file during installation. Note The image name gets patched to Quay.io during the mirroring process, and the podman images will show Quay.io in the registry on the bootstrap virtual machine. To create the installation program that is based on the content that you mirrored, extract it and pin it to the release: If your mirror host does not have internet access, run the following command: USD oc adm release extract -a USD{LOCAL_SECRET_JSON} --command=openshift-install "USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}" If the local container registry is connected to the mirror host, run the following command: USD oc adm release extract -a USD{LOCAL_SECRET_JSON} --command=openshift-install "USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}" Important To ensure that you use the correct images for the version of OpenShift Container Platform that you selected, you must extract the installation program from the mirrored content. You must perform this step on a machine with an active internet connection. If you are in a disconnected environment, use the --image flag as part of must-gather and point to the payload image. For clusters using installer-provisioned infrastructure, run the following command: USD openshift-install 3.8. The Cluster Samples Operator in a disconnected environment In a disconnected environment, you must take additional steps after you install a cluster to configure the Cluster Samples Operator. Review the following information in preparation. 3.8.1. Cluster Samples Operator assistance for mirroring During installation, OpenShift Container Platform creates a config map named imagestreamtag-to-image in the openshift-cluster-samples-operator namespace. The imagestreamtag-to-image config map contains an entry, the populating image, for each image stream tag. The format of the key for each entry in the data field in the config map is <image_stream_name>_<image_stream_tag_name> . During a disconnected installation of OpenShift Container Platform, the status of the Cluster Samples Operator is set to Removed . If you choose to change it to Managed , it installs samples. Note The use of samples in a network-restricted or discontinued environment may require access to services external to your network. Some example services include: Github, Maven Central, npm, RubyGems, PyPi and others. There might be additional steps to take that allow the cluster samples operators's objects to reach the services they require. You can use this config map as a reference for which images need to be mirrored for your image streams to import. 
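For example, the config map can be inspected with commands along these lines; the jq variant is only a convenience and assumes jq is installed on the workstation.

# Sketch: inspect the populating images recorded by the Cluster Samples Operator
oc get configmap imagestreamtag-to-image \
    -n openshift-cluster-samples-operator \
    -o yaml

# Optional: print only the <image_stream_name>_<image_stream_tag_name> keys
oc get configmap imagestreamtag-to-image \
    -n openshift-cluster-samples-operator \
    -o json | jq -r '.data | keys[]'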
While the Cluster Samples Operator is set to Removed , you can create your mirrored registry, or determine which existing mirrored registry you want to use. Mirror the samples you want to the mirrored registry using the new config map as your guide. Add any of the image streams you did not mirror to the skippedImagestreams list of the Cluster Samples Operator configuration object. Set samplesRegistry of the Cluster Samples Operator configuration object to the mirrored registry. Then set the Cluster Samples Operator to Managed to install the image streams you have mirrored. 3.9. Mirroring Operator catalogs for use with disconnected clusters You can mirror the Operator contents of a Red Hat-provided catalog, or a custom catalog, into a container image registry using the oc adm catalog mirror command. The target registry must support Docker v2-2 . For a cluster on a restricted network, this registry can be one that the cluster has network access to, such as a mirror registry created during a restricted network cluster installation. Important The internal registry of the OpenShift Container Platform cluster cannot be used as the target registry because it does not support pushing without a tag, which is required during the mirroring process. The oc adm catalog mirror command also automatically mirrors the index image that is specified during the mirroring process, whether it be a Red Hat-provided index image or your own custom-built index image, to the target registry. You can then use the mirrored index image to create a catalog source that allows Operator Lifecycle Manager (OLM) to load the mirrored catalog onto your OpenShift Container Platform cluster. Additional resources Using Operator Lifecycle Manager on restricted networks 3.9.1. Prerequisites Mirroring Operator catalogs for use with disconnected clusters has the following prerequisites: Workstation with unrestricted network access. podman version 1.9.3 or later. If you want to filter, or prune , the default catalog and selectively mirror only a subset of Operators, see the following sections: Installing the opm CLI Filtering a SQLite-based index image If you want to mirror a Red Hat-provided catalog, run the following command on your workstation with unrestricted network access to authenticate with registry.redhat.io : USD podman login registry.redhat.io Access to a mirror registry that supports Docker v2-2 . On your mirror registry, decide which namespace to use for storing mirrored Operator content. For example, you might create an olm-mirror namespace. If your mirror registry does not have internet access, connect removable media to your workstation with unrestricted network access. If you are working with private registries, including registry.redhat.io , set the REG_CREDS environment variable to the file path of your registry credentials for use in later steps. For example, for the podman CLI: USD REG_CREDS=USD{XDG_RUNTIME_DIR}/containers/auth.json 3.9.2. Extracting and mirroring catalog contents The oc adm catalog mirror command extracts the contents of an index image to generate the manifests required for mirroring. The default behavior of the command generates manifests, then automatically mirrors all of the image content from the index image, as well as the index image itself, to your mirror registry. 
Alternatively, if your mirror registry is on a completely disconnected, or airgapped , host, you can first mirror the content to removable media, move the media to the disconnected environment, then mirror the content from the media to the registry. 3.9.2.1. Mirroring catalog contents to registries on the same network If your mirror registry is co-located on the same network as your workstation with unrestricted network access, take the following actions on your workstation. Procedure If your mirror registry requires authentication, run the following command to log in to the registry: USD podman login <mirror_registry> Run the following command to extract and mirror the content to the mirror registry: USD oc adm catalog mirror \ <index_image> \ 1 <mirror_registry>:<port>/<namespace> \ 2 [-a USD{REG_CREDS}] \ 3 [--insecure] \ 4 [--index-filter-by-os='<platform>/<arch>'] \ 5 [--manifests-only] 6 1 Specify the index image for the catalog that you want to mirror. For example, this might be a pruned index image that you created previously, or one of the source index images for the default catalogs, such as registry.redhat.io/redhat/redhat-operator-index:v4.9 . 2 Specify the fully qualified domain name (FQDN) for the target registry and namespace to mirror the Operator contents to, where <namespace> is any existing namespace on the registry. For example, you might create an olm-mirror namespace to push all mirrored content to. 3 Optional: If required, specify the location of your registry credentials file. {REG_CREDS} is required for registry.redhat.io . 4 Optional: If you do not want to configure trust for the target registry, add the --insecure flag. 5 Optional: Specify which platform and architecture of the index image to select when multiple variants are available. Images are passed as '<platform>/<arch>[/<variant>]' . This does not apply to images referenced by the index. Valid values are linux/amd64 , linux/ppc64le , linux/s390x , and .* 6 Optional: Generate only the manifests required for mirroring, and do not actually mirror the image content to a registry. This option can be useful for reviewing what will be mirrored, and it allows you to make any changes to the mapping list if you require only a subset of packages. You can then use the mapping.txt file with the oc image mirror command to mirror the modified list of images in a later step. This flag is intended for only advanced selective mirroring of content from the catalog; the opm index prune command, if you used it previously to prune the index image, is suitable for most catalog management use cases. Example output src image has index label for database path: /database/index.db using database path mapping: /database/index.db:/tmp/153048078 wrote database to /tmp/153048078 1 ... wrote mirroring manifests to manifests-redhat-operator-index-1614211642 2 1 Directory for the temporary index.db database generated by the command. 2 Record the manifests directory name that is generated. This directory is referenced in subsequent procedures. Note Red Hat Quay does not support nested repositories. As a result, running the oc adm catalog mirror command will fail with a 401 unauthorized error. As a workaround, you can use the --max-components=2 option when running the oc adm catalog mirror command to disable the creation of nested repositories. For more information on this workaround, see the Unauthorized error thrown while using catalog mirror command with Quay registry Knowledgebase Solution. 
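As a sketch of that workaround, the mirroring command might be rerun with the additional flag; the index image, registry, and namespace below are the same placeholders used earlier in this procedure.

# Sketch: disable creation of nested repositories when mirroring to Red Hat Quay
oc adm catalog mirror \
    registry.redhat.io/redhat/redhat-operator-index:v4.9 \
    <mirror_registry>:<port>/<namespace> \
    -a ${REG_CREDS} \
    --max-components=2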
Additional resources Architecture and operating system support for Operators 3.9.2.2. Mirroring catalog contents to airgapped registries If your mirror registry is on a completely disconnected, or airgapped, host, take the following actions. Procedure Run the following command on your workstation with unrestricted network access to mirror the content to local files: USD oc adm catalog mirror \ <index_image> \ 1 file:///local/index \ 2 -a USD{REG_CREDS} \ 3 --insecure \ 4 --index-filter-by-os='<platform>/<arch>' 5 1 Specify the index image for the catalog that you want to mirror. For example, this might be a pruned index image that you created previously, or one of the source index images for the default catalogs, such as registry.redhat.io/redhat/redhat-operator-index:v4.9 . 2 Specify the content to mirror to local files in your current directory. 3 Optional: If required, specify the location of your registry credentials file. 4 Optional: If you do not want to configure trust for the target registry, add the --insecure flag. 5 Optional: Specify which platform and architecture of the index image to select when multiple variants are available. Images are specified as '<platform>/<arch>[/<variant>]' . This does not apply to images referenced by the index. Valid values are linux/amd64 , linux/ppc64le , linux/s390x , and .* Example output ... info: Mirroring completed in 5.93s (5.915MB/s) wrote mirroring manifests to manifests-my-index-1614985528 1 To upload local images to a registry, run: oc adm catalog mirror file://local/index/myrepo/my-index:v1 REGISTRY/REPOSITORY 2 1 Record the manifests directory name that is generated. This directory is referenced in subsequent procedures. 2 Record the expanded file:// path that is based on your provided index image. This path is referenced in a subsequent step. This command creates a v2/ directory in your current directory. Copy the v2/ directory to removable media. Physically remove the media and attach it to a host in the disconnected environment that has access to the mirror registry. If your mirror registry requires authentication, run the following command on your host in the disconnected environment to log in to the registry: USD podman login <mirror_registry> Run the following command from the parent directory containing the v2/ directory to upload the images from local files to the mirror registry: USD oc adm catalog mirror \ file://local/index/<repo>/<index_image>:<tag> \ 1 <mirror_registry>:<port>/<namespace> \ 2 -a USD{REG_CREDS} \ 3 --insecure \ 4 --index-filter-by-os='<platform>/<arch>' 5 1 Specify the file:// path from the command output. 2 Specify the fully qualified domain name (FQDN) for the target registry and namespace to mirror the Operator contents to, where <namespace> is any existing namespace on the registry. For example, you might create an olm-mirror namespace to push all mirrored content to. 3 Optional: If required, specify the location of your registry credentials file. 4 Optional: If you do not want to configure trust for the target registry, add the --insecure flag. 5 Optional: Specify which platform and architecture of the index image to select when multiple variants are available. Images are specified as '<platform>/<arch>[/<variant>]' . This does not apply to images referenced by the index. Valid values are linux/amd64 , linux/ppc64le , linux/s390x , and .* Note Red Hat Quay does not support nested repositories. As a result, running the oc adm catalog mirror command will fail with a 401 unauthorized error. 
As a workaround, you can use the --max-components=2 option when running the oc adm catalog mirror command to disable the creation of nested repositories. For more information on this workaround, see the Unauthorized error thrown while using catalog mirror command with Quay registry Knowledgebase Solution. Run the oc adm catalog mirror command again. Use the newly mirrored index image as the source and the same mirror registry namespace used in the step as the target: USD oc adm catalog mirror \ <mirror_registry>:<port>/<index_image> \ <mirror_registry>:<port>/<namespace> \ --manifests-only \ 1 [-a USD{REG_CREDS}] \ [--insecure] 1 The --manifests-only flag is required for this step so that the command does not copy all of the mirrored content again. Important This step is required because the image mappings in the imageContentSourcePolicy.yaml file generated during the step must be updated from local paths to valid mirror locations. Failure to do so will cause errors when you create the ImageContentSourcePolicy object in a later step. After you mirror the catalog, you can continue with the remainder of your cluster installation. After your cluster installation has finished successfully, you must specify the manifests directory from this procedure to create the ImageContentSourcePolicy and CatalogSource objects. These objects are required to enable installation of Operators from OperatorHub. Additional resources Architecture and operating system support for Operators 3.9.3. Generated manifests After mirroring Operator catalog content to your mirror registry, a manifests directory is generated in your current directory. If you mirrored content to a registry on the same network, the directory name takes the following pattern: manifests-<index_image_name>-<random_number> If you mirrored content to a registry on a disconnected host in the section, the directory name takes the following pattern: manifests-index/<namespace>/<index_image_name>-<random_number> Note The manifests directory name is referenced in subsequent procedures. The manifests directory contains the following files, some of which might require further modification: The catalogSource.yaml file is a basic definition for a CatalogSource object that is pre-populated with your index image tag and other relevant metadata. This file can be used as is or modified to add the catalog source to your cluster. Important If you mirrored the content to local files, you must modify your catalogSource.yaml file to remove any backslash ( / ) characters from the metadata.name field. Otherwise, when you attempt to create the object, it fails with an "invalid resource name" error. The imageContentSourcePolicy.yaml file defines an ImageContentSourcePolicy object that can configure nodes to translate between the image references stored in Operator manifests and the mirrored registry. Note If your cluster uses an ImageContentSourcePolicy object to configure repository mirroring, you can use only global pull secrets for mirrored registries. You cannot add a pull secret to a project. The mapping.txt file contains all of the source images and where to map them in the target registry. This file is compatible with the oc image mirror command and can be used to further customize the mirroring configuration. 
Important If you used the --manifests-only flag during the mirroring process and want to further trim the subset of packages to mirror, see the steps in the Mirroring a package manifest format catalog image procedure of the OpenShift Container Platform 4.7 documentation about modifying your mapping.txt file and using the file with the oc image mirror command. 3.9.4. Post-installation requirements After you mirror the catalog, you can continue with the remainder of your cluster installation. After your cluster installation has finished successfully, you must specify the manifests directory from this procedure to create the ImageContentSourcePolicy and CatalogSource objects. These objects are required to populate and enable installation of Operators from OperatorHub. Additional resources Populating OperatorHub from mirrored Operator catalogs 3.10. Next steps Install a cluster on infrastructure that you provision in your restricted network, such as on VMware vSphere , bare metal , or Amazon Web Services . 3.11. Additional resources See Gathering data about specific features for more information about using must-gather.
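As a closing sketch of the post-installation step described above, the generated objects might be created as follows once the cluster is running; the manifests directory name is an assumed example of the manifests-<index_image_name>-<random_number> pattern, so substitute the directory that was generated for you.

# Sketch: create the generated objects after cluster installation
# (directory name is illustrative; use the one produced by oc adm catalog mirror)
oc create -f manifests-redhat-operator-index-1614211642/imageContentSourcePolicy.yaml
oc create -f manifests-redhat-operator-index-1614211642/catalogSource.yaml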
[ "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "echo -n '<user_name>:<password>' | base64 -w0 1 BGVtbYk3ZHAtqXs=", "cat ./pull-secret.text | jq . > <path>/<pull_secret_file_in_json> 1", "{ \"auths\": { \"cloud.openshift.com\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"quay.io\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"registry.connect.redhat.com\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" }, \"registry.redhat.io\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" } } }", "\"auths\": { \"<mirror_registry>\": { 1 \"auth\": \"<credentials>\", 2 \"email\": \"[email protected]\" } },", "{ \"auths\": { \"registry.example.com\": { \"auth\": \"BGVtbYk3ZHAtqXs=\", \"email\": \"[email protected]\" }, \"cloud.openshift.com\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"quay.io\": { \"auth\": \"b3BlbnNo...\", \"email\": \"[email protected]\" }, \"registry.connect.redhat.com\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" }, \"registry.redhat.io\": { \"auth\": \"NTE3Njg5Nj...\", \"email\": \"[email protected]\" } } }", "sudo ./mirror-registry install --quayHostname <host_example_com> --quayRoot <example_directory_name>", "podman login --authfile pull-secret.txt -u init -p <password> <host_example_com>:8443> --tls-verify=false 1", "sudo ./mirror-registry install -v --targetHostname <host_example_com> --targetUsername <example_user> -k ~/.ssh/my_ssh_key --quayHostname <host_example_com> --quayRoot <example_directory_name>", "podman login --authfile pull-secret.txt -u init -p <password> <host_example_com>:8443> --tls-verify=false 1", "sudo ./mirror-registry upgrade", "sudo ./mirror-registry uninstall -v --quayRoot <example_directory_name>", "x509: certificate relies on legacy Common Name field, use SANs or temporarily enable Common Name matching with GODEBUG=x509ignoreCN=0", "OCP_RELEASE=<release_version>", "LOCAL_REGISTRY='<local_registry_host_name>:<local_registry_host_port>'", "LOCAL_REPOSITORY='<local_repository_name>'", "PRODUCT_REPO='openshift-release-dev'", "LOCAL_SECRET_JSON='<path_to_pull_secret>'", "RELEASE_NAME=\"ocp-release\"", "ARCHITECTURE=<server_architecture>", "REMOVABLE_MEDIA_PATH=<path> 1", "oc adm release mirror -a USD{LOCAL_SECRET_JSON} --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} --to-release-image=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --dry-run", "oc adm release mirror -a USD{LOCAL_SECRET_JSON} --to-dir=USD{REMOVABLE_MEDIA_PATH}/mirror quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE}", "oc image mirror -a USD{LOCAL_SECRET_JSON} --from-dir=USD{REMOVABLE_MEDIA_PATH}/mirror \"file://openshift/release:USD{OCP_RELEASE}*\" USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} 1", "oc adm release mirror -a USD{LOCAL_SECRET_JSON} --from=quay.io/USD{PRODUCT_REPO}/USD{RELEASE_NAME}:USD{OCP_RELEASE}-USD{ARCHITECTURE} --to=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY} --to-release-image=USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}", "oc adm release extract -a USD{LOCAL_SECRET_JSON} --command=openshift-install \"USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}\"", "oc adm release extract -a USD{LOCAL_SECRET_JSON} --command=openshift-install 
\"USD{LOCAL_REGISTRY}/USD{LOCAL_REPOSITORY}:USD{OCP_RELEASE}-USD{ARCHITECTURE}\"", "openshift-install", "podman login registry.redhat.io", "REG_CREDS=USD{XDG_RUNTIME_DIR}/containers/auth.json", "podman login <mirror_registry>", "oc adm catalog mirror <index_image> \\ 1 <mirror_registry>:<port>/<namespace> \\ 2 [-a USD{REG_CREDS}] \\ 3 [--insecure] \\ 4 [--index-filter-by-os='<platform>/<arch>'] \\ 5 [--manifests-only] 6", "src image has index label for database path: /database/index.db using database path mapping: /database/index.db:/tmp/153048078 wrote database to /tmp/153048078 1 wrote mirroring manifests to manifests-redhat-operator-index-1614211642 2", "oc adm catalog mirror <index_image> \\ 1 file:///local/index \\ 2 -a USD{REG_CREDS} \\ 3 --insecure \\ 4 --index-filter-by-os='<platform>/<arch>' 5", "info: Mirroring completed in 5.93s (5.915MB/s) wrote mirroring manifests to manifests-my-index-1614985528 1 To upload local images to a registry, run: oc adm catalog mirror file://local/index/myrepo/my-index:v1 REGISTRY/REPOSITORY 2", "podman login <mirror_registry>", "oc adm catalog mirror file://local/index/<repo>/<index_image>:<tag> \\ 1 <mirror_registry>:<port>/<namespace> \\ 2 -a USD{REG_CREDS} \\ 3 --insecure \\ 4 --index-filter-by-os='<platform>/<arch>' 5", "oc adm catalog mirror <mirror_registry>:<port>/<index_image> <mirror_registry>:<port>/<namespace> --manifests-only \\ 1 [-a USD{REG_CREDS}] [--insecure]", "manifests-<index_image_name>-<random_number>", "manifests-index/<namespace>/<index_image_name>-<random_number>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/installing/installing-mirroring-installation-images
Chapter 6. Installing a private cluster on IBM Power Virtual Server
Chapter 6. Installing a private cluster on IBM Power Virtual Server In OpenShift Container Platform version 4.18, you can install a private cluster into an existing VPC and IBM Power(R) Virtual Server Workspace. The installation program provisions the rest of the required infrastructure, which you can further customize. To customize the installation, you modify parameters in the install-config.yaml file before you install the cluster. 6.1. Prerequisites You reviewed details about the OpenShift Container Platform installation and update processes. You read the documentation on selecting a cluster installation method and preparing it for users . You configured an IBM Cloud(R) account to host the cluster. If you use a firewall, you configured it to allow the sites that your cluster requires access to. You configured the ccoctl utility before you installed the cluster. For more information, see Configuring the Cloud Credential Operator utility . 6.2. Private clusters You can deploy a private OpenShift Container Platform cluster that does not expose external endpoints. Private clusters are accessible from only an internal network and are not visible to the internet. By default, OpenShift Container Platform is provisioned to use publicly-accessible DNS and endpoints. A private cluster sets the DNS, Ingress Controller, and API server to private when you deploy your cluster. This means that the cluster resources are only accessible from your internal network and are not visible to the internet. Important If the cluster has any public subnets, load balancer services created by administrators might be publicly accessible. To ensure cluster security, verify that these services are explicitly annotated as private. To deploy a private cluster, you must: Use existing networking that meets your requirements. Create a DNS zone using IBM Cloud(R) DNS Services and specify it as the base domain of the cluster. For more information, see "Using IBM Cloud(R) DNS Services to configure DNS resolution". Deploy from a machine that has access to: The API services for the cloud to which you provision. The hosts on the network that you provision. The internet to obtain installation media. You can use any machine that meets these access requirements and follows your company's guidelines. For example, this machine can be a bastion host on your cloud network or a machine that has access to the network through a VPN. 6.3. Private clusters in IBM Power Virtual Server To create a private cluster on IBM Power(R) Virtual Server, you must provide an existing private Virtual Private Cloud (VPC) and subnets to host the cluster. The installation program must also be able to resolve the DNS records that the cluster requires. The installation program configures the Ingress Operator and API server for only internal traffic. The cluster still requires access to internet to access the IBM Cloud(R) APIs. The following items are not required or created when you install a private cluster: Public subnets Public network load balancers, which support public Ingress A public DNS zone that matches the baseDomain for the cluster You will also need to create an IBM(R) DNS service containing a DNS zone that matches your baseDomain . Unlike standard deployments on Power VS which use IBM(R) CIS for DNS, you must use IBM(R) DNS for your DNS service. 6.3.1. Limitations Private clusters on IBM Power(R) Virtual Server are subject only to the limitations associated with the existing VPC that was used for cluster deployment. 6.4. 
Requirements for using your VPC You must correctly configure the existing VPC and its subnets before you install the cluster. The installation program does not create a VPC or VPC subnet in this scenario. The installation program cannot: Subdivide network ranges for the cluster to use Set route tables for the subnets Set VPC options like DHCP Note The installation program requires that you use the cloud-provided DNS server. Using a custom DNS server is not supported and causes the installation to fail. 6.4.1. VPC validation The VPC and all of the subnets must be in an existing resource group. The cluster is deployed to this resource group. As part of the installation, specify the following in the install-config.yaml file: The name of the resource group The name of VPC The name of the VPC subnet To ensure that the subnets that you provide are suitable, the installation program confirms that all of the subnets you specify exists. Note Subnet IDs are not supported. 6.4.2. Isolation between clusters If you deploy OpenShift Container Platform to an existing network, the isolation of cluster services is reduced in the following ways: ICMP Ingress is allowed to the entire network. TCP port 22 Ingress (SSH) is allowed to the entire network. Control plane TCP 6443 Ingress (Kubernetes API) is allowed to the entire network. Control plane TCP 22623 Ingress (MCS) is allowed to the entire network. 6.5. Internet access for OpenShift Container Platform In OpenShift Container Platform 4.18, you require access to the internet to install your cluster. You must have internet access to: Access OpenShift Cluster Manager to download the installation program and perform subscription management. If the cluster has internet access and you do not disable Telemetry, that service automatically entitles your cluster. Access Quay.io to obtain the packages that are required to install your cluster. Obtain the packages that are required to perform cluster updates. Important If your cluster cannot have direct internet access, you can perform a restricted network installation on some types of infrastructure that you provision. During that process, you download the required content and use it to populate a mirror registry with the installation packages. With some installation types, the environment that you install your cluster in will not require internet access. Before you update the cluster, you update the content of the mirror registry. 6.6. Generating a key pair for cluster node SSH access During an OpenShift Container Platform installation, you can provide an SSH public key to the installation program. The key is passed to the Red Hat Enterprise Linux CoreOS (RHCOS) nodes through their Ignition config files and is used to authenticate SSH access to the nodes. The key is added to the ~/.ssh/authorized_keys list for the core user on each node, which enables password-less authentication. After the key is passed to the nodes, you can use the key pair to SSH in to the RHCOS nodes as the user core . To access the nodes through SSH, the private key identity must be managed by SSH for your local user. If you want to SSH in to your cluster nodes to perform installation debugging or disaster recovery, you must provide the SSH public key during the installation process. The ./openshift-install gather command also requires the SSH public key to be in place on the cluster nodes. Important Do not skip this procedure in production environments, where disaster recovery and debugging is required. 
Note You must use a local key, not one that you configured with platform-specific approaches. Procedure If you do not have an existing SSH key pair on your local machine to use for authentication onto your cluster nodes, create one. For example, on a computer that uses a Linux operating system, run the following command: USD ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1 1 Specify the path and file name, such as ~/.ssh/id_ed25519 , of the new SSH key. If you have an existing key pair, ensure your public key is in your ~/.ssh directory. View the public SSH key: USD cat <path>/<file_name>.pub For example, run the following to view the ~/.ssh/id_ed25519.pub public key: USD cat ~/.ssh/id_ed25519.pub Add the SSH private key identity to the SSH agent for your local user, if it has not already been added. SSH agent management of the key is required for password-less SSH authentication onto your cluster nodes, or if you want to use the ./openshift-install gather command. Note On some distributions, default SSH private key identities such as ~/.ssh/id_rsa and ~/.ssh/id_dsa are managed automatically. If the ssh-agent process is not already running for your local user, start it as a background task: USD eval "USD(ssh-agent -s)" Example output Agent pid 31874 Add your SSH private key to the ssh-agent : USD ssh-add <path>/<file_name> 1 1 Specify the path and file name for your SSH private key, such as ~/.ssh/id_ed25519 . Example output Identity added: /home/<you>/<path>/<file_name> (<computer_name>) Next steps When you install OpenShift Container Platform, provide the SSH public key to the installation program. 6.7. Obtaining the installation program Before you install OpenShift Container Platform, download the installation file on the host you are using for installation. Prerequisites You have a computer that runs Linux or macOS, with 500 MB of local disk space. Procedure Go to the Cluster Type page on the Red Hat Hybrid Cloud Console. If you have a Red Hat account, log in with your credentials. If you do not, create an account. Tip You can also download the binaries for a specific OpenShift Container Platform release . Select your infrastructure provider from the Run it yourself section of the page. Select your host operating system and architecture from the dropdown menus under OpenShift Installer and click Download Installer . Place the downloaded file in the directory where you want to store the installation configuration files. Important The installation program creates several files on the computer that you use to install your cluster. You must keep the installation program and the files that the installation program creates after you finish installing the cluster. Both of the files are required to delete the cluster. Deleting the files created by the installation program does not remove your cluster, even if the cluster failed during installation. To remove your cluster, complete the OpenShift Container Platform uninstallation procedures for your specific cloud provider. Extract the installation program. For example, on a computer that uses a Linux operating system, run the following command: USD tar -xvf openshift-install-linux.tar.gz Download your installation pull secret from Red Hat OpenShift Cluster Manager . This pull secret allows you to authenticate with the services that are provided by the included authorities, including Quay.io, which serves the container images for OpenShift Container Platform components. 
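Tip Before you continue, you can optionally confirm that the extracted binary runs on your system. This check is not part of the documented procedure; the command prints the installer version and the release image that the installer deploys: USD ./openshift-install version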
Tip Alternatively, you can retrieve the installation program from the Red Hat Customer Portal , where you can specify a version of the installation program to download. However, you must have an active subscription to access this page. 6.8. Exporting the API key You must set the API key you created as a global variable; the installation program ingests the variable during startup to set the API key. Prerequisites You have created either a user API key or service ID API key for your IBM Cloud(R) account. Procedure Export your API key for your account as a global variable: USD export IBMCLOUD_API_KEY=<api_key> Important You must set the variable name exactly as specified; the installation program expects the variable name to be present during startup. 6.9. Manually creating the installation configuration file Installing the cluster requires that you manually create the installation configuration file. Prerequisites You have an SSH public key on your local machine to provide to the installation program. The key will be used for SSH authentication onto your cluster nodes for debugging and disaster recovery. You have obtained the OpenShift Container Platform installation program and the pull secret for your cluster. Procedure Create an installation directory to store your required installation assets in: USD mkdir <installation_directory> Important You must create a directory. Some installation assets, like bootstrap X.509 certificates have short expiration intervals, so you must not reuse an installation directory. If you want to reuse individual files from another cluster installation, you can copy them into your directory. However, the file names for the installation assets might change between releases. Use caution when copying installation files from an earlier OpenShift Container Platform version. Customize the sample install-config.yaml file template that is provided and save it in the <installation_directory> . Note You must name this configuration file install-config.yaml . Back up the install-config.yaml file so that you can use it to install multiple clusters. Important The install-config.yaml file is consumed during the step of the installation process. You must back it up now. Additional resources Installation configuration parameters for IBM Power(R) Virtual Server 6.9.1. Minimum resource requirements for cluster installation Each cluster machine must meet the following minimum requirements: Table 6.1. Minimum resource requirements Machine Operating System vCPU [1] Virtual RAM Storage Input/Output Per Second (IOPS) [2] Bootstrap RHCOS 2 16 GB 100 GB 300 Control plane RHCOS 2 16 GB 100 GB 300 Compute RHCOS 2 8 GB 100 GB 300 One vCPU is equivalent to one physical core when simultaneous multithreading (SMT), or Hyper-Threading, is not enabled. When enabled, use the following formula to calculate the corresponding ratio: (threads per core x cores) x sockets = vCPUs. OpenShift Container Platform and Kubernetes are sensitive to disk performance, and faster storage is recommended, particularly for etcd on the control plane nodes. Note that on many cloud platforms, storage size and IOPS scale together, so you might need to over-allocate storage volume to obtain sufficient performance. Note For OpenShift Container Platform version 4.18, RHCOS is based on RHEL version 9.4, which updates the micro-architecture requirements. 
The following list contains the minimum instruction set architectures (ISA) that each architecture requires: x86-64 architecture requires x86-64-v2 ISA ARM64 architecture requires ARMv8.0-A ISA IBM Power architecture requires Power 9 ISA s390x architecture requires z14 ISA For more information, see Architectures (RHEL documentation). If an instance type for your platform meets the minimum requirements for cluster machines, it is supported to use in OpenShift Container Platform. Additional resources Optimizing storage 6.9.2. Sample customized install-config.yaml file for IBM Power Virtual Server You can customize the install-config.yaml file to specify more details about your OpenShift Container Platform cluster's platform or modify the values of the required parameters. Important This sample YAML file is provided for reference only. You must obtain your install-config.yaml file by using the installation program and modify it. apiVersion: v1 baseDomain: example.com compute: 1 2 - architecture: ppc64le hyperthreading: Enabled 3 name: worker platform: powervs: smtLevel: 8 4 replicas: 3 controlPlane: 5 6 architecture: ppc64le hyperthreading: Enabled 7 name: master platform: powervs: smtLevel: 8 8 replicas: 3 metadata: creationTimestamp: null name: example-private-cluster-name networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 machineNetwork: - cidr: 192.168.0.0/24 networkType: OVNKubernetes 10 serviceNetwork: - 172.30.0.0/16 platform: powervs: userID: ibm-user-id powervsResourceGroup: "ibmcloud-resource-group" region: powervs-region vpcName: name-of-existing-vpc 11 vpcRegion : vpc-region zone: powervs-zone serviceInstanceGUID: "powervs-region-service-instance-guid" publish: Internal 12 pullSecret: '{"auths": ...}' 13 sshKey: ssh-ed25519 AAAA... 14 1 5 If you do not provide these parameters and values, the installation program provides the default value. 2 6 The controlPlane section is a single mapping, but the compute section is a sequence of mappings. To meet the requirements of the different data structures, the first line of the compute section must begin with a hyphen, - , and the first line of the controlPlane section must not. Both sections currently define a single machine pool. Only one control plane pool is used. 3 7 Whether to enable or disable simultaneous multithreading, or hyperthreading . By default, simultaneous multithreading is enabled to increase the performance of your machines' cores. You can disable it by setting the parameter value to Disabled . If you disable simultaneous multithreading in some cluster machines, you must disable it in all cluster machines. 4 8 The smtLevel specifies the level of SMT to set to the control plane and compute machines. The supported values are 1, 2, 4, 8, 'off' and 'on' . The default value is 8. The smtLevel 'off' sets SMT to off and smtlevel 'on' sets SMT to the default value 8 on the cluster nodes. Note When simultaneous multithreading (SMT), or hyperthreading is not enabled, one vCPU is equivalent to one physical core. When enabled, total vCPUs is computed as (Thread(s) per core * Core(s) per socket) * Socket(s). The smtLevel controls the threads per core. Lower SMT levels may require additional assigned cores when deploying the cluster nodes. You can do this by setting the 'processors' parameter in the install-config.yaml file to an appropriate value to meet the requirements for deploying OpenShift Container Platform successfully. 
9 The machine CIDR must contain the subnets for the compute machines and control plane machines. 10 The cluster network plugin to install. The default value OVNKubernetes is the only supported value. 11 Specify the name of an existing VPC. 12 Specify how to publish the user-facing endpoints of your cluster. Set publish to Internal to deploy a private cluster. 13 Required. The installation program prompts you for this value. 14 Provide the sshKey value that you use to access the machines in your cluster. Important If you disable simultaneous multithreading, ensure that your capacity planning accounts for the dramatically decreased machine performance. Note For production OpenShift Container Platform clusters on which you want to perform installation debugging or disaster recovery, specify an SSH key that your ssh-agent process uses. 6.9.3. Configuring the cluster-wide proxy during installation Production environments can deny direct access to the internet and instead have an HTTP or HTTPS proxy available. You can configure a new OpenShift Container Platform cluster to use a proxy by configuring the proxy settings in the install-config.yaml file. Prerequisites You have an existing install-config.yaml file. You reviewed the sites that your cluster requires access to and determined whether any of them need to bypass the proxy. By default, all cluster egress traffic is proxied, including calls to hosting cloud provider APIs. You added sites to the Proxy object's spec.noProxy field to bypass the proxy if necessary. Note The Proxy object status.noProxy field is populated with the values of the networking.machineNetwork[].cidr , networking.clusterNetwork[].cidr , and networking.serviceNetwork[] fields from your installation configuration. For installations on Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, and Red Hat OpenStack Platform (RHOSP), the Proxy object status.noProxy field is also populated with the instance metadata endpoint ( 169.254.169.254 ). Procedure Edit your install-config.yaml file and add the proxy settings. For example: apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. 3 A comma-separated list of destination domain names, IP addresses, or other network CIDRs to exclude from proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass the proxy for all destinations. 4 If provided, the installation program generates a config map that is named user-ca-bundle in the openshift-config namespace that contains one or more additional CA certificates that are required for proxying HTTPS connections. The Cluster Network Operator then creates a trusted-ca-bundle config map that merges these contents with the Red Hat Enterprise Linux CoreOS (RHCOS) trust bundle, and this config map is referenced in the trustedCA field of the Proxy object. The additionalTrustBundle field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. 
5 Optional: The policy to determine the configuration of the Proxy object to reference the user-ca-bundle config map in the trustedCA field. The allowed values are Proxyonly and Always . Use Proxyonly to reference the user-ca-bundle config map only when http/https proxy is configured. Use Always to always reference the user-ca-bundle config map. The default value is Proxyonly . Note The installation program does not support the proxy readinessEndpoints field. Note If the installer times out, restart and then complete the deployment by using the wait-for command of the installer. For example: USD ./openshift-install wait-for install-complete --log-level debug Save the file and reference it when installing OpenShift Container Platform. The installation program creates a cluster-wide proxy that is named cluster that uses the proxy settings in the provided install-config.yaml file. If no proxy settings are provided, a cluster Proxy object is still created, but it will have a nil spec . Note Only the Proxy object named cluster is supported, and no additional proxies can be created. 6.10. Manually creating IAM Installing the cluster requires that the Cloud Credential Operator (CCO) operate in manual mode. While the installation program configures the CCO for manual mode, you must specify the identity and access management secrets for you cloud provider. You can use the Cloud Credential Operator (CCO) utility ( ccoctl ) to create the required IBM Cloud(R) resources. Prerequisites You have configured the ccoctl binary. You have an existing install-config.yaml file. Procedure Edit the install-config.yaml configuration file so that it contains the credentialsMode parameter set to Manual . Example install-config.yaml configuration file apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: ppc64le hyperthreading: Enabled 1 This line is added to set the credentialsMode parameter to Manual . To generate the manifests, run the following command from the directory that contains the installation program: USD ./openshift-install create manifests --dir <installation_directory> From the directory that contains the installation program, set a USDRELEASE_IMAGE variable with the release image from your installation file by running the following command: USD RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}') Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \ 2 --to=<path_to_directory_for_credentials_requests> 3 1 The --included parameter includes only the manifests that your specific cluster configuration requires. 2 Specify the location of the install-config.yaml file. 3 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object. 
Sample CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: "1.0" name: openshift-image-registry-ibmcos namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: IBMCloudProviderSpec policies: - attributes: - name: serviceName value: cloud-object-storage roles: - crn:v1:bluemix:public:iam::::role:Viewer - crn:v1:bluemix:public:iam::::role:Operator - crn:v1:bluemix:public:iam::::role:Editor - crn:v1:bluemix:public:iam::::serviceRole:Reader - crn:v1:bluemix:public:iam::::serviceRole:Writer - attributes: - name: resourceType value: resource-group roles: - crn:v1:bluemix:public:iam::::role:Viewer Create the service ID for each credential request, assign the policies defined, create an API key, and generate the secret: USD ccoctl ibmcloud create-service-id \ --credentials-requests-dir=<path_to_credential_requests_directory> \ 1 --name=<cluster_name> \ 2 --output-dir=<installation_directory> \ 3 --resource-group-name=<resource_group_name> 4 1 Specify the directory containing the files for the component CredentialsRequest objects. 2 Specify the name of the OpenShift Container Platform cluster. 3 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. 4 Optional: Specify the name of the resource group used for scoping the access policies. Note If your cluster uses Technology Preview features that are enabled by the TechPreviewNoUpgrade feature set, you must include the --enable-tech-preview parameter. If an incorrect resource group name is provided, the installation fails during the bootstrap phase. To find the correct resource group name, run the following command: USD grep resourceGroup <installation_directory>/manifests/cluster-infrastructure-02-config.yml Verification Ensure that the appropriate secrets were generated in your cluster's manifests directory. 6.11. Deploying the cluster You can install OpenShift Container Platform on a compatible cloud platform. Important You can run the create cluster command of the installation program only once, during initial installation. Prerequisites You have configured an account with the cloud platform that hosts your cluster. You have the OpenShift Container Platform installation program and the pull secret for your cluster. You have verified that the cloud provider account on your host has the correct permissions to deploy the cluster. An account with incorrect permissions causes the installation process to fail with an error message that displays the missing permissions. Procedure Change to the directory that contains the installation program and initialize the cluster deployment: USD ./openshift-install create cluster --dir <installation_directory> \ 1 --log-level=info 2 1 For <installation_directory> , specify the location of your customized ./install-config.yaml file. 2 To view different installation details, specify warn , debug , or error instead of info . Verification When the cluster deployment completes successfully: The terminal displays directions for accessing your cluster, including a link to the web console and credentials for the kubeadmin user. Credential information also outputs to <installation_directory>/.openshift_install.log . 
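Tip If you want to watch the deployment in more detail while the create cluster command runs, one option that is not part of the documented procedure is to follow the installation log from a second terminal: USD tail -f <installation_directory>/.openshift_install.log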
Important Do not delete the installation program or the files that the installation program creates. Both are required to delete the cluster. Example output ... INFO Install complete! INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: "kubeadmin", and password: "password" INFO Time elapsed: 36m22s Important The Ignition config files that the installation program generates contain certificates that expire after 24 hours, which are then renewed at that time. If the cluster is shut down before renewing the certificates and the cluster is later restarted after the 24 hours have elapsed, the cluster automatically recovers the expired certificates. The exception is that you must manually approve the pending node-bootstrapper certificate signing requests (CSRs) to recover kubelet certificates. See the documentation for Recovering from expired control plane certificates for more information. It is recommended that you use Ignition config files within 12 hours after they are generated because the 24-hour certificate rotates from 16 to 22 hours after the cluster is installed. By using the Ignition config files within 12 hours, you can avoid installation failure if the certificate update runs during installation. 6.12. Installing the OpenShift CLI You can install the OpenShift CLI ( oc ) to interact with OpenShift Container Platform from a command-line interface. You can install oc on Linux, Windows, or macOS. Important If you installed an earlier version of oc , you cannot use it to complete all of the commands in OpenShift Container Platform 4.18. Download and install the new version of oc . Installing the OpenShift CLI on Linux You can install the OpenShift CLI ( oc ) binary on Linux by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the architecture from the Product Variant drop-down list. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.18 Linux Clients entry and save the file. Unpack the archive: USD tar xvf <file> Place the oc binary in a directory that is on your PATH . To check your PATH , execute the following command: USD echo USDPATH Verification After you install the OpenShift CLI, it is available using the oc command: USD oc <command> Installing the OpenShift CLI on Windows You can install the OpenShift CLI ( oc ) binary on Windows by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. Click Download Now to the OpenShift v4.18 Windows Client entry and save the file. Unzip the archive with a ZIP program. Move the oc binary to a directory that is on your PATH . To check your PATH , open the command prompt and execute the following command: C:\> path Verification After you install the OpenShift CLI, it is available using the oc command: C:\> oc <command> Installing the OpenShift CLI on macOS You can install the OpenShift CLI ( oc ) binary on macOS by using the following procedure. Procedure Navigate to the OpenShift Container Platform downloads page on the Red Hat Customer Portal. Select the appropriate version from the Version drop-down list. 
Click Download Now to the OpenShift v4.18 macOS Clients entry and save the file. Note For macOS arm64, choose the OpenShift v4.18 macOS arm64 Client entry. Unpack and unzip the archive. Move the oc binary to a directory on your PATH. To check your PATH , open a terminal and execute the following command: USD echo USDPATH Verification Verify your installation by using an oc command: USD oc <command> 6.13. Logging in to the cluster by using the CLI You can log in to your cluster as a default system user by exporting the cluster kubeconfig file. The kubeconfig file contains information about the cluster that is used by the CLI to connect a client to the correct cluster and API server. The file is specific to a cluster and is created during OpenShift Container Platform installation. Prerequisites You deployed an OpenShift Container Platform cluster. You installed the oc CLI. Procedure Export the kubeadmin credentials: USD export KUBECONFIG=<installation_directory>/auth/kubeconfig 1 1 For <installation_directory> , specify the path to the directory that you stored the installation files in. Verify you can run oc commands successfully using the exported configuration: USD oc whoami Example output system:admin Additional resources Accessing the web console 6.14. Telemetry access for OpenShift Container Platform In OpenShift Container Platform 4.18, the Telemetry service, which runs by default to provide metrics about cluster health and the success of updates, requires internet access. If your cluster is connected to the internet, Telemetry runs automatically, and your cluster is registered to OpenShift Cluster Manager . After you confirm that your OpenShift Cluster Manager inventory is correct, either maintained automatically by Telemetry or manually by using OpenShift Cluster Manager, use subscription watch to track your OpenShift Container Platform subscriptions at the account or multi-cluster level. Additional resources About remote health monitoring 6.15. steps Customize your cluster Optional: Opt out of remote health reporting
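Tip Before you continue with the follow-up tasks listed above, you can optionally confirm basic cluster health with standard oc commands; these checks are not part of this procedure. For example, run USD oc get nodes to confirm that all nodes report a Ready status, and USD oc get clusterversion to confirm that the cluster version reports Available .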
[ "ssh-keygen -t ed25519 -N '' -f <path>/<file_name> 1", "cat <path>/<file_name>.pub", "cat ~/.ssh/id_ed25519.pub", "eval \"USD(ssh-agent -s)\"", "Agent pid 31874", "ssh-add <path>/<file_name> 1", "Identity added: /home/<you>/<path>/<file_name> (<computer_name>)", "tar -xvf openshift-install-linux.tar.gz", "export IBMCLOUD_API_KEY=<api_key>", "mkdir <installation_directory>", "apiVersion: v1 baseDomain: example.com compute: 1 2 - architecture: ppc64le hyperthreading: Enabled 3 name: worker platform: powervs: smtLevel: 8 4 replicas: 3 controlPlane: 5 6 architecture: ppc64le hyperthreading: Enabled 7 name: master platform: powervs: smtLevel: 8 8 replicas: 3 metadata: creationTimestamp: null name: example-private-cluster-name networking: clusterNetwork: - cidr: 10.128.0.0/14 9 hostPrefix: 23 machineNetwork: - cidr: 192.168.0.0/24 networkType: OVNKubernetes 10 serviceNetwork: - 172.30.0.0/16 platform: powervs: userID: ibm-user-id powervsResourceGroup: \"ibmcloud-resource-group\" region: powervs-region vpcName: name-of-existing-vpc 11 vpcRegion : vpc-region zone: powervs-zone serviceInstanceGUID: \"powervs-region-service-instance-guid\" publish: Internal 12 pullSecret: '{\"auths\": ...}' 13 sshKey: ssh-ed25519 AAAA... 14", "apiVersion: v1 baseDomain: my.domain.com proxy: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 additionalTrustBundle: | 4 -----BEGIN CERTIFICATE----- <MY_TRUSTED_CA_CERT> -----END CERTIFICATE----- additionalTrustBundlePolicy: <policy_to_add_additionalTrustBundle> 5", "./openshift-install wait-for install-complete --log-level debug", "apiVersion: v1 baseDomain: cluster1.example.com credentialsMode: Manual 1 compute: - architecture: ppc64le hyperthreading: Enabled", "./openshift-install create manifests --dir <installation_directory>", "RELEASE_IMAGE=USD(./openshift-install version | awk '/release image/ {print USD3}')", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --install-config=<path_to_directory_with_installation_configuration>/install-config.yaml \\ 2 --to=<path_to_directory_for_credentials_requests> 3", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: labels: controller-tools.k8s.io: \"1.0\" name: openshift-image-registry-ibmcos namespace: openshift-cloud-credential-operator spec: secretRef: name: installer-cloud-credentials namespace: openshift-image-registry providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: IBMCloudProviderSpec policies: - attributes: - name: serviceName value: cloud-object-storage roles: - crn:v1:bluemix:public:iam::::role:Viewer - crn:v1:bluemix:public:iam::::role:Operator - crn:v1:bluemix:public:iam::::role:Editor - crn:v1:bluemix:public:iam::::serviceRole:Reader - crn:v1:bluemix:public:iam::::serviceRole:Writer - attributes: - name: resourceType value: resource-group roles: - crn:v1:bluemix:public:iam::::role:Viewer", "ccoctl ibmcloud create-service-id --credentials-requests-dir=<path_to_credential_requests_directory> \\ 1 --name=<cluster_name> \\ 2 --output-dir=<installation_directory> \\ 3 --resource-group-name=<resource_group_name> 4", "grep resourceGroup <installation_directory>/manifests/cluster-infrastructure-02-config.yml", "./openshift-install create cluster --dir <installation_directory> \\ 1 --log-level=info 2", "INFO Install complete! 
INFO To access the cluster as the system:admin user when using 'oc', run 'export KUBECONFIG=/home/myuser/install_dir/auth/kubeconfig' INFO Access the OpenShift web-console here: https://console-openshift-console.apps.mycluster.example.com INFO Login to the console with user: \"kubeadmin\", and password: \"password\" INFO Time elapsed: 36m22s", "tar xvf <file>", "echo USDPATH", "oc <command>", "C:\\> path", "C:\\> oc <command>", "echo USDPATH", "oc <command>", "export KUBECONFIG=<installation_directory>/auth/kubeconfig 1", "oc whoami", "system:admin" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.18/html/installing_on_ibm_power_virtual_server/installing-ibm-power-vs-private-cluster
E.2.11. /proc/iomem
E.2.11. /proc/iomem This file shows you the current map of the system's memory for each physical device: The first column displays the memory registers used by each of the different types of memory. The second column lists the kind of memory located within those registers and displays which memory registers are used by the kernel within the system RAM or, if the network interface card has multiple Ethernet ports, the memory registers assigned for each port.
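For example, to list only the ranges that are registered as System RAM, you can filter the file with a standard grep command. The exact ranges in the output vary from system to system, and on some systems you must run the command as root to see the real addresses: grep 'System RAM' /proc/iomem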
[ "00000000-0009fbff : System RAM 0009fc00-0009ffff : reserved 000a0000-000bffff : Video RAM area 000c0000-000c7fff : Video ROM 000f0000-000fffff : System ROM 00100000-07ffffff : System RAM 00100000-00291ba8 : Kernel code 00291ba9-002e09cb : Kernel data e0000000-e3ffffff : VIA Technologies, Inc. VT82C597 [Apollo VP3] e4000000-e7ffffff : PCI Bus #01 e4000000-e4003fff : Matrox Graphics, Inc. MGA G200 AGP e5000000-e57fffff : Matrox Graphics, Inc. MGA G200 AGP e8000000-e8ffffff : PCI Bus #01 e8000000-e8ffffff : Matrox Graphics, Inc. MGA G200 AGP ea000000-ea00007f : Digital Equipment Corporation DECchip 21140 [FasterNet] ea000000-ea00007f : tulip ffff0000-ffffffff : reserved" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/deployment_guide/s2-proc-iomem
Chapter 9. Audit logs in Red Hat Developer Hub
Chapter 9. Audit logs in Red Hat Developer Hub Audit logs are a chronological set of records documenting the user activities, system events, and data changes that affect your Red Hat Developer Hub users, administrators, or components. Administrators can view Developer Hub audit logs in the OpenShift Container Platform web console to monitor scaffolder events, changes to the RBAC system, and changes to the Catalog database. Audit logs include the following information: Name of the audited event Actor that triggered the audited event, for example, terminal, port, IP address, or hostname Event metadata, for example, date, time Event status, for example, success , failure Severity levels, for example, info , debug , warn , error You can use the information in the audit log to achieve the following goals: Enhance security Trace activities, including those initiated by automated systems and software templates, back to their source. Know when software templates are executed, as well as the details of application and component installations, updates, configuration changes, and removals. Automate compliance Use streamlined processes to view log data for specified points in time for auditing purposes or continuous compliance maintenance. Debug issues Use access records and activity details to fix issues with software templates or plugins. Note Audit logs are not forwarded to the internal log store by default because this does not provide secure storage. You are responsible for ensuring that the system to which you forward audit logs is compliant with your organizational and governmental regulations, and is properly secured. Additional resources For more information about logging in OpenShift Container Platform, see About Logging For a complete list of fields that a Developer Hub audit log can include, see Section 9.2.1, "Audit log fields" For a list of scaffolder events that a Developer Hub audit log can include, see Section 9.2.2, "Scaffolder events" 9.1. Configuring audit logs for Developer Hub on OpenShift Container Platform Use the OpenShift Container Platform web console to configure the following OpenShift Container Platform logging components to use audit logging for Developer Hub: Logging deployment Configure the logging environment, including both the CPU and memory limits for each logging component. For more information, see Red Hat OpenShift Container Platform - Configuring your Logging deployment . Logging collector Configure the spec.collection stanza in the ClusterLogging custom resource (CR) to use a supported modification to the log collector and collect logs from STDOUT . For more information, see Red Hat OpenShift Container Platform - Configuring the logging collector . Log forwarding Send logs to specific endpoints inside and outside your OpenShift Container Platform cluster by specifying a combination of outputs and pipelines in a ClusterLogForwarder CR. For more information, see Red Hat OpenShift Container Platform - Enabling JSON log forwarding and Red Hat OpenShift Container Platform - Configuring log forwarding . 9.2. Viewing audit logs in Developer Hub Administrators can view, search, filter, and manage the log data from the Red Hat OpenShift Container Platform web console. You can filter audit logs from other log types by using the isAuditLog field. Prerequisites You are logged in as an administrator in the OpenShift Container Platform web console. Procedure From the Developer perspective of the OpenShift Container Platform web console, click the Topology tab. 
From the Topology view, click the pod that you want to view audit log data for. From the pod panel, click the Resources tab. From the Pods section of the Resources tab, click View logs . From the Logs view, enter isAuditLog into the Search field to filter audit logs from other log types. You can use the arrows to browse the logs containing the isAuditLog field. 9.2.1. Audit log fields Developer Hub audit logs can include the following fields: eventName The name of the audited event. actor An object containing information about the actor that triggered the audited event. Contains the following fields: actorId The name/id/ entityRef of the associated user or service. Can be null if an unauthenticated user accesses the endpoints and the default authentication policy is disabled. ip The IP address of the actor (optional). hostname The hostname of the actor (optional). client The user agent of the actor (optional). stage The stage of the event at the time that the audit log was generated, for example, initiation or completion . status The status of the event, for example, succeeded or failed . meta An optional object containing event specific data, for example, taskId . request An optional field that contains information about the HTTP request sent to an endpoint. Contains the following fields: method The HTTP method of the request. query The query fields of the request. params The params fields of the request. body The request body . The secrets provided when creating a task are redacted and appear as * . url The endpoint URL of the request. response An optional field that contains information about the HTTP response sent from an endpoint. Contains the following fields: status The status code of the HTTP response. body The contents of the request body. isAuditLog A flag set to true to differentiate audit logs from other log types. errors A list of errors containing the name , message and potentially the stack field of the error. Only appears when status is failed . 9.2.2. Scaffolder events Developer Hub audit logs can include the following scaffolder events: ScaffolderParameterSchemaFetch Tracks GET requests to the /v2/templates/:namespace/:kind/:name/parameter-schema endpoint which return template parameter schemas ScaffolderInstalledActionsFetch Tracks GET requests to the /v2/actions endpoint which grabs the list of installed actions ScaffolderTaskCreation Tracks POST requests to the /v2/tasks endpoint which creates tasks that the scaffolder executes ScaffolderTaskListFetch Tracks GET requests to the /v2/tasks endpoint which fetches details of all tasks in the scaffolder. ScaffolderTaskFetch Tracks GET requests to the /v2/tasks/:taskId endpoint which fetches details of a specified task :taskId ScaffolderTaskCancellation Tracks POST requests to the /v2/tasks/:taskId/cancel endpoint which cancels a running task ScaffolderTaskStream Tracks GET requests to the /v2/tasks/:taskId/eventstream endpoint which returns an event stream of the task logs of task :taskId ScaffolderTaskEventFetch Tracks GET requests to the /v2/tasks/:taskId/events endpoint which returns a snapshot of the task logs of task :taskId ScaffolderTaskDryRun Tracks POST requests to the /v2/dry-run endpoint which creates a dry-run task. All audit logs for events associated with dry runs have the meta.isDryLog flag set to true . 
ScaffolderStaleTaskCancellation Tracks automated cancellation of stale tasks ScaffolderTaskExecution Tracks the initiation and completion of a real scaffolder task execution (will not occur during dry runs) ScaffolderTaskStepExecution Tracks initiation and completion of a scaffolder task step execution ScaffolderTaskStepSkip Tracks steps skipped due to if conditionals not being met ScaffolderTaskStepIteration Tracks the step execution of each iteration of a task step that contains the each field. 9.2.3. Catalog events Developer Hub audit logs can include the following catalog events: CatalogEntityAncestryFetch Tracks GET requests to the /entities/by-name/:kind/:namespace/:name/ancestry endpoint, which returns the ancestry of an entity CatalogEntityBatchFetch Tracks POST requests to the /entities/by-refs endpoint, which returns a batch of entities CatalogEntityDeletion Tracks DELETE requests to the /entities/by-uid/:uid endpoint, which deletes an entity Note If the parent location of the deleted entity is still present in the catalog, then the entity is restored in the catalog during the processing cycle. CatalogEntityFacetFetch Tracks GET requests to the /entity-facets endpoint, which returns the facets of an entity CatalogEntityFetch Tracks GET requests to the /entities endpoint, which returns a list of entities CatalogEntityFetchByName Tracks GET requests to the /entities/by-name/:kind/:namespace/:name endpoint, which returns an entity matching the specified entity reference, for example, <kind>:<namespace>/<name> CatalogEntityFetchByUid Tracks GET requests to the /entities/by-uid/:uid endpoint, which returns an entity matching the unique ID of the specified entity CatalogEntityRefresh Tracks POST requests to the /entities/refresh endpoint, which schedules the specified entity to be refreshed CatalogEntityValidate Tracks POST requests to the /entities/validate endpoint, which validates the specified entity CatalogLocationCreation Tracks POST requests to the /locations endpoint, which creates a location Note A location is a marker that references other places to look for catalog data. CatalogLocationAnalyze Tracks POST requests to the /locations/analyze endpoint, which analyzes the specified location CatalogLocationDeletion Tracks DELETE requests to the /locations/:id endpoint, which deletes a location and all child entities associated with it CatalogLocationFetch Tracks GET requests to the /locations endpoint, which returns a list of locations CatalogLocationFetchByEntityRef Tracks GET requests to the /locations/by-entity endpoint, which returns a list of locations associated with the specified entity reference CatalogLocationFetchById Tracks GET requests to the /locations/:id endpoint, which returns a location matching the specified location ID QueriedCatalogEntityFetch Tracks GET requests to the /entities/by-query endpoint, which returns a list of entities matching the specified query
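Note If you prefer to work from a terminal instead of the web console procedure described above, you can apply the same isAuditLog filter to the pod logs with standard oc and grep commands. The pod and namespace names in the following example are placeholders, and the exact JSON formatting of each log line can differ between versions: USD oc logs <developer_hub_pod_name> -n <your_namespace> | grep isAuditLog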
null
https://docs.redhat.com/en/documentation/red_hat_developer_hub/1.2/html/getting_started_with_red_hat_developer_hub/assembly-audit-log
Chapter 3. NFV Hardware
Chapter 3. NFV Hardware See Director Installation and Usage for guidance on hardware selection for OpenStack nodes. For a list of tested NICs for network functions virtualization (NFV), see Network Adapter Support . Customer Portal login required.
null
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/network_functions_virtualization_product_guide/ch-nfv_hardware
Chapter 4. Managing high availability services with Pacemaker
Chapter 4. Managing high availability services with Pacemaker The Pacemaker service manages core container and active-passive services, such as Galera, RabbitMQ, Redis, and HAProxy. You use Pacemaker to view and manage general information about the managed services, virtual IP addresses, power management, and fencing. For more information about Pacemaker in Red Hat Enterprise Linux, see Configuring and Managing High Availability Clusters in the Red Hat Enterprise Linux documentation. 4.1. Resource bundles and containers Pacemaker manages Red Hat OpenStack Platform (RHOSP) services as Bundle Set resources , or bundles . Most of these services are active-active services that start in the same way and always run on each Controller node. Pacemaker manages the following resource types: Bundle A bundle resource configures and replicates the same container on all Controller nodes, maps the necessary storage paths to the container directories, and sets specific attributes related to the resource itself. Container A container can run different kinds of resources, from simple systemd services like HAProxy to complex services like Galera, which requires specific resource agents that control and set the state of the service on the different nodes. Important You cannot use podman or systemctl to manage bundles or containers. You can use the commands to check the status of the services, but you must use Pacemaker to perform actions on these services. Podman containers that Pacemaker controls have a RestartPolicy set to no by Podman. This is to ensure that Pacemaker, and not Podman, controls the container start and stop actions. Simple Bundle Set resources (simple bundles) A simple Bundle Set resource, or simple bundle , is a set of containers that each include the same Pacemaker services that you want to deploy across the Controller nodes. The following example shows a list of simple bundles from the output of the pcs status command: For each bundle, you can see the following details: The name that Pacemaker assigns to the service. The reference to the container that is associated with the bundle. The list and status of replicas that are running on the different Controller nodes. The following example shows the settings for the haproxy-bundle simple bundle: The example shows the following information about the containers in the bundle: image : Image used by the container, which refers to the local registry of the undercloud. network : Container network type, which is "host" in the example. options : Specific options for the container. replicas : Indicates how many copies of the container must run in the cluster. Each bundle includes three containers, one for each Controller node. run-command : System command used to spawn the container. Storage Mapping : Mapping of the local path on each host to the container. To check the haproxy configuration from the host, open the /var/lib/config-data/puppet-generated/haproxy/etc/haproxy/haproxy.cfg file instead of the /etc/haproxy/haproxy.cfg file. Note Although HAProxy provides high availability services by load balancing traffic to selected services, you configure HAProxy as a highly available service by managing it as a Pacemaker bundle service. Complex Bundle Set resources (complex bundles) Complex Bundle Set resources, or complex bundles , are Pacemaker services that specify a resource configuration in addition to the basic container configuration that is included in simple bundles. 
This configuration is needed to manage Multi-State resources, which are services that can have different states depending on the Controller node they run on. This example shows a list of complex bundles from the output of the pcs status command: This output shows the following information about each complex bundle: RabbitMQ: All three Controller nodes run a standalone instance of the service, similar to a simple bundle. Galera: All three Controller nodes are running as Galera masters under the same constraints. Redis: The overcloud-controller-0 container is running as the master, while the other two Controller nodes are running as slaves. Each container type might run under different constraints. The following example shows the settings for the galera-bundle complex bundle: This output shows that, unlike in a simple bundle, the galera-bundle resource includes explicit resource configuration that determines all aspects of the multi-state resource. Note Although a service can run on multiple Controller nodes at the same time, the Controller node itself might not be listening at the IP address that is required to reach those services. For information about how to check the IP address of a service, see Section 4.4, "Viewing virtual IP addresses" . 4.2. Viewing general Pacemaker information To view general Pacemaker information, use the pcs status command. Procedure Log in to any Controller node as the heat-admin user. Run the pcs status command: Example output: The main sections of the output show the following information about the cluster: Cluster name : Name of the cluster. [NUM] nodes configured : Number of nodes that are configured for the cluster. [NUM] resources configured : Number of resources that are configured for the cluster. Online : Names of the Controller nodes that are currently online. GuestOnline : Names of the guest nodes that are currently online. Each guest node consists of a complex Bundle Set resource. For more information about bundle sets, see Section 4.1, "Resource bundles and containers" . 4.3. Viewing bundle status You can check the status of a bundle from an undercloud node or log in to one of the Controller nodes to check the bundle status directly. Check bundle status from an undercloud node Run the following command: Example output: The output shows that the haproxy process is running inside the container. Check bundle status from a Controller node Log in to a Controller node and run the following command: Example output: 4.4. Viewing virtual IP addresses Each IPaddr2 resource sets a virtual IP address that clients use to request access to a service. If the Controller node with that IP address fails, the IPaddr2 resource reassigns the IP address to a different Controller node. Show all virtual IP addresses Run the pcs resource show command with the --full option to display all resources that use the VirtualIP type: The following example output shows each Controller node that is currently set to listen to a particular virtual IP address: Each IP address is initially attached to a specific Controller node. For example, 192.168.1.150 is started on overcloud-controller-0 . However, if that Controller node fails, the IP address is reassigned to other Controller nodes in the cluster. The following table describes the IP addresses in the example output and shows the original allocation of each IP address. Table 4.1. 
IP address description and allocation source IP Address Description Allocated From 192.168.1.150 Public IP address ExternalAllocationPools attribute in the network-environment.yaml file 10.200.0.6 Controller virtual IP address Part of the dhcp_start and dhcp_end range set to 10.200.0.5-10.200.0.24 in the undercloud.conf file 172.16.0.10 Provides access to OpenStack API services on a Controller node InternalApiAllocationPools in the network-environment.yaml file 172.18.0.10 Storage virtual IP address that provides access to the Glance API and to Swift Proxy services StorageAllocationPools attribute in the network-environment.yaml file 172.16.0.11 Provides access to Redis service on a Controller node InternalApiAllocationPools in the network-environment.yaml file 172.19.0.10 Provides access to storage management StorageMgmtAllocationPools in the network-environment.yaml file View a specific IP address Run the pcs resource show command. Example output: View network information for a specific IP address Log in to the Controller node that is assigned to the IP address you want to view. Run the ip addr show command to view network interface information. Example output: Run the netstat command to show all processes that listen to the IP address. Example output: Note Processes that are listening to all local addresses, such as 0.0.0.0 , are also available through 192.168.1.150 . These processes include sshd , mysqld , dhclient , ntpd . View port number assignments Open the /var/lib/config-data/puppet-generated/haproxy/etc/haproxy/haproxy.cfg file to see default port number assignments. The following example shows the port numbers and the services that they listen to: TCP port 6080: nova_novncproxy TCP port 9696: neutron TCP port 8000: heat_cfn TCP port 80: horizon TCP port 8776: cinder In this example, most services that are defined in the haproxy.cfg file listen to the 192.168.1.150 IP address on all three Controller nodes. However, only the controller-0 node is listening externally to the 192.168.1.150 IP address. Therefore, if the controller-0 node fails, HAProxy only needs to re-assign 192.168.1.150 to another Controller node and all other services will already be running on the fallback Controller node. 4.5. Viewing Pacemaker status and power management information The last sections of the pcs status output show information about your power management fencing, such as IPMI, and the status of the Pacemaker service itself: The my-ipmilan-for-controller settings show the type of fencing for each Controller node ( stonith:fence_ipmilan ) and whether or not the IPMI service is stopped or running. The PCSD Status shows that all three Controller nodes are currently online. The Pacemaker service consists of three daemons: corosync , pacemaker , and pcsd . In the example, all three services are active and enabled. 4.6. Troubleshooting failed Pacemaker resources If one of the Pacemaker resources fails, you can view the Failed Actions section of the pcs status output. In the following example, the openstack-cinder-volume service stopped working on controller-0 : In this case, you must enable the systemd service openstack-cinder-volume . In other cases, you might need to locate and fix the problem and then clean up the resources. For more information about troubleshooting resource problems, see Chapter 8, Troubleshooting resource problems .
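For example, after you resolve the underlying problem, clearing the failure history of the resource typically involves a pcs cleanup command similar to the following, run as the root or heat-admin user on a Controller node. See Chapter 8 for the complete troubleshooting procedure: USD sudo pcs resource cleanup openstack-cinder-volume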
[ "Podman container set: haproxy-bundle [192.168.24.1:8787/rhosp-rhel8/openstack-haproxy:pcmklatest] haproxy-bundle-podman-0 (ocf::heartbeat:podman): Started overcloud-controller-0 haproxy-bundle-podman-1 (ocf::heartbeat:podman): Started overcloud-controller-1 haproxy-bundle-podman-2 (ocf::heartbeat:podman): Started overcloud-controller-2", "sudo pcs resource show haproxy-bundle Bundle: haproxy-bundle Podman: image=192.168.24.1:8787/rhosp-rhel8/openstack-haproxy:pcmklatest network=host options=\"--user=root --log-driver=journald -e KOLLA_CONFIG_STRATEGY=COPY_ALWAYS\" replicas=3 run-command=\"/bin/bash /usr/local/bin/kolla_start\" Storage Mapping: options=ro source-dir=/var/lib/kolla/config_files/haproxy.json target-dir=/var/lib/kolla/config_files/config.json (haproxy-cfg-files) options=ro source-dir=/var/lib/config-data/puppet-generated/haproxy/ target-dir=/var/lib/kolla/config_files/src (haproxy-cfg-data) options=ro source-dir=/etc/hosts target-dir=/etc/hosts (haproxy-hosts) options=ro source-dir=/etc/localtime target-dir=/etc/localtime (haproxy-localtime) options=ro source-dir=/etc/pki/ca-trust/extracted target-dir=/etc/pki/ca-trust/extracted (haproxy-pki-extracted) options=ro source-dir=/etc/pki/tls/certs/ca-bundle.crt target-dir=/etc/pki/tls/certs/ca-bundle.crt (haproxy-pki-ca-bundle-crt) options=ro source-dir=/etc/pki/tls/certs/ca-bundle.trust.crt target-dir=/etc/pki/tls/certs/ca-bundle.trust.crt (haproxy-pki-ca-bundle-trust-crt) options=ro source-dir=/etc/pki/tls/cert.pem target-dir=/etc/pki/tls/cert.pem (haproxy-pki-cert) options=rw source-dir=/dev/log target-dir=/dev/log (haproxy-dev-log)", "Podman container set: rabbitmq-bundle [192.168.24.1:8787/rhosp-rhel8/openstack-rabbitmq:pcmklatest] rabbitmq-bundle-0 (ocf::heartbeat:rabbitmq-cluster): Started overcloud-controller-0 rabbitmq-bundle-1 (ocf::heartbeat:rabbitmq-cluster): Started overcloud-controller-1 rabbitmq-bundle-2 (ocf::heartbeat:rabbitmq-cluster): Started overcloud-controller-2 Podman container set: galera-bundle [192.168.24.1:8787/rhosp-rhel8/openstack-mariadb:pcmklatest] galera-bundle-0 (ocf::heartbeat:galera): Master overcloud-controller-0 galera-bundle-1 (ocf::heartbeat:galera): Master overcloud-controller-1 galera-bundle-2 (ocf::heartbeat:galera): Master overcloud-controller-2 Podman container set: redis-bundle [192.168.24.1:8787/rhosp-rhel8/openstack-redis:pcmklatest] redis-bundle-0 (ocf::heartbeat:redis): Master overcloud-controller-0 redis-bundle-1 (ocf::heartbeat:redis): Slave overcloud-controller-1 redis-bundle-2 (ocf::heartbeat:redis): Slave overcloud-controller-2", "[...] 
Bundle: galera-bundle Podman: image=192.168.24.1:8787/rhosp-rhel8/openstack-mariadb:pcmklatest masters=3 network=host options=\"--user=root --log-driver=journald -e KOLLA_CONFIG_STRATEGY=COPY_ALWAYS\" replicas=3 run-command=\"/bin/bash /usr/local/bin/kolla_start\" Network: control-port=3123 Storage Mapping: options=ro source-dir=/var/lib/kolla/config_files/mysql.json target-dir=/var/lib/kolla/config_files/config.json (mysql-cfg-files) options=ro source-dir=/var/lib/config-data/puppet-generated/mysql/ target-dir=/var/lib/kolla/config_files/src (mysql-cfg-data) options=ro source-dir=/etc/hosts target-dir=/etc/hosts (mysql-hosts) options=ro source-dir=/etc/localtime target-dir=/etc/localtime (mysql-localtime) options=rw source-dir=/var/lib/mysql target-dir=/var/lib/mysql (mysql-lib) options=rw source-dir=/var/log/mariadb target-dir=/var/log/mariadb (mysql-log-mariadb) options=rw source-dir=/dev/log target-dir=/dev/log (mysql-dev-log) Resource: galera (class=ocf provider=heartbeat type=galera) Attributes: additional_parameters=--open-files-limit=16384 cluster_host_map=overcloud-controller-0:overcloud-controller-0.internalapi.localdomain;overcloud-controller-1:overcloud-controller-1.internalapi.localdomain;overcloud-controller-2:overcloud-controller-2.internalapi.localdomain enable_creation=true wsrep_cluster_address=gcomm://overcloud-controller-0.internalapi.localdomain,overcloud-controller-1.internalapi.localdomain,overcloud-controller-2.internalapi.localdomain Meta Attrs: container-attribute-target=host master-max=3 ordered=true Operations: demote interval=0s timeout=120 (galera-demote-interval-0s) monitor interval=20 timeout=30 (galera-monitor-interval-20) monitor interval=10 role=Master timeout=30 (galera-monitor-interval-10) monitor interval=30 role=Slave timeout=30 (galera-monitor-interval-30) promote interval=0s on-fail=block timeout=300s (galera-promote-interval-0s) start interval=0s timeout=120 (galera-start-interval-0s) stop interval=0s timeout=120 (galera-stop-interval-0s) [...]", "ssh heat-admin@overcloud-controller-0", "[heat-admin@overcloud-controller-0 ~] USD sudo pcs status", "Cluster name: tripleo_cluster Stack: corosync Current DC: overcloud-controller-1 (version 2.0.1-4.el8-0eb7991564) - partition with quorum Last updated: Thu Feb 8 14:29:21 2018 Last change: Sat Feb 3 11:37:17 2018 by root via cibadmin on overcloud-controller-2 12 nodes configured 37 resources configured Online: [ overcloud-controller-0 overcloud-controller-1 overcloud-controller-2 ] GuestOnline: [ galera-bundle-0@overcloud-controller-0 galera-bundle-1@overcloud-controller-1 galera-bundle-2@overcloud-controller-2 rabbitmq-bundle-0@overcloud-controller-0 rabbitmq-bundle-1@overcloud-controller-1 rabbitmq-bundle-2@overcloud-controller-2 redis-bundle-0@overcloud-controller-0 redis-bundle-1@overcloud-controller-1 redis-bundle-2@overcloud-controller-2 ] Full list of resources: [...]", "sudo podman exec -it haproxy-bundle-podman-0 ps -efww | grep haproxy*", "root 7 1 0 06:08 ? 00:00:00 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -Ws haproxy 11 7 0 06:08 ? 00:00:17 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -Ws", "ps -ef | grep haproxy*", "root 17774 17729 0 06:08 ? 00:00:00 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -Ws 42454 17819 17774 0 06:08 ? 00:00:21 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -Ws root 288508 237714 0 07:04 pts/0 00:00:00 grep --color=auto haproxy* ps -ef | grep -e 17774 -e 17819 root 17774 17729 0 06:08 ? 
00:00:00 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -Ws 42454 17819 17774 0 06:08 ? 00:00:22 /usr/sbin/haproxy -f /etc/haproxy/haproxy.cfg -Ws root 301950 237714 0 07:07 pts/0 00:00:00 grep --color=auto -e 17774 -e 17819", "sudo pcs resource show --full", "ip-10.200.0.6 (ocf::heartbeat:IPaddr2): Started overcloud-controller-1 ip-192.168.1.150 (ocf::heartbeat:IPaddr2): Started overcloud-controller-0 ip-172.16.0.10 (ocf::heartbeat:IPaddr2): Started overcloud-controller-1 ip-172.16.0.11 (ocf::heartbeat:IPaddr2): Started overcloud-controller-0 ip-172.18.0.10 (ocf::heartbeat:IPaddr2): Started overcloud-controller-2 ip-172.19.0.10 (ocf::heartbeat:IPaddr2): Started overcloud-controller-2", "sudo pcs resource show ip-192.168.1.150", "Resource: ip-192.168.1.150 (class=ocf provider=heartbeat type=IPaddr2) Attributes: ip=192.168.1.150 cidr_netmask=32 Operations: start interval=0s timeout=20s (ip-192.168.1.150-start-timeout-20s) stop interval=0s timeout=20s (ip-192.168.1.150-stop-timeout-20s) monitor interval=10s timeout=20s (ip-192.168.1.150-monitor-interval-10s)", "ip addr show vlan100", "9: vlan100: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN link/ether be:ab:aa:37:34:e7 brd ff:ff:ff:ff:ff:ff inet *192.168.1.151/24* brd 192.168.1.255 scope global vlan100 valid_lft forever preferred_lft forever inet *192.168.1.150/32* brd 192.168.1.255 scope global vlan100 valid_lft forever preferred_lft forever", "sudo netstat -tupln | grep \"192.168.1.150.haproxy\"", "tcp 0 0 192.168.1.150:8778 0.0.0.0:* LISTEN 61029/haproxy tcp 0 0 192.168.1.150:8042 0.0.0.0:* LISTEN 61029/haproxy tcp 0 0 192.168.1.150:9292 0.0.0.0:* LISTEN 61029/haproxy tcp 0 0 192.168.1.150:8080 0.0.0.0:* LISTEN 61029/haproxy tcp 0 0 192.168.1.150:80 0.0.0.0:* LISTEN 61029/haproxy tcp 0 0 192.168.1.150:8977 0.0.0.0:* LISTEN 61029/haproxy tcp 0 0 192.168.1.150:6080 0.0.0.0:* LISTEN 61029/haproxy tcp 0 0 192.168.1.150:9696 0.0.0.0:* LISTEN 61029/haproxy tcp 0 0 192.168.1.150:8000 0.0.0.0:* LISTEN 61029/haproxy tcp 0 0 192.168.1.150:8004 0.0.0.0:* LISTEN 61029/haproxy tcp 0 0 192.168.1.150:8774 0.0.0.0:* LISTEN 61029/haproxy tcp 0 0 192.168.1.150:5000 0.0.0.0:* LISTEN 61029/haproxy tcp 0 0 192.168.1.150:8776 0.0.0.0:* LISTEN 61029/haproxy tcp 0 0 192.168.1.150:8041 0.0.0.0:* LISTEN 61029/haproxy", "my-ipmilan-for-controller-0 (stonith:fence_ipmilan): Started my-ipmilan-for-controller-0 my-ipmilan-for-controller-1 (stonith:fence_ipmilan): Started my-ipmilan-for-controller-1 my-ipmilan-for-controller-2 (stonith:fence_ipmilan): Started my-ipmilan-for-controller-2 PCSD Status: overcloud-controller-0: Online overcloud-controller-1: Online overcloud-controller-2: Online Daemon Status: corosync: active/enabled pacemaker: active/enabled openstack-cinder-volume (systemd:openstack-cinder-volume): Started overcloud-controller-0 pcsd: active/enabled", "Failed Actions: * openstack-cinder-volume_monitor_60000 on overcloud-controller-0 'not running' (7): call=74, status=complete, exitreason='none', last-rc-change='Wed Dec 14 08:33:14 2016', queued=0ms, exec=0ms" ]
https://docs.redhat.com/en/documentation/red_hat_openstack_platform/16.0/html/high_availability_deployment_and_usage/assembly_pacemaker
Chapter 3. Ceph Object Gateway and the S3 API
Chapter 3. Ceph Object Gateway and the S3 API As a developer, you can use a RESTful application programming interface (API) that is compatible with the Amazon S3 data access model. You can manage the buckets and objects stored in a Red Hat Ceph Storage cluster through the Ceph Object Gateway. Prerequisites A running Red Hat Ceph Storage cluster. A RESTful client. 3.1. S3 limitations Important The following limitations should be used with caution. There are implications related to your hardware selections, so you should always discuss these requirements with your Red Hat account team. Maximum object size when using Amazon S3: Individual Amazon S3 objects can range in size from a minimum of 0B to a maximum of 5TB. The largest object that can be uploaded in a single PUT is 5GB. For objects larger than 100MB, you should consider using the Multipart Upload capability. Maximum metadata size when using Amazon S3: There is no defined limit on the total size of user metadata that can be applied to an object, but a single HTTP request is limited to 16,000 bytes. The amount of data overhead Red Hat Ceph Storage cluster produces to store S3 objects and metadata: The estimate here is 200-300 bytes plus the length of the object name. Versioned objects consume additional space proportional to the number of versions. Also, transient overhead is produced during multi-part upload and other transactional updates, but these overheads are recovered during garbage collection. Additional Resources See the Red Hat Ceph Storage Developer Guide for details on the unsupported header fields . 3.2. Accessing the Ceph Object Gateway with the S3 API As a developer, you must configure access to the Ceph Object Gateway and the Secure Token Service (STS) before you can start using the Amazon S3 API. Prerequisites A running Red Hat Ceph Storage cluster. A running Ceph Object Gateway. A RESTful client. 3.2.1. S3 authentication Requests to the Ceph Object Gateway can be either authenticated or unauthenticated. Ceph Object Gateway assumes unauthenticated requests are sent by an anonymous user. Ceph Object Gateway supports canned ACLs. For most use cases, clients use existing open source libraries like the Amazon SDK's AmazonS3Client for Java, and Python Boto. With open source libraries you simply pass in the access key and secret key and the library builds the request header and authentication signature for you. However, you can create requests and sign them too. Authenticating a request requires including an access key and a base 64-encoded hash-based Message Authentication Code (HMAC) in the request before it is sent to the Ceph Object Gateway server. Ceph Object Gateway uses an S3-compatible authentication approach. Example In the above example, replace ACCESS_KEY with the value for the access key ID followed by a colon ( : ). Replace HASH_OF_HEADER_AND_SECRET with a hash of a canonicalized header string and the secret corresponding to the access key ID. Generate hash of header string and secret To generate the hash of the header string and secret: Get the value of the header string. Normalize the request header string into canonical form. Generate an HMAC using a SHA-1 hashing algorithm. Encode the hmac result as base-64. Normalize header To normalize the header into canonical form: Get all content- headers. Remove all content- headers except for content-type and content-md5 . Ensure the content- header names are lowercase. Sort the content- headers lexicographically. 
Ensure you have a Date header AND ensure the specified date uses GMT and not an offset. Get all headers beginning with x-amz- . Ensure that the x-amz- headers are all lowercase. Sort the x-amz- headers lexicographically. Combine multiple instances of the same field name into a single field and separate the field values with a comma. Replace white space and line breaks in header values with a single space. Remove white space before and after colons. Append a new line after each header. Merge the headers back into the request header. Replace the HASH_OF_HEADER_AND_SECRET with the base-64 encoded HMAC string. Additional Resources For additional details, consult the Signing and Authenticating REST Requests section of Amazon Simple Storage Service documentation. 3.2.2. S3-server-side encryption The Ceph Object Gateway supports server-side encryption of uploaded objects for the S3 application programming interface (API). Server-side encryption means that the S3 client sends data over HTTP in its unencrypted form, and the Ceph Object Gateway stores that data in the Red Hat Ceph Storage cluster in encrypted form. Note Red Hat does NOT support S3 object encryption of Static Large Object (SLO) or Dynamic Large Object (DLO). Important To use encryption, client requests MUST send requests over an SSL connection. Red Hat does not support S3 encryption from a client unless the Ceph Object Gateway uses SSL. However, for testing purposes, administrators can disable SSL during testing by setting the rgw_crypt_require_ssl configuration setting to false at runtime, using the ceph config set client.rgw command, and then restarting the Ceph Object Gateway instance. In a production environment, it might not be possible to send encrypted requests over SSL. In such a case, send requests using HTTP with server-side encryption. For information about how to configure HTTP with server-side encryption, see the Additional Resources section below. There are two options for the management of encryption keys: Customer-provided Keys When using customer-provided keys, the S3 client passes an encryption key along with each request to read or write encrypted data. It is the customer's responsibility to manage those keys. Customers must remember which key the Ceph Object Gateway used to encrypt each object. Ceph Object Gateway implements the customer-provided key behavior in the S3 API according to the Amazon SSE-C specification. Since the customer handles the key management and the S3 client passes keys to the Ceph Object Gateway, the Ceph Object Gateway requires no special configuration to support this encryption mode. Key Management Service When using a key management service, the secure key management service stores the keys and the Ceph Object Gateway retrieves them on demand to serve requests to encrypt or decrypt data. Ceph Object Gateway implements the key management service behavior in the S3 API according to the Amazon SSE-KMS specification. Important Currently, the only tested key management implementations are HashiCorp Vault, and OpenStack Barbican. However, OpenStack Barbican is a Technology Preview and is not supported for use in production systems. Additional Resources Amazon SSE-C Amazon SSE-KMS Configuring server-side encryption The HashiCorp Vault 3.2.3. S3 access control lists Ceph Object Gateway supports S3-compatible Access Control Lists (ACL) functionality. An ACL is a list of access grants that specify which operations a user can perform on a bucket or on an object. 
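For illustration only, the following is a minimal Boto3 sketch of applying and reading back a canned ACL; the endpoint, credentials, and bucket name (testbucket) are placeholders, not values taken from this guide. The table that follows describes how each grant behaves on a bucket versus an object.

import boto3

# Placeholder Ceph Object Gateway endpoint and user credentials.
s3 = boto3.client(
    "s3",
    endpoint_url="http://rgw.example.com:8080",
    aws_access_key_id="MY_ACCESS_KEY",
    aws_secret_access_key="MY_SECRET_KEY",
)

# Apply a canned ACL to an existing bucket, then read back the resulting grants.
s3.put_bucket_acl(Bucket="testbucket", ACL="public-read")
acl = s3.get_bucket_acl(Bucket="testbucket")
for grant in acl["Grants"]:
    print(grant["Permission"], grant["Grantee"])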
Each grant has a different meaning when applied to a bucket versus applied to an object: Table 3.1. User Operations Permission Bucket Object READ Grantee can list the objects in the bucket. Grantee can read the object. WRITE Grantee can write or delete objects in the bucket. N/A READ_ACP Grantee can read bucket ACL. Grantee can read the object ACL. WRITE_ACP Grantee can write bucket ACL. Grantee can write to the object ACL. FULL_CONTROL Grantee has full permissions for object in the bucket. Grantee can read or write to the object ACL. 3.2.4. Preparing access to the Ceph Object Gateway using S3 You have to follow some pre-requisites on the Ceph Object Gateway node before attempting to access the gateway server. Prerequisites Installation of the Ceph Object Gateway software. Root-level access to the Ceph Object Gateway node. Procedure As root , open port 8080 on the firewall: Add a wildcard to the DNS server that you are using for the gateway as mentioned in the Object Gateway Configuration and Administration Guide . You can also set up the gateway node for local DNS caching. To do so, execute the following steps: As root , install and setup dnsmasq : Replace IP_OF_GATEWAY_NODE and FQDN_OF_GATEWAY_NODE with the IP address and FQDN of the gateway node. As root , stop NetworkManager: As root , set the gateway server's IP as the nameserver: Replace IP_OF_GATEWAY_NODE and FQDN_OF_GATEWAY_NODE with the IP address and FQDN of the gateway node. Verify subdomain requests: Replace FQDN_OF_GATEWAY_NODE with the FQDN of the gateway node. Warning Setting up the gateway server for local DNS caching is for testing purposes only. You won't be able to access the outside network after doing this. It is strongly recommended to use a proper DNS server for the Red Hat Ceph Storage cluster and gateway node. Create the radosgw user for S3 access carefully as mentioned in the Object Gateway Configuration and Administration Guide and copy the generated access_key and secret_key . You will need these keys for S3 access and subsequent bucket management tasks. 3.2.5. Accessing the Ceph Object Gateway using Ruby AWS S3 You can use Ruby programming language along with aws-s3 gem for S3 access. Execute the steps mentioned below on the node used for accessing the Ceph Object Gateway server with Ruby AWS::S3 . Prerequisites User-level access to Ceph Object Gateway. Root-level access to the node accessing the Ceph Object Gateway. Internet access. Procedure Install the ruby package: Note The above command will install ruby and its essential dependencies like rubygems and ruby-libs . If somehow the command does not install all the dependencies, install them separately. Install the aws-s3 Ruby package: Create a project directory: Create the connection file: Paste the following contents into the conn.rb file: Syntax Replace FQDN_OF_GATEWAY_NODE with the FQDN of the Ceph Object Gateway node. Replace MY_ACCESS_KEY and MY_SECRET_KEY with the access_key and secret_key that were generated when you created the radosgw user for S3 access as mentioned in the Red Hat Ceph Storage Object Gateway Configuration and Administration Guide . Example Save the file and exit the editor. Make the file executable: Run the file: If you have provided the values correctly in the file, the output of the command will be 0 . Create a new file for creating a bucket: Paste the following contents into the file: Save the file and exit the editor. 
Make the file executable: Run the file: If the output of the command is true it would mean that bucket my-new-bucket1 was created successfully. Create a new file for listing owned buckets: Paste the following content into the file: Save the file and exit the editor. Make the file executable: Run the file: The output should look something like this: Create a new file for creating an object: Paste the following contents into the file: Save the file and exit the editor. Make the file executable: Run the file: This will create a file hello.txt with the string Hello World! . Create a new file for listing a bucket's content: Paste the following content into the file: Save the file and exit the editor. Make the file executable. Run the file: The output will look something like this: Create a new file for deleting an empty bucket: Paste the following contents into the file: Save the file and exit the editor. Make the file executable: Run the file: If the bucket is successfully deleted, the command will return 0 as output. Note Edit the create_bucket.rb file to create empty buckets, for example, my-new-bucket4 , my-new-bucket5 . , edit the above-mentioned del_empty_bucket.rb file accordingly before trying to delete empty buckets. Create a new file for deleting non-empty buckets: Paste the following contents into the file: Save the file and exit the editor. Make the file executable: Run the file: If the bucket is successfully deleted, the command will return 0 as output. Create a new file for deleting an object: Paste the following contents into the file: Save the file and exit the editor. Make the file executable: Run the file: This will delete the object hello.txt . 3.2.6. Accessing the Ceph Object Gateway using Ruby AWS SDK You can use the Ruby programming language along with aws-sdk gem for S3 access. Execute the steps mentioned below on the node used for accessing the Ceph Object Gateway server with Ruby AWS::SDK . Prerequisites User-level access to Ceph Object Gateway. Root-level access to the node accessing the Ceph Object Gateway. Internet access. Procedure Install the ruby package: Note The above command will install ruby and its essential dependencies like rubygems and ruby-libs . If somehow the command does not install all the dependencies, install them separately. Install the aws-sdk Ruby package: Create a project directory: Create the connection file: Paste the following contents into the conn.rb file: Syntax Replace FQDN_OF_GATEWAY_NODE with the FQDN of the Ceph Object Gateway node. Replace MY_ACCESS_KEY and MY_SECRET_KEY with the access_key and secret_key that were generated when you created the radosgw user for S3 access as mentioned in the Red Hat Ceph Storage Object Gateway Configuration and Administration Guide . Example Save the file and exit the editor. Make the file executable: Run the file: If you have provided the values correctly in the file, the output of the command will be 0 . Create a new file for creating a bucket: Paste the following contents into the file: Syntax Save the file and exit the editor. Make the file executable: Run the file: If the output of the command is true , this means that bucket my-new-bucket2 was created successfully. Create a new file for listing owned buckets: Paste the following content into the file: Save the file and exit the editor. Make the file executable: Run the file: The output should look something like this: Create a new file for creating an object: Paste the following contents into the file: Save the file and exit the editor. 
Make the file executable: Run the file: This will create a file hello.txt with the string Hello World! . Create a new file for listing a bucket's content: Paste the following content into the file: Save the file and exit the editor. Make the file executable. Run the file: The output will look something like this: Create a new file for deleting an empty bucket: Paste the following contents into the file: Save the file and exit the editor. Make the file executable: Run the file: If the bucket is successfully deleted, the command will return 0 as output. Note Edit the create_bucket.rb file to create empty buckets, for example, my-new-bucket6 , my-new-bucket7 . , edit the above-mentioned del_empty_bucket.rb file accordingly before trying to delete empty buckets. Create a new file for deleting a non-empty bucket: Paste the following contents into the file: Save the file and exit the editor. Make the file executable: Run the file: If the bucket is successfully deleted, the command will return 0 as output. Create a new file for deleting an object: Paste the following contents into the file: Save the file and exit the editor. Make the file executable: Run the file: This will delete the object hello.txt . 3.2.7. Accessing the Ceph Object Gateway using PHP You can use PHP scripts for S3 access. This procedure provides some example PHP scripts to do various tasks, such as deleting a bucket or an object. Important The examples given below are tested against php v5.4.16 and aws-sdk v2.8.24 . Prerequisites Root-level access to a development workstation. Internet access. Procedure Install the php package: Download the zip archive of aws-sdk for PHP and extract it. Create a project directory: Copy the extracted aws directory to the project directory. For example: Create the connection file: Paste the following contents in the conn.php file: Syntax Replace FQDN_OF_GATEWAY_NODE with the FQDN of the gateway node. Replace MY_ACCESS_KEY and MY_SECRET_KEY with the access_key and secret_key that were generated when creating the radosgw user for S3 access as mentioned in the Red Hat Ceph Storage Object Gateway Configuration and Administration Guide . Replace PATH_TO_AWS with the absolute path to the extracted aws directory that you copied to the php project directory. Save the file and exit the editor. Run the file: If you have provided the values correctly in the file, the output of the command will be 0 . Create a new file for creating a bucket: Paste the following contents into the new file: Syntax Save the file and exit the editor. Run the file: Create a new file for listing owned buckets: Paste the following content into the file: Syntax Save the file and exit the editor. Run the file: The output should look similar to this: Create an object by first creating a source file named hello.txt : Create a new php file: Paste the following contents into the file: Syntax Save the file and exit the editor. Run the file: This will create the object hello.txt in bucket my-new-bucket3 . Create a new file for listing a bucket's content: Paste the following content into the file: Syntax Save the file and exit the editor. Run the file: The output will look similar to this: Create a new file for deleting an empty bucket: Paste the following contents into the file: Syntax Save the file and exit the editor. Run the file: If the bucket is successfully deleted, the command will return 0 as output. Note Edit the create_bucket.php file to create empty buckets, for example, my-new-bucket4 , my-new-bucket5 . 
Then edit the above-mentioned del_empty_bucket.php file accordingly before trying to delete empty buckets. Important Deleting a non-empty bucket is currently not supported in PHP 2 and newer versions of aws-sdk . Create a new file for deleting an object: Paste the following contents into the file: Syntax Save the file and exit the editor. Run the file: This will delete the object hello.txt . 3.2.8. Secure Token Service The Amazon Web Services' Secure Token Service (STS) returns a set of temporary security credentials for authenticating users. Red Hat Ceph Storage Object Gateway supports a subset of Amazon STS application programming interfaces (APIs) for identity and access management (IAM). Users first authenticate against STS and receive a short-lived S3 access key and secret key that can be used in subsequent requests. Red Hat Ceph Storage can authenticate S3 users by integrating with a Single Sign-On (SSO) solution, configured through an OIDC provider. This feature enables Object Storage users to authenticate against an enterprise identity provider rather than the local Ceph Object Gateway database. For instance, if the SSO is connected to an enterprise IDP in the backend, Object Storage users can use their enterprise credentials to authenticate and get access to the Ceph Object Gateway S3 endpoint. By using STS along with the IAM role policy feature, you can create finely tuned authorization policies to control access to your data. This enables you to implement either a Role-Based Access Control (RBAC) or Attribute-Based Access Control (ABAC) authorization model for your object storage data, giving you complete control over who can access the data. Simplified workflow to access S3 resources with STS The user wants to access S3 resources in Red Hat Ceph Storage. The user needs to authenticate against the SSO provider. The SSO provider is federated with an IDP and checks whether the user credentials are valid; if they are, the user is authenticated and the SSO provides a token to the user. Using the token provided by the SSO, the user accesses the Ceph Object Gateway STS endpoint, asking to assume an IAM role that provides the user with access to S3 resources. The Red Hat Ceph Storage gateway receives the user token and asks the SSO to validate the token. Once the SSO validates the token, the user is allowed to assume the role. Through STS, the user is provided with temporary access and secret keys that give the user access to the S3 resources. Depending on the policies attached to the IAM role the user has assumed, the user can access a set of S3 resources. For example, read access to bucket A and write access to bucket B. Additional Resources Amazon Web Services Secure Token Service welcome page . See the Configuring and using STS Lite with Keystone section of the Red Hat Ceph Storage Developer Guide for details on STS Lite and Keystone. See the Working around the limitations of using STS Lite with Keystone section of the Red Hat Ceph Storage Developer Guide for details on the limitations of STS Lite and Keystone. 3.2.8.1. The Secure Token Service application programming interfaces The Ceph Object Gateway implements the following Secure Token Service (STS) application programming interfaces (APIs): AssumeRole This API returns a set of temporary credentials for cross-account access. These temporary credentials allow for both permission policies attached to the role and policies attached with the AssumeRole API. The RoleArn and the RoleSessionName request parameters are required, but the other request parameters are optional.
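As a hedged illustration of how these parameters are used (the endpoint, role ARN, and credentials below are placeholders, not values from this guide), an AssumeRole call made with the Boto3 STS client maps directly onto the request parameters described next:

import boto3

# Placeholder Ceph Object Gateway endpoint and the assuming user's credentials.
sts = boto3.client(
    "sts",
    endpoint_url="http://rgw.example.com:8080",
    aws_access_key_id="MY_ACCESS_KEY",
    aws_secret_access_key="MY_SECRET_KEY",
)

# RoleArn and RoleSessionName are required; DurationSeconds is optional (default 3600).
response = sts.assume_role(
    RoleArn="arn:aws:iam:::role/S3Access",   # hypothetical role ARN
    RoleSessionName="example-session",
    DurationSeconds=3600,
)

# The temporary credentials returned can be used for subsequent S3 requests.
credentials = response["Credentials"]
print(credentials["AccessKeyId"], credentials["SessionToken"])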
RoleArn Description The role to assume for the Amazon Resource Name (ARN) with a length of 20 to 2048 characters. Type String Required Yes RoleSessionName Description Identifying the role session name to assume. The role session name can uniquely identify a session when different principals or different reasons assume a role. This parameter's value has a length of 2 to 64 characters. The = , , , . , @ , and - characters are allowed, but no spaces allowed. Type String Required Yes Policy Description An identity and access management policy (IAM) in a JSON format for use in an inline session. This parameter's value has a length of 1 to 2048 characters. Type String Required No DurationSeconds Description The duration of the session in seconds, with a minimum value of 900 seconds to a maximum value of 43200 seconds. The default value is 3600 seconds. Type Integer Required No ExternalId Description When assuming a role for another account, provide the unique external identifier if available. This parameter's value has a length of 2 to 1224 characters. Type String Required No SerialNumber Description A user's identification number from their associated multi-factor authentication (MFA) device. The parameter's value can be the serial number of a hardware device or a virtual device, with a length of 9 to 256 characters. Type String Required No TokenCode Description The value generated from the multi-factor authentication (MFA) device, if the trust policy requires MFA. If an MFA device is required, and if this parameter's value is empty or expired, then AssumeRole call returns an "access denied" error message. This parameter's value has a fixed length of 6 characters. Type String Required No AssumeRoleWithWebIdentity This API returns a set of temporary credentials for users who have been authenticated by an application, such as OpenID Connect or OAuth 2.0 Identity Provider. The RoleArn and the RoleSessionName request parameters are required, but the other request parameters are optional. RoleArn Description The role to assume for the Amazon Resource Name (ARN) with a length of 20 to 2048 characters. Type String Required Yes RoleSessionName Description Identifying the role session name to assume. The role session name can uniquely identify a session when different principals or different reasons assume a role. This parameter's value has a length of 2 to 64 characters. The = , , , . , @ , and - characters are allowed, but no spaces are allowed. Type String Required Yes Policy Description An identity and access management policy (IAM) in a JSON format for use in an inline session. This parameter's value has a length of 1 to 2048 characters. Type String Required No DurationSeconds Description The duration of the session in seconds, with a minimum value of 900 seconds to a maximum value of 43200 seconds. The default value is 3600 seconds. Type Integer Required No ProviderId Description The fully qualified host component of the domain name from the identity provider. This parameter's value is only valid for OAuth 2.0 access tokens, with a length of 4 to 2048 characters. Type String Required No WebIdentityToken Description The OpenID Connect identity token or OAuth 2.0 access token provided from an identity provider. This parameter's value has a length of 4 to 2048 characters. Type String Required No Additional Resources See the Examples using the Secure Token Service APIs section of the Red Hat Ceph Storage Developer Guide for more details. 
Amazon Web Services Security Token Service, the AssumeRole action. Amazon Web Services Security Token Service, the AssumeRoleWithWebIdentity action. 3.2.8.2. Configuring the Secure Token Service Configure the Secure Token Service (STS) for use with the Ceph Object Gateway by setting the rgw_sts_key and rgw_s3_auth_use_sts options. Note The S3 and STS APIs co-exist in the same namespace, and both can be accessed from the same endpoint in the Ceph Object Gateway. Prerequisites A running Red Hat Ceph Storage cluster. A running Ceph Object Gateway. Root-level access to a Ceph Manager node. Procedure Set the following configuration options for the Ceph Object Gateway client: Syntax The rgw_sts_key is the STS key for encrypting or decrypting the session token and is exactly 16 hex characters. Important The STS key needs to be alphanumeric. Example Restart the Ceph Object Gateway for the added key to take effect. Note Use the output from the ceph orch ps command, under the NAME column, to get the SERVICE_TYPE . ID information. To restart the Ceph Object Gateway on an individual node in the storage cluster: Syntax Example To restart the Ceph Object Gateways on all nodes in the storage cluster: Syntax Example Additional Resources See the Secure Token Service application programming interfaces section in the Red Hat Ceph Storage Developer Guide for more details on the STS APIs. See The basics of Ceph configuration chapter in the Red Hat Ceph Storage Configuration Guide for more details on using the Ceph configuration database. 3.2.8.3. Creating a user for an OpenID Connect provider To establish trust between the Ceph Object Gateway and the OpenID Connect Provider, create a user entity and a role trust policy. Prerequisites User-level access to the Ceph Object Gateway node. Secure Token Service configured. Procedure Create a new Ceph user: Syntax Example Configure the Ceph user capabilities: Syntax Example Add a condition to the role trust policy using the Secure Token Service (STS) API: Syntax Important The app_id in the syntax example above must match the AUD_FIELD field of the incoming token. Additional Resources See the Obtaining the Root CA Thumbprint for an OpenID Connect Identity Provider article on Amazon's website. See the Secure Token Service application programming interfaces section in the Red Hat Ceph Storage Developer Guide for more details on the STS APIs. See the Examples using the Secure Token Service APIs section of the Red Hat Ceph Storage Developer Guide for more details. 3.2.8.4. Obtaining a thumbprint of an OpenID Connect provider Get the OpenID Connect provider's (IDP) configuration document. Any SSO that follows the OIDC protocol standards is expected to work with the Ceph Object Gateway. Red Hat has tested with the following SSO providers: Red Hat Single Sign-On Keycloak Prerequisites Installation of the openssl and curl packages. Procedure Get the configuration document from the IDP's URL: Syntax Example Get the IDP certificate: Syntax Example Note The x5c cert can be available on the /certs path or in the /jwks path depending on the SSO provider. Copy the result of the "x5c" response from the command and paste it into the certificate.crt file. Include --BEGIN CERTIFICATE-- at the beginning and --END CERTIFICATE-- at the end. Example Get the certificate thumbprint: Syntax Example Remove all the colons from the SHA1 fingerprint and use this as the input for creating the IDP entity in the IAM request.
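As a convenience, the thumbprint can also be computed with a short Python sketch that uses only the standard library; the IDP host below is a placeholder, and this is not the procedure's own openssl example. Note that this fetches the certificate presented by the IDP over TLS; if your provider publishes its signing certificate only through the x5c field, compute the digest over that certificate instead, as described above.

import hashlib
import ssl

# Placeholder OpenID Connect provider host; substitute your IDP.
IDP_HOST = "idp.example.com"

# Fetch the certificate presented by the IDP, convert it from PEM to DER,
# and take the SHA-1 digest of the DER bytes.
pem_cert = ssl.get_server_certificate((IDP_HOST, 443))
der_cert = ssl.PEM_cert_to_DER_cert(pem_cert)
thumbprint = hashlib.sha1(der_cert).hexdigest()

# hexdigest() contains no colons, so the value can be used directly as the
# thumbprint input when creating the IDP entity in the IAM request.
print(thumbprint)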
Additional Resources See the Obtaining the Root CA Thumbprint for an OpenID Connect Identity Provider article on Amazon's website. See the Secure Token Service application programming interfaces section in the Red Hat Ceph Storage Developer Guide for more details on the STS APIs. See the Examples using the Secure Token Service APIs section of the Red Hat Ceph Storage Developer Guide for more details. 3.2.8.5. Registering the OpenID Connect provider Register the OpenID Connect provider's (IDP) configuration document. Prerequisites Installation of the openssl and curl packages. Secure Token Service configured. User created for an OIDC provider. Thumbprint of an OIDC obtained. Procedure Extract the URL from the token. Example Register the OIDC provider with the Ceph Object Gateway. Example Verify that the OIDC provider is added to the Ceph Object Gateway. Example 3.2.8.6. Creating IAM roles and policies Create IAM roles and policies. Prerequisites Installation of the openssl and curl packages. Secure Token Service configured. User created for an OIDC provider. Thumbprint of an OIDC obtained. The OIDC provider in Ceph Object Gateway registered. Procedure Retrieve and validate the JWT token. Example Verify the token. Example In this example, the jq filter is used on the sub field in the token, which is set to ceph. Create a JSON file with role properties. Set Statement to Allow and the Action as AssumeRoleWithWebIdentity . Allow access to any user with the JWT token that matches the condition with sub:ceph . Example Create a Ceph Object Gateway role using the JSON file. Example . 3.2.8.7. Accessing S3 resources Verify the AssumeRole with STS credentials to access S3 resources. Prerequisites Installation of the openssl and curl packages. Secure Token Service configured. User created for an OIDC provider. Thumbprint of an OIDC obtained. The OIDC provider in Ceph Object Gateway registered. IAM roles and policies created. Procedure The following is an example of using AssumeRole with STS to get a temporary access key and secret key for accessing S3 resources. Run the script. Example 3.2.9. Configuring and using STS Lite with Keystone (Technology Preview) The Amazon Secure Token Service (STS) and S3 APIs co-exist in the same namespace. The STS options can be configured in conjunction with the Keystone options. Note Both S3 and STS APIs can be accessed using the same endpoint in Ceph Object Gateway. Prerequisites Red Hat Ceph Storage 5.0 or higher. A running Ceph Object Gateway. Installation of the Boto Python module, version 3 or higher. Root-level access to a Ceph Manager node. User-level access to an OpenStack node. Procedure Set the following configuration options for the Ceph Object Gateway client: Syntax The rgw_sts_key is the STS key for encrypting or decrypting the session token and is exactly 16 hex characters. Important The STS key needs to be alphanumeric. Example Generate the EC2 credentials on the OpenStack node: Example Use the generated credentials to get back a set of temporary security credentials using the GetSessionToken API: Example The temporary credentials obtained can be used for making S3 calls: Example Create a new S3Access role and configure a policy. Assign a user with administrative CAPS: Syntax Example Create the S3Access role: Syntax Example Attach a permission policy to the S3Access role: Syntax Example Now another user can assume the role of the gwadmin user. For example, the gwuser user can assume the permissions of the gwadmin user. Make a note of the assuming user's access_key and secret_key values.
Example Use the AssumeRole API call, providing the access_key and secret_key values from the assuming user: Example Important The AssumeRole API requires the S3Access role. Additional Resources See the Test S3 Access section in the Red Hat Ceph Storage Object Gateway Guide for more information on installing the Boto Python module. See the Create a User section in the Red Hat Ceph Storage Object Gateway Guide for more information. 3.2.10. Working around the limitations of using STS Lite with Keystone (Technology Preview) A limitation with Keystone is that it does not supports Secure Token Service (STS) requests. Another limitation is the payload hash is not included with the request. To work around these two limitations the Boto authentication code must be modified. Prerequisites A running Red Hat Ceph Storage cluster, version 5.0 or higher. A running Ceph Object Gateway. Installation of Boto Python module, version 3 or higher. Procedure Open and edit Boto's auth.py file. Add the following four lines to the code block: class SigV4Auth(BaseSigner): """ Sign a request with Signature V4. """ REQUIRES_REGION = True def __init__(self, credentials, service_name, region_name): self.credentials = credentials # We initialize these value here so the unit tests can have # valid values. But these will get overriden in ``add_auth`` # later for real requests. self._region_name = region_name if service_name == 'sts': 1 self._service_name = 's3' 2 else: 3 self._service_name = service_name 4 Add the following two lines to the code block: def _modify_request_before_signing(self, request): if 'Authorization' in request.headers: del request.headers['Authorization'] self._set_necessary_date_headers(request) if self.credentials.token: if 'X-Amz-Security-Token' in request.headers: del request.headers['X-Amz-Security-Token'] request.headers['X-Amz-Security-Token'] = self.credentials.token if not request.context.get('payload_signing_enabled', True): if 'X-Amz-Content-SHA256' in request.headers: del request.headers['X-Amz-Content-SHA256'] request.headers['X-Amz-Content-SHA256'] = UNSIGNED_PAYLOAD 1 else: 2 request.headers['X-Amz-Content-SHA256'] = self.payload(request) Additional Resources See the Test S3 Access section in the Red Hat Ceph Storage Object Gateway Guide for more information on installing the Boto Python module. 3.3. S3 bucket operations As a developer, you can perform bucket operations with the Amazon S3 application programming interface (API) through the Ceph Object Gateway. The following table list the Amazon S3 functional operations for buckets, along with the function's support status. Table 3.2. Bucket operations Feature Status Notes List Buckets Supported Create a Bucket Supported Different set of canned ACLs. Put Bucket Website Supported Get Bucket Website Supported Delete Bucket Website Supported Put Bucket replication Supported Get Bucket replication Supported Delete Bucket replication Supported Bucket Lifecycle Partially Supported Expiration , NoncurrentVersionExpiration and AbortIncompleteMultipartUpload supported. Put Bucket Lifecycle Partially Supported Expiration , NoncurrentVersionExpiration and AbortIncompleteMultipartUpload supported. 
Delete Bucket Lifecycle Supported Get Bucket Objects Supported Bucket Location Supported Get Bucket Version Supported Put Bucket Version Supported Delete Bucket Supported Get Bucket ACLs Supported Different set of canned ACLs Put Bucket ACLs Supported Different set of canned ACLs Get Bucket cors Supported Put Bucket cors Supported Delete Bucket cors Supported List Bucket Object Versions Supported Head Bucket Supported List Bucket Multipart Uploads Supported Bucket Policies Partially Supported Get a Bucket Request Payment Supported Put a Bucket Request Payment Supported Multi-tenant Bucket Operations Supported GET PublicAccessBlock Supported PUT PublicAccessBlock Supported Delete PublicAccessBlock Supported Prerequisites A running Red Hat Ceph Storage cluster. A RESTful client. 3.3.1. S3 create bucket notifications Create bucket notifications at the bucket level. The notification configuration has the Red Hat Ceph Storage Object Gateway S3 events, ObjectCreated , ObjectRemoved , and ObjectLifecycle:Expiration . These need to be published and the destination to send the bucket notifications. Bucket notifications are S3 operations. To create a bucket notification for s3:objectCreate , s3:objectRemove and s3:ObjectLifecycle:Expiration events, use PUT: Example Important Red Hat supports ObjectCreate events, such as put , post , multipartUpload , and copy . Red Hat also supports ObjectRemove events, such as object_delete and s3_multi_object_delete . Request Entities NotificationConfiguration Description list of TopicConfiguration entities. Type Container Required Yes TopicConfiguration Description Id , Topic , and list of Event entities. Type Container Required Yes id Description Name of the notification. Type String Required Yes Topic Description Topic Amazon Resource Name(ARN) Note The topic must be created beforehand. Type String Required Yes Event Description List of supported events. Multiple event entities can be used. If omitted, all events are handled. Type String Required No Filter Description S3Key , S3Metadata and S3Tags entities. Type Container Required No S3Key Description A list of FilterRule entities, for filtering based on the object key. At most, 3 entities may be in the list, for example Name would be prefix , suffix , or regex . All filter rules in the list must match for the filter to match. Type Container Required No S3Metadata Description A list of FilterRule entities, for filtering based on object metadata. All filter rules in the list must match the metadata defined on the object. However, the object still matches if it has other metadata entries not listed in the filter. Type Container Required No S3Tags Description A list of FilterRule entities, for filtering based on object tags. All filter rules in the list must match the tags defined on the object. However, the object still matches if it has other tags not listed in the filter. Type Container Required No S3Key.FilterRule Description Name and Value entities. Name is : prefix , suffix , or regex . The Value would hold the key prefix, key suffix, or a regular expression for matching the key, accordingly. Type Container Required Yes S3Metadata.FilterRule Description Name and Value entities. Name is the name of the metadata attribute for example x-amz-meta-xxx . The value is the expected value for this attribute. Type Container Required Yes S3Tags.FilterRule Description Name and Value entities. Name is the tag key, and the value is the tag value. 
Type Container Required Yes HTTP response 400 Status Code MalformedXML Description The XML is not well-formed. 400 Status Code InvalidArgument Description Missing Id or missing or invalid topic ARN or invalid event. 404 Status Code NoSuchBucket Description The bucket does not exist. 404 Status Code NoSuchKey Description The topic does not exist. 3.3.2. S3 get bucket notifications Get a specific notification or list all the notifications configured on a bucket. Syntax Example Example Response Note The notification subresource returns the bucket notification configuration or an empty NotificationConfiguration element. The caller must be the bucket owner. Request Entities notification-id Description Name of the notification. All notifications are listed if the ID is not provided. Type String NotificationConfiguration Description list of TopicConfiguration entities. Type Container Required Yes TopicConfiguration Description Id , Topic , and list of Event entities. Type Container Required Yes id Description Name of the notification. Type String Required Yes Topic Description Topic Amazon Resource Name(ARN) Note The topic must be created beforehand. Type String Required Yes Event Description Handled event. Multiple event entities may exist. Type String Required Yes Filter Description The filters for the specified configuration. Type Container Required No HTTP response 404 Status Code NoSuchBucket Description The bucket does not exist. 404 Status Code NoSuchKey Description The notification does not exist if it has been provided. 3.3.3. S3 delete bucket notifications Delete a specific or all notifications from a bucket. Note Notification deletion is an extension to the S3 notification API. Any defined notifications on a bucket are deleted when the bucket is deleted. Deleting an unknown notification for example double delete , is not considered an error. To delete a specific or all notifications use DELETE: Syntax Example Request Entities notification-id Description Name of the notification. All notifications on the bucket are deleted if the notification ID is not provided. Type String HTTP response 404 Status Code NoSuchBucket Description The bucket does not exist. 3.3.4. Accessing bucket host names There are two different modes of accessing the buckets. The first, and preferred method identifies the bucket as the top-level directory in the URI. Example The second method identifies the bucket via a virtual bucket host name. Example Tip Red Hat prefers the first method, because the second method requires expensive domain certification and DNS wild cards. 3.3.5. S3 list buckets GET / returns a list of buckets created by the user making the request. GET / only returns buckets created by an authenticated user. You cannot make an anonymous request. Syntax Response Entities Buckets Description Container for list of buckets. Type Container Bucket Description Container for bucket information. Type Container Name Description Bucket name. Type String CreationDate Description UTC time when the bucket was created. Type Date ListAllMyBucketsResult Description A container for the result. Type Container Owner Description A container for the bucket owner's ID and DisplayName . Type Container ID Description The bucket owner's ID. Type String DisplayName Description The bucket owner's display name. Type String 3.3.6. S3 return a list of bucket objects Returns a list of bucket objects. Syntax Parameters prefix Description Only returns objects that contain the specified prefix. 
Type String delimiter Description The delimiter between the prefix and the rest of the object name. Type String marker Description A beginning index for the list of objects returned. Type String max-keys Description The maximum number of keys to return. Default is 1000. Type Integer HTTP Response 200 Status Code OK Description Buckets retrieved. GET / BUCKET returns a container for buckets with the following fields: Bucket Response Entities ListBucketResult Description The container for the list of objects. Type Entity Name Description The name of the bucket whose contents will be returned. Type String Prefix Description A prefix for the object keys. Type String Marker Description A beginning index for the list of objects returned. Type String MaxKeys Description The maximum number of keys returned. Type Integer Delimiter Description If set, objects with the same prefix will appear in the CommonPrefixes list. Type String IsTruncated Description If true , only a subset of the bucket's contents were returned. Type Boolean CommonPrefixes Description If multiple objects contain the same prefix, they will appear in this list. Type Container The ListBucketResult contains objects, where each object is within a Contents container. Object Response Entities Contents Description A container for the object. Type Object Key Description The object's key. Type String LastModified Description The object's last-modified date and time. Type Date ETag Description An MD-5 hash of the object. Etag is an entity tag. Type String Size Description The object's size. Type Integer StorageClass Description Should always return STANDARD . Type String 3.3.7. S3 create a new bucket Creates a new bucket. To create a bucket, you must have a user ID and a valid AWS Access Key ID to authenticate requests. You can not create buckets as an anonymous user. Constraints In general, bucket names should follow domain name constraints. Bucket names must be unique. Bucket names cannot be formatted as IP address. Bucket names can be between 3 and 63 characters long. Bucket names must not contain uppercase characters or underscores. Bucket names must start with a lowercase letter or number. Bucket names can contain a dash (-). Bucket names must be a series of one or more labels. Adjacent labels are separated by a single period (.). Bucket names can contain lowercase letters, numbers, and hyphens. Each label must start and end with a lowercase letter or a number. Note The above constraints are relaxed if rgw_relaxed_s3_bucket_names is set to true . The bucket names must still be unique, cannot be formatted as IP address, and can contain letters, numbers, periods, dashes, and underscores of up to 255 characters long. Syntax Parameters x-amz-acl Description Canned ACLs. Valid Values private , public-read , public-read-write , authenticated-read Required No HTTP Response If the bucket name is unique, within constraints, and unused, the operation will succeed. If a bucket with the same name already exists and the user is the bucket owner, the operation will succeed. If the bucket name is already in use, the operation will fail. 409 Status Code BucketAlreadyExists Description Bucket already exists under different user's ownership. 3.3.8. S3 put bucket website The put bucket website API sets the configuration of the website that is specified in the website subresource. To configure a bucket as a website, the website subresource can be added on the bucket. Note Put operation requires S3:PutBucketWebsite permission. 
By default, only the bucket owner can configure the website attached to a bucket. Syntax Example Additional Resources For more information about this API call, see S3 API . 3.3.9. S3 get bucket website The get bucket website API retrieves the configuration of the website that is specified in the website subresource. Note Get operation requires the S3:GetBucketWebsite permission. By default, only the bucket owner can read the bucket website configuration. Syntax Example Additional Resources For more information about this API call, see S3 API . 3.3.10. S3 delete bucket website The delete bucket website API removes the website configuration for a bucket. Syntax Example Additional Resources For more information about this API call, see S3 API . 3.3.11. S3 put bucket replication The put bucket replication API configures replication configuration for a bucket or replaces an existing one. Syntax Example 3.3.12. S3 get bucket replication The get bucket replication API returns the replication configuration of a bucket. Syntax Example 3.3.13. S3 delete bucket replication The delete bucket replication API deletes the replication configuration from a bucket. Syntax Example 3.3.14. S3 delete a bucket Deletes a bucket. You can reuse bucket names following a successful bucket removal. Syntax HTTP Response 204 Status Code No Content Description Bucket removed. 3.3.15. S3 bucket lifecycle You can use a bucket lifecycle configuration to manage your objects so they are stored effectively throughout their lifetime. The S3 API in the Ceph Object Gateway supports a subset of the AWS bucket lifecycle actions: Expiration : This defines the lifespan of objects within a bucket. It takes the number of days the object should live or expiration date, at which point Ceph Object Gateway will delete the object. If the bucket doesn't enable versioning, Ceph Object Gateway will delete the object permanently. If the bucket enables versioning, Ceph Object Gateway will create a delete marker for the current version, and then delete the current version. NoncurrentVersionExpiration : This defines the lifespan of noncurrent object versions within a bucket. To use this feature, you must enable bucket versioning. It takes the number of days a noncurrent object should live, at which point Ceph Object Gateway will delete the noncurrent object. NewerNoncurrentVersions : Specifies how many noncurrent object versions to retain. You can specify up to 100 noncurrent versions to retain. If the specified number to retain is more than 100, additional noncurrent versions are deleted. AbortIncompleteMultipartUpload : This defines the number of days an incomplete multipart upload should live before it is aborted. BlockPublicPolicy reject : This action is for public access block. It calls PUT access point policy and PUT bucket policy that are made through the access point if the specified policy (for either the access point or the underlying bucket) allows public access. The Amazon S3 Block Public Access feature is available in Red Hat Ceph Storage 5.x/ Ceph Pacific versions. It provides settings for access points, buckets, and accounts to help you manage public access to Amazon S3 resources. By default, new buckets, access points, and objects do not allow public access. However, you can modify bucket policies, access point policies, or object permissions to allow public access. S3 Block Public Access settings override these policies and permissions so that you can limit public access to these resources. 
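Before the rule elements are described in detail, the following hedged Boto3 sketch shows how a rule built from the supported subset might be applied; the endpoint, credentials, bucket name, and prefix are placeholders, not values from this guide.

import boto3

# Placeholder Ceph Object Gateway endpoint and user credentials.
s3 = boto3.client(
    "s3",
    endpoint_url="http://rgw.example.com:8080",
    aws_access_key_id="MY_ACCESS_KEY",
    aws_secret_access_key="MY_SECRET_KEY",
)

# Expire objects under the "logs/" prefix after 30 days and abort stale
# multipart uploads after 7 days; both actions are in the supported subset.
s3.put_bucket_lifecycle_configuration(
    Bucket="testbucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-logs",
                "Filter": {"Prefix": "logs/"},
                "Status": "Enabled",
                "Expiration": {"Days": 30},
                "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
            }
        ]
    },
)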
The lifecycle configuration contains one or more rules using the <Rule> element. Example A lifecycle rule can apply to all or a subset of objects in a bucket based on the <Filter> element that you specify in the lifecycle rule. You can specify a filter in several ways: Key prefixes Object tags Both key prefix and one or more object tags Key prefixes You can apply a lifecycle rule to a subset of objects based on the key name prefix. For example, specifying <keypre/> would apply to objects that begin with keypre/ : You can also apply different lifecycle rules to objects with different key prefixes: Object tags You can apply a lifecycle rule to only objects with a specific tag using the <Key> and <Value> elements: Both prefix and one or more tags In a lifecycle rule, you can specify a filter based on both the key prefix and one or more tags. They must be wrapped in the <And> element. A filter can have only one prefix, and zero or more tags: Additional Resources See the S3 GET bucket lifecycle section in the Red Hat Ceph Storage Developer Guide for details on getting a bucket lifecycle. See the S3 create or replace a bucket lifecycle section in the Red Hat Ceph Storage Developer Guide for details on creating a bucket lifecycle. See the S3 delete a bucket lifecycle secton in the Red Hat Ceph Storage Developer Guide for details on deleting a bucket lifecycle. 3.3.16. S3 GET bucket lifecycle To get a bucket lifecycle, use GET and specify a destination bucket. Syntax Request Headers See the S3 common request headers in Appendix B for more information about common request headers. Response The response contains the bucket lifecycle and its elements. 3.3.17. S3 create or replace a bucket lifecycle To create or replace a bucket lifecycle, use PUT and specify a destination bucket and a lifecycle configuration. The Ceph Object Gateway only supports a subset of the S3 lifecycle functionality. Syntax Request Headers content-md5 Description A base64 encoded MD-5 hash of the message Valid Values String No defaults or constraints. Required No Additional Resources See the S3 common request headers section in Appendix B of the Red Hat Ceph Storage Developer Guide for more information on Amazon S3 common request headers. See the S3 bucket lifecycles section of the Red Hat Ceph Storage Developer Guide for more information on Amazon S3 bucket lifecycles. 3.3.18. S3 delete a bucket lifecycle To delete a bucket lifecycle, use DELETE and specify a destination bucket. Syntax Request Headers The request does not contain any special elements. Response The response returns common response status. Additional Resources See the S3 common request headers section in Appendix B of the Red Hat Ceph Storage Developer Guide for more information on Amazon S3 common request headers. See the S3 common response status codes section in Appendix C of Red Hat Ceph Storage Developer Guide for more information on Amazon S3 common response status codes. 3.3.19. S3 get bucket location Retrieves the bucket's zone group. The user needs to be the bucket owner to call this. A bucket can be constrained to a zone group by providing LocationConstraint during a PUT request. Add the location subresource to the bucket resource as shown below. Syntax Response Entities LocationConstraint Description The zone group where bucket resides, an empty string for default zone group. Type String 3.3.20. S3 get bucket versioning Retrieves the versioning state of a bucket. The user needs to be the bucket owner to call this. 
Add the versioning subresource to the bucket resource as shown below. Syntax 3.3.21. S3 put bucket versioning This subresource set the versioning state of an existing bucket. The user needs to be the bucket owner to set the versioning state. If the versioning state has never been set on a bucket, then it has no versioning state. Doing a GET versioning request does not return a versioning state value. Setting the bucket versioning state: Enabled : Enables versioning for the objects in the bucket. All objects added to the bucket receive a unique version ID. Suspended : Disables versioning for the objects in the bucket. All objects added to the bucket receive the version ID null. Syntax Example Bucket Request Entities VersioningConfiguration Description A container for the request. Type Container Status Description Sets the versioning state of the bucket. Valid Values: Suspended/Enabled Type String 3.3.22. S3 get bucket access control lists Retrieves the bucket access control list. The user needs to be the bucket owner or to have been granted READ_ACP permission on the bucket. Add the acl subresource to the bucket request as shown below. Syntax Response Entities AccessControlPolicy Description A container for the response. Type Container AccessControlList Description A container for the ACL information. Type Container Owner Description A container for the bucket owner's ID and DisplayName . Type Container ID Description The bucket owner's ID. Type String DisplayName Description The bucket owner's display name. Type String Grant Description A container for Grantee and Permission . Type Container Grantee Description A container for the DisplayName and ID of the user receiving a grant of permission. Type Container Permission Description The permission given to the Grantee bucket. Type String 3.3.23. S3 put bucket Access Control Lists Sets an access control to an existing bucket. The user needs to be the bucket owner or to have been granted WRITE_ACP permission on the bucket. Add the acl subresource to the bucket request as shown below. Syntax Request Entities S3 list multipart uploads AccessControlList Description A container for the ACL information. Type Container Owner Description A container for the bucket owner's ID and DisplayName . Type Container ID Description The bucket owner's ID. Type String DisplayName Description The bucket owner's display name. Type String Grant Description A container for Grantee and Permission . Type Container Grantee Description A container for the DisplayName and ID of the user receiving a grant of permission. Type Container Permission Description The permission given to the Grantee bucket. Type String 3.3.24. S3 get bucket cors Retrieves the cors configuration information set for the bucket. The user needs to be the bucket owner or to have been granted READ_ACP permission on the bucket. Add the cors subresource to the bucket request as shown below. Syntax 3.3.25. S3 put bucket cors Sets the cors configuration for the bucket. The user needs to be the bucket owner or to have been granted READ_ACP permission on the bucket. Add the cors subresource to the bucket request as shown below. Syntax 3.3.26. S3 delete a bucket cors Deletes the cors configuration information set for the bucket. The user needs to be the bucket owner or to have been granted READ_ACP permission on the bucket. Add the cors subresource to the bucket request as shown below. Syntax 3.3.27. S3 list bucket object versions Returns a list of metadata about all the version of objects within a bucket. 
3.3.27. S3 list bucket object versions Returns a list of metadata about all the versions of objects within a bucket. Requires READ access to the bucket. Add the versions subresource to the bucket request as shown below. Syntax You can specify parameters for GET / BUCKET ?versions , but none of them are required. Parameters prefix Description Limits the response to object keys that begin with the specified prefix. Type String delimiter Description The delimiter between the prefix and the rest of the object name. Type String key-marker Description The key to begin the listing from. Type String max-keys Description The maximum number of keys to return. The default is 1000. Type Integer version-id-marker Description Specifies the object version to begin the list. Type String Response Entities KeyMarker Description The key marker specified by the key-marker request parameter, if any. Type String NextKeyMarker Description The key marker to use in a subsequent request if IsTruncated is true . Type String NextUploadIdMarker Description The upload ID marker to use in a subsequent request if IsTruncated is true . Type String IsTruncated Description If true , only a subset of the bucket's contents were returned. Type Boolean Size Description The size of the object, in bytes. Type Integer DisplayName Description The owner's display name. Type String ID Description The owner's ID. Type String Owner Description A container for the ID and DisplayName of the user who owns the object. Type Container StorageClass Description The method used to store the resulting object. STANDARD or REDUCED_REDUNDANCY Type String Version Description Container for the version information. Type Container versionId Description Version ID of an object. Type String versionIdMarker Description The last version of the key in a truncated response. Type String 3.3.28. S3 head bucket Calls HEAD on a bucket to determine if it exists and if the caller has access permissions. Returns 200 OK if the bucket exists and the caller has permissions; 404 Not Found if the bucket does not exist; and, 403 Forbidden if the bucket exists but the caller does not have access permissions. Syntax 3.3.29. S3 list multipart uploads GET /?uploads returns a list of the current in-progress multipart uploads, that is, multipart uploads that an application has initiated but that have not yet been completed or aborted. Syntax You can specify parameters for GET / BUCKET ?uploads , but none of them are required. Parameters prefix Description Returns in-progress uploads whose keys contain the specified prefix. Type String delimiter Description The delimiter between the prefix and the rest of the object name. Type String key-marker Description The beginning marker for the list of uploads. Type String max-keys Description The maximum number of in-progress uploads. The default is 1000. Type Integer max-uploads Description The maximum number of multipart uploads. The range is from 1-1000. The default is 1000. Type Integer version-id-marker Description Ignored if key-marker isn't specified. Specifies the ID of the first upload to list in lexicographical order at or following the ID . Type String Response Entities ListMultipartUploadsResult Description A container for the results. Type Container ListMultipartUploadsResult.Prefix Description The prefix specified by the prefix request parameter, if any. Type String Bucket Description The bucket that will receive the bucket contents. Type String KeyMarker Description The key marker specified by the key-marker request parameter, if any.
Type String UploadIdMarker Description The marker specified by the upload-id-marker request parameter, if any. Type String NextKeyMarker Description The key marker to use in a subsequent request if IsTruncated is true . Type String NextUploadIdMarker Description The upload ID marker to use in a subsequent request if IsTruncated is true . Type String MaxUploads Description The max uploads specified by the max-uploads request parameter. Type Integer Delimiter Description If set, objects with the same prefix will appear in the CommonPrefixes list. Type String IsTruncated Description If true , only a subset of the bucket's upload contents were returned. Type Boolean Upload Description A container for Key , UploadId , InitiatorOwner , StorageClass , and Initiated elements. Type Container Key Description The key of the object once the multipart upload is complete. Type String UploadId Description The ID that identifies the multipart upload. Type String Initiator Description Contains the ID and DisplayName of the user who initiated the upload. Type Container DisplayName Description The initiator's display name. Type String ID Description The initiator's ID. Type String Owner Description A container for the ID and DisplayName of the user who owns the uploaded object. Type Container StorageClass Description The method used to store the resulting object. STANDARD or REDUCED_REDUNDANCY Type String Initiated Description The date and time the user initiated the upload. Type Date CommonPrefixes Description If multiple objects contain the same prefix, they will appear in this list. Type Container CommonPrefixes.Prefix Description The substring of the key after the prefix as defined by the prefix request parameter. Type String 3.3.30. S3 bucket policies The Ceph Object Gateway supports a subset of the Amazon S3 policy language applied to buckets. Creation and Removal Ceph Object Gateway manages S3 Bucket policies through standard S3 operations rather than using the radosgw-admin CLI tool. Administrators may use the s3cmd command to set or delete a policy. Example Limitations Ceph Object Gateway only supports the following S3 actions: s3:AbortMultipartUpload s3:CreateBucket s3:DeleteBucketPolicy s3:DeleteBucket s3:DeleteBucketWebsite s3:DeleteBucketReplication s3:DeleteReplicationConfiguration s3:DeleteObject s3:DeleteObjectVersion s3:GetBucketAcl s3:GetBucketCORS s3:GetBucketLocation s3:GetBucketPolicy s3:GetBucketRequestPayment s3:GetBucketVersioning s3:GetBucketWebsite s3:GetBucketReplication s3:GetReplicationConfiguration s3:GetLifecycleConfiguration s3:GetObjectAcl s3:GetObject s3:GetObjectTorrent s3:GetObjectVersionAcl s3:GetObjectVersion s3:GetObjectVersionTorrent s3:ListAllMyBuckets s3:ListBucketMultiPartUploads s3:ListBucket s3:ListBucketVersions s3:ListMultipartUploadParts s3:PutBucketAcl s3:PutBucketCORS s3:PutBucketPolicy s3:PutBucketRequestPayment s3:PutBucketVersioning s3:PutBucketWebsite s3:PutBucketReplication s3:PutReplicationConfiguration s3:PutLifecycleConfiguration s3:PutObjectAcl s3:PutObject s3:PutObjectVersionAcl Note Ceph Object Gateway does not support setting policies on users, groups, or roles. The Ceph Object Gateway uses the RGW tenant identifier in place of the Amazon twelve-digit account ID. Ceph Object Gateway administrators who want to use policies between Amazon Web Service (AWS) S3 and Ceph Object Gateway S3 will have to use the Amazon account ID as the tenant ID when creating users. With AWS S3, all tenants share a single namespace. 
By contrast, Ceph Object Gateway gives every tenant its own namespace of buckets. At present, Ceph Object Gateway clients trying to access a bucket belonging to another tenant MUST address it as tenant:bucket in the S3 request. In the AWS, a bucket policy can grant access to another account, and that account owner can then grant access to individual users with user permissions. Since Ceph Object Gateway does not yet support user, role, and group permissions, account owners will need to grant access directly to individual users. Important Granting an entire account access to a bucket grants access to ALL users in that account. Bucket policies do NOT support string interpolation. Ceph Object Gateway supports the following condition keys: aws:CurrentTime aws:EpochTime aws:PrincipalType aws:Referer aws:SecureTransport aws:SourceIp aws:UserAgent aws:username Ceph Object Gateway ONLY supports the following condition keys for the ListBucket action: s3:prefix s3:delimiter s3:max-keys Impact on Swift Ceph Object Gateway provides no functionality to set bucket policies under the Swift API. However, bucket policies that are set with the S3 API govern Swift and S3 operations. Ceph Object Gateway matches Swift credentials against principals that are specified in a policy. 3.3.31. S3 get the request payment configuration on a bucket Uses the requestPayment subresource to return the request payment configuration of a bucket. The user needs to be the bucket owner or to have been granted READ_ACP permission on the bucket. Add the requestPayment subresource to the bucket request as shown below. Syntax 3.3.32. S3 set the request payment configuration on a bucket Uses the requestPayment subresource to set the request payment configuration of a bucket. By default, the bucket owner pays for downloads from the bucket. This configuration parameter enables the bucket owner to specify that the person requesting the download will be charged for the request and the data download from the bucket. Add the requestPayment subresource to the bucket request as shown below. Syntax Request Entities Payer Description Specifies who pays for the download and request fees. Type Enum RequestPaymentConfiguration Description A container for Payer . Type Container 3.3.33. Multi-tenant bucket operations When a client application accesses buckets, it always operates with the credentials of a particular user. In Red Hat Ceph Storage cluster, every user belongs to a tenant. Consequently, every bucket operation has an implicit tenant in its context if no tenant is specified explicitly. Thus multi-tenancy is completely backward compatible with releases, as long as the referred buckets and referring user belong to the same tenant. Extensions employed to specify an explicit tenant differ according to the protocol and authentication system used. In the following example, a colon character separates tenant and bucket. Thus a sample URL would be: By contrast, a simple Python example separates the tenant and bucket in the bucket method itself: Example Note It's not possible to use S3-style subdomains using multi-tenancy, since host names cannot contain colons or any other separators that are not already valid in bucket names. Using a period creates an ambiguous syntax. Therefore, the bucket-in-URL-path format has to be used with multi-tenancy. Additional Resources See the Multi Tenancy section under User Management in the Red Hat Ceph Storage Object Gateway Guide for additional details. 3.3.34. 
S3 Block Public Access You can use the S3 Block Public Access feature to set buckets and users to help you manage public access to Red Hat Ceph Storage object storage S3 resources. Using this feature, bucket policies, access point policies, and object permissions can be overridden to allow public access. By default, new buckets, access points, and objects do not allow public access. The S3 API in the Ceph Object Gateway supports a subset of the AWS public access settings: BlockPublicPolicy : This defines the setting to allow users to manage access point and bucket policies. This setting does not allow the users to publicly share the bucket or the objects it contains. Existing access point and bucket policies are not affected by enabling this setting. Setting this option to TRUE causes the S3: To reject calls to PUT Bucket policy. To reject calls to PUT access point policy for all of the bucket's same-account access points. Important Apply this setting at the user level so that users cannot alter a specific bucket's block public access setting. Note The TRUE setting only works if the specified policy allows public access. RestrictPublicBuckets : This defines the setting to restrict access to a bucket or access point with public policy. The restriction applies to only AWS service principals and authorized users within the bucket owner's account and access point owner's account. This blocks cross-account access to the access point or bucket, except for the cases specified, while still allowing users within the account to manage the access points or buckets. Enabling this setting does not affect existing access point or bucket policies. It only defines that Amazon S3 blocks public and cross-account access derived from any public access point or bucket policy, including non-public delegation to specific accounts. Note Access control lists (ACLs) are not currently supported by Red Hat Ceph Storage. Bucket policies are assumed to be public unless defined otherwise. To block public access a bucket policy must give access only to fixed values for one or more of the following: Note A fixed value does not contain a wildcard ( * ) or an AWS Identity and Access Management Policy Variable. An AWS principal, user, role, or service principal A set of Classless Inter-Domain Routings (CIDRs), using aws:SourceIp aws:SourceArn aws:SourceVpc aws:SourceVpce aws:SourceOwner aws:SourceAccount s3:x-amz-server-side-encryption-aws-kms-key-id aws:userid , outside the pattern AROLEID:* s3:DataAccessPointArn Note When used in a bucket policy, this value can contain a wildcard for the access point name without rendering the policy public, as long as the account ID is fixed. s3:DataAccessPointPointAccount The following example policy is considered public. Example To make a policy non-public, include any of the condition keys with a fixed value. Example Additional Resources See the S3 GET `PublicAccessBlock` section in the Red Hat Ceph Storage Developer Guide for details on getting a PublicAccessBlock. See the S3 PUT `PublicAccessBlock` section in the Red Hat Ceph Storage Developer Guide for details on creating or modifying a PublicAccessBlock. See the S3 Delete `PublicAccessBlock` section in the Red Hat Ceph Storage Developer Guide for details on deleting a PublicAccessBlock. See the S3 bucket policies section in the Red Hat Ceph Storage Developer Guide for details on bucket policies. See the Blocking public access to your Amazon S3 storage section of Amazon Simple Storage Service (S3) documentation. 3.3.35. 
S3 GET PublicAccessBlock To get the S3 Block Public Access feature configured, use GET and specify a destination AWS account. Syntax Request Headers See the S3 common request headers in Appendix B for more information about common request headers. Response The response is an HTTP 200 response and is returned in XML format. 3.3.36. S3 PUT PublicAccessBlock Use this to create or modify the PublicAccessBlock configuration for an S3 bucket. To use this operation, you must have the s3:PutBucketPublicAccessBlock permission. Important If the PublicAccessBlock configuration is different between the bucket and the account, Amazon S3 uses the most restrictive combination of the bucket-level and account-level settings. Syntax Request Headers See the S3 common request headers in Appendix B for more information about common request headers. Response The response is an HTTP 200 response and is returned with an empty HTTP body. 3.3.37. S3 delete PublicAccessBlock Use this to delete the PublicAccessBlock configuration for an S3 bucket. Syntax Request Headers See the S3 common request headers in Appendix B for more information about common request headers. Response The response is an HTTP 200 response and is returned with an empty HTTP body. 3.4. S3 object operations As a developer, you can perform object operations with the Amazon S3 application programming interface (API) through the Ceph Object Gateway. The following table list the Amazon S3 functional operations for objects, along with the function's support status. Table 3.3. Object operations Feature Status Get Object Supported Get Object Information Supported Put Object Lock Supported Get Object Lock Supported Put Object Legal Hold Supported Get Object Legal Hold Supported Put Object Retention Supported Get Object Retention Supported Put Object Tagging Supported Get Object Tagging Supported Delete Object Tagging Supported Put Object Supported Delete Object Supported Delete Multiple Objects Supported Get Object ACLs Supported Put Object ACLs Supported Copy Object Supported Post Object Supported Options Object Supported Initiate Multipart Upload Supported Add a Part to a Multipart Upload Supported List Parts of a Multipart Upload Supported Assemble Multipart Upload Supported Copy Multipart Upload Supported Abort Multipart Upload Supported Multi-Tenancy Supported Prerequisites A running Red Hat Ceph Storage cluster. A RESTful client. 3.4.1. S3 get an object from a bucket Retrieves an object from a bucket: Syntax Add the versionId subresource to retrieve a particular version of the object: Syntax Request Headers partNumber Description Part number of the object being read. This enables a ranged GET request for the specified part. Using this request is useful for downloading just a part of an object. Valid Values A positive integer between 1 and 10,000. Required No range Description The range of the object to retrieve. Note Multiple ranges of data per GET request are not supported. Valid Values Range:bytes=beginbyte-endbyte Required No if-modified-since Description Gets only if modified since the timestamp. Valid Values Timestamp Required No if-unmodified-since Description Gets only if not modified since the timestamp. Valid Values Timestamp Required No if-match Description Gets only if object ETag matches ETag. Valid Values Entity Tag Required No if-none-match Description Gets only if object ETag does not match ETag. 
Valid Values Entity Tag Required No Syntax with request headers Response Headers Content-Range Description Data range, will only be returned if the range header field was specified in the request. x-amz-version-id Description Returns the version ID or null. 3.4.2. S3 get information on an object Returns information about an object. This request will return the same header information as with the Get Object request, but will include the metadata only, not the object data payload. Retrieves the current version of the object: Syntax Add the versionId subresource to retrieve info for a particular version: Syntax Request Headers range Description The range of the object to retrieve. Valid Values Range:bytes=beginbyte-endbyte Required No if-modified-since Description Gets only if modified since the timestamp. Valid Values Timestamp Required No if-match Description Gets only if object ETag matches ETag. Valid Values Entity Tag Required No if-none-match Description Gets only if object ETag does not match ETag. Valid Values Entity Tag Required No Response Headers x-amz-version-id Description Returns the version ID or null. 3.4.3. S3 put object lock The put object lock API places a lock configuration on the selected bucket. With object lock, you can store objects using a Write-Once-Read-Many (WORM) model. Object lock ensures an object is not deleted or overwritten for a fixed amount of time or indefinitely. The rule specified in the object lock configuration is applied by default to every new object placed in the selected bucket. Important Enable object lock when creating a bucket; otherwise, the operation fails. Syntax Example Request Entities ObjectLockConfiguration Description A container for the request. Type Container Required Yes ObjectLockEnabled Description Indicates whether this bucket has an object lock configuration enabled. Type String Required Yes Rule Description The object lock rule in place for the specified bucket. Type Container Required No DefaultRetention Description The default retention period applied to new objects placed in the specified bucket. Type Container Required No Mode Description The default object lock retention mode. Valid values: GOVERNANCE/COMPLIANCE. Type Container Required Yes Days Description The number of days specified for the default retention period. Type Integer Required No Years Description The number of years specified for the default retention period. Type Integer Required No HTTP Response 400 Status Code MalformedXML Description The XML is not well-formed. 409 Status Code InvalidBucketState Description The bucket object lock is not enabled. Additional Resources For more information about this API call, see S3 API .
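The following is a minimal sketch of the put object lock call described above, using the Python boto3 library. The endpoint URL, credentials, bucket name, and the 30-day GOVERNANCE retention value are placeholders chosen for the example; note that the bucket is created with object lock enabled, as required above. Example

import boto3

# Placeholder endpoint, credentials, and bucket name.
s3 = boto3.client(
    's3',
    endpoint_url='http://rgw.example.com:8080',
    aws_access_key_id='MY_ACCESS_KEY',
    aws_secret_access_key='MY_SECRET_KEY',
)

bucket = 'my-locked-bucket'

# Object lock must be enabled at bucket creation time; otherwise the
# put object lock call fails with InvalidBucketState.
s3.create_bucket(Bucket=bucket, ObjectLockEnabledForBucket=True)

# PUT /BUCKET?object-lock - apply a default GOVERNANCE retention of 30 days
# to every new object placed in the bucket.
s3.put_object_lock_configuration(
    Bucket=bucket,
    ObjectLockConfiguration={
        'ObjectLockEnabled': 'Enabled',
        'Rule': {'DefaultRetention': {'Mode': 'GOVERNANCE', 'Days': 30}},
    },
)

# GET /BUCKET?object-lock - retrieve the configuration (see the next section).
print(s3.get_object_lock_configuration(Bucket=bucket))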
3.4.4. S3 get object lock The get object lock API retrieves the lock configuration for a bucket. Syntax Example Response Entities ObjectLockConfiguration Description A container for the request. Type Container Required Yes ObjectLockEnabled Description Indicates whether this bucket has an object lock configuration enabled. Type String Required Yes Rule Description The object lock rule in place for the specified bucket. Type Container Required No DefaultRetention Description The default retention period applied to new objects placed in the specified bucket. Type Container Required No Mode Description The default object lock retention mode. Valid values: GOVERNANCE/COMPLIANCE. Type Container Required Yes Days Description The number of days specified for the default retention period. Type Integer Required No Years Description The number of years specified for the default retention period. Type Integer Required No Additional Resources For more information about this API call, see S3 API . 3.4.5. S3 put object legal hold The put object legal hold API applies a legal hold configuration to the selected object. With a legal hold in place, you cannot overwrite or delete an object version. A legal hold does not have an associated retention period and remains in place until you explicitly remove it. Syntax Example The versionId subresource retrieves a particular version of the object. Request Entities LegalHold Description A container for the request. Type Container Required Yes Status Description Indicates whether the specified object has a legal hold in place. Valid values: ON/OFF Type String Required Yes Additional Resources For more information about this API call, see S3 API . 3.4.6. S3 get object legal hold The get object legal hold API retrieves an object's current legal hold status. Syntax Example The versionId subresource retrieves a particular version of the object. Response Entities LegalHold Description A container for the request. Type Container Required Yes Status Description Indicates whether the specified object has a legal hold in place. Valid values: ON/OFF Type String Required Yes Additional Resources For more information about this API call, see S3 API . 3.4.7. S3 put object retention The put object retention API places an object retention configuration on an object. A retention period protects an object version for a fixed amount of time. There are two modes: GOVERNANCE and COMPLIANCE. These two retention modes apply different levels of protection to your objects. Note During the retention period, your object is Write-Once-Read-Many-protected (WORM-protected) and cannot be overwritten or deleted. Syntax Example The versionId subresource retrieves a particular version of the object. Request Entities Retention Description A container for the request. Type Container Required Yes Mode Description Retention mode for the specified object. Valid values: GOVERNANCE, COMPLIANCE. Type String Required Yes RetainUntilDate Description Retention date. Format 2020-01-05T00:00:00.000Z Type Timestamp Required Yes Additional Resources For more information about this API call, see S3 API . 3.4.8. S3 get object retention The get object retention API retrieves an object retention configuration on an object. Syntax Example The versionId subresource retrieves a particular version of the object. Response Entities Retention Description A container for the request. Type Container Required Yes Mode Description Retention mode for the specified object. Valid values: GOVERNANCE/COMPLIANCE Type String Required Yes RetainUntilDate Description Retention date. Format: 2020-01-05T00:00:00.000Z Type Timestamp Required Yes Additional Resources For more information about this API call, see S3 API . 3.4.9. S3 put object tagging The put object tagging API associates tags with an object. A tag is a key-value pair. To put tags of any other version, use the versionId query parameter. You must have permission to perform the s3:PutObjectTagging action. By default, the bucket owner has this permission and can grant this permission to others. Syntax Example Request Entities Tagging Description A container for the request. Type Container Required Yes TagSet Description A collection of a set of tags. Type String Required Yes Additional Resources For more information about this API call, see S3 API . 3.4.10.
S3 get object tagging The get object tagging API returns the tag of an object. By default, the GET operation returns information on the current version of an object. Note For a versioned bucket, you can have multiple versions of an object in your bucket. To retrieve tags of any other version, add the versionId query parameter in the request. Syntax Example Additional Resources For more information about this API call, see S3 API . 3.4.11. S3 delete object tagging The delete object tagging API removes the entire tag set from the specified object. You must have permission to perform the s3:DeleteObjectTagging action, to use this operation. Note To delete tags of a specific object version, add the versionId query parameter in the request. Syntax Example Additional Resources For more information about this API call, see S3 API . 3.4.12. S3 add an object to a bucket Adds an object to a bucket. You must have write permissions on the bucket to perform this operation. Syntax Request Headers content-md5 Description A base64 encoded MD-5 hash of the message. Valid Values A string. No defaults or constraints. Required No content-type Description A standard MIME type. Valid Values Any MIME type. Default: binary/octet-stream . Required No x-amz-meta-<... >* Description User metadata. Stored with the object. Valid Values A string up to 8kb. No defaults. Required No x-amz-acl Description A canned ACL. Valid Values private , public-read , public-read-write , authenticated-read Required No Response Headers x-amz-version-id Description Returns the version ID or null. 3.4.13. S3 delete an object Removes an object. Requires WRITE permission set on the containing bucket. Deletes an object. If object versioning is on, it creates a marker. Syntax To delete an object when versioning is on, you must specify the versionId subresource and the version of the object to delete. 3.4.14. S3 delete multiple objects This API call deletes multiple objects from a bucket. Syntax 3.4.15. S3 get an object's Access Control List (ACL) Returns the ACL for the current version of the object: Syntax Add the versionId subresource to retrieve the ACL for a particular version: Syntax Response Headers x-amz-version-id Description Returns the version ID or null. Response Entities AccessControlPolicy Description A container for the response. Type Container AccessControlList Description A container for the ACL information. Type Container Owner Description A container for the bucket owner's ID and DisplayName . Type Container ID Description The bucket owner's ID. Type String DisplayName Description The bucket owner's display name. Type String Grant Description A container for Grantee and Permission . Type Container Grantee Description A container for the DisplayName and ID of the user receiving a grant of permission. Type Container Permission Description The permission given to the Grantee bucket. Type String 3.4.16. S3 set an object's Access Control List (ACL) Sets an object ACL for the current version of the object. Syntax Request Entities AccessControlPolicy Description A container for the response. Type Container AccessControlList Description A container for the ACL information. Type Container Owner Description A container for the bucket owner's ID and DisplayName . Type Container ID Description The bucket owner's ID. Type String DisplayName Description The bucket owner's display name. Type String Grant Description A container for Grantee and Permission . 
Type Container Grantee Description A container for the DisplayName and ID of the user receiving a grant of permission. Type Container Permission Description The permission given to the Grantee bucket. Type String 3.4.17. S3 copy an object To copy an object, use PUT and specify a destination bucket and the object name. Syntax Request Headers x-amz-copy-source Description The source bucket name + object name. Valid Values BUCKET / OBJECT Required Yes x-amz-acl Description A canned ACL. Valid Values private , public-read , public-read-write , authenticated-read Required No x-amz-copy-if-modified-since Description Copies only if modified since the timestamp. Valid Values Timestamp Required No x-amz-copy-if-unmodified-since Description Copies only if unmodified since the timestamp. Valid Values Timestamp Required No x-amz-copy-if-match Description Copies only if object ETag matches ETag. Valid Values Entity Tag Required No x-amz-copy-if-none-match Description Copies only if object ETag matches ETag. Valid Values Entity Tag Required No Response Entities CopyObjectResult Description A container for the response elements. Type Container LastModified Description The last modified date of the source object. Type Date Etag Description The ETag of the new object. Type String 3.4.18. S3 add an object to a bucket using HTML forms Adds an object to a bucket using HTML forms. You must have write permissions on the bucket to perform this operation. Syntax 3.4.19. S3 determine options for a request A preflight request to determine if an actual request can be sent with the specific origin, HTTP method, and headers. Syntax 3.4.20. S3 initiate a multipart upload Initiates a multi-part upload process. Returns a UploadId , which you can specify when adding additional parts, listing parts, and completing or abandoning a multi-part upload. Syntax Request Headers content-md5 Description A base64 encoded MD-5 hash of the message. Valid Values A string. No defaults or constraints. Required No content-type Description A standard MIME type. Valid Values Any MIME type. Default: binary/octet-stream Required No x-amz-meta-<... > Description User metadata. Stored with the object. Valid Values A string up to 8kb. No defaults. Required No x-amz-acl Description A canned ACL. Valid Values private , public-read , public-read-write , authenticated-read Required No Response Entities InitiatedMultipartUploadsResult Description A container for the results. Type Container Bucket Description The bucket that will receive the object contents. Type String Key Description The key specified by the key request parameter, if any. Type String UploadId Description The ID specified by the upload-id request parameter identifying the multipart upload, if any. Type String 3.4.21. S3 add a part to a multipart upload Adds a part to a multi-part upload. Specify the uploadId subresource and the upload ID to add a part to a multi-part upload: Syntax The following HTTP response might be returned: HTTP Response 404 Status Code NoSuchUpload Description Specified upload-id does not match any initiated upload on this object. 3.4.22. S3 list the parts of a multipart upload Specify the uploadId subresource and the upload ID to list the parts of a multi-part upload: Syntax Response Entities InitiatedMultipartUploadsResult Description A container for the results. Type Container Bucket Description The bucket that will receive the object contents. Type String Key Description The key specified by the key request parameter, if any. 
Type String UploadId Description The ID specified by the upload-id request parameter identifying the multipart upload, if any. Type String Initiator Description Contains the ID and DisplayName of the user who initiated the upload. Type Container ID Description The initiator's ID. Type String DisplayName Description The initiator's display name. Type String Owner Description A container for the ID and DisplayName of the user who owns the uploaded object. Type Container StorageClass Description The method used to store the resulting object. STANDARD or REDUCED_REDUNDANCY Type String PartNumberMarker Description The part marker to use in a subsequent request if IsTruncated is true . Precedes the list. Type String NextPartNumberMarker Description The part marker to use in a subsequent request if IsTruncated is true . The end of the list. Type String IsTruncated Description If true , only a subset of the object's upload contents were returned. Type Boolean Part Description A container for Key , Part , InitiatorOwner , StorageClass , and Initiated elements. Type Container PartNumber Description A container for Key , Part , InitiatorOwner , StorageClass , and Initiated elements. Type Integer ETag Description The part's entity tag. Type String Size Description The size of the uploaded part. Type Integer 3.4.23. S3 assemble the uploaded parts Assembles uploaded parts and creates a new object, thereby completing a multipart upload. Specify the uploadId subresource and the upload ID to complete a multi-part upload: Syntax Request Entities CompleteMultipartUpload Description A container consisting of one or more parts. Type Container Required Yes Part Description A container for the PartNumber and ETag . Type Container Required Yes PartNumber Description The identifier of the part. Type Integer Required Yes ETag Description The part's entity tag. Type String Required Yes Response Entities CompleteMultipartUploadResult Description A container for the response. Type Container Location Description The resource identifier (path) of the new object. Type URI bucket Description The name of the bucket that contains the new object. Type String Key Description The object's key. Type String ETag Description The entity tag of the new object. Type String 3.4.24. S3 copy a multipart upload Uploads a part by copying data from an existing object as data source. Specify the uploadId subresource and the upload ID to perform a multi-part upload copy: Syntax Request Headers x-amz-copy-source Description The source bucket name and object name. Valid Values BUCKET / OBJECT Required Yes x-amz-copy-source-range Description The range of bytes to copy from the source object. Valid Values Range: bytes=first-last , where the first and last are the zero-based byte offsets to copy. For example, bytes=0-9 indicates that you want to copy the first ten bytes of the source. Required No Response Entities CopyPartResult Description A container for all response elements. Type Container ETag Description Returns the ETag of the new part. Type String LastModified Description Returns the date the part was last modified. Type String Additional Resources For more information about this feature, see the Amazon S3 site . 3.4.25. S3 abort a multipart upload Aborts a multipart upload. Specify the uploadId subresource and the upload ID to abort a multi-part upload: Syntax 3.4.26. 
S3 Hadoop interoperability For data analytics applications that require Hadoop Distributed File System (HDFS) access, the Ceph Object Gateway can be accessed using the Apache S3A connector for Hadoop. The S3A connector is an open-source tool that presents S3 compatible object storage as an HDFS file system with HDFS file system read and write semantics to the applications while data is stored in the Ceph Object Gateway. Ceph Object Gateway is fully compatible with the S3A connector that ships with Hadoop 2.7.3. Additional Resources See the Red Hat Ceph Storage Object Gateway Guide for details on multi-tenancy. 3.5. S3 select operations As a developer, you can run S3 select to accelerate throughput. Users can run S3 select queries directly without a mediator. There are three S3 select workflows - CSV, Apache Parquet (Parquet), and JSON - that provide S3 select operations with CSV, Parquet, and JSON objects: A CSV file stores tabular data in plain text format. Each line of the file is a data record. Parquet is an open source, column-oriented data file format designed for efficient data storage and retrieval. It provides highly efficient data compression and encoding schemes with enhanced performance to handle complex data in bulk. Parquet enables the S3 select engine to skip columns and chunks, thereby reducing IOPS dramatically (unlike the CSV and JSON formats). JSON is a structured data format. The S3 select engine enables the use of SQL statements on top of the JSON format input data using the JSON reader, enabling the scanning of highly nested and complex JSON formatted data. For example, a CSV, Parquet, or JSON S3 object with several gigabytes of data allows the user to extract a single column which is filtered by another column using the following query: Example Currently, data for the S3 object must be retrieved from the Ceph OSD through the Ceph Object Gateway before it is filtered and extracted. There is improved performance when the object is large and the query is more specific. The Parquet format can be processed more efficiently than CSV. Prerequisites A running Red Hat Ceph Storage cluster. A RESTful client. An S3 user created with user access. 3.5.1. S3 select content from an object The select object content API filters the content of an object through the structured query language (SQL). See the Metadata collected by inventory section in the AWS Systems Manager User Guide for an example of the description of what should reside in the inventory object. The inventory content impacts the type of queries that should be run against that inventory. The number of SQL statements that potentially could provide essential information is large, but S3 select is an SQL-like utility and therefore, some operators are not supported, such as group-by and join . For CSV only, you must specify the data serialization format as comma-separated values of the object to retrieve the specified content. Parquet has no delimiter because it is in binary format. Amazon Web Services (AWS) command-line interface (CLI) select object content uses the CSV or Parquet format to parse object data into records and returns only the records specified in the query. You must specify the data serialization format for the response. You must have s3:GetObject permission for this operation. Note The InputSerialization element describes the format of the data in the object that is being queried. Objects can be in CSV or Parquet format.
The OutputSerialization element is part of the AWS-CLI user client and describes how the output data is formatted. Ceph has implemented the server client for AWS-CLI and therefore, provides the same output according to OutputSerialization which currently is CSV only. The format of the InputSerialization does not need to match the format of the OutputSerialization . So, for example, you can specify Parquet in the InputSerialization and CSV in the OutputSerialization . Syntax Example Request entities Bucket Description The bucket to select object content from. Type String Required Yes Key Description The object key. Length Constraints Minimum length of 1. Type String Required Yes SelectObjectContentRequest Description Root level tag for the select object content request parameters. Type String Required Yes Expression Description The expression that is used to query the object. Type String Required Yes ExpressionType Description The type of the provided expression for example SQL. Type String Valid Values SQL Required Yes InputSerialization Description Describes the format of the data in the object that is being queried. Type String Required Yes OutputSerialization Description Format of data returned in comma separator and new-line. Type String Required Yes Response entities If the action is successful, the service sends back HTTP 200 response. Data is returned in XML format by the service: Payload Description Root level tag for the payload parameters. Type String Required Yes Records Description The records event. Type Base64-encoded binary data object Required No Stats Description The stats event. Type Long Required No The Ceph Object Gateway supports the following response: Example Syntax (for CSV) Example (for CSV) Syntax (for Parquet) Example (for Parquet) Syntax (for JSON) Example (for JSON) Example (for BOTO3) Supported features Currently, only part of the AWS s3 select command is supported: Features Details Description Example Arithmetic operators ^ * % / + - ( ) select (int(_1)+int(_2))*int(_9) from s3object; Arithmetic operators % modulo select count(*) from s3object where cast(_1 as int)%2 = 0; Arithmetic operators ^ power-of select cast(2^10 as int) from s3object; Compare operators > < >= ⇐ == != select _1,_2 from s3object where (int(_1)+int(_3))>int(_5); logical operator AND OR NOT select count(*) from s3object where not (int(1)>123 and int(_5)<200); logical operator is null Returns true/false for null indication in expression logical operator and NULL is not null Returns true/false for null indication in expression logical operator and NULL unknown state Review null-handle and observe the results of logical operations with NULL. The query returns 0 . select count(*) from s3object where null and (3>2); Arithmetic operator with NULL unknown state Review null-handle and observe the results of binary operations with NULL. The query returns 0 . select count(*) from s3object where (null+1) and (3>2); Compare with NULL unknown state Review null-handle and observe results of compare operations with NULL. The query returns 0 . 
select count(*) from s3object where (null*1.5) != 3; missing column unknown state select count(*) from s3object where _1 is null; projection column Similar to if or then or else select case when (1+1==(2+1)*3) then 'case_1' when 4*3)==(12 then 'case_2' else 'case_else' end, age*2 from s3object; projection column Similar to switch/case default select case cast(_1 as int) + 1 when 2 then "a" when 3 then "b" else "c" end from s3object; logical operator coalesce returns first non-null argument select coalesce(nullif(5,5),nullif(1,1.0),age+12) from s3object; logical operator nullif returns null in case both arguments are equal, or else the first one, nullif(1,1)=NULL nullif(null,1)=NULL nullif(2,1)=2 select nullif(cast(_1 as int),cast(_2 as int)) from s3object; logical operator {expression} in ( .. {expression} ..) select count(*) from s3object where 'ben' in (trim(_5),substring(_1,char_length(_1)-3,3),last_name); logical operator {expression} between {expression} and {expression} select _1 from s3object where cast(_1 as int) between 800 and 900 ; select count(*) from stdin where substring(_3,char_length(_3),1) between "x" and trim(_1) and substring(_3,char_length(_3)-1,1) = ":"; logical operator {expression} like {match-pattern} select count( ) from s3object where first_name like '%de_'; select count( ) from s3object where _1 like "%a[r-s]; casting operator select cast(123 as int)%2 from s3object; casting operator select cast(123.456 as float)%2 from s3object; casting operator select cast('ABC0-9' as string),cast(substr('ab12cd',3,2) as int)*4 from s3object; casting operator select cast(substring('publish on 2007-01-01',12,10) as timestamp) from s3object; non AWS casting operator select int(_1),int( 1.2 + 3.4) from s3object; non AWS casting operator select float(1.2) from s3object; non AWS casting operator select to_timestamp('1999-10-10T12:23:44Z') from s3object; Aggregation Function sun select sum(int(_1)) from s3object; Aggregation Function avg select avg(cast(_1 as float) + cast(_2 as int)) from s3object; Aggregation Function min select avg(cast(_1 a float) + cast(_2 as int)) from s3object; Aggregation Function max select max(float(_1)),min(int(_5)) from s3object; Aggregation Function count select count(*) from s3object where (int(1)+int(_3))>int(_5); Timestamp Functions extract select count(*) from s3object where extract(year from to_timestamp(_2)) > 1950 and extract(year from to_timestamp(_1)) < 1960; Timestamp Functions dateadd select count(0) from s3object where date_diff(year,to_timestamp(_1),date_add(day,366,to_timestamp(_1))) = 1; Timestamp Functions datediff select count(0) from s3object where date_diff(month,to_timestamp(_1),to_timestamp(_2)) = 2; Timestamp Functions utcnow select count(0) from s3object where date_diff(hour,utcnow(),date_add(day,1,utcnow())) = 24 Timestamp Functions to_string select to_string( to_timestamp("2009-09-17T17:56:06.234567Z"), "yyyyMMdd-H:m:s") from s3object; String Functions substring select count(0) from s3object where int(substring(_1,1,4))>1950 and int(substring(_1,1,4))<1960; String Functions substring substring with from negative number is valid considered as first select substring("123456789" from -4) from s3object; String Functions substring substring with from zero for out-of-bound number is valid just as (first,last) select substring("123456789" from 0 for 100) from s3object; String Functions trim select trim(' foobar ') from s3object; String Functions trim select trim(trailing from ' foobar ') from s3object; String Functions trim select 
trim(leading from ' foobar ') from s3object; String Functions trim select trim(both '12' from '1112211foobar22211122') from s3object; String Functions lower or upper select lower('ABcD12#USDe') from s3object; String Functions char_length, character_length select count(*) from s3object where char_length(_3)=3; Complex queries select sum(cast(_1 as int)),max(cast(_3 as int)), substring('abcdefghijklm', (2-1)*3+sum(cast(_1 as int))/sum(cast(_1 as int))+1, (count() + count(0))/count(0)) from s3object; alias support select int(_1) as a1, int(_2) as a2 , (a1+a2) as a3 from s3object where a3>100 and a3<300; Additional Resources See Amazon's S3 Select Object Content API for more details. 3.5.2. S3 supported select functions S3 select supports the following functions: .Timestamp to_timestamp(string) Description Converts string to timestamp basic type. In the string format, any missing 'time' value is populated with zero; for missing month and day value, 1 is the default value. 'Timezone' is in format +/-HH:mm or Z , where the letter 'Z' indicates Coordinated Universal Time (UTC). Value of timezone can range between - 12:00 and +14:00. Supported Currently it can convert the following string formats into timestamp: YYYY-MM-DDTHH:mm:ss.SSSSSS+/-HH:mm YYYY-MM-DDTHH:mm:ss.SSSSSSZ YYYY-MM-DDTHH:mm:ss+/-HH:mm YYYY-MM-DDTHH:mm:ssZ YYYY-MM-DDTHH:mm+/-HH:mm YYYY-MM-DDTHH:mmZ YYYY-MM-DDT YYYYT to_string(timestamp, format_pattern) Description Returns a string representation of the input timestamp in the given input string format. Parameters Format Example Description yy 69 2-year digit. y 1969 4-year digit. yyyy 1969 Zero-padded 4-digit year. M 1 Month of the year. MM 01 Zero-padded month of the year. MMM Jan Abbreviated month of the year name. MMMM January full month of the year name. MMMMM J Month of the year first letter. Not valid for use with the to_timestamp function. d 2 Day of the month (1-31). dd 02 Zero-padded day of the month (01-31). a AM AM or PM of day. h 3 Hour of the day (1-12). hh 03 Zero-padded hour of day (01-12). H 3 Hour of the day (0-23). HH 03 Zero-padded hour of the day (00-23). m 4 Minute of the hour (0-59). mm 04 Zero-padded minute of the hour (00-59). s 5 Second of the minute (0-59). ss 05 Zero-padded second of the minute (00-59). S 1 Fraction of the second (precision: 0.1, range: 0.0-0.9). SS 12 Fraction of the second (precision: 0.01, range: 0.0-0.99). SSS 123 Fraction of the second (precision: 0.01, range: 0.0-0.999). SSSS 1234 Fraction of the second (precision: 0.001, range: 0.0-0.9999). SSSSSS 123456 Fraction of the second (maximum precision: 1 nanosecond, range: 0.0-0.999999). n 60000000 Nano of second. X +07 or Z Offset in hours or "Z" if the offset is 0. XX or XXXX +0700 or Z Offset in hours and minutes or "Z" if the offset is 0. XXX or XXXXX +07:00 or Z Offset in hours and minutes or "Z" if the offset is 0. x 7 Offset in hours. xx or xxxx 700 Offset in hours and minutes. xxx or xxxxx +07:00 Offset in hours and minutes. extract(date-part from timestamp) Description Returns integer according to date-part extract from input timestamp. Supported year, month, week, day, hour, minute, second, timezone_hour, timezone_minute. date_add(date-part ,integer,timestamp) Description Returns timestamp, a calculation based on the results of input timestamp and date-part. Supported year, month, day, hour, minute, second. date_diff(date-part,timestamp,timestamp) Description Return an integer, a calculated result of the difference between two timestamps according to date-part. 
Supported year, month, day, hour, minute, second. utcnow() Description Return timestamp of current time. Aggregation count() Description Returns integers based on the number of rows that match a condition if there is one. sum(expression) Description Returns a summary of expression on each row that matches a condition if there is one. avg(expression) Description Returns an average expression on each row that matches a condition if there is one. max(expression) Description Returns the maximal result for all expressions that match a condition if there is one. min(expression) Description Returns the minimal result for all expressions that match a condition if there is one. String substring (string,from,for) Description Returns a string extract from the input string according to from, for inputs. Char_length Description Returns a number of characters in string. Character_length also does the same. trim([[leading | trailing | both remove_chars] from] string ) Description Trims leading/trailing (or both) characters from the target string. The default value is a blank character. Upper\lower Description Converts characters into uppercase or lowercase. NULL The NULL value is missing or unknown that is NULL can not produce a value on any arithmetic operations. The same applies to arithmetic comparison, any comparison to NULL is NULL that is unknown. Table 3.4. The NULL use case A is NULL Result(NULL=UNKNOWN) Not A NULL A or False NULL A or True True A or A NULL A and False False A and True NULL A and A NULL Additional Resources See Amazon's S3 Select Object Content API for more details. 3.5.3. S3 alias programming construct Alias programming construct is an essential part of the s3 select language because it enables better programming with objects that contain many columns or complex queries. When a statement with alias construct is parsed, it replaces the alias with a reference to the right projection column and on query execution, the reference is evaluated like any other expression. Alias maintains result-cache that is if an alias is used more than once, the same expression is not evaluated and the same result is returned because the result from the cache is used. Currently, Red Hat supports the column alias. Example 3.5.4. S3 parsing explained The S3 select engine has parsers for all three file formats - CSV, Parquet, and JSON which separate the commands into more processable components, which are then attached to tags that define each component. 3.5.4.1. S3 CSV parsing The CSV definitions with input serialization uses these default values: Use {\n}` for row-delimiter. Use {"} for quote. Use {\} for escape characters. The csv-header-info is parsed upon USE appearing in the AWS-CLI; this is the first row in the input object containing the schema. Currently, output serialization and compression-type is not supported. The S3 select engine has a CSV parser which parses S3-objects: Each row ends with a row-delimiter. The field-separator separates the adjacent columns. The successive field separator defines the NULL column. The quote-character overrides the field-separator; that is, the field separator is any character between the quotes. The escape character disables any special character except the row delimiter. The following are examples of CSV parsing rules: Table 3.5. CSV parsing Feature Description Input (Tokens) NULL Successive field delimiter ,,1,,2, =⇒ {null}{null}{1}{null}{2}{null} QUOTE The quote character overrides the field delimiter. 
11,22,"a,b,c,d",last =⇒ {11}{22}{"a,b,c,d"}{last} Escape The escape character overrides the meta-character. A container for the object owner's ID and DisplayName row delimiter There is no closed quote; row delimiter is the closing line. 11,22,a="str,44,55,66 =⇒ {11}{22}{a="str,44,55,66} csv header info FileHeaderInfo tag USE value means each token on the first line is the column-name; IGNORE value means to skip the first line. Additional Resources See Amazon's S3 Select Object Content API for more details. 3.5.4.2. S3 Parquet parsing Apache Parquet is an open-source, columnar data file format designed for efficient data storage and retrieval. The S3 select engine's Parquet parser parses S3-objects as follows: Example In the above example, there are N columns in this table, split into M row groups. The file metadata contains the locations of all the column metadata start locations. Metadata is written after the data to allow for single pass writing. All the column chunks can be found in the file metadata which should later be read sequentially. The format is explicitly designed to separate the metadata from the data. This allows splitting columns into multiple files, as well as having a single metadata file reference multiple parquet files. 3.5.4.3. S3 JSON parsing JSON document enables nesting values within objects or arrays without limitations. When querying a specific value in a JSON document in the S3 select engine, the location of the value is specified through a path in the SELECT statement. The generic structure of a JSON document does not have a row and column structure like CSV and Parquet. Instead, it is the SQL statement itself that defines the rows and columns when querying a JSON document. The S3 select engine's JSON parser parses S3-objects as follows: The FROM clause in the SELECT statement defines the row boundaries. A row in a JSON document is similar to how the row delimiter is used to define rows for CSV objects, and how row groups are used to define rows for Parquet objects Consider the following example: Example The statement instructs the reader to search for the path aa.bb.cc and defines the row boundaries based on the occurrence of this path. A row begins when the reader encounters the path, and it ends when the reader exits the innermost part of the path, which in this case is the object cc . 3.5.5. Integrating Ceph Object Gateway with Trino Integrate the Ceph Object Gateway with Trino, an important utility that enables the user to run SQL queries 9x faster on S3 objects. Following are some benefits of using Trino: Trino is a complete SQL engine. Pushes down S3 select requests wherein the Trino engine identifies part of the SQL statement that is cost effective to run on the server-side. uses the optimization rules of Ceph/S3select to enhance performance. Leverages Red Hat Ceph Storage scalability and divides the original object into multiple equal parts, performs S3 select requests, and merges the request. Important If the s3select syntax does not work while querying through trino, use the SQL syntax. Prerequisites A running Red Hat Ceph Storage cluster with Ceph Object Gateway installed. Docker or Podman installed. Buckets created. Objects are uploaded. Procedure Deploy Trino and hive. Example Modify the hms_trino.yaml file with S3 endpoint, access key, and secret key. Example Modify the hive.properties file with S3 endpoint, access key, and secret key. Example Start a Trino container to integrate Ceph Object Gateway. Example Verify integration. 
Example Note The external location must point to the bucket name or a directory, and not the end of a file.
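For comparison with the Trino workflow above, the following is a minimal sketch of the select object content request described in Section 3.5.1, issued directly with the Python boto3 client. The endpoint URL, credentials, bucket, object key, and the column positions used in the SQL expression are placeholders chosen for the example. Example

import boto3

# Placeholder endpoint, credentials, bucket, and object key.
s3 = boto3.client(
    's3',
    endpoint_url='http://rgw.example.com:8080',
    aws_access_key_id='MY_ACCESS_KEY',
    aws_secret_access_key='MY_SECRET_KEY',
)

# Filter a CSV object server side and return only the matching column.
response = s3.select_object_content(
    Bucket='my-csv-bucket',
    Key='reports/sales.csv',
    ExpressionType='SQL',
    Expression="select _1 from s3object where _2 = 'open';",
    InputSerialization={'CSV': {'FieldDelimiter': ',', 'RecordDelimiter': '\n'}},
    OutputSerialization={'CSV': {}},
)

# The response payload is an event stream of Records and Stats events.
for event in response['Payload']:
    if 'Records' in event:
        print(event['Records']['Payload'].decode('utf-8'), end='')
    elif 'Stats' in event:
        details = event['Stats']['Details']
        print('bytes scanned:', details['BytesScanned'])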
[ "HTTP/1.1 PUT /buckets/bucket/object.mpeg Host: cname.domain.com Date: Mon, 2 Jan 2012 00:01:01 +0000 Content-Encoding: mpeg Content-Length: 9999999 Authorization: AWS ACCESS_KEY : HASH_OF_HEADER_AND_SECRET", "firewall-cmd --zone=public --add-port=8080/tcp --permanent firewall-cmd --reload", "yum install dnsmasq echo \"address=/. FQDN_OF_GATEWAY_NODE / IP_OF_GATEWAY_NODE \" | tee --append /etc/dnsmasq.conf systemctl start dnsmasq systemctl enable dnsmasq", "systemctl stop NetworkManager systemctl disable NetworkManager", "echo \"DNS1= IP_OF_GATEWAY_NODE \" | tee --append /etc/sysconfig/network-scripts/ifcfg-eth0 echo \" IP_OF_GATEWAY_NODE FQDN_OF_GATEWAY_NODE \" | tee --append /etc/hosts systemctl restart network systemctl enable network systemctl restart dnsmasq", "[user@rgw ~]USD ping mybucket. FQDN_OF_GATEWAY_NODE", "yum install ruby", "gem install aws-s3", "[user@dev ~]USD mkdir ruby_aws_s3 [user@dev ~]USD cd ruby_aws_s3", "[user@dev ~]USD vim conn.rb", "#!/usr/bin/env ruby require 'aws/s3' require 'resolv-replace' AWS::S3::Base.establish_connection!( :server => ' FQDN_OF_GATEWAY_NODE ', :port => '8080', :access_key_id => ' MY_ACCESS_KEY ', :secret_access_key => ' MY_SECRET_KEY ' )", "#!/usr/bin/env ruby require 'aws/s3' require 'resolv-replace' AWS::S3::Base.establish_connection!( :server => 'testclient.englab.pnq.redhat.com', :port => '8080', :access_key_id => '98J4R9P22P5CDL65HKP8', :secret_access_key => '6C+jcaP0dp0+FZfrRNgyGA9EzRy25pURldwje049' )", "[user@dev ~]USD chmod +x conn.rb", "[user@dev ~]USD ./conn.rb | echo USD?", "[user@dev ~]USD vim create_bucket.rb", "#!/usr/bin/env ruby load 'conn.rb' AWS::S3::Bucket.create('my-new-bucket1')", "[user@dev ~]USD chmod +x create_bucket.rb", "[user@dev ~]USD ./create_bucket.rb", "[user@dev ~]USD vim list_owned_buckets.rb", "#!/usr/bin/env ruby load 'conn.rb' AWS::S3::Service.buckets.each do |bucket| puts \"{bucket.name}\\t{bucket.creation_date}\" end", "[user@dev ~]USD chmod +x list_owned_buckets.rb", "[user@dev ~]USD ./list_owned_buckets.rb", "my-new-bucket1 2020-01-21 10:33:19 UTC", "[user@dev ~]USD vim create_object.rb", "#!/usr/bin/env ruby load 'conn.rb' AWS::S3::S3Object.store( 'hello.txt', 'Hello World!', 'my-new-bucket1', :content_type => 'text/plain' )", "[user@dev ~]USD chmod +x create_object.rb", "[user@dev ~]USD ./create_object.rb", "[user@dev ~]USD vim list_bucket_content.rb", "#!/usr/bin/env ruby load 'conn.rb' new_bucket = AWS::S3::Bucket.find('my-new-bucket1') new_bucket.each do |object| puts \"{object.key}\\t{object.about['content-length']}\\t{object.about['last-modified']}\" end", "[user@dev ~]USD chmod +x list_bucket_content.rb", "[user@dev ~]USD ./list_bucket_content.rb", "hello.txt 12 Fri, 22 Jan 2020 15:54:52 GMT", "[user@dev ~]USD vim del_empty_bucket.rb", "#!/usr/bin/env ruby load 'conn.rb' AWS::S3::Bucket.delete('my-new-bucket1')", "[user@dev ~]USD chmod +x del_empty_bucket.rb", "[user@dev ~]USD ./del_empty_bucket.rb | echo USD?", "[user@dev ~]USD vim del_non_empty_bucket.rb", "#!/usr/bin/env ruby load 'conn.rb' AWS::S3::Bucket.delete('my-new-bucket1', :force => true)", "[user@dev ~]USD chmod +x del_non_empty_bucket.rb", "[user@dev ~]USD ./del_non_empty_bucket.rb | echo USD?", "[user@dev ~]USD vim delete_object.rb", "#!/usr/bin/env ruby load 'conn.rb' AWS::S3::S3Object.delete('hello.txt', 'my-new-bucket1')", "[user@dev ~]USD chmod +x delete_object.rb", "[user@dev ~]USD ./delete_object.rb", "yum install ruby", "gem install aws-sdk", "[user@dev ~]USD mkdir ruby_aws_sdk [user@dev ~]USD cd ruby_aws_sdk", "[user@dev 
~]USD vim conn.rb", "#!/usr/bin/env ruby require 'aws-sdk' require 'resolv-replace' Aws.config.update( endpoint: 'http:// FQDN_OF_GATEWAY_NODE :8080', access_key_id: ' MY_ACCESS_KEY ', secret_access_key: ' MY_SECRET_KEY ', force_path_style: true, region: 'us-east-1' )", "#!/usr/bin/env ruby require 'aws-sdk' require 'resolv-replace' Aws.config.update( endpoint: 'http://testclient.englab.pnq.redhat.com:8080', access_key_id: '98J4R9P22P5CDL65HKP8', secret_access_key: '6C+jcaP0dp0+FZfrRNgyGA9EzRy25pURldwje049', force_path_style: true, region: 'us-east-1' )", "[user@dev ~]USD chmod +x conn.rb", "[user@dev ~]USD ./conn.rb | echo USD?", "[user@dev ~]USD vim create_bucket.rb", "#!/usr/bin/env ruby load 'conn.rb' s3_client = Aws::S3::Client.new s3_client.create_bucket(bucket: 'my-new-bucket2')", "[user@dev ~]USD chmod +x create_bucket.rb", "[user@dev ~]USD ./create_bucket.rb", "[user@dev ~]USD vim list_owned_buckets.rb", "#!/usr/bin/env ruby load 'conn.rb' s3_client = Aws::S3::Client.new s3_client.list_buckets.buckets.each do |bucket| puts \"{bucket.name}\\t{bucket.creation_date}\" end", "[user@dev ~]USD chmod +x list_owned_buckets.rb", "[user@dev ~]USD ./list_owned_buckets.rb", "my-new-bucket2 2020-01-21 10:33:19 UTC", "[user@dev ~]USD vim create_object.rb", "#!/usr/bin/env ruby load 'conn.rb' s3_client = Aws::S3::Client.new s3_client.put_object( key: 'hello.txt', body: 'Hello World!', bucket: 'my-new-bucket2', content_type: 'text/plain' )", "[user@dev ~]USD chmod +x create_object.rb", "[user@dev ~]USD ./create_object.rb", "[user@dev ~]USD vim list_bucket_content.rb", "#!/usr/bin/env ruby load 'conn.rb' s3_client = Aws::S3::Client.new s3_client.list_objects(bucket: 'my-new-bucket2').contents.each do |object| puts \"{object.key}\\t{object.size}\" end", "[user@dev ~]USD chmod +x list_bucket_content.rb", "[user@dev ~]USD ./list_bucket_content.rb", "hello.txt 12 Fri, 22 Jan 2020 15:54:52 GMT", "[user@dev ~]USD vim del_empty_bucket.rb", "#!/usr/bin/env ruby load 'conn.rb' s3_client = Aws::S3::Client.new s3_client.delete_bucket(bucket: 'my-new-bucket2')", "[user@dev ~]USD chmod +x del_empty_bucket.rb", "[user@dev ~]USD ./del_empty_bucket.rb | echo USD?", "[user@dev ~]USD vim del_non_empty_bucket.rb", "#!/usr/bin/env ruby load 'conn.rb' s3_client = Aws::S3::Client.new Aws::S3::Bucket.new('my-new-bucket2', client: s3_client).clear! 
s3_client.delete_bucket(bucket: 'my-new-bucket2')", "[user@dev ~]USD chmod +x del_non_empty_bucket.rb", "[user@dev ~]USD ./del_non_empty_bucket.rb | echo USD?", "[user@dev ~]USD vim delete_object.rb", "#!/usr/bin/env ruby load 'conn.rb' s3_client = Aws::S3::Client.new s3_client.delete_object(key: 'hello.txt', bucket: 'my-new-bucket2')", "[user@dev ~]USD chmod +x delete_object.rb", "[user@dev ~]USD ./delete_object.rb", "yum install php", "[user@dev ~]USD mkdir php_s3 [user@dev ~]USD cd php_s3", "[user@dev ~]USD cp -r ~/Downloads/aws/ ~/php_s3/", "[user@dev ~]USD vim conn.php", "<?php define('AWS_KEY', ' MY_ACCESS_KEY '); define('AWS_SECRET_KEY', ' MY_SECRET_KEY '); define('HOST', ' FQDN_OF_GATEWAY_NODE '); define('PORT', '8080'); // require the AWS SDK for php library require '/ PATH_TO_AWS /aws-autoloader.php'; use Aws\\S3\\S3Client; // Establish connection with host using S3 Client client = S3Client::factory(array( 'base_url' => HOST , 'port' => PORT , 'key' => AWS_KEY , 'secret' => AWS_SECRET_KEY )); ?>", "[user@dev ~]USD php -f conn.php | echo USD?", "[user@dev ~]USD vim create_bucket.php", "<?php include 'conn.php'; client->createBucket(array('Bucket' => 'my-new-bucket3')); ?>", "[user@dev ~]USD php -f create_bucket.php", "[user@dev ~]USD vim list_owned_buckets.php", "<?php include 'conn.php'; blist = client->listBuckets(); echo \"Buckets belonging to \" . blist['Owner']['ID'] . \":\\n\"; foreach (blist['Buckets'] as b) { echo \"{b['Name']}\\t{b['CreationDate']}\\n\"; } ?>", "[user@dev ~]USD php -f list_owned_buckets.php", "my-new-bucket3 2020-01-21 10:33:19 UTC", "[user@dev ~]USD echo \"Hello World!\" > hello.txt", "[user@dev ~]USD vim create_object.php", "<?php include 'conn.php'; key = 'hello.txt'; source_file = './hello.txt'; acl = 'private'; bucket = 'my-new-bucket3'; client->upload(bucket, key, fopen(source_file, 'r'), acl); ?>", "[user@dev ~]USD php -f create_object.php", "[user@dev ~]USD vim list_bucket_content.php", "<?php include 'conn.php'; o_iter = client->getIterator('ListObjects', array( 'Bucket' => 'my-new-bucket3' )); foreach (o_iter as o) { echo \"{o['Key']}\\t{o['Size']}\\t{o['LastModified']}\\n\"; } ?>", "[user@dev ~]USD php -f list_bucket_content.php", "hello.txt 12 Fri, 22 Jan 2020 15:54:52 GMT", "[user@dev ~]USD vim del_empty_bucket.php", "<?php include 'conn.php'; client->deleteBucket(array('Bucket' => 'my-new-bucket3')); ?>", "[user@dev ~]USD php -f del_empty_bucket.php | echo USD?", "[user@dev ~]USD vim delete_object.php", "<?php include 'conn.php'; client->deleteObject(array( 'Bucket' => 'my-new-bucket3', 'Key' => 'hello.txt', )); ?>", "[user@dev ~]USD php -f delete_object.php", "ceph config set RGW_CLIENT_NAME rgw_sts_key STS_KEY ceph config set RGW_CLIENT_NAME rgw_s3_auth_use_sts true", "ceph config set client.rgw rgw_sts_key 7f8fd8dd4700mnop ceph config set client.rgw rgw_s3_auth_use_sts true", "systemctl restart ceph- CLUSTER_ID @ SERVICE_TYPE . 
ID .service", "systemctl restart ceph-c4b34c6f-8365-11ba-dc31-529020a7702d@rgw.realm.zone.host01.gwasto.service", "ceph orch restart SERVICE_TYPE", "ceph orch restart rgw", "radosgw-admin --uid USER_NAME --display-name \" DISPLAY_NAME \" --access_key USER_NAME --secret SECRET user create", "[user@rgw ~]USD radosgw-admin --uid TESTER --display-name \"TestUser\" --access_key TESTER --secret test123 user create", "radosgw-admin caps add --uid=\" USER_NAME \" --caps=\"oidc-provider=*\"", "[user@rgw ~]USD radosgw-admin caps add --uid=\"TESTER\" --caps=\"oidc-provider=*\"", "\"{\\\"Version\\\":\\\"2020-01-17\\\",\\\"Statement\\\":[{\\\"Effect\\\":\\\"Allow\\\",\\\"Principal\\\":{\\\"Federated\\\":[\\\"arn:aws:iam:::oidc-provider/ IDP_URL \\\"]},\\\"Action\\\":[\\\"sts:AssumeRoleWithWebIdentity\\\"],\\\"Condition\\\":{\\\"StringEquals\\\":{\\\" IDP_URL :app_id\\\":\\\" AUD_FIELD \\\"\\}\\}\\}\\]\\}\"", "curl -k -v -X GET -H \"Content-Type: application/x-www-form-urlencoded\" \" IDP_URL :8000/ CONTEXT /realms/ REALM /.well-known/openid-configuration\" | jq .", "[user@client ~]USD curl -k -v -X GET -H \"Content-Type: application/x-www-form-urlencoded\" \"http://www.example.com:8000/auth/realms/quickstart/.well-known/openid-configuration\" | jq .", "curl -k -v -X GET -H \"Content-Type: application/x-www-form-urlencoded\" \" IDP_URL / CONTEXT /realms/ REALM /protocol/openid-connect/certs\" | jq .", "[user@client ~]USD curl -k -v -X GET -H \"Content-Type: application/x-www-form-urlencoded\" \"http://www.example.com/auth/realms/quickstart/protocol/openid-connect/certs\" | jq .", "-----BEGIN CERTIFICATE----- MIIDYjCCAkqgAwIBAgIEEEd2CDANBgkqhkiG9w0BAQsFADBzMQkwBwYDVQQGEwAxCTAHBgNVBAgTADEJMAcGA1UEBxMAMQkwBwYDVQQKEwAxCTAHBgNVBAsTADE6MDgGA1UEAxMxYXV0aHN2Yy1pbmxpbmVtZmEuZGV2LnZlcmlmeS5pYm1jbG91ZHNlY3VyaXR5LmNvbTAeFw0yMTA3MDUxMzU2MzZaFw0zMTA3MDMxMzU2MzZaMHMxCTAHBgNVBAYTADEJMAcGA1UECBMAMQkwBwYDVQQHEwAxCTAHBgNVBAoTADEJMAcGA1UECxMAMTowOAYDVQQDEzFhdXRoc3ZjLWlubGluZW1mYS5kZXYudmVyaWZ5LmlibWNsb3Vkc2VjdXJpdHkuY29tMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAphyu3HaAZ14JH/EXetZxtNnerNuqcnfxcmLhBz9SsTlFD59ta+BOVlRnK5SdYEqO3ws2iGEzTvC55rczF+hDVHFZEBJLVLQe8ABmi22RAtG1P0dA/Bq8ReFxpOFVWJUBc31QM+ummW0T4yw44wQJI51LZTMz7PznB0ScpObxKe+frFKd1TCMXPlWOSzmTeFYKzR83Fg9hsnz7Y8SKGxi+RoBbTLT+ektfWpR7O+oWZIf4INe1VYJRxZvn+qWcwI5uMRCtQkiMknc3Rj6Eupiqq6FlAjDs0p//EzsHAlW244jMYnHCGq0UP3oE7vViLJyiOmZw7J3rvs3m9mOQiPLoQIDAQABMA0GCSqGSIb3DQEBCwUAA4IBAQCeVqAzSh7Tp8LgaTIFUuRbdjBAKXC9Nw3+pRBHoiUTdhqO3ualyGih9m/js/clb8Vq/39zl0VPeaslWl2NNX9zaK7xo+ckVIOY3ucCaTC04ZUn1KzZu/7azlN0C5XSWg/CfXgU2P3BeMNzc1UNY1BASGyWn2lEplIVWKLaDZpNdSyyGyaoQAIBdzxeNCyzDfPCa2oSO8WH1czmFiNPqR5kdknHI96CmsQdi+DT4jwzVsYgrLfcHXmiWyIAb883hR3Pobp+Bsw7LUnxebQ5ewccjYmrJzOk5Wb5FpXBhaJH1B3AEd6RGalRUyc/zUKdvEy0nIRMDS9x2BP3NVvZSADD -----END CERTIFICATE-----", "openssl x509 -in CERT_FILE -fingerprint -noout", "[user@client ~]USD openssl x509 -in certificate.crt -fingerprint -noout SHA1 Fingerprint=F7:D7:B3:51:5D:D0:D3:19:DD:21:9A:43:A9:EA:72:7A:D6:06:52:87", "bash check_token_isv.sh | jq .iss \"https://keycloak-sso.apps.ocp.example.com/auth/realms/ceph\"", "aws --endpoint https://cephproxy1.example.com:8443 iam create-open-id-connect-provider --url https://keycloak-sso.apps.ocp.example.com/auth/realms/ceph --thumbprint-list 00E9CFD697E0B16DD13C86B0FFDC29957E5D24DF", "aws --endpoint https://cephproxy1.example.com:8443 iam list-open-id-connect-providers { \"OpenIDConnectProviderList\": [ { \"Arn\": \"arn:aws:iam:::oidc-provider/keycloak-sso.apps.ocp.example.com/auth/realms/ceph\" } ] }", 
"curl -k -q -L -X POST \"https://keycloak-sso.apps.example.com/auth/realms/ceph/protocol/openid-connect/ token\" -H 'Content-Type: application/x-www-form-urlencoded' --data-urlencode 'client_id=ceph' --data-urlencode 'grant_type=password' --data-urlencode 'client_secret=XXXXXXXXXXXXXXXXXXXXXXX' --data-urlencode 'scope=openid' --data-urlencode \"username=SSOUSERNAME\" --data-urlencode \"password=SSOPASSWORD\"", "cat check_token.sh USERNAME=USD1 PASSWORD=USD2 KC_CLIENT=\"ceph\" KC_CLIENT_SECRET=\"7sQXqyMSzHIeMcSALoKaljB6sNIBDRjU\" KC_ACCESS_TOKEN=\"USD(./get_web_token.sh USDUSERNAME USDPASSWORD | jq -r '.access_token')\" KC_SERVER=\"https://keycloak-sso.apps.ocp.stg.local\" KC_CONTEXT=\"auth\" KC_REALM=\"ceph\" curl -k -s -q -X POST -u \"USDKC_CLIENT:USDKC_CLIENT_SECRET\" -d \"token=USDKC_ACCESS_TOKEN\" \"USDKC_SERVER/USDKC_CONTEXT/realms/USDKC_REALM/protocol/openid-connect/token/introspect\" | jq . ./check_token.sh s3admin passw0rd | jq .sub \"ceph\"", "cat role-rgwadmins.json { \"Version\": \"2012-10-17\", \"Statement\": [ { \"Effect\": \"Allow\", \"Principal\": { \"Federated\": [ \"arn:aws:iam:::oidc-provider/keycloak-sso.apps.example.com/auth/realms/ceph\" ] }, \"Action\": [ \"sts:AssumeRoleWithWebIdentity\" ], \"Condition\": { \"StringLike\": { \"keycloak-sso.apps.example.com/auth/realms/ceph:sub\":\"ceph\" } } } ] }", "radosgw-admin role create --role-name rgwadmins --assume-role-policy-doc=USD(jq -rc . /root/role-rgwadmins.json)", "cat test-assume-role.sh #!/bin/bash export AWS_CA_BUNDLE=\"/etc/pki/ca-trust/source/anchors/cert.pem\" unset AWS_ACCESS_KEY_ID unset AWS_SECRET_ACCESS_KEY unset AWS_SESSION_TOKEN KC_ACCESS_TOKEN=USD(curl -k -q -L -X POST \"https://keycloak-sso.apps.ocp.example.com/auth/realms/ceph/protocol/openid-connect/ token\" -H 'Content-Type: application/x-www-form-urlencoded' --data-urlencode 'client_id=ceph' --data-urlencode 'grant_type=password' --data-urlencode 'client_secret=XXXXXXXXXXXXXXXXXXXXXXX' --data-urlencode 'scope=openid' --data-urlencode \"<varname>SSOUSERNAME</varname>\" --data-urlencode \"<varname>SSOPASSWORD</varname>\" | jq -r .access_token) echo USD{KC_ACCESS_TOKEN} IDM_ASSUME_ROLE_CREDS=USD(aws sts assume-role-with-web-identity --role-arn \"arn:aws:iam:::role/USD3\" --role-session-name testbr --endpoint=https://cephproxy1.example.com:8443 --web-identity-token=\"USDKC_ACCESS_TOKEN\") echo \"aws sts assume-role-with-web-identity --role-arn \"arn:aws:iam:::role/USD3\" --role-session-name testb --endpoint=https://cephproxy1.example.com:8443 --web-identity-token=\"USDKC_ACCESS_TOKEN\"\" echo USDIDM_ASSUME_ROLE_CREDS export AWS_ACCESS_KEY_ID=USD(echo USDIDM_ASSUME_ROLE_CREDS | jq -r .Credentials.AccessKeyId) export AWS_SECRET_ACCESS_KEY=USD(echo USDIDM_ASSUME_ROLE_CREDS | jq -r .Credentials.SecretAccessKey) export AWS_SESSION_TOKEN=USD(echo USDIDM_ASSUME_ROLE_CREDS | jq -r .Credentials.SessionToken)", "source ./test-assume-role.sh s3admin passw0rd rgwadmins aws s3 mb s3://testbucket aws s3 ls", "ceph config set RGW_CLIENT_NAME rgw_sts_key STS_KEY ceph config set RGW_CLIENT_NAME rgw_s3_auth_use_sts true", "ceph config set client.rgw rgw_sts_key 7f8fd8dd4700mnop ceph config set client.rgw rgw_s3_auth_use_sts true", "[user@osp ~]USD openstack ec2 credentials create +------------+--------------------------------------------------------+ | Field | Value | +------------+--------------------------------------------------------+ | access | b924dfc87d454d15896691182fdeb0ef | | links | {u'self': u'http://192.168.0.15/identity/v3/users/ | | | 
40a7140e424f493d8165abc652dc731c/credentials/ | | | OS-EC2/b924dfc87d454d15896691182fdeb0ef'} | | project_id | c703801dccaf4a0aaa39bec8c481e25a | | secret | 6a2142613c504c42a94ba2b82147dc28 | | trust_id | None | | user_id | 40a7140e424f493d8165abc652dc731c | +------------+--------------------------------------------------------+", "import boto3 access_key = b924dfc87d454d15896691182fdeb0ef secret_key = 6a2142613c504c42a94ba2b82147dc28 client = boto3.client('sts', aws_access_key_id=access_key, aws_secret_access_key=secret_key, endpoint_url=https://www.example.com/rgw, region_name='', ) response = client.get_session_token( DurationSeconds=43200 )", "s3client = boto3.client('s3', aws_access_key_id = response['Credentials']['AccessKeyId'], aws_secret_access_key = response['Credentials']['SecretAccessKey'], aws_session_token = response['Credentials']['SessionToken'], endpoint_url=https://www.example.com/s3, region_name='') bucket = s3client.create_bucket(Bucket='my-new-shiny-bucket') response = s3client.list_buckets() for bucket in response[\"Buckets\"]: print \"{name}\\t{created}\".format( name = bucket['Name'], created = bucket['CreationDate'], )", "radosgw-admin caps add --uid=\" USER \" --caps=\"roles=*\"", "radosgw-admin caps add --uid=\"gwadmin\" --caps=\"roles=*\"", "radosgw-admin role create --role-name= ROLE_NAME --path= PATH --assume-role-policy-doc= TRUST_POLICY_DOC", "radosgw-admin role create --role-name=S3Access --path=/application_abc/component_xyz/ --assume-role-policy-doc=\\{\\\"Version\\\":\\\"2012-10-17\\\",\\\"Statement\\\":\\[\\{\\\"Effect\\\":\\\"Allow\\\",\\\"Principal\\\":\\{\\\"AWS\\\":\\[\\\"arn:aws:iam:::user/TESTER\\\"\\]\\},\\\"Action\\\":\\[\\\"sts:AssumeRole\\\"\\]\\}\\]\\}", "radosgw-admin role-policy put --role-name= ROLE_NAME --policy-name= POLICY_NAME --policy-doc= PERMISSION_POLICY_DOC", "radosgw-admin role-policy put --role-name=S3Access --policy-name=Policy --policy-doc=\\{\\\"Version\\\":\\\"2012-10-17\\\",\\\"Statement\\\":\\[\\{\\\"Effect\\\":\\\"Allow\\\",\\\"Action\\\":\\[\\\"s3:*\\\"\\],\\\"Resource\\\":\\\"arn:aws:s3:::example_bucket\\\"\\}\\]\\}", "radosgw-admin user info --uid=gwuser | grep -A1 access_key", "import boto3 access_key = 11BS02LGFB6AL6H1ADMW secret_key = vzCEkuryfn060dfee4fgQPqFrncKEIkh3ZcdOANY client = boto3.client('sts', aws_access_key_id=access_key, aws_secret_access_key=secret_key, endpoint_url=https://www.example.com/rgw, region_name='', ) response = client.assume_role( RoleArn='arn:aws:iam:::role/application_abc/component_xyz/S3Access', RoleSessionName='Bob', DurationSeconds=3600 )", "class SigV4Auth(BaseSigner): \"\"\" Sign a request with Signature V4. \"\"\" REQUIRES_REGION = True def __init__(self, credentials, service_name, region_name): self.credentials = credentials # We initialize these value here so the unit tests can have # valid values. But these will get overriden in ``add_auth`` # later for real requests. 
self._region_name = region_name if service_name == 'sts': 1 self._service_name = 's3' 2 else: 3 self._service_name = service_name 4", "def _modify_request_before_signing(self, request): if 'Authorization' in request.headers: del request.headers['Authorization'] self._set_necessary_date_headers(request) if self.credentials.token: if 'X-Amz-Security-Token' in request.headers: del request.headers['X-Amz-Security-Token'] request.headers['X-Amz-Security-Token'] = self.credentials.token if not request.context.get('payload_signing_enabled', True): if 'X-Amz-Content-SHA256' in request.headers: del request.headers['X-Amz-Content-SHA256'] request.headers['X-Amz-Content-SHA256'] = UNSIGNED_PAYLOAD 1 else: 2 request.headers['X-Amz-Content-SHA256'] = self.payload(request)", "client.put_bucket_notification_configuration( Bucket=bucket_name, NotificationConfiguration={ 'TopicConfigurations': [ { 'Id': notification_name, 'TopicArn': topic_arn, 'Events': ['s3:ObjectCreated:*', 's3:ObjectRemoved:*', 's3:ObjectLifecycle:Expiration:*'] }]})", "Get / BUCKET ?notification= NOTIFICATION_ID HTTP/1.1 Host: cname.domain.com Date: date Authorization: AWS ACCESS_KEY : HASH_OF_HEADER_AND_SECRET", "Get /testbucket?notification=testnotificationID HTTP/1.1 Host: cname.domain.com Date: date Authorization: AWS ACCESS_KEY : HASH_OF_HEADER_AND_SECRET", "<NotificationConfiguration xmlns=\"http://s3.amazonaws.com/doc/2006-03-01/\"> <TopicConfiguration> <Id></Id> <Topic></Topic> <Event></Event> <Filter> <S3Key> <FilterRule> <Name></Name> <Value></Value> </FilterRule> </S3Key> <S3Metadata> <FilterRule> <Name></Name> <Value></Value> </FilterRule> </S3Metadata> <S3Tags> <FilterRule> <Name></Name> <Value></Value> </FilterRule> </S3Tags> </Filter> </TopicConfiguration> </NotificationConfiguration>", "DELETE / BUCKET ?notification= NOTIFICATION_ID HTTP/1.1", "DELETE /testbucket?notification=testnotificationID HTTP/1.1", "GET /mybucket HTTP/1.1 Host: cname.domain.com", "GET / HTTP/1.1 Host: mybucket.cname.domain.com", "GET / HTTP/1.1 Host: cname.domain.com Authorization: AWS ACCESS_KEY : HASH_OF_HEADER_AND_SECRET", "GET / BUCKET ?max-keys=25 HTTP/1.1 Host: cname.domain.com", "PUT / BUCKET HTTP/1.1 Host: cname.domain.com x-amz-acl: public-read-write Authorization: AWS ACCESS_KEY : HASH_OF_HEADER_AND_SECRET", "PUT / BUCKET ?website-configuration=HTTP/1.1", "PUT /testbucket?website-configuration=HTTP/1.1", "GET / BUCKET ?website-configuration=HTTP/1.1", "GET /testbucket?website-configuration=HTTP/1.1", "DELETE / BUCKET ?website-configuration=HTTP/1.1", "DELETE /testbucket?website-configuration=HTTP/1.1", "PUT / BUCKET ?replication HTTP/1.1", "PUT /testbucket?replication HTTP/1.1", "GET / BUCKET ?replication HTTP/1.1", "GET /testbucket?replication HTTP/1.1", "DELETE / BUCKET ?replication HTTP/1.1", "DELETE /testbucket?replication HTTP/1.1", "DELETE / BUCKET HTTP/1.1 Host: cname.domain.com Authorization: AWS ACCESS_KEY : HASH_OF_HEADER_AND_SECRET", "<LifecycleConfiguration> <Rule> <Prefix/> <Status>Enabled</Status> <Expiration> <Days>10</Days> </Expiration> </Rule> </LifecycleConfiguration>", "<LifecycleConfiguration> <Rule> <Status>Enabled</Status> <Filter> <Prefix>keypre/</Prefix> </Filter> </Rule> </LifecycleConfiguration>", "<LifecycleConfiguration> <Rule> <Status>Enabled</Status> <Filter> <Prefix>keypre/</Prefix> </Filter> </Rule> <Rule> <Status>Enabled</Status> <Filter> <Prefix>mypre/</Prefix> </Filter> </Rule> </LifecycleConfiguration>", "<LifecycleConfiguration> <Rule> <Status>Enabled</Status> <Filter> <Tag> <Key>key</Key> 
<Value>value</Value> </Tag> </Filter> </Rule> </LifecycleConfiguration>", "<LifecycleConfiguration> <Rule> <Status>Enabled</Status> <Filter> <And> <Prefix>key-prefix</Prefix> <Tag> <Key>key1</Key> <Value>value1</Value> </Tag> <Tag> <Key>key2</Key> <Value>value2</Value> </Tag> </And> </Filter> </Rule> </LifecycleConfiguration>", "GET / BUCKET ?lifecycle HTTP/1.1 Host: cname.domain.com Authorization: AWS ACCESS_KEY : HASH_OF_HEADER_AND_SECRET", "PUT / BUCKET ?lifecycle HTTP/1.1 Host: cname.domain.com Authorization: AWS ACCESS_KEY : HASH_OF_HEADER_AND_SECRET <LifecycleConfiguration> <Rule> <Expiration> <Days>10</Days> </Expiration> </Rule> <Rule> </Rule> </LifecycleConfiguration>", "DELETE / BUCKET ?lifecycle HTTP/1.1 Host: cname.domain.com Authorization: AWS ACCESS_KEY : HASH_OF_HEADER_AND_SECRET", "GET / BUCKET ?location HTTP/1.1 Host: cname.domain.com Authorization: AWS ACCESS_KEY : HASH_OF_HEADER_AND_SECRET", "GET / BUCKET ?versioning HTTP/1.1 Host: cname.domain.com Authorization: AWS ACCESS_KEY : HASH_OF_HEADER_AND_SECRET", "PUT / BUCKET ?versioning HTTP/1.1", "PUT /testbucket?versioning HTTP/1.1", "GET / BUCKET ?acl HTTP/1.1 Host: cname.domain.com Authorization: AWS ACCESS_KEY : HASH_OF_HEADER_AND_SECRET", "PUT / BUCKET ?acl HTTP/1.1", "GET / BUCKET ?cors HTTP/1.1 Host: cname.domain.com Authorization: AWS ACCESS_KEY : HASH_OF_HEADER_AND_SECRET", "PUT / BUCKET ?cors HTTP/1.1 Host: cname.domain.com Authorization: AWS ACCESS_KEY : HASH_OF_HEADER_AND_SECRET", "DELETE / BUCKET ?cors HTTP/1.1 Host: cname.domain.com Authorization: AWS ACCESS_KEY : HASH_OF_HEADER_AND_SECRET", "GET / BUCKET ?versions HTTP/1.1 Host: cname.domain.com Authorization: AWS ACCESS_KEY : HASH_OF_HEADER_AND_SECRET", "HEAD / BUCKET HTTP/1.1 Host: cname.domain.com Date: date Authorization: AWS ACCESS_KEY : HASH_OF_HEADER_AND_SECRET", "GET / BUCKET ?uploads HTTP/1.1", "cat > examplepol { \"Version\": \"2012-10-17\", \"Statement\": [{ \"Effect\": \"Allow\", \"Principal\": {\"AWS\": [\"arn:aws:iam::usfolks:user/fred\"]}, \"Action\": \"s3:PutObjectAcl\", \"Resource\": [ \"arn:aws:s3:::happybucket/*\" ] }] } s3cmd setpolicy examplepol s3://happybucket s3cmd delpolicy s3://happybucket", "GET / BUCKET ?requestPayment HTTP/1.1 Host: cname.domain.com Authorization: AWS ACCESS_KEY : HASH_OF_HEADER_AND_SECRET", "PUT / BUCKET ?requestPayment HTTP/1.1 Host: cname.domain.com", "https://rgw.domain.com/tenant:bucket", "from boto.s3.connection import S3Connection, OrdinaryCallingFormat c = S3Connection( aws_access_key_id=\"TESTER\", aws_secret_access_key=\"test123\", host=\"rgw.domain.com\", calling_format = OrdinaryCallingFormat() ) bucket = c.get_bucket(\"tenant:bucket\")", "{ \"Principal\": \"*\", \"Resource\": \"*\", \"Action\": \"s3:PutObject\", \"Effect\": \"Allow\", \"Condition\": { \"StringLike\": {\"aws:SourceVpc\": \"vpc-*\"}} }", "{ \"Principal\": \"*\", \"Resource\": \"*\", \"Action\": \"s3:PutObject\", \"Effect\": \"Allow\", \"Condition\": {\"StringEquals\": {\"aws:SourceVpc\": \"vpc-91237329\"}} }", "GET /v20180820/configuration/publicAccessBlock HTTP/1.1 Host: cname.domain.com x-amz-account-id: _ACCOUNTID_", "PUT /?publicAccessBlock HTTP/1.1 Host: Bucket.s3.amazonaws.com Content-MD5: ContentMD5 x-amz-sdk-checksum-algorithm: ChecksumAlgorithm x-amz-expected-bucket-owner: ExpectedBucketOwner <?xml version=\"1.0\" encoding=\"UTF-8\"?> <PublicAccessBlockConfiguration xmlns=\"http://s3.amazonaws.com/doc/2006-03-01/\"> <BlockPublicAcls>boolean</BlockPublicAcls> <IgnorePublicAcls>boolean</IgnorePublicAcls> 
<BlockPublicPolicy>boolean</BlockPublicPolicy> <RestrictPublicBuckets>boolean</RestrictPublicBuckets> </PublicAccessBlockConfiguration>", "DELETE /v20180820/configuration/publicAccessBlock HTTP/1.1 Host: s3-control.amazonaws.com x-amz-account-id: AccountId", "GET / BUCKET / OBJECT HTTP/1.1", "GET / BUCKET / OBJECT ?versionId= VERSION_ID HTTP/1.1", "GET / BUCKET / OBJECT ?partNumber= PARTNUMBER &versionId= VersionId HTTP/1.1 Host: Bucket.s3.amazonaws.com If-Match: IfMatch If-Modified-Since: IfModifiedSince If-None-Match: IfNoneMatch If-Unmodified-Since: IfUnmodifiedSince Range: Range", "HEAD / BUCKET / OBJECT HTTP/1.1", "HEAD / BUCKET / OBJECT ?versionId= VERSION_ID HTTP/1.1", "PUT / BUCKET ?object-lock HTTP/1.1", "PUT /testbucket?object-lock HTTP/1.1", "GET / BUCKET ?object-lock HTTP/1.1", "GET /testbucket?object-lock HTTP/1.1", "PUT / BUCKET / OBJECT ?legal-hold&versionId= HTTP/1.1", "PUT /testbucket/testobject?legal-hold&versionId= HTTP/1.1", "GET / BUCKET / OBJECT ?legal-hold&versionId= HTTP/1.1", "GET /testbucket/testobject?legal-hold&versionId= HTTP/1.1", "PUT / BUCKET / OBJECT ?retention&versionId= HTTP/1.1", "PUT /testbucket/testobject?retention&versionId= HTTP/1.1", "GET / BUCKET / OBJECT ?retention&versionId= HTTP/1.1", "GET /testbucket/testobject?retention&versionId= HTTP/1.1", "PUT / BUCKET / OBJECT ?tagging&versionId= HTTP/1.1", "PUT /testbucket/testobject?tagging&versionId= HTTP/1.1", "GET / BUCKET / OBJECT ?tagging&versionId= HTTP/1.1", "GET /testbucket/testobject?tagging&versionId= HTTP/1.1", "DELETE / BUCKET / OBJECT ?tagging&versionId= HTTP/1.1", "DELETE /testbucket/testobject?tagging&versionId= HTTP/1.1", "PUT / BUCKET / OBJECT HTTP/1.1", "DELETE / BUCKET / OBJECT HTTP/1.1", "DELETE / BUCKET / OBJECT ?versionId= VERSION_ID HTTP/1.1", "POST / BUCKET / OBJECT ?delete HTTP/1.1", "GET / BUCKET / OBJECT ?acl HTTP/1.1", "GET / BUCKET / OBJECT ?versionId= VERSION_ID &acl HTTP/1.1", "PUT / BUCKET / OBJECT ?acl", "PUT / DEST_BUCKET / DEST_OBJECT HTTP/1.1 x-amz-copy-source: SOURCE_BUCKET / SOURCE_OBJECT", "POST / BUCKET / OBJECT HTTP/1.1", "OPTIONS / OBJECT HTTP/1.1", "POST / BUCKET / OBJECT ?uploads", "PUT / BUCKET / OBJECT ?partNumber=&uploadId= UPLOAD_ID HTTP/1.1", "GET / BUCKET / OBJECT ?uploadId= UPLOAD_ID HTTP/1.1", "POST / BUCKET / OBJECT ?uploadId= UPLOAD_ID HTTP/1.1", "PUT / BUCKET / OBJECT ?partNumber=PartNumber&uploadId= UPLOAD_ID HTTP/1.1 Host: cname.domain.com Authorization: AWS ACCESS_KEY : HASH_OF_HEADER_AND_SECRET", "DELETE / BUCKET / OBJECT ?uploadId= UPLOAD_ID HTTP/1.1", "select customerid from s3Object where age>30 and age<65;", "POST / BUCKET / KEY ?select&select-type=2 HTTP/1.1\\r\\n", "POST /testbucket/sample1csv?select&select-type=2 HTTP/1.1\\r\\n POST /testbucket/sample1parquet?select&select-type=2 HTTP/1.1\\r\\n", "{:event-type,records} {:content-type,application/octet-stream} {:message-type,event}", "aws --endpoint- URL http://localhost:80 s3api select-object-content --bucket BUCKET_NAME --expression-type 'SQL' --input-serialization '{\"CSV\": {\"FieldDelimiter\": \",\" , \"QuoteCharacter\": \"\\\"\" , \"RecordDelimiter\" : \"\\n\" , \"QuoteEscapeCharacter\" : \"\\\\\" , \"FileHeaderInfo\": \"USE\" }, \"CompressionType\": \"NONE\"}' --output-serialization '{\"CSV\": {}}' --key OBJECT_NAME .csv --expression \"select count(0) from s3object where int(_1)<10;\" output.csv", "aws --endpoint-url http://localhost:80 s3api select-object-content --bucket testbucket --expression-type 'SQL' --input-serialization '{\"CSV\": {\"FieldDelimiter\": \",\" , 
\"QuoteCharacter\": \"\\\"\" , \"RecordDelimiter\" : \"\\n\" , \"QuoteEscapeCharacter\" : \"\\\\\" , \"FileHeaderInfo\": \"USE\" }, \"CompressionType\": \"NONE\"}' --output-serialization '{\"CSV\": {}}' --key testobject.csv --expression \"select count(0) from s3object where int(_1)<10;\" output.csv", "aws --endpoint-url http://localhost:80 s3api select-object-content --bucket BUCKET_NAME --expression-type 'SQL' --input-serialization '{\"Parquet\": {}, {\"CompressionType\": \"NONE\"}' --output-serialization '{\"CSV\": {}}' --key OBJECT_NAME .parquet --expression \"select count(0) from s3object where int(_1)<10;\" output.csv", "aws --endpoint-url http://localhost:80 s3api select-object-content --bucket testbucket --expression-type 'SQL' --input-serialization '{\"Parquet\": {}, {\"CompressionType\": \"NONE\"}' --output-serialization '{\"CSV\": {}}' --key testobject.parquet --expression \"select count(0) from s3object where int(_1)<10;\" output.csv", "aws --endpoint- URL http://localhost:80 s3api select-object-content --bucket BUCKET_NAME --expression-type 'SQL' --input-serialization '{\"JSON\": {\"CompressionType\": \"NONE\"}' --output-serialization '{\"CSV\": {}}}' --key OBJECT_NAME .json --expression \"select count(0) from s3object where int(_1)<10;\" output.csv", "aws --endpoint-url http://localhost:80 s3api select-object-content --bucket testbucket --expression-type 'SQL' --input-serialization '{\"JSON\": {\"CompressionType\": \"NONE\"}' --output-serialization '{\"CSV\": {}}}' --key testobject.json --expression \"select count(0) from s3object where int(_1)<10;\" output.csv", "import pprint import boto3 from botocore.exceptions import ClientError def run_s3select(bucket,key,query,column_delim=\",\",row_delim=\"\\n\",quot_char='\"',esc_char='\\\\',csv_header_info=\"NONE\"): s3 = boto3.client('s3', endpoint_url=endpoint, aws_access_key_id=access_key, region_name=region_name, aws_secret_access_key=secret_key) result = \"\" try: r = s3.select_object_content( Bucket=bucket, Key=key, ExpressionType='SQL', InputSerialization = {\"CSV\": {\"RecordDelimiter\" : row_delim, \"FieldDelimiter\" : column_delim,\"QuoteEscapeCharacter\": esc_char, \"QuoteCharacter\": quot_char, \"FileHeaderInfo\": csv_header_info}, \"CompressionType\": \"NONE\"}, OutputSerialization = {\"CSV\": {}}, Expression=query, RequestProgress = {\"Enabled\": progress}) except ClientError as c: result += str(c) return result for event in r['Payload']: if 'Records' in event: result = \"\" records = event['Records']['Payload'].decode('utf-8') result += records if 'Progress' in event: print(\"progress\") pprint.pprint(event['Progress'],width=1) if 'Stats' in event: print(\"Stats\") pprint.pprint(event['Stats'],width=1) if 'End' in event: print(\"End\") pprint.pprint(event['End'],width=1) return result run_s3select( \"my_bucket\", \"my_csv_object\", \"select int(_1) as a1, int(_2) as a2 , (a1+a2) as a3 from s3object where a3>100 and a3<300;\")", "select int(_1) as a1, int(_2) as a2 , (a1+a2) as a3 from s3object where a3>100 and a3<300;\")", "4-byte magic number \"PAR1\" <Column 1 Chunk 1 + Column Metadata> <Column 2 Chunk 1 + Column Metadata> <Column N Chunk 1 + Column Metadata> <Column 1 Chunk 2 + Column Metadata> <Column 2 Chunk 2 + Column Metadata> <Column N Chunk 2 + Column Metadata> <Column 1 Chunk M + Column Metadata> <Column 2 Chunk M + Column Metadata> <Column N Chunk M + Column Metadata> File Metadata 4-byte length in bytes of file metadata 4-byte magic number \"PAR1\"", "{ \"firstName\": \"Joe\", \"lastName\": \"Jackson\", 
\"gender\": \"male\", \"age\": \"twenty\" }, { \"firstName\": \"Joe_2\", \"lastName\": \"Jackson_2\", \"gender\": \"male\", \"age\": 21 }, \"phoneNumbers\": [ { \"type\": \"home1\", \"number\": \"734928_1\",\"addr\": 11 }, { \"type\": \"home2\", \"number\": \"734928_2\",\"addr\": 22 } ], \"key_after_array\": \"XXX\", \"description\" : { \"main_desc\" : \"value_1\", \"second_desc\" : \"value_2\" } the from-clause define a single row. _1 points to root object level. _1.age appears twice in Documnet-row, the last value is used for the operation. query = \"select _1.firstname,_1.key_after_array,_1.age+4,_1.description.main_desc,_1.description.second_desc from s3object[*].aa.bb.cc;\"; expected_result = Joe_2,XXX,25,value_1,value_2", "[cephuser@host01 ~]USD git clone https://github.com/ceph/s3select.git [cephuser@host01 ~]USD cd s3select", "[cephuser@host01 s3select]USD cat container/trino/hms_trino.yaml version: '3' services: hms: image: galsl/hms:dev container_name: hms environment: # S3_ENDPOINT the CEPH/RGW end-point-url - S3_ENDPOINT=http://rgw_ip:port - S3_ACCESS_KEY=abc - S3_SECRET_KEY=abc # the container starts with booting the hive metastore command: sh -c '. ~/.bashrc; start_hive_metastore' ports: - 9083:9083 networks: - trino_hms trino: image: trinodb/trino:405 container_name: trino volumes: # the trino directory contains the necessary configuration - ./trino:/etc/trino ports: - 8080:8080 networks: - trino_hms networks: trino_hm", "[cephuser@host01 s3select]USD cat container/trino/trino/catalog/hive.properties connector.name=hive hive.metastore.uri=thrift://hms:9083 #hive.metastore.warehouse.dir=s3a://hive/ hive.allow-drop-table=true hive.allow-rename-table=true hive.allow-add-column=true hive.allow-drop-column=true hive.allow-rename-column=true hive.non-managed-table-writes-enabled=true hive.s3select-pushdown.enabled=true hive.s3.aws-access-key=abc hive.s3.aws-secret-key=abc should modify per s3-endpoint-url hive.s3.endpoint=http://rgw_ip:port #hive.s3.max-connections=1 #hive.s3select-pushdown.max-connections=1 hive.s3.connect-timeout=100s hive.s3.socket-timeout=100s hive.max-splits-per-second=10000 hive.max-split-size=128MB", "[cephuser@host01 s3select]USD sudo docker compose -f ./container/trino/hms_trino.yaml up -d", "[cephuser@host01 s3select]USD sudo docker exec -it trino /bin/bash trino@66f753905e82:/USD trino trino> create schema hive.csvbkt1schema; trino> create table hive.csvbkt1schema.polariondatacsv(c1 varchar,c2 varchar, c3 varchar, c4 varchar, c5 varchar, c6 varchar, c7 varchar, c8 varchar, c9 varchar) WITH ( external_location = 's3a://csvbkt1/',format = 'CSV'); trino> select * from hive.csvbkt1schema.polariondatacsv;" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/7/html/developer_guide/ceph-object-gateway-and-the-s3-api
Chapter 2. Preparing to update a cluster
Chapter 2. Preparing to update a cluster 2.1. Preparing to update to OpenShift Container Platform 4.17 Learn more about administrative tasks that cluster admins must perform to successfully initialize an update, as well as optional guidelines for ensuring a successful update. 2.1.1. Kubernetes API removals There are no Kubernetes API removals in OpenShift Container Platform 4.17. 2.1.2. Assessing the risk of conditional updates A conditional update is an update target that is available but not recommended due to a known risk that applies to your cluster. The Cluster Version Operator (CVO) periodically queries the OpenShift Update Service (OSUS) for the most recent data about update recommendations, and some potential update targets might have risks associated with them. The CVO evaluates the conditional risks, and if the risks are not applicable to the cluster, then the target version is available as a recommended update path for the cluster. If the risk is determined to be applicable, or if for some reason the CVO cannot evaluate the risk, then the update target is available to the cluster as a conditional update. When you encounter a conditional update while you are trying to update to a target version, you must assess the risk of updating your cluster to that version. Generally, if you do not have a specific need to update to that target version, it is best to wait for a recommended update path from Red Hat. However, if you have a strong reason to update to that version, for example, if you need to fix an important CVE, then the benefit of fixing the CVE might outweigh the risk of the update being problematic for your cluster. You can complete the following tasks to determine whether you agree with the Red Hat assessment of the update risk: Complete extensive testing in a non-production environment to the extent that you are comfortable completing the update in your production environment. Follow the links provided in the conditional update description, investigate the bug, and determine if it is likely to cause issues for your cluster. If you need help understanding the risk, contact Red Hat Support. Additional resources Evaluation of update availability 2.1.3. etcd backups before cluster updates etcd backups record the state of your cluster and all of its resource objects. You can use backups to attempt restoring the state of a cluster in disaster scenarios where you cannot recover a cluster in its currently dysfunctional state. In the context of updates, you can attempt an etcd restoration of the cluster if an update introduced catastrophic conditions that cannot be fixed without reverting to the previous cluster version. etcd restorations might be destructive and destabilizing to a running cluster, so use them only as a last resort. Warning Due to their high consequences, etcd restorations are not intended to be used as a rollback solution. Rolling your cluster back to a previous version is not supported. If your update is failing to complete, contact Red Hat Support. There are several factors that affect the viability of an etcd restoration. For more information, see "Backing up etcd data" and "Restoring to a cluster state". Additional resources Backing up etcd Restoring to a cluster state 2.1.4. Best practices for cluster updates OpenShift Container Platform provides a robust update experience that minimizes workload disruptions during an update. Updates will not begin unless the cluster is in an upgradeable state at the time of the update request.
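As a concrete illustration of the risk assessment described above, the sketch below lists any conditional update targets together with the names of their declared risks and then reports the cluster-level Upgradeable condition. It is only a sketch: it assumes oc is logged in with cluster-admin rights, that jq is installed, and that the ClusterVersion status fields (conditionalUpdates, release.version, risks) match the schema described here, so verify the field names against your own cluster before relying on the output.

#!/usr/bin/env bash
# Sketch: pre-update review of conditional update risks and the Upgradeable state.
# Assumes oc is logged in with cluster-admin rights and that jq is installed.
set -euo pipefail

# Show recommended updates plus the conditional targets that carry known risks.
oc adm upgrade --include-not-recommended

# Summarize each conditional update target with the names of its declared risks,
# so the linked bug reports can be reviewed before that target is chosen.
oc get clusterversion version -o json \
  | jq -r '.status.conditionalUpdates[]?
           | "\(.release.version)\t\([.risks[]?.name] | join(", "))"'

# Check the cluster-level Upgradeable condition; the CVO may only publish it
# when something is blocking minor-version updates, so empty output here is fine.
oc get clusterversion version -o json \
  | jq -r '.status.conditions[]? | select(.type == "Upgradeable")
           | "Upgradeable=\(.status): \(.message // "")"'

Reviewing the risk names and the linked bug reports in this output supports the decision between waiting for a recommended path and accepting a conditional update; these checks complement, rather than replace, the gating that the platform itself performs before starting an update.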
This design enforces some key conditions before initiating an update, but there are a number of actions you can take to increase your chances of a successful cluster update. 2.1.4.1. Choose versions recommended by the OpenShift Update Service The OpenShift Update Service (OSUS) provides update recommendations based on cluster characteristics such as the cluster's subscribed channel. The Cluster Version Operator saves these recommendations as either recommended or conditional updates. While it is possible to attempt an update to a version that is not recommended by OSUS, following a recommended update path protects users from encountering known issues or unintended consequences on the cluster. Choose only update targets that are recommended by OSUS to ensure a successful update. 2.1.4.2. Address all critical alerts on the cluster Critical alerts must always be addressed as soon as possible, but it is especially important to address these alerts and resolve any problems before initiating a cluster update. Failing to address critical alerts before beginning an update can cause problematic conditions for the cluster. In the Administrator perspective of the web console, navigate to Observe Alerting to find critical alerts. 2.1.4.3. Ensure that the cluster is in an Upgradable state When one or more Operators have not reported their Upgradeable condition as True for more than an hour, the ClusterNotUpgradeable warning alert is triggered in the cluster. In most cases this alert does not block patch updates, but you cannot perform a minor version update until you resolve this alert and all Operators report Upgradeable as True . For more information about the Upgradeable condition, see "Understanding cluster Operator condition types" in the additional resources section. 2.1.4.3.1. SDN support removal OpenShift SDN network plugin was deprecated in versions 4.15 and 4.16. With this release, the SDN network plugin is no longer supported and the content has been removed from the documentation. If your OpenShift Container Platform cluster is still using the OpenShift SDN CNI, see Migrating from the OpenShift SDN network plugin . Important It is not possible to update a cluster to OpenShift Container Platform 4.17 if it is using the OpenShift SDN network plugin. You must migrate to the OVN-Kubernetes plugin before upgrading to OpenShift Container Platform 4.17. 2.1.4.4. Ensure that enough spare nodes are available A cluster should not be running with little to no spare node capacity, especially when initiating a cluster update. Nodes that are not running and available may limit a cluster's ability to perform an update with minimal disruption to cluster workloads. Depending on the configured value of the cluster's maxUnavailable spec, the cluster might not be able to apply machine configuration changes to nodes if there is an unavailable node. Additionally, if compute nodes do not have enough spare capacity, workloads might not be able to temporarily shift to another node while the first node is taken offline for an update. Make sure that you have enough available nodes in each worker pool, as well as enough spare capacity on your compute nodes, to increase the chance of successful node updates. Warning The default setting for maxUnavailable is 1 for all the machine config pools in OpenShift Container Platform. It is recommended to not change this value and update one control plane node at a time. Do not change this value to 3 for the control plane pool. 2.1.4.5. 
Ensure that the cluster's PodDisruptionBudget is properly configured You can use the PodDisruptionBudget object to define the minimum number or percentage of pod replicas that must be available at any given time. This configuration protects workloads from disruptions during maintenance tasks such as cluster updates. However, it is possible to configure the PodDisruptionBudget for a given topology in a way that prevents nodes from being drained and updated during a cluster update. When planning a cluster update, check the configuration of the PodDisruptionBudget object for the following factors: For highly available workloads, make sure there are replicas that can be temporarily taken offline without being prohibited by the PodDisruptionBudget . For workloads that are not highly available, make sure they are either not protected by a PodDisruptionBudget or have some alternative mechanism for draining these workloads eventually, such as periodic restart or guaranteed eventual termination. Additional resources Understanding cluster Operator condition types 2.2. Preparing to update a cluster with manually maintained credentials The Cloud Credential Operator (CCO) Upgradable status for a cluster with manually maintained credentials is False by default. For minor releases, for example, from 4.12 to 4.13, this status prevents you from updating until you have addressed any updated permissions and annotated the CloudCredential resource to indicate that the permissions are updated as needed for the version. This annotation changes the Upgradable status to True . For z-stream releases, for example, from 4.13.0 to 4.13.1, no permissions are added or changed, so the update is not blocked. Before updating a cluster with manually maintained credentials, you must accommodate any new or changed credentials in the release image for the version of OpenShift Container Platform you are updating to. 2.2.1. Update requirements for clusters with manually maintained credentials Before you update a cluster that uses manually maintained credentials with the Cloud Credential Operator (CCO), you must update the cloud provider resources for the new release. If the cloud credential management for your cluster was configured using the CCO utility ( ccoctl ), use the ccoctl utility to update the resources. Clusters that were configured to use manual mode without the ccoctl utility require manual updates for the resources. After updating the cloud provider resources, you must update the upgradeable-to annotation for the cluster to indicate that it is ready to update. Note The process to update the cloud provider resources and the upgradeable-to annotation can only be completed by using command line tools. 2.2.1.1. Cloud credential configuration options and update requirements by platform type Some platforms only support using the CCO in one mode. For clusters that are installed on those platforms, the platform type determines the credentials update requirements. For platforms that support using the CCO in multiple modes, you must determine which mode the cluster is configured to use and take the required actions for that configuration. Figure 2.1. Credentials update requirements by platform type Red Hat OpenStack Platform (RHOSP) and VMware vSphere These platforms do not support using the CCO in manual mode. Clusters on these platforms handle changes in cloud provider resources automatically and do not require an update to the upgradeable-to annotation. 
Administrators of clusters on these platforms should skip the manually maintained credentials section of the update process. IBM Cloud and Nutanix Clusters installed on these platforms are configured using the ccoctl utility. Administrators of clusters on these platforms must take the following actions: Extract and prepare the CredentialsRequest custom resources (CRs) for the new release. Configure the ccoctl utility for the new release and use it to update the cloud provider resources. Indicate that the cluster is ready to update with the upgradeable-to annotation. Microsoft Azure Stack Hub These clusters use manual mode with long-term credentials and do not use the ccoctl utility. Administrators of clusters on these platforms must take the following actions: Extract and prepare the CredentialsRequest custom resources (CRs) for the new release. Manually update the cloud provider resources for the new release. Indicate that the cluster is ready to update with the upgradeable-to annotation. Amazon Web Services (AWS), global Microsoft Azure, and Google Cloud Platform (GCP) Clusters installed on these platforms support multiple CCO modes. The required update process depends on the mode that the cluster is configured to use. If you are not sure what mode the CCO is configured to use on your cluster, you can use the web console or the CLI to determine this information. Additional resources Determining the Cloud Credential Operator mode by using the web console Determining the Cloud Credential Operator mode by using the CLI Extracting and preparing credentials request resources About the Cloud Credential Operator 2.2.1.2. Determining the Cloud Credential Operator mode by using the web console You can determine what mode the Cloud Credential Operator (CCO) is configured to use by using the web console. Note Only Amazon Web Services (AWS), global Microsoft Azure, and Google Cloud Platform (GCP) clusters support multiple CCO modes. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator permissions. Procedure Log in to the OpenShift Container Platform web console as a user with the cluster-admin role. Navigate to Administration Cluster Settings . On the Cluster Settings page, select the Configuration tab. Under Configuration resource , select CloudCredential . On the CloudCredential details page, select the YAML tab. In the YAML block, check the value of spec.credentialsMode . The following values are possible, though not all are supported on all platforms: '' : The CCO is operating in the default mode. In this configuration, the CCO operates in mint or passthrough mode, depending on the credentials provided during installation. Mint : The CCO is operating in mint mode. Passthrough : The CCO is operating in passthrough mode. Manual : The CCO is operating in manual mode. Important To determine the specific configuration of an AWS, GCP, or global Microsoft Azure cluster that has a spec.credentialsMode of '' , Mint , or Manual , you must investigate further. AWS and GCP clusters support using mint mode with the root secret deleted. If the cluster is specifically configured to use mint mode or uses mint mode by default, you must determine if the root secret is present on the cluster before updating. An AWS, GCP, or global Microsoft Azure cluster that uses manual mode might be configured to create and manage cloud credentials from outside of the cluster with AWS STS, GCP Workload Identity, or Microsoft Entra Workload ID. 
You can determine whether your cluster uses this strategy by examining the cluster Authentication object. AWS or GCP clusters that use mint mode only: To determine whether the cluster is operating without the root secret, navigate to Workloads Secrets and look for the root secret for your cloud provider. Note Ensure that the Project dropdown is set to All Projects . Platform Secret name AWS aws-creds GCP gcp-credentials If you see one of these values, your cluster is using mint or passthrough mode with the root secret present. If you do not see these values, your cluster is using the CCO in mint mode with the root secret removed. AWS, GCP, or global Microsoft Azure clusters that use manual mode only: To determine whether the cluster is configured to create and manage cloud credentials from outside of the cluster, you must check the cluster Authentication object YAML values. Navigate to Administration Cluster Settings . On the Cluster Settings page, select the Configuration tab. Under Configuration resource , select Authentication . On the Authentication details page, select the YAML tab. In the YAML block, check the value of the .spec.serviceAccountIssuer parameter. A value that contains a URL that is associated with your cloud provider indicates that the CCO is using manual mode with short-term credentials for components. These clusters are configured using the ccoctl utility to create and manage cloud credentials from outside of the cluster. An empty value ( '' ) indicates that the cluster is using the CCO in manual mode but was not configured using the ccoctl utility. steps If you are updating a cluster that has the CCO operating in mint or passthrough mode and the root secret is present, you do not need to update any cloud provider resources and can continue to the part of the update process. If your cluster is using the CCO in mint mode with the root secret removed, you must reinstate the credential secret with the administrator-level credential before continuing to the part of the update process. If your cluster was configured using the CCO utility ( ccoctl ), you must take the following actions: Extract and prepare the CredentialsRequest custom resources (CRs) for the new release. Configure the ccoctl utility for the new release and use it to update the cloud provider resources. Update the upgradeable-to annotation to indicate that the cluster is ready to update. If your cluster is using the CCO in manual mode but was not configured using the ccoctl utility, you must take the following actions: Extract and prepare the CredentialsRequest custom resources (CRs) for the new release. Manually update the cloud provider resources for the new release. Update the upgradeable-to annotation to indicate that the cluster is ready to update. Additional resources Extracting and preparing credentials request resources 2.2.1.3. Determining the Cloud Credential Operator mode by using the CLI You can determine what mode the Cloud Credential Operator (CCO) is configured to use by using the CLI. Note Only Amazon Web Services (AWS), global Microsoft Azure, and Google Cloud Platform (GCP) clusters support multiple CCO modes. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator permissions. You have installed the OpenShift CLI ( oc ). Procedure Log in to oc on the cluster as a user with the cluster-admin role. 
To determine the mode that the CCO is configured to use, enter the following command: USD oc get cloudcredentials cluster \ -o=jsonpath={.spec.credentialsMode} The following output values are possible, though not all are supported on all platforms: '' : The CCO is operating in the default mode. In this configuration, the CCO operates in mint or passthrough mode, depending on the credentials provided during installation. Mint : The CCO is operating in mint mode. Passthrough : The CCO is operating in passthrough mode. Manual : The CCO is operating in manual mode. Important To determine the specific configuration of an AWS, GCP, or global Microsoft Azure cluster that has a spec.credentialsMode of '' , Mint , or Manual , you must investigate further. AWS and GCP clusters support using mint mode with the root secret deleted. If the cluster is specifically configured to use mint mode or uses mint mode by default, you must determine if the root secret is present on the cluster before updating. An AWS, GCP, or global Microsoft Azure cluster that uses manual mode might be configured to create and manage cloud credentials from outside of the cluster with AWS STS, GCP Workload Identity, or Microsoft Entra Workload ID. You can determine whether your cluster uses this strategy by examining the cluster Authentication object. AWS or GCP clusters that use mint mode only: To determine whether the cluster is operating without the root secret, run the following command: USD oc get secret <secret_name> \ -n=kube-system where <secret_name> is aws-creds for AWS or gcp-credentials for GCP. If the root secret is present, the output of this command returns information about the secret. An error indicates that the root secret is not present on the cluster. AWS, GCP, or global Microsoft Azure clusters that use manual mode only: To determine whether the cluster is configured to create and manage cloud credentials from outside of the cluster, run the following command: USD oc get authentication cluster \ -o jsonpath \ --template='{ .spec.serviceAccountIssuer }' This command displays the value of the .spec.serviceAccountIssuer parameter in the cluster Authentication object. An output of a URL that is associated with your cloud provider indicates that the CCO is using manual mode with short-term credentials for components. These clusters are configured using the ccoctl utility to create and manage cloud credentials from outside of the cluster. An empty output indicates that the cluster is using the CCO in manual mode but was not configured using the ccoctl utility. steps If you are updating a cluster that has the CCO operating in mint or passthrough mode and the root secret is present, you do not need to update any cloud provider resources and can continue to the part of the update process. If your cluster is using the CCO in mint mode with the root secret removed, you must reinstate the credential secret with the administrator-level credential before continuing to the part of the update process. If your cluster was configured using the CCO utility ( ccoctl ), you must take the following actions: Extract and prepare the CredentialsRequest custom resources (CRs) for the new release. Configure the ccoctl utility for the new release and use it to update the cloud provider resources. Update the upgradeable-to annotation to indicate that the cluster is ready to update. 
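This final annotation step, which also concludes the manual workflow described immediately after this sketch, is detailed under "Indicating that the cluster is ready to upgrade" in the additional resources. The following is only a rough sketch of what applying and verifying it can look like; the version value is a placeholder that you must replace with the minor version you are updating to, and jq is an assumed extra tool.

# Sketch only: tell the CCO that the cloud provider resources have been updated
# for the target release. The version value below is a placeholder; replace it
# with the minor version you are updating to.
oc patch cloudcredential.operator.openshift.io/cluster --type=merge \
  --patch '{"metadata":{"annotations":{"cloudcredential.openshift.io/upgradeable-to":"4.17"}}}'

# Confirm that the cloud-credential Operator now reports Upgradeable=True.
oc get clusteroperator cloud-credential -o json \
  | jq '.status.conditions[] | select(.type == "Upgradeable")'

Until this annotation is accepted, minor-version updates remain blocked for clusters with manually maintained credentials, as described at the start of this section.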
If your cluster is using the CCO in manual mode but was not configured using the ccoctl utility, you must take the following actions: Extract and prepare the CredentialsRequest custom resources (CRs) for the new release. Manually update the cloud provider resources for the new release. Update the upgradeable-to annotation to indicate that the cluster is ready to update. Additional resources Extracting and preparing credentials request resources 2.2.2. Extracting and preparing credentials request resources Before updating a cluster that uses the Cloud Credential Operator (CCO) in manual mode, you must extract and prepare the CredentialsRequest custom resources (CRs) for the new release. Prerequisites Install the OpenShift CLI ( oc ) that matches the version for your updated version. Log in to the cluster as user with cluster-admin privileges. Procedure Obtain the pull spec for the update that you want to apply by running the following command: USD oc adm upgrade The output of this command includes pull specs for the available updates similar to the following: Partial example output ... Recommended updates: VERSION IMAGE 4.17.0 quay.io/openshift-release-dev/ocp-release@sha256:6a899c54dda6b844bb12a247e324a0f6cde367e880b73ba110c056df6d018032 ... Set a USDRELEASE_IMAGE variable with the release image that you want to use by running the following command: USD RELEASE_IMAGE=<update_pull_spec> where <update_pull_spec> is the pull spec for the release image that you want to use. For example: quay.io/openshift-release-dev/ocp-release@sha256:6a899c54dda6b844bb12a247e324a0f6cde367e880b73ba110c056df6d018032 Extract the list of CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image by running the following command: USD oc adm release extract \ --from=USDRELEASE_IMAGE \ --credentials-requests \ --included \ 1 --to=<path_to_directory_for_credentials_requests> 2 1 The --included parameter includes only the manifests that your specific cluster configuration requires for the target release. 2 Specify the path to the directory where you want to store the CredentialsRequest objects. If the specified directory does not exist, this command creates it. This command creates a YAML file for each CredentialsRequest object. For each CredentialsRequest CR in the release image, ensure that a namespace that matches the text in the spec.secretRef.namespace field exists in the cluster. This field is where the generated secrets that hold the credentials configuration are stored. Sample AWS CredentialsRequest object apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: cloud-credential-operator-iam-ro namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - iam:GetUser - iam:GetUserPolicy - iam:ListAccessKeys resource: "*" secretRef: name: cloud-credential-operator-iam-ro-creds namespace: openshift-cloud-credential-operator 1 1 This field indicates the namespace which must exist to hold the generated secret. The CredentialsRequest CRs for other platforms have a similar format with different platform-specific values. 
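The requirement above, that a namespace matching each CR's spec.secretRef.namespace value already exists, lends itself to a quick scripted check before moving on; any missing namespaces are then created with the same oc create namespace command shown in the next step. The sketch below is illustrative only: the ./credrequests path stands in for whatever --to directory you used, and it assumes the yq YAML processor (mikefarah v4) is installed, which the documented procedure itself does not require.

#!/usr/bin/env bash
# Sketch: verify that every namespace referenced by the extracted CredentialsRequest
# CRs exists, and create any that are missing. Assumes oc cluster-admin access and
# the mikefarah yq v4 binary; the ./credrequests path is a placeholder.
set -euo pipefail

cr_dir=./credrequests    # the --to directory passed to 'oc adm release extract'

for cr in "${cr_dir}"/*.yaml; do
  ns=$(yq '.spec.secretRef.namespace' "${cr}")
  if ! oc get namespace "${ns}" >/dev/null 2>&1; then
    echo "Creating missing namespace ${ns} (referenced by ${cr})"
    oc create namespace "${ns}"
  fi
done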
For any CredentialsRequest CR for which the cluster does not already have a namespace with the name specified in spec.secretRef.namespace , create the namespace by running the following command: USD oc create namespace <component_namespace> steps If the cloud credential management for your cluster was configured using the CCO utility ( ccoctl ), configure the ccoctl utility for a cluster update and use it to update your cloud provider resources. If your cluster was not configured with the ccoctl utility, manually update your cloud provider resources. Additional resources Configuring the Cloud Credential Operator utility for a cluster update Manually updating cloud provider resources 2.2.3. Configuring the Cloud Credential Operator utility for a cluster update To upgrade a cluster that uses the Cloud Credential Operator (CCO) in manual mode to create and manage cloud credentials from outside of the cluster, extract and prepare the CCO utility ( ccoctl ) binary. Note The ccoctl utility is a Linux binary that must run in a Linux environment. Prerequisites You have access to an OpenShift Container Platform account with cluster administrator access. You have installed the OpenShift CLI ( oc ). Your cluster was configured using the ccoctl utility to create and manage cloud credentials from outside of the cluster. You have extracted the CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image and ensured that a namespace that matches the text in the spec.secretRef.namespace field exists in the cluster. Procedure Set a variable for the OpenShift Container Platform release image by running the following command: USD RELEASE_IMAGE=USD(oc get clusterversion -o jsonpath={..desired.image}) Obtain the CCO container image from the OpenShift Container Platform release image by running the following command: USD CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret) Note Ensure that the architecture of the USDRELEASE_IMAGE matches the architecture of the environment in which you will use the ccoctl tool. Extract the ccoctl binary from the CCO container image within the OpenShift Container Platform release image by running the following command: USD oc image extract USDCCO_IMAGE \ --file="/usr/bin/ccoctl.<rhel_version>" \ 1 -a ~/.pull-secret 1 For <rhel_version> , specify the value that corresponds to the version of Red Hat Enterprise Linux (RHEL) that the host uses. If no value is specified, ccoctl.rhel8 is used by default. The following values are valid: rhel8 : Specify this value for hosts that use RHEL 8. rhel9 : Specify this value for hosts that use RHEL 9. Change the permissions to make ccoctl executable by running the following command: USD chmod 775 ccoctl.<rhel_version> Verification To verify that ccoctl is ready to use, display the help file. Use a relative file name when you run the command, for example: USD ./ccoctl.rhel9 Example output OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for {ibm-cloud-title} nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use "ccoctl [command] --help" for more information about a command. 2.2.4. 
Updating cloud provider resources with the Cloud Credential Operator utility The process for upgrading an OpenShift Container Platform cluster that was configured using the CCO utility ( ccoctl ) is similar to creating the cloud provider resources during installation. Note On AWS clusters, some ccoctl commands make AWS API calls to create or modify AWS resources. You can use the --dry-run flag to avoid making API calls. Using this flag creates JSON files on the local file system instead. You can review and modify the JSON files and then apply them with the AWS CLI tool using the --cli-input-json parameters. Prerequisites You have extracted the CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image and ensured that a namespace that matches the text in the spec.secretRef.namespace field exists in the cluster. You have extracted and configured the ccoctl binary from the release image. Procedure Use the ccoctl tool to process all CredentialsRequest objects by running the command for your cloud provider. The following commands process CredentialsRequest objects: Example 2.1. Amazon Web Services (AWS) USD ccoctl aws create-all \ 1 --name=<name> \ 2 --region=<aws_region> \ 3 --credentials-requests-dir=<path_to_credentials_requests_directory> \ 4 --output-dir=<path_to_ccoctl_output_dir> \ 5 --create-private-s3-bucket 6 1 To create the AWS resources individually, use the "Creating AWS resources individually" procedure in the "Installing a cluster on AWS with customizations" content. This option might be useful if you need to review the JSON files that the ccoctl tool creates before modifying AWS resources, or if the process the ccoctl tool uses to create AWS resources automatically does not meet the requirements of your organization. 2 Specify the name used to tag any cloud resources that are created for tracking. 3 Specify the AWS region in which cloud resources will be created. 4 Specify the directory containing the files for the component CredentialsRequest objects. 5 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. 6 Optional: By default, the ccoctl utility stores the OpenID Connect (OIDC) configuration files in a public S3 bucket and uses the S3 URL as the public OIDC endpoint. To store the OIDC configuration in a private S3 bucket that is accessed by the IAM identity provider through a public CloudFront distribution URL instead, use the --create-private-s3-bucket parameter. Example 2.2. Google Cloud Platform (GCP) USD ccoctl gcp create-all \ --name=<name> \ 1 --region=<gcp_region> \ 2 --project=<gcp_project_id> \ 3 --credentials-requests-dir=<path_to_credentials_requests_directory> \ 4 --output-dir=<path_to_ccoctl_output_dir> 5 1 Specify the user-defined name for all created GCP resources used for tracking. 2 Specify the GCP region in which cloud resources will be created. 3 Specify the GCP project ID in which cloud resources will be created. 4 Specify the directory containing the files of CredentialsRequest manifests to create GCP service accounts. 5 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. Example 2.3. 
IBM Cloud USD ccoctl ibmcloud create-service-id \ --credentials-requests-dir=<path_to_credential_requests_directory> \ 1 --name=<cluster_name> \ 2 --output-dir=<installation_directory> \ 3 --resource-group-name=<resource_group_name> 4 1 Specify the directory containing the files for the component CredentialsRequest objects. 2 Specify the name of the OpenShift Container Platform cluster. 3 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. 4 Optional: Specify the name of the resource group used for scoping the access policies. Example 2.4. Microsoft Azure USD ccoctl azure create-managed-identities \ --name <azure_infra_name> \ 1 --output-dir ./output_dir \ --region <azure_region> \ 2 --subscription-id <azure_subscription_id> \ 3 --credentials-requests-dir <path_to_directory_for_credentials_requests> \ --issuer-url "USD{OIDC_ISSUER_URL}" \ 4 --dnszone-resource-group-name <azure_dns_zone_resourcegroup_name> \ 5 --installation-resource-group-name "USD{AZURE_INSTALL_RG}" 6 1 The value of the name parameter is used to create an Azure resource group. To use an existing Azure resource group instead of creating a new one, specify the --oidc-resource-group-name argument with the existing group name as its value. 2 Specify the region of the existing cluster. 3 Specify the subscription ID of the existing cluster. 4 Specify the OIDC issuer URL from the existing cluster. You can obtain this value by running the following command: USD oc get authentication cluster \ -o jsonpath \ --template='{ .spec.serviceAccountIssuer }' 5 Specify the name of the resource group that contains the DNS zone. 6 Specify the Azure resource group name. You can obtain this value by running the following command: USD oc get infrastructure cluster \ -o jsonpath \ --template '{ .status.platformStatus.azure.resourceGroupName }' Example 2.5. Nutanix USD ccoctl nutanix create-shared-secrets \ --credentials-requests-dir=<path_to_credentials_requests_directory> \ 1 --output-dir=<ccoctl_output_dir> \ 2 --credentials-source-filepath=<path_to_credentials_file> 3 1 Specify the path to the directory that contains the files for the component CredentialsRequests objects. 2 Optional: Specify the directory in which you want the ccoctl utility to create objects. By default, the utility creates objects in the directory in which the commands are run. 3 Optional: Specify the directory that contains the credentials data YAML file. By default, ccoctl expects this file to be in <home_directory>/.nutanix/credentials . For each CredentialsRequest object, ccoctl creates the required provider resources and a permissions policy as defined in each CredentialsRequest object from the OpenShift Container Platform release image. Apply the secrets to your cluster by running the following command: USD ls <path_to_ccoctl_output_dir>/manifests/*-credentials.yaml | xargs -I{} oc apply -f {} Verification You can verify that the required provider resources and permissions policies are created by querying the cloud provider. For more information, refer to your cloud provider documentation on listing roles or service accounts. steps Update the upgradeable-to annotation to indicate that the cluster is ready to upgrade. Additional resources Indicating that the cluster is ready to upgrade 2.2.5. 
Manually updating cloud provider resources Before upgrading a cluster with manually maintained credentials, you must create secrets for any new credentials for the release image that you are upgrading to. You must also review the required permissions for existing credentials and accommodate any new permissions requirements in the new release for those components. Prerequisites You have extracted the CredentialsRequest custom resources (CRs) from the OpenShift Container Platform release image and ensured that a namespace that matches the text in the spec.secretRef.namespace field exists in the cluster. Procedure Create YAML files with secrets for any CredentialsRequest custom resources that the new release image adds. The secrets must be stored using the namespace and secret name defined in the spec.secretRef for each CredentialsRequest object. Example 2.6. Sample AWS YAML files Sample AWS CredentialsRequest object with secrets apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - s3:CreateBucket - s3:DeleteBucket resource: "*" ... secretRef: name: <component_secret> namespace: <component_namespace> ... Sample AWS Secret object apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: aws_access_key_id: <base64_encoded_aws_access_key_id> aws_secret_access_key: <base64_encoded_aws_secret_access_key> Example 2.7. Sample Azure YAML files Note Global Azure and Azure Stack Hub use the same CredentialsRequest object and secret formats. Sample Azure CredentialsRequest object with secrets apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor ... secretRef: name: <component_secret> namespace: <component_namespace> ... Sample Azure Secret object apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: azure_subscription_id: <base64_encoded_azure_subscription_id> azure_client_id: <base64_encoded_azure_client_id> azure_client_secret: <base64_encoded_azure_client_secret> azure_tenant_id: <base64_encoded_azure_tenant_id> azure_resource_prefix: <base64_encoded_azure_resource_prefix> azure_resourcegroup: <base64_encoded_azure_resourcegroup> azure_region: <base64_encoded_azure_region> Example 2.8. Sample GCP YAML files Sample GCP CredentialsRequest object with secrets apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator ... spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: GCPProviderSpec predefinedRoles: - roles/iam.securityReviewer - roles/iam.roleViewer skipServiceCheck: true ... secretRef: name: <component_secret> namespace: <component_namespace> ... Sample GCP Secret object apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: service_account.json: <base64_encoded_gcp_service_account_file> If the CredentialsRequest custom resources for any existing credentials that are stored in secrets have changed permissions requirements, update the permissions as required. 
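The values under data in these Secret objects must be base64 encoded. As a sketch only, using the placeholder names from the samples above, you can either encode the values yourself on a Linux workstation or let oc create secret generic handle the encoding for you:
USD echo -n "<aws_access_key_id>" | base64 -w0
USD oc create secret generic <component_secret> \
    -n <component_namespace> \
    --from-literal=aws_access_key_id=<aws_access_key_id> \
    --from-literal=aws_secret_access_key=<aws_secret_access_key>
The second form creates the same Secret object as the YAML sample without writing the manifest by hand; a similar invocation with the appropriate keys applies to the other platforms.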
steps Update the upgradeable-to annotation to indicate that the cluster is ready to upgrade. Additional resources Manually creating long-term credentials for AWS Manually creating long-term credentials for Azure Manually creating long-term credentials for Azure Stack Hub Manually creating long-term credentials for GCP Indicating that the cluster is ready to upgrade 2.2.6. Indicating that the cluster is ready to upgrade The Cloud Credential Operator (CCO) Upgradable status for a cluster with manually maintained credentials is False by default. Prerequisites For the release image that you are upgrading to, you have processed any new credentials manually or by using the Cloud Credential Operator utility ( ccoctl ). You have installed the OpenShift CLI ( oc ). Procedure Log in to oc on the cluster as a user with the cluster-admin role. Edit the CloudCredential resource to add an upgradeable-to annotation within the metadata field by running the following command: USD oc edit cloudcredential cluster Text to add ... metadata: annotations: cloudcredential.openshift.io/upgradeable-to: <version_number> ... Where <version_number> is the version that you are upgrading to, in the format x.y.z . For example, use 4.12.2 for OpenShift Container Platform 4.12.2. It may take several minutes after adding the annotation for the upgradeable status to change. Verification In the Administrator perspective of the web console, navigate to Administration Cluster Settings . To view the CCO status details, click cloud-credential in the Cluster Operators list. If the Upgradeable status in the Conditions section is False , verify that the upgradeable-to annotation is free of typographical errors. When the Upgradeable status in the Conditions section is True , begin the OpenShift Container Platform upgrade. 2.3. Preflight validation for Kernel Module Management (KMM) Modules Before performing an upgrade on the cluster with applied KMM modules, you must verify that kernel modules installed using KMM are able to be installed on the nodes after the cluster upgrade and possible kernel upgrade. Preflight attempts to validate every Module loaded in the cluster, in parallel. Preflight does not wait for validation of one Module to complete before starting validation of another Module . 2.3.1. Validation kickoff Preflight validation is triggered by creating a PreflightValidationOCP resource in the cluster. This spec contains two fields: releaseImage Mandatory field that provides the name of the release image for the OpenShift Container Platform version the cluster is upgraded to. pushBuiltImage If true , then the images created during the Build and Sign validation are pushed to their repositories. This field is false by default. 2.3.2. Validation lifecycle Preflight validation attempts to validate every module loaded in the cluster. Preflight stops running validation on a Module resource after the validation is successful. If module validation fails, you can change the module definitions and Preflight tries to validate the module again in the loop. If you want to run Preflight validation for an additional kernel, then you should create another PreflightValidationOCP resource for that kernel. After all the modules have been validated, it is recommended to delete the PreflightValidationOCP resource. 2.3.3. Validation status A PreflightValidationOCP resource reports the status and progress of each module in the cluster that it attempts or has attempted to validate in its .status.modules list. 
Elements of that list contain the following fields: lastTransitionTime The last time the Module resource status transitioned from one status to another. This should be when the underlying status has changed. If that is not known, then using the time when the API field changed is acceptable. name The name of the Module resource. namespace The namespace of the Module resource. statusReason Verbal explanation regarding the status. verificationStage Describes the validation stage being executed: image : Image existence verification build : Build process verification sign : Sign process verification verificationStatus The status of the Module verification: true : Verified false : Verification failed error : Error during the verification process unknown : Verification has not started 2.3.4. Preflight validation stages per Module Preflight runs the following validations on every KMM Module present in the cluster: Image validation stage Build validation stage Sign validation stage 2.3.4.1. Image validation stage Image validation is always the first stage of the preflight validation to be executed. If image validation is successful, no other validations are run on that specific module. Image validation consists of two stages: Image existence and accessibility. The code tries to access the image defined for the upgraded kernel in the module and get its manifests. Verify the presence of the kernel module defined in the Module in the correct path for future modprobe execution. If this validation is successful, it probably means that the kernel module was compiled with the correct Linux headers. The correct path is <dirname>/lib/modules/<upgraded_kernel>/ . 2.3.4.2. Build validation stage Build validation is executed only when image validation has failed and there is a build section in the Module that is relevant for the upgraded kernel. Build validation attempts to run the build job and validate that it finishes successfully. Note You must specify the kernel version when running depmod , as shown here: USD RUN depmod -b /opt USD{KERNEL_VERSION} If the PushBuiltImage flag is defined in the PreflightValidationOCP custom resource (CR), it also tries to push the resulting image into its repository. The resulting image name is taken from the definition of the containerImage field of the Module CR. Note If the sign section is defined for the upgraded kernel, then the resulting image will not be the containerImage field of the Module CR, but a temporary image name, because the resulting image should be the product of Sign flow. 2.3.4.3. Sign validation stage Sign validation is executed only when image validation has failed. There is a sign section in the Module resource that is relevant for the upgrade kernel, and build validation finishes successfully in case there was a build section in the Module relevant for the upgraded kernel. Sign validation attempts to run the sign job and validate that it finishes successfully. If the PushBuiltImage flag is defined in the PreflightValidationOCP CR, sign validation also tries to push the resulting image to its registry. The resulting image is always the image defined in the ContainerImage field of the Module . The input image is either the output of the Build stage, or an image defined in the UnsignedImage field. Note If a build section exists, the sign section input image is the build section's output image. Therefore, in order for the input image to be available for the sign section, the PushBuiltImage flag must be defined in the PreflightValidationOCP CR. 2.3.5. 
Example PreflightValidationOCP resource This section shows an example of the PreflightValidationOCP resource in the YAML format. The example verifies all of the currently present modules against the upcoming kernel version included in the OpenShift Container Platform release 4.11.18, which the following release image points to: quay.io/openshift-release-dev/ocp-release@sha256:22e149142517dfccb47be828f012659b1ccf71d26620e6f62468c264a7ce7863 Because .spec.pushBuiltImage is set to true , KMM pushes the resulting images of Build/Sign in to the defined repositories. apiVersion: kmm.sigs.x-k8s.io/v1beta2 kind: PreflightValidationOCP metadata: name: preflight spec: releaseImage: quay.io/openshift-release-dev/ocp-release@sha256:22e149142517dfccb47be828f012659b1ccf71d26620e6f62468c264a7ce7863 pushBuiltImage: true
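A short sketch of how this example might be applied and followed, assuming it is saved as a file named preflight.yaml; the status fields used in the jsonpath expression are the ones described in the validation status section above, and the exact output depends on the modules in your cluster.
USD oc apply -f preflight.yaml
USD oc get -f preflight.yaml \
    -o jsonpath='{range .status.modules[*]}{.namespace}/{.name}: {.verificationStatus} ({.verificationStage}){"\n"}{end}'
After all modules report a true verification status, you can delete the PreflightValidationOCP resource as recommended above.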
[ "oc get cloudcredentials cluster -o=jsonpath={.spec.credentialsMode}", "oc get secret <secret_name> -n=kube-system", "oc get authentication cluster -o jsonpath --template='{ .spec.serviceAccountIssuer }'", "oc adm upgrade", "Recommended updates: VERSION IMAGE 4.17.0 quay.io/openshift-release-dev/ocp-release@sha256:6a899c54dda6b844bb12a247e324a0f6cde367e880b73ba110c056df6d018032", "RELEASE_IMAGE=<update_pull_spec>", "quay.io/openshift-release-dev/ocp-release@sha256:6a899c54dda6b844bb12a247e324a0f6cde367e880b73ba110c056df6d018032", "oc adm release extract --from=USDRELEASE_IMAGE --credentials-requests --included \\ 1 --to=<path_to_directory_for_credentials_requests> 2", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: cloud-credential-operator-iam-ro namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - iam:GetUser - iam:GetUserPolicy - iam:ListAccessKeys resource: \"*\" secretRef: name: cloud-credential-operator-iam-ro-creds namespace: openshift-cloud-credential-operator 1", "oc create namespace <component_namespace>", "RELEASE_IMAGE=USD(oc get clusterversion -o jsonpath={..desired.image})", "CCO_IMAGE=USD(oc adm release info --image-for='cloud-credential-operator' USDRELEASE_IMAGE -a ~/.pull-secret)", "oc image extract USDCCO_IMAGE --file=\"/usr/bin/ccoctl.<rhel_version>\" \\ 1 -a ~/.pull-secret", "chmod 775 ccoctl.<rhel_version>", "./ccoctl.rhel9", "OpenShift credentials provisioning tool Usage: ccoctl [command] Available Commands: aws Manage credentials objects for AWS cloud azure Manage credentials objects for Azure gcp Manage credentials objects for Google cloud help Help about any command ibmcloud Manage credentials objects for {ibm-cloud-title} nutanix Manage credentials objects for Nutanix Flags: -h, --help help for ccoctl Use \"ccoctl [command] --help\" for more information about a command.", "ccoctl aws create-all \\ 1 --name=<name> \\ 2 --region=<aws_region> \\ 3 --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 4 --output-dir=<path_to_ccoctl_output_dir> \\ 5 --create-private-s3-bucket 6", "ccoctl gcp create-all --name=<name> \\ 1 --region=<gcp_region> \\ 2 --project=<gcp_project_id> \\ 3 --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 4 --output-dir=<path_to_ccoctl_output_dir> 5", "ccoctl ibmcloud create-service-id --credentials-requests-dir=<path_to_credential_requests_directory> \\ 1 --name=<cluster_name> \\ 2 --output-dir=<installation_directory> \\ 3 --resource-group-name=<resource_group_name> 4", "ccoctl azure create-managed-identities --name <azure_infra_name> \\ 1 --output-dir ./output_dir --region <azure_region> \\ 2 --subscription-id <azure_subscription_id> \\ 3 --credentials-requests-dir <path_to_directory_for_credentials_requests> --issuer-url \"USD{OIDC_ISSUER_URL}\" \\ 4 --dnszone-resource-group-name <azure_dns_zone_resourcegroup_name> \\ 5 --installation-resource-group-name \"USD{AZURE_INSTALL_RG}\" 6", "oc get authentication cluster -o jsonpath --template='{ .spec.serviceAccountIssuer }'", "oc get infrastructure cluster -o jsonpath --template '{ .status.platformStatus.azure.resourceGroupName }'", "ccoctl nutanix create-shared-secrets --credentials-requests-dir=<path_to_credentials_requests_directory> \\ 1 --output-dir=<ccoctl_output_dir> \\ 2 --credentials-source-filepath=<path_to_credentials_file> 3", "ls <path_to_ccoctl_output_dir>/manifests/*-credentials.yaml 
| xargs -I{} oc apply -f {}", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AWSProviderSpec statementEntries: - effect: Allow action: - s3:CreateBucket - s3:DeleteBucket resource: \"*\" secretRef: name: <component_secret> namespace: <component_namespace>", "apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: aws_access_key_id: <base64_encoded_aws_access_key_id> aws_secret_access_key: <base64_encoded_aws_secret_access_key>", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: AzureProviderSpec roleBindings: - role: Contributor secretRef: name: <component_secret> namespace: <component_namespace>", "apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: azure_subscription_id: <base64_encoded_azure_subscription_id> azure_client_id: <base64_encoded_azure_client_id> azure_client_secret: <base64_encoded_azure_client_secret> azure_tenant_id: <base64_encoded_azure_tenant_id> azure_resource_prefix: <base64_encoded_azure_resource_prefix> azure_resourcegroup: <base64_encoded_azure_resourcegroup> azure_region: <base64_encoded_azure_region>", "apiVersion: cloudcredential.openshift.io/v1 kind: CredentialsRequest metadata: name: <component_credentials_request> namespace: openshift-cloud-credential-operator spec: providerSpec: apiVersion: cloudcredential.openshift.io/v1 kind: GCPProviderSpec predefinedRoles: - roles/iam.securityReviewer - roles/iam.roleViewer skipServiceCheck: true secretRef: name: <component_secret> namespace: <component_namespace>", "apiVersion: v1 kind: Secret metadata: name: <component_secret> namespace: <component_namespace> data: service_account.json: <base64_encoded_gcp_service_account_file>", "oc edit cloudcredential cluster", "metadata: annotations: cloudcredential.openshift.io/upgradeable-to: <version_number>", "RUN depmod -b /opt USD{KERNEL_VERSION}", "quay.io/openshift-release-dev/ocp-release@sha256:22e149142517dfccb47be828f012659b1ccf71d26620e6f62468c264a7ce7863", "apiVersion: kmm.sigs.x-k8s.io/v1beta2 kind: PreflightValidationOCP metadata: name: preflight spec: releaseImage: quay.io/openshift-release-dev/ocp-release@sha256:22e149142517dfccb47be828f012659b1ccf71d26620e6f62468c264a7ce7863 pushBuiltImage: true" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/updating_clusters/preparing-to-update-a-cluster
Chapter 10. Adding metadata to instances
Chapter 10. Adding metadata to instances Warning The content for this feature is available in this release as a Documentation Preview , and therefore is not fully verified by Red Hat. Use it only for testing, and do not use in a production environment. The Compute (nova) service uses metadata to pass configuration information to instances on launch. The instance can access the metadata by using a config drive or the metadata service. Config drive By default, every instance has a config drive. Config drives are special drives that you can attach to an instance when it boots. The config drive is presented to the instance as a read-only drive. The instance can mount this drive and read files from it to get information that is normally available through the metadata service. Metadata service The Compute service provides the metadata service as a REST API, which can be used to retrieve data specific to an instance. Instances access this service at 169.254.169.254 or at fe80::a9fe:a9fe . 10.1. Types of instance metadata Cloud users, cloud administrators, and the Compute service can pass metadata to instances: Cloud user provided data Cloud users can specify additional data to use when they launch an instance, such as a shell script that the instance runs on boot. The cloud user can pass data to instances by using the user data feature, and by passing key-value pairs as required properties when creating or updating an instance. Cloud administrator provided data The Red Hat OpenStack Services on OpenShift (RHOSO) administrator uses the vendordata feature to pass data to instances. The Compute service provides the vendordata modules StaticJSON and DynamicJSON to allow administrators to pass metadata to instances: StaticJSON : (Default) Use for metadata that is the same for all instances. DynamicJSON : Use for metadata that is different for each instance. This module makes a request to an external REST service to determine what metadata to add to an instance. Vendordata configuration is located in one of the following read-only files on the instance: /openstack/{version}/vendor_data.json /openstack/{version}/vendor_data2.json Compute service provided data The Compute service uses its internal implementation of the metadata service to pass information to the instance, such as the requested hostname for the instance, and the availability zone the instance is in. This happens by default and requires no configuration by the cloud user or administrator.
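For illustration, the following sketch shows how a guest typically reads this data from inside a running instance; the request paths and the config drive label shown are the usual OpenStack conventions and may differ in your environment.
curl -s http://169.254.169.254/openstack/latest/meta_data.json
curl -s http://169.254.169.254/openstack/latest/vendor_data2.json
The same information is available from the config drive, which is conventionally labeled config-2 :
sudo mount -o ro /dev/disk/by-label/config-2 /mnt
cat /mnt/openstack/latest/meta_data.json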
null
https://docs.redhat.com/en/documentation/red_hat_openstack_services_on_openshift/18.0/html/configuring_the_compute_service_for_instance_creation/assembly_adding-metadata-to-instances_instance-metadata
Chapter 4. Configuring automatic registration and management
Chapter 4. Configuring automatic registration and management You can automatically register your systems with Red Hat by using an activation key for the image. You can add an activation key to the image during the image building process. To do so, follow the steps to enable automatic registration with the Red Hat Remote Host Configuration (rhc) client. Prerequisites You must have a Red Hat Hybrid Cloud Console account. You must have a Remote Host Configuration and Management org ID and activation key for your RHEL subscription. If you have Organization Administrator access for your account, you can set up an activation key at the Activation keys page. You started to create a new OSTree image in the Hybrid Cloud Console. See Building a RHEL image with custom repositories . Procedure In the Activation Keys page, perform the following steps: From the Activation key to use for this image dropdown menu, select one activation key to use for the image from your Organization ID. No activation keys found - If you do not have an activation key from your Organization ID, you can choose the "default activation key", a preset key with the basic configuration, by completing the following step: Click the Create activation key button. The activation key dropdown menu is then populated with the "activation-key-default" key. Manage the activation key by accessing Activation keys . You can edit the System Purpose , the Workload , and also add Additional repositories . Click Next . Note After you register a RHEL for Edge device, it is not automatically removed after a period of inactivity. If you want to remove the device from the Red Hat Hybrid Cloud Console , you must unregister it manually. To unregister the device, go to the Red Hat Hybrid Cloud Console and click Red Hat Insights > Inventory > Images , find your system device, select it, and click Delete . Additional resources Creating an activation key
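For reference, the registration that the activation key triggers on first boot is roughly equivalent to running the rhc client manually on a RHEL system. This is a sketch only; the organization ID and key values are placeholders, not values defined by this procedure.
sudo rhc connect --organization <org_id> --activation-key <activation_key>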
null
https://docs.redhat.com/en/documentation/edge_management/1-latest/html/create_rhel_for_edge_images_and_configure_automated_management/rhem-registration_
Chapter 9. Designing a Secure Directory
Chapter 9. Designing a Secure Directory How the data in Red Hat Directory Server are secured affects all of the design areas. Any security design needs to protect the data contained by the directory and meet the security and privacy needs of the users and applications. This chapter describes how to analyze the security needs and explains how to design the directory to meet these needs. 9.1. About Security Threats There are many potential threats to the security of the directory. Understanding the most common threats helps outline the overall security design. Threats to directory security fall into three main categories: Unauthorized access Unauthorized tampering Denial of service 9.1.1. Unauthorized Access Protecting the directory from unauthorized access may seem straightforward, but implementing a secure solution may be more complex than it first appears. A number of potential access points exist on the directory information delivery path where an unauthorized client may gain access to data. For example, an unauthorized client can use another client's credentials to access the data. This is particularly likely when the directory uses unprotected passwords. An unauthorized client can also eavesdrop on the information exchanged between a legitimate client and Directory Server. Unauthorized access can occur from inside the company or, if the company is connected to an extranet or to the Internet, from outside the company. The following scenarios describe just a few examples of how an unauthorized client might access the directory data. The authentication methods, password policies, and access control mechanisms provided by the Directory Server offer efficient ways of preventing unauthorized access. See the following sections for more information: Section 9.4, "Selecting Appropriate Authentication Methods" Section 9.6, "Designing a Password Policy" Section 9.7, "Designing Access Control" 9.1.2. Unauthorized Tampering If intruders gain access to the directory or intercept communications between Directory Server and a client application, they have the potential to modify (or tamper with) the directory data. The directory service is useless if the data can no longer be trusted by clients or if the directory itself cannot trust the modifications and queries it receives from clients. For example, if the directory cannot detect tampering, an attacker could change a client's request to the server (or not forward it) and change the server's response to the client. TLS and similar technologies can solve this problem by signing information at either end of the connection. For more information about using TLS with Directory Server, see Section 9.9, "Securing Server Connections" . 9.1.3. Denial of Service In a denial of service attack, the attacker's goal is to prevent the directory from providing service to its clients. For example, an attacker might use all of the system's resources, thereby preventing these resources from being used by anyone else. Directory Server can prevent denial of service attacks by setting limits on the resources allocated to a particular bind DN. For more information about setting resource limits based on the user's bind DN, see the "User Account Management" chapter in the Red Hat Directory Server Administration Guide .
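To make the TLS recommendation concrete, the following sketch contrasts a plain LDAP search with the same search over LDAPS using the standard OpenLDAP client tools; the host name, port, bind DN, and suffix are placeholders.
ldapsearch -H ldap://ds.example.com:389 -D "uid=jsmith,ou=People,dc=example,dc=com" -W -b "dc=example,dc=com" "(uid=jsmith)"
ldapsearch -H ldaps://ds.example.com:636 -D "uid=jsmith,ou=People,dc=example,dc=com" -W -b "dc=example,dc=com" "(uid=jsmith)"
Only the second form protects the bind password and the returned entries from the eavesdropping and tampering threats described above.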
null
https://docs.redhat.com/en/documentation/red_hat_directory_server/11/html/deployment_guide/designing_a_secure_directory
Making open source more inclusive
Making open source more inclusive Red Hat is committed to replacing problematic language in our code and documentation. We are beginning with these four terms: master, slave, blacklist, and whitelist. Due to the enormity of this endeavor, these changes will be gradually implemented over upcoming releases. For more details on making our language more inclusive, see our CTO Chris Wright's message .
null
https://docs.redhat.com/en/documentation/red_hat_software_certification/2025/html/red_hat_openstack_certification_policy_guide/con-conscious-language-message
7.232. spice-gtk
7.232. spice-gtk 7.232.1. RHBA-2013:0343 - spice-gtk bug fix and enhancement update Updated spice-gtk packages that fix several bugs and add various enhancements are now available for Red Hat Enterprise Linux 6. The spice-gtk packages provide a GTK2 widget for SPICE clients. Both the virt-manager and virt-viewer utilities can make use of this widget to access virtual machines using the SPICE protocol. Note The spice-gtk packages have been upgraded to upstream version 0.14, which provides a number of bug fixes and enhancements over the version. The following list includes notable enhancements: Windows USB redirection support Seamless migration Better multi-monitor or resolution setting support Improved handling of key-press and key-release events in high latency situations BZ# 842354 Bug Fixes BZ# 834283 When part of a key combination matched the grab sequence, the last key of the combination was sometimes not sent to the guest. As a consequence, the Left Ctrl+Alt+Del key combination was not passed to guests. This update ensures that all the keys are sent to the SPICE server even if they are part of a combination. Now, when a key combination matches the grab sequence, the procedure works as expected. BZ# 813865 Previously, when a Uniform Resource Identifier (URI) contained an IPv6 address, errors occurred when parsing URIs in remote-viewer . As a consequence, remote-viewer could not be started from the command line with an IPv6 URI. Parsing of URIs containing IPv6 addresses is now fixed and it is possible to connect to an IPv6 address when starting remote-viewer from the command line. BZ# 812347 High network jitter caused some key strokes to enter multiple characters instead of one. Improvements on the SPICE protocol have been made to avoid unwanted character repetition. BZ# 818848 When the QEMU application was started with the --spice-disable-effects option and an invalid value, spice-gtk did not print any error message, which could confuse users. This bug is now fixed and QEMU exits when an invalid value is encountered. BZ# 881072 Previously, an attempt to close connection to a display failed until one of the remaining windows got resized. Consequently, a previously closed window could be opened again without user's intention. Reopening of the closed display is now fixed and closing the remote-viewer windows works as expected. BZ# 835997 Previously, SPICE motion messages were not properly synchronized between client and server after migration. As a consequence, mouse cursor state could get out of sync after migration. This update ensures SPICE motion messages are synchronized between client and server and mouse cursor state no longer gets out of sync. BZ# 846666 Previously, the following error code was returned in various scenarios: This code made debugging of connections failures cumbersome. With this update, the corresponding error message is printed for each of the different scenarios. BZ# 818847 When using the --spice-color-value option with an invalid value, an error message is displayed. However, previously, the message was not clear enough. After the update, when using the --spice-color-value option with an invalid value, SPICE returns an error message including a suggestion of the value. BZ# 843134 After connecting to an agent-less guest with 16-bit color depth, the initial screen was black and got drawn on change only. This bug is now fixed and the guest screen is rendered fully upon connection to an agent-less guest with 16-bit color depth. 
BZ# 867885 Disabling client-side mouse acceleration temporarily when the pointer was in server mode and grabbed caused the mouse pointer to "jump" over the guest desktop at any faster movement. This bug is now fixed and the mouse pointer moves in a guest as supposed in a physical client. BZ# 851090 Previously, the Ctrl+Shift composite key did not work, resulting in the same actions being triggered by different composite keys. This bug is now fixed and Ctrl+Shift works as expected. BZ# 858228 Previously, when no host subject was specified, the remote-viewer tool failed to connect with the following error message: With this update, when no host subject is specified, remote-viewer treats it like an empty host subject and verifies a common name CN= from the subject field with hostname. BZ# 858232 Under certain circumstances, an unclear warning message was returned, incorrectly suggesting that a needless network connection was attempted. The error message has been improved to correctly reflect the state. BZ# 859392 Previously, for security reasons, users were prompted to enter the root password when trying to redirect a USB device from a Red Hat Enterprise Linux 6.4 client to a SPICE guest. However, regular users do not have the root password. As this behavior is controlled by PolicyKit, changes in the /usr/share/polkit-1/actions/org.spice-space.lowlevelusbaccess.policy file have been made to allow access to the raw USB device without prompting for a password. A warning about the security implications of this have been included in the documentation. BZ#807771 Previously, implementation of the CONTROLLER_SEND_CAD event was missing in the spice-gtk controller. As a consequence, checking the box the "Pass Ctrl+Alt+Del to virtual machine box" in the user interface did not produce any result. Implementation for CONTROLLER_SEND_CAD has been added to the underlying source code and users can now tick the checkbox for Ctrl+Alt+Del to be intercepted on the virtual guest. BZ# 861332 After a non-seamless migration of virtual machines with redirected USB devices, SPICE did not evaluate the USB state correctly. With this update, the related functions called from the channel_reset() function can rely on the state accurately, reflecting the USB state. BZ# 804187 When there was no device to redirect, the redirection dialogue window did not provide clear enough information. With this update, a help message indicating that there is no device to redirect is included in the dialogue window as well as additional related guidance. BZ# 868237 In some situations, SPICE attempted to send the 00 scan codes to virtual machines, which resulted in the unknown key pressed error messages being printed by the client. After this update, SPICE no longer sends the 00 scan codes to the spice-server . Enhancements BZ# 846911 The SPICE migration pathway was almost equivalent to automatically connecting the client to the migration target and starting the session from scratch. This pathway resulted in unrecoverable data loss, mainly USB, smartcard or copy-paste data that was on its way from the client to the guest and vice versa, when the non-live phase of the migration started. This update prevents data loss and the migration process completes successfully in this scenario. BZ# 842411 RandR multi-monitor support for Linux guests and arbitrary resolution support for Linux and Windows guests have been added to the spice-gtk package. It is now possible to dynamically add new screens while using a virtual machine. 
Also, after resizing the window of the SPICE client, the resolution of the guest is automatically adjusted to match the size of the window. BZ#820964 Auto-discovery of already plugged-in USB devices on Red Hat Enterprise Linux clients by the USB Redirector has been added to the spice-gtk package. BZ# 834504 This update adds more informative error messages to the spice-gtk package; the messages deal with host subject mismatch when invalid SSL certificates or SSL options are passed to QEMU to the spice-gtk package. Users of spice-gtk are advised to upgrade to these updated packages, which fix these bugs and add these enhancements. 7.232.2. RHSA-2013:1273 - Important: spice-gtk security update Updated spice-gtk packages that fix one security issue are now available for Red Hat Enterprise Linux 6. The Red Hat Security Response Team has rated this update as having important security impact. A Common Vulnerability Scoring System (CVSS) base score, which gives a detailed severity rating, is available from the CVE link associated with the description below. The spice-gtk packages provide a GIMP Toolkit (GTK+) widget for SPICE (Simple Protocol for Independent Computing Environments) clients. Both Virtual Machine Manager and Virtual Machine Viewer can make use of this widget to access virtual machines using the SPICE protocol. Security Fix CVE-2013-4324 spice-gtk communicated with PolicyKit for authorization via an API that is vulnerable to a race condition. This could lead to intended PolicyKit authorizations being bypassed. This update modifies spice-gtk to communicate with PolicyKit via a different API that is not vulnerable to the race condition. All users of spice-gtk are advised to upgrade to these updated packages, which contain a backported patch to correct this issue.
[ "main-1:0: SSL_connect: error:00000001:lib(0):func(0):reason(1)", "Spice-Warning **: ssl_verify.c:484:openssl_verify: ssl: subject '' verification failed" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/6.4_technical_notes/spice-gtk
Chapter 7. Using CPU Manager
Chapter 7. Using CPU Manager CPU Manager manages groups of CPUs and constrains workloads to specific CPUs. CPU Manager is useful for workloads that have some of these attributes: Require as much CPU time as possible. Are sensitive to processor cache misses. Are low-latency network applications. Coordinate with other processes and benefit from sharing a single processor cache. 7.1. Setting up CPU Manager Procedure Optional: Label a node: # oc label node perf-node.example.com cpumanager=true Edit the MachineConfigPool of the nodes where CPU Manager should be enabled. In this example, all workers have CPU Manager enabled: # oc edit machineconfigpool worker Add a label to the worker machine config pool: metadata: creationTimestamp: 2020-xx-xxx generation: 3 labels: custom-kubelet: cpumanager-enabled Create a KubeletConfig , cpumanager-kubeletconfig.yaml , custom resource (CR). Refer to the label created in the step to have the correct nodes updated with the new kubelet config. See the machineConfigPoolSelector section: apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: cpumanager-enabled spec: machineConfigPoolSelector: matchLabels: custom-kubelet: cpumanager-enabled kubeletConfig: cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s 2 1 Specify a policy: none . This policy explicitly enables the existing default CPU affinity scheme, providing no affinity beyond what the scheduler does automatically. static . This policy allows pods with certain resource characteristics to be granted increased CPU affinity and exclusivity on the node. 2 Optional. Specify the CPU Manager reconcile frequency. The default is 5s . Create the dynamic kubelet config: # oc create -f cpumanager-kubeletconfig.yaml This adds the CPU Manager feature to the kubelet config and, if needed, the Machine Config Operator (MCO) reboots the node. To enable CPU Manager, a reboot is not needed. Check for the merged kubelet config: # oc get machineconfig 99-worker-XXXXXX-XXXXX-XXXX-XXXXX-kubelet -o json | grep ownerReference -A7 Example output "ownerReferences": [ { "apiVersion": "machineconfiguration.openshift.io/v1", "kind": "KubeletConfig", "name": "cpumanager-enabled", "uid": "7ed5616d-6b72-11e9-aae1-021e1ce18878" } ] Check the worker for the updated kubelet.conf : # oc debug node/perf-node.example.com sh-4.2# cat /host/etc/kubernetes/kubelet.conf | grep cpuManager Example output cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s 2 1 2 These settings were defined when you created the KubeletConfig CR. Create a pod that requests a core or multiple cores. Both limits and requests must have their CPU value set to a whole integer. That is the number of cores that will be dedicated to this pod: # cat cpumanager-pod.yaml Example output apiVersion: v1 kind: Pod metadata: generateName: cpumanager- spec: containers: - name: cpumanager image: gcr.io/google_containers/pause-amd64:3.0 resources: requests: cpu: 1 memory: "1G" limits: cpu: 1 memory: "1G" nodeSelector: cpumanager: "true" Create the pod: # oc create -f cpumanager-pod.yaml Verify that the pod is scheduled to the node that you labeled: # oc describe pod cpumanager Example output Name: cpumanager-6cqz7 Namespace: default Priority: 0 PriorityClassName: <none> Node: perf-node.example.com/xxx.xx.xx.xxx ... Limits: cpu: 1 memory: 1G Requests: cpu: 1 memory: 1G ... QoS Class: Guaranteed Node-Selectors: cpumanager=true Verify that the cgroups are set up correctly. 
Get the process ID (PID) of the pause process: # ├─init.scope │ └─1 /usr/lib/systemd/systemd --switched-root --system --deserialize 17 └─kubepods.slice ├─kubepods-pod69c01f8e_6b74_11e9_ac0f_0a2b62178a22.slice │ ├─crio-b5437308f1a574c542bdf08563b865c0345c8f8c0b0a655612c.scope │ └─32706 /pause Pods of quality of service (QoS) tier Guaranteed are placed within the kubepods.slice . Pods of other QoS tiers end up in child cgroups of kubepods : # cd /sys/fs/cgroup/cpuset/kubepods.slice/kubepods-pod69c01f8e_6b74_11e9_ac0f_0a2b62178a22.slice/crio-b5437308f1ad1a7db0574c542bdf08563b865c0345c86e9585f8c0b0a655612c.scope # for i in `ls cpuset.cpus tasks` ; do echo -n "USDi "; cat USDi ; done Example output cpuset.cpus 1 tasks 32706 Check the allowed CPU list for the task: # grep ^Cpus_allowed_list /proc/32706/status Example output Cpus_allowed_list: 1 Verify that another pod (in this case, the pod in the burstable QoS tier) on the system cannot run on the core allocated for the Guaranteed pod: # cat /sys/fs/cgroup/cpuset/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc494a073_6b77_11e9_98c0_06bba5c387ea.slice/crio-c56982f57b75a2420947f0afc6cafe7534c5734efc34157525fa9abbf99e3849.scope/cpuset.cpus 0 # oc describe node perf-node.example.com Example output ... Capacity: attachable-volumes-aws-ebs: 39 cpu: 2 ephemeral-storage: 124768236Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 8162900Ki pods: 250 Allocatable: attachable-volumes-aws-ebs: 39 cpu: 1500m ephemeral-storage: 124768236Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 7548500Ki pods: 250 ------- ---- ------------ ---------- --------------- ------------- --- default cpumanager-6cqz7 1 (66%) 1 (66%) 1G (12%) 1G (12%) 29m Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 1440m (96%) 1 (66%) This VM has two CPU cores. The system-reserved setting reserves 500 millicores, meaning that half of one core is subtracted from the total capacity of the node to arrive at the Node Allocatable amount. You can see that Allocatable CPU is 1500 millicores. This means you can run one of the CPU Manager pods since each will take one whole core. A whole core is equivalent to 1000 millicores. If you try to schedule a second pod, the system will accept the pod, but it will never be scheduled: NAME READY STATUS RESTARTS AGE cpumanager-6cqz7 1/1 Running 0 33m cpumanager-7qc2t 0/1 Pending 0 11s
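If you want to see the kubelet's own record of the exclusive CPU assignments, the CPU Manager keeps a checkpoint file on the node, normally at /var/lib/kubelet/cpu_manager_state . This is a sketch only; the JSON content varies with the pods running on the node.
# oc debug node/perf-node.example.com
sh-4.2# cat /host/var/lib/kubelet/cpu_manager_state
The file records the static policy and the CPU sets assigned to each container, which should match the cpuset.cpus values inspected above.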
[ "oc label node perf-node.example.com cpumanager=true", "oc edit machineconfigpool worker", "metadata: creationTimestamp: 2020-xx-xxx generation: 3 labels: custom-kubelet: cpumanager-enabled", "apiVersion: machineconfiguration.openshift.io/v1 kind: KubeletConfig metadata: name: cpumanager-enabled spec: machineConfigPoolSelector: matchLabels: custom-kubelet: cpumanager-enabled kubeletConfig: cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s 2", "oc create -f cpumanager-kubeletconfig.yaml", "oc get machineconfig 99-worker-XXXXXX-XXXXX-XXXX-XXXXX-kubelet -o json | grep ownerReference -A7", "\"ownerReferences\": [ { \"apiVersion\": \"machineconfiguration.openshift.io/v1\", \"kind\": \"KubeletConfig\", \"name\": \"cpumanager-enabled\", \"uid\": \"7ed5616d-6b72-11e9-aae1-021e1ce18878\" } ]", "oc debug node/perf-node.example.com sh-4.2# cat /host/etc/kubernetes/kubelet.conf | grep cpuManager", "cpuManagerPolicy: static 1 cpuManagerReconcilePeriod: 5s 2", "cat cpumanager-pod.yaml", "apiVersion: v1 kind: Pod metadata: generateName: cpumanager- spec: containers: - name: cpumanager image: gcr.io/google_containers/pause-amd64:3.0 resources: requests: cpu: 1 memory: \"1G\" limits: cpu: 1 memory: \"1G\" nodeSelector: cpumanager: \"true\"", "oc create -f cpumanager-pod.yaml", "oc describe pod cpumanager", "Name: cpumanager-6cqz7 Namespace: default Priority: 0 PriorityClassName: <none> Node: perf-node.example.com/xxx.xx.xx.xxx Limits: cpu: 1 memory: 1G Requests: cpu: 1 memory: 1G QoS Class: Guaranteed Node-Selectors: cpumanager=true", "├─init.scope │ └─1 /usr/lib/systemd/systemd --switched-root --system --deserialize 17 └─kubepods.slice ├─kubepods-pod69c01f8e_6b74_11e9_ac0f_0a2b62178a22.slice │ ├─crio-b5437308f1a574c542bdf08563b865c0345c8f8c0b0a655612c.scope │ └─32706 /pause", "cd /sys/fs/cgroup/cpuset/kubepods.slice/kubepods-pod69c01f8e_6b74_11e9_ac0f_0a2b62178a22.slice/crio-b5437308f1ad1a7db0574c542bdf08563b865c0345c86e9585f8c0b0a655612c.scope for i in `ls cpuset.cpus tasks` ; do echo -n \"USDi \"; cat USDi ; done", "cpuset.cpus 1 tasks 32706", "grep ^Cpus_allowed_list /proc/32706/status", "Cpus_allowed_list: 1", "cat /sys/fs/cgroup/cpuset/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podc494a073_6b77_11e9_98c0_06bba5c387ea.slice/crio-c56982f57b75a2420947f0afc6cafe7534c5734efc34157525fa9abbf99e3849.scope/cpuset.cpus 0 oc describe node perf-node.example.com", "Capacity: attachable-volumes-aws-ebs: 39 cpu: 2 ephemeral-storage: 124768236Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 8162900Ki pods: 250 Allocatable: attachable-volumes-aws-ebs: 39 cpu: 1500m ephemeral-storage: 124768236Ki hugepages-1Gi: 0 hugepages-2Mi: 0 memory: 7548500Ki pods: 250 ------- ---- ------------ ---------- --------------- ------------- --- default cpumanager-6cqz7 1 (66%) 1 (66%) 1G (12%) 1G (12%) 29m Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 1440m (96%) 1 (66%)", "NAME READY STATUS RESTARTS AGE cpumanager-6cqz7 1/1 Running 0 33m cpumanager-7qc2t 0/1 Pending 0 11s" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.7/html/scalability_and_performance/using-cpu-manager
Appendix B. Audit System Reference
Appendix B. Audit System Reference B.1. Audit Event Fields Table B.1, "Event Fields" lists all currently-supported Audit event fields. An event field is the value preceding the equal sign in the Audit log files. Table B.1. Event Fields Event Field Explanation a0 , a1 , a2 , a3 Records the first four arguments of the system call, encoded in hexadecimal notation. acct Records a user's account name. addr Records the IPv4 or IPv6 address. This field usually follows a hostname field and contains the address the host name resolves to. arch Records information about the CPU architecture of the system, encoded in hexadecimal notation. auid Records the Audit user ID. This ID is assigned to a user upon login and is inherited by every process even when the user's identity changes (for example, by switching user accounts with su - john ). capability Records the number of bits that were used to set a particular Linux capability. For more information on Linux capabilities, see the capabilities (7) man page. cap_fi Records data related to the setting of an inherited file system-based capability. cap_fp Records data related to the setting of a permitted file system-based capability. cap_pe Records data related to the setting of an effective process-based capability. cap_pi Records data related to the setting of an inherited process-based capability. cap_pp Records data related to the setting of a permitted process-based capability. cgroup Records the path to the cgroup that contains the process at the time the Audit event was generated. cmd Records the entire command line that is executed. This is useful in case of shell interpreters where the exe field records, for example, /bin/bash as the shell interpreter and the cmd field records the rest of the command line that is executed, for example helloworld.sh --help . comm Records the command that is executed. This is useful in case of shell interpreters where the exe field records, for example, /bin/bash as the shell interpreter and the comm field records the name of the script that is executed, for example helloworld.sh . cwd Records the path to the directory in which a system call was invoked. data Records data associated with TTY records. dev Records the minor and major ID of the device that contains the file or directory recorded in an event. devmajor Records the major device ID. devminor Records the minor device ID. egid Records the effective group ID of the user who started the analyzed process. euid Records the effective user ID of the user who started the analyzed process. exe Records the path to the executable that was used to invoke the analyzed process. exit Records the exit code returned by a system call. This value varies by system call. You can interpret the value to its human-readable equivalent with the following command: ausearch --interpret --exit exit_code family Records the type of address protocol that was used, either IPv4 or IPv6. filetype Records the type of the file. flags Records the file system name flags. fsgid Records the file system group ID of the user who started the analyzed process. fsuid Records the file system user ID of the user who started the analyzed process. gid Records the group ID. hostname Records the host name. icmptype Records the type of a Internet Control Message Protocol (ICMP) package that is received. Audit messages containing this field are usually generated by iptables . id Records the user ID of an account that was changed. 
inode Records the inode number associated with the file or directory recorded in an Audit event. inode_gid Records the group ID of the inode's owner. inode_uid Records the user ID of the inode's owner. items Records the number of path records that are attached to this record. key Records the user defined string associated with a rule that generated a particular event in the Audit log. list Records the Audit rule list ID. The following is a list of known IDs: 0 - user 1 - task 4 - exit 5 - exclude mode Records the file or directory permissions, encoded in numerical notation. msg Records a time stamp and a unique ID of a record, or various event-specific <name> = <value> pairs provided by the kernel or user space applications. msgtype Records the message type that is returned in case of a user-based AVC denial. The message type is determined by D-Bus. name Records the full path of the file or directory that was passed to the system call as an argument. new-disk Records the name of a new disk resource that is assigned to a virtual machine. new-mem Records the amount of a new memory resource that is assigned to a virtual machine. new-vcpu Records the number of a new virtual CPU resource that is assigned to a virtual machine. new-net Records the MAC address of a new network interface resource that is assigned to a virtual machine. new_gid Records a group ID that is assigned to a user. oauid Records the user ID of the user that has logged in to access the system (as opposed to, for example, using su ) and has started the target process. This field is exclusive to the record of type OBJ_PID . ocomm Records the command that was used to start the target process.This field is exclusive to the record of type OBJ_PID . opid Records the process ID of the target process. This field is exclusive to the record of type OBJ_PID . oses Records the session ID of the target process. This field is exclusive to the record of type OBJ_PID . ouid Records the real user ID of the target process obj Records the SELinux context of an object. An object can be a file, a directory, a socket, or anything that is receiving the action of a subject. obj_gid Records the group ID of an object. obj_lev_high Records the high SELinux level of an object. obj_lev_low Records the low SELinux level of an object. obj_role Records the SELinux role of an object. obj_uid Records the UID of an object obj_user Records the user that is associated with an object. ogid Records the object owner's group ID. old-disk Records the name of an old disk resource when a new disk resource is assigned to a virtual machine. old-mem Records the amount of an old memory resource when a new amount of memory is assigned to a virtual machine. old-vcpu Records the number of an old virtual CPU resource when a new virtual CPU is assigned to a virtual machine. old-net Records the MAC address of an old network interface resource when a new network interface is assigned to a virtual machine. old_prom Records the value of the network promiscuity flag. ouid Records the real user ID of the user who started the target process. path Records the full path of the file or directory that was passed to the system call as an argument in case of AVC-related Audit events perm Records the file permission that was used to generate an event (that is, read, write, execute, or attribute change) pid The pid field semantics depend on the origin of the value in this field. In fields generated from user-space, this field holds a process ID. 
In fields generated by the kernel, this field holds a thread ID. The thread ID is equal to process ID for single-threaded processes. Note that the value of this thread ID is different from the values of pthread_t IDs used in user-space. For more information, see the gettid (2) man page. ppid Records the Parent Process ID (PID). prom Records the network promiscuity flag. proto Records the networking protocol that was used. This field is specific to Audit events generated by iptables . res Records the result of the operation that triggered the Audit event. result Records the result of the operation that triggered the Audit event. saddr Records the socket address. sauid Records the sender Audit login user ID. This ID is provided by D-Bus as the kernel is unable to see which user is sending the original auid . ses Records the session ID of the session from which the analyzed process was invoked. sgid Records the set group ID of the user who started the analyzed process. sig Records the number of a signal that causes a program to end abnormally. Usually, this is a sign of a system intrusion. subj Records the SELinux context of a subject. A subject can be a process, a user, or anything that is acting upon an object. subj_clr Records the SELinux clearance of a subject. subj_role Records the SELinux role of a subject. subj_sen Records the SELinux sensitivity of a subject. subj_user Records the user that is associated with a subject. success Records whether a system call was successful or failed. suid Records the set user ID of the user who started the analyzed process. syscall Records the type of the system call that was sent to the kernel. terminal Records the terminal name (without /dev/ ). tty Records the name of the controlling terminal. The value (none) is used if the process has no controlling terminal. uid Records the real user ID of the user who started the analyzed process. vm Records the name of a virtual machine from which the Audit event originated.
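The field names above are also what the ausearch utility filters on when you query the Audit logs. As a minimal sketch (the key name and the executable path below are only illustrative and are not taken from this guide), a few typical queries look like this:
~]# ausearch -i -ul 500
~]# ausearch -i -k password-file
~]# ausearch -i -sv no -x /usr/bin/passwd
The first command interprets numeric fields such as uid , gid , arch , and syscall into readable names and lists events whose auid field is 500; the second lists events generated by rules tagged with the given key field; the third restricts the output to failed events ( success =no) produced by the given executable.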
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/security_guide/app-Audit_Reference
Chapter 7. Applying autoscaling to an OpenShift Container Platform cluster
Chapter 7. Applying autoscaling to an OpenShift Container Platform cluster Applying autoscaling to an OpenShift Container Platform cluster involves deploying a cluster autoscaler and then deploying machine autoscalers for each machine type in your cluster. Important You can configure the cluster autoscaler only in clusters where the Machine API Operator is operational. 7.1. About the cluster autoscaler The cluster autoscaler adjusts the size of an OpenShift Container Platform cluster to meet its current deployment needs. It uses declarative, Kubernetes-style arguments to provide infrastructure management that does not rely on objects of a specific cloud provider. The cluster autoscaler has a cluster scope, and is not associated with a particular namespace. The cluster autoscaler increases the size of the cluster when there are pods that fail to schedule on any of the current worker nodes due to insufficient resources or when another node is necessary to meet deployment needs. The cluster autoscaler does not increase the cluster resources beyond the limits that you specify. The cluster autoscaler computes the total memory, CPU, and GPU on all nodes in the cluster, even though it does not manage the control plane nodes. These values are not single-machine oriented. They are an aggregation of all the resources in the entire cluster. For example, if you set the maximum memory resource limit, the cluster autoscaler includes all the nodes in the cluster when calculating the current memory usage. That calculation is then used to determine if the cluster autoscaler has the capacity to add more worker resources. Important Ensure that the maxNodesTotal value in the ClusterAutoscaler resource definition that you create is large enough to account for the total possible number of machines in your cluster. This value must encompass the number of control plane machines and the possible number of compute machines that you might scale to. Every 10 seconds, the cluster autoscaler checks which nodes are unnecessary in the cluster and removes them. The cluster autoscaler considers a node for removal if the following conditions apply: The node utilization is less than the node utilization level threshold for the cluster. The node utilization level is the sum of the requested resources divided by the allocated resources for the node. If you do not specify a value in the ClusterAutoscaler custom resource, the cluster autoscaler uses a default value of 0.5 , which corresponds to 50% utilization. The cluster autoscaler can move all pods running on the node to the other nodes. The Kubernetes scheduler is responsible for scheduling pods on the nodes. The node does not have the scale down disabled annotation. If the following types of pods are present on a node, the cluster autoscaler will not remove the node: Pods with restrictive pod disruption budgets (PDBs). Kube-system pods that do not run on the node by default. Kube-system pods that do not have a PDB or have a PDB that is too restrictive. Pods that are not backed by a controller object such as a deployment, replica set, or stateful set. Pods with local storage. Pods that cannot be moved elsewhere because of a lack of resources, incompatible node selectors or affinity, matching anti-affinity, and so on. Unless they also have a "cluster-autoscaler.kubernetes.io/safe-to-evict": "true" annotation, pods that have a "cluster-autoscaler.kubernetes.io/safe-to-evict": "false" annotation.
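For reference, the safe-to-evict annotation mentioned in the last item is set in a pod's metadata. A minimal sketch, in which the pod name and image are placeholders rather than values taken from this chapter:
apiVersion: v1
kind: Pod
metadata:
  name: example-pod
  annotations:
    cluster-autoscaler.kubernetes.io/safe-to-evict: "true"
spec:
  containers:
  - name: app
    image: registry.example.com/app:latest
With the annotation set to "true" , the cluster autoscaler treats the pod as evictable and can still consider its node for removal; set to "false" , the pod blocks removal of its node as described above.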
For example, you set the maximum CPU limit to 64 cores and configure the cluster autoscaler to only create machines that have 8 cores each. If your cluster starts with 30 cores, the cluster autoscaler can add up to 4 more nodes with 32 cores, for a total of 62. If you configure the cluster autoscaler, additional usage restrictions apply: Do not modify the nodes that are in autoscaled node groups directly. All nodes within the same node group have the same capacity and labels and run the same system pods. Specify requests for your pods. If you have to prevent pods from being deleted too quickly, configure appropriate PDBs. Confirm that your cloud provider quota is large enough to support the maximum node pools that you configure. Do not run additional node group autoscalers, especially the ones offered by your cloud provider. The horizontal pod autoscaler (HPA) and the cluster autoscaler modify cluster resources in different ways. The HPA changes the deployment's or replica set's number of replicas based on the current CPU load. If the load increases, the HPA creates new replicas, regardless of the amount of resources available to the cluster. If there are not enough resources, the cluster autoscaler adds resources so that the HPA-created pods can run. If the load decreases, the HPA stops some replicas. If this action causes some nodes to be underutilized or completely empty, the cluster autoscaler deletes the unnecessary nodes. The cluster autoscaler takes pod priorities into account. The Pod Priority and Preemption feature enables scheduling pods based on priorities if the cluster does not have enough resources, but the cluster autoscaler ensures that the cluster has resources to run all pods. To honor the intention of both features, the cluster autoscaler includes a priority cutoff function. You can use this cutoff to schedule "best-effort" pods, which do not cause the cluster autoscaler to increase resources but instead run only when spare resources are available. Pods with priority lower than the cutoff value do not cause the cluster to scale up or prevent the cluster from scaling down. No new nodes are added to run the pods, and nodes running these pods might be deleted to free resources. Cluster autoscaling is supported for the platforms that have the Machine API available. 7.2. Configuring the cluster autoscaler First, deploy the cluster autoscaler to manage automatic resource scaling in your OpenShift Container Platform cluster. Note Because the cluster autoscaler is scoped to the entire cluster, you can make only one cluster autoscaler for the cluster. 7.2.1. Cluster autoscaler resource definition This ClusterAutoscaler resource definition shows the parameters and sample values for the cluster autoscaler. apiVersion: "autoscaling.openshift.io/v1" kind: "ClusterAutoscaler" metadata: name: "default" spec: podPriorityThreshold: -10 1 resourceLimits: maxNodesTotal: 24 2 cores: min: 8 3 max: 128 4 memory: min: 4 5 max: 256 6 gpus: - type: nvidia.com/gpu 7 min: 0 8 max: 16 9 - type: amd.com/gpu min: 0 max: 4 scaleDown: 10 enabled: true 11 delayAfterAdd: 10m 12 delayAfterDelete: 5m 13 delayAfterFailure: 30s 14 unneededTime: 5m 15 utilizationThreshold: "0.4" 16 1 Specify the priority that a pod must exceed to cause the cluster autoscaler to deploy additional nodes. Enter a 32-bit integer value. The podPriorityThreshold value is compared to the value of the PriorityClass that you assign to each pod. 2 Specify the maximum number of nodes to deploy.
This value is the total number of machines that are deployed in your cluster, not just the ones that the autoscaler controls. Ensure that this value is large enough to account for all of your control plane and compute machines and the total number of replicas that you specify in your MachineAutoscaler resources. 3 Specify the minimum number of cores to deploy in the cluster. 4 Specify the maximum number of cores to deploy in the cluster. 5 Specify the minimum amount of memory, in GiB, in the cluster. 6 Specify the maximum amount of memory, in GiB, in the cluster. 7 Optional: Specify the type of GPU node to deploy. Only nvidia.com/gpu and amd.com/gpu are valid types. 8 Specify the minimum number of GPUs to deploy in the cluster. 9 Specify the maximum number of GPUs to deploy in the cluster. 10 In this section, you can specify the period to wait for each action by using any valid ParseDuration interval, including ns , us , ms , s , m , and h . 11 Specify whether the cluster autoscaler can remove unnecessary nodes. 12 Optional: Specify the period to wait before deleting a node after a node has recently been added . If you do not specify a value, the default value of 10m is used. 13 Optional: Specify the period to wait before deleting a node after a node has recently been deleted . If you do not specify a value, the default value of 0s is used. 14 Optional: Specify the period to wait before deleting a node after a scale down failure occurred. If you do not specify a value, the default value of 3m is used. 15 Optional: Specify the period before an unnecessary node is eligible for deletion. If you do not specify a value, the default value of 10m is used. 16 Optional: Specify the node utilization level below which an unnecessary node is eligible for deletion. The node utilization level is the sum of the requested resources divided by the allocated resources for the node, and must be a value greater than "0" but less than "1" . If you do not specify a value, the cluster autoscaler uses a default value of "0.5" , which corresponds to 50% utilization. This value must be expressed as a string. Note When performing a scaling operation, the cluster autoscaler remains within the ranges set in the ClusterAutoscaler resource definition, such as the minimum and maximum number of cores to deploy or the amount of memory in the cluster. However, the cluster autoscaler does not correct the current values in your cluster to be within those ranges. The minimum and maximum CPUs, memory, and GPU values are determined by calculating those resources on all nodes in the cluster, even if the cluster autoscaler does not manage the nodes. For example, the control plane nodes are considered in the total memory in the cluster, even though the cluster autoscaler does not manage the control plane nodes. 7.2.2. Deploying a cluster autoscaler To deploy a cluster autoscaler, you create an instance of the ClusterAutoscaler resource. Procedure Create a YAML file for a ClusterAutoscaler resource that contains the custom resource definition. Create the custom resource in the cluster by running the following command: USD oc create -f <filename>.yaml 1 1 <filename> is the name of the custom resource file. steps After you configure the cluster autoscaler, you must configure at least one machine autoscaler . 7.3. About the machine autoscaler The machine autoscaler adjusts the number of Machines in the machine sets that you deploy in an OpenShift Container Platform cluster. 
You can scale both the default worker machine set and any other machine sets that you create. The machine autoscaler makes more Machines when the cluster runs out of resources to support more deployments. Any changes to the values in MachineAutoscaler resources, such as the minimum or maximum number of instances, are immediately applied to the machine set they target. Important You must deploy a machine autoscaler for the cluster autoscaler to scale your machines. The cluster autoscaler uses the annotations on machine sets that the machine autoscaler sets to determine the resources that it can scale. If you define a cluster autoscaler without also defining machine autoscalers, the cluster autoscaler will never scale your cluster. 7.4. Configuring machine autoscalers After you deploy the cluster autoscaler, deploy MachineAutoscaler resources that reference the machine sets that are used to scale the cluster. Important You must deploy at least one MachineAutoscaler resource after you deploy the ClusterAutoscaler resource. Note You must configure separate resources for each machine set. Remember that machine sets are different in each region, so consider whether you want to enable machine scaling in multiple regions. The machine set that you scale must have at least one machine in it. 7.4.1. Machine autoscaler resource definition This MachineAutoscaler resource definition shows the parameters and sample values for the machine autoscaler. apiVersion: "autoscaling.openshift.io/v1beta1" kind: "MachineAutoscaler" metadata: name: "worker-us-east-1a" 1 namespace: "openshift-machine-api" spec: minReplicas: 1 2 maxReplicas: 12 3 scaleTargetRef: 4 apiVersion: machine.openshift.io/v1beta1 kind: MachineSet 5 name: worker-us-east-1a 6 1 Specify the machine autoscaler name. To make it easier to identify which machine set this machine autoscaler scales, specify or include the name of the machine set to scale. The machine set name takes the following form: <clusterid>-<machineset>-<region> . 2 Specify the minimum number of machines of the specified type that must remain in the specified zone after the cluster autoscaler initiates cluster scaling. If running in AWS, GCP, Azure, RHOSP, or vSphere, this value can be set to 0 . For other providers, do not set this value to 0 . You can save on costs by setting this value to 0 for use cases such as running expensive or limited-usage hardware that is used for specialized workloads, or by scaling a machine set with extra large machines. The cluster autoscaler scales the machine set down to zero if the machines are not in use. Important Do not set the spec.minReplicas value to 0 for the three compute machine sets that are created during the OpenShift Container Platform installation process for an installer provisioned infrastructure. 3 Specify the maximum number of machines of the specified type that the cluster autoscaler can deploy in the specified zone after it initiates cluster scaling. Ensure that the maxNodesTotal value in the ClusterAutoscaler resource definition is large enough to allow the machine autoscaler to deploy this number of machines. 4 In this section, provide values that describe the existing machine set to scale. 5 The kind parameter value is always MachineSet . 6 The name value must match the name of an existing machine set, as shown in the metadata.name parameter value. 7.4.2. Deploying a machine autoscaler To deploy a machine autoscaler, you create an instance of the MachineAutoscaler resource.
Procedure Create a YAML file for a MachineAutoscaler resource that contains the custom resource definition. Create the custom resource in the cluster by running the following command: USD oc create -f <filename>.yaml 1 1 <filename> is the name of the custom resource file. 7.5. Disabling autoscaling You can disable an individual machine autoscaler in your cluster or disable autoscaling on the cluster entirely. 7.5.1. Disabling a machine autoscaler To disable a machine autoscaler, you delete the corresponding MachineAutoscaler custom resource (CR). Note Disabling a machine autoscaler does not disable the cluster autoscaler. To disable the cluster autoscaler, follow the instructions in "Disabling the cluster autoscaler". Procedure List the MachineAutoscaler CRs for the cluster by running the following command: USD oc get MachineAutoscaler -n openshift-machine-api Example output NAME REF KIND REF NAME MIN MAX AGE compute-us-east-1a MachineSet compute-us-east-1a 1 12 39m compute-us-west-1a MachineSet compute-us-west-1a 2 4 37m Optional: Create a YAML file backup of the MachineAutoscaler CR by running the following command: USD oc get MachineAutoscaler/<machine_autoscaler_name> \ 1 -n openshift-machine-api \ -o yaml> <machine_autoscaler_name_backup>.yaml 2 1 <machine_autoscaler_name> is the name of the CR that you want to delete. 2 <machine_autoscaler_name_backup> is the name for the backup of the CR. Delete the MachineAutoscaler CR by running the following command: USD oc delete MachineAutoscaler/<machine_autoscaler_name> -n openshift-machine-api Example output machineautoscaler.autoscaling.openshift.io "compute-us-east-1a" deleted Verification To verify that the machine autoscaler is disabled, run the following command: USD oc get MachineAutoscaler -n openshift-machine-api The disabled machine autoscaler does not appear in the list of machine autoscalers. steps If you need to re-enable the machine autoscaler, use the <machine_autoscaler_name_backup>.yaml backup file and follow the instructions in "Deploying a machine autoscaler". Additional resources Disabling the cluster autoscaler Deploying a machine autoscaler 7.5.2. Disabling the cluster autoscaler To disable the cluster autoscaler, you delete the corresponding ClusterAutoscaler resource. Note Disabling the cluster autoscaler disables autoscaling on the cluster, even if the cluster has existing machine autoscalers. Procedure List the ClusterAutoscaler resource for the cluster by running the following command: USD oc get ClusterAutoscaler Example output NAME AGE default 42m Optional: Create a YAML file backup of the ClusterAutoscaler CR by running the following command: USD oc get ClusterAutoscaler/default \ 1 -o yaml> <cluster_autoscaler_backup_name>.yaml 2 1 default is the name of the ClusterAutoscaler CR. 2 <cluster_autoscaler_backup_name> is the name for the backup of the CR. Delete the ClusterAutoscaler CR by running the following command: USD oc delete ClusterAutoscaler/default Example output clusterautoscaler.autoscaling.openshift.io "default" deleted Verification To verify that the cluster autoscaler is disabled, run the following command: USD oc get ClusterAutoscaler Expected output No resources found steps Disabling the cluster autoscaler by deleting the ClusterAutoscaler CR prevents the cluster from autoscaling but does not delete any existing machine autoscalers on the cluster. To clean up unneeded machine autoscalers, see "Disabling a machine autoscaler". 
If you need to re-enable the cluster autoscaler, use the <cluster_autoscaler_name_backup>.yaml backup file and follow the instructions in "Deploying a cluster autoscaler". Additional resources Disabling the machine autoscaler Deploying a cluster autoscaler 7.6. Additional resources Including pod priority in pod scheduling decisions in OpenShift Container Platform
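To confirm whether the autoscalers described in this chapter are in effect, you can inspect the custom resources and, if necessary, the cluster autoscaler's own logs. A sketch of such a check; the deployment name cluster-autoscaler-default is what a ClusterAutoscaler named default typically produces, but verify the actual name in your cluster:
oc get ClusterAutoscaler
oc get MachineAutoscaler -n openshift-machine-api
oc logs deployment/cluster-autoscaler-default -n openshift-machine-api
The first two commands list the resources you created or deleted in the previous sections; the log output shows the scale-up and scale-down decisions that the cluster autoscaler is making.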
[ "apiVersion: \"autoscaling.openshift.io/v1\" kind: \"ClusterAutoscaler\" metadata: name: \"default\" spec: podPriorityThreshold: -10 1 resourceLimits: maxNodesTotal: 24 2 cores: min: 8 3 max: 128 4 memory: min: 4 5 max: 256 6 gpus: - type: nvidia.com/gpu 7 min: 0 8 max: 16 9 - type: amd.com/gpu min: 0 max: 4 scaleDown: 10 enabled: true 11 delayAfterAdd: 10m 12 delayAfterDelete: 5m 13 delayAfterFailure: 30s 14 unneededTime: 5m 15 utilizationThreshold: \"0.4\" 16", "oc create -f <filename>.yaml 1", "apiVersion: \"autoscaling.openshift.io/v1beta1\" kind: \"MachineAutoscaler\" metadata: name: \"worker-us-east-1a\" 1 namespace: \"openshift-machine-api\" spec: minReplicas: 1 2 maxReplicas: 12 3 scaleTargetRef: 4 apiVersion: machine.openshift.io/v1beta1 kind: MachineSet 5 name: worker-us-east-1a 6", "oc create -f <filename>.yaml 1", "oc get MachineAutoscaler -n openshift-machine-api", "NAME REF KIND REF NAME MIN MAX AGE compute-us-east-1a MachineSet compute-us-east-1a 1 12 39m compute-us-west-1a MachineSet compute-us-west-1a 2 4 37m", "oc get MachineAutoscaler/<machine_autoscaler_name> \\ 1 -n openshift-machine-api -o yaml> <machine_autoscaler_name_backup>.yaml 2", "oc delete MachineAutoscaler/<machine_autoscaler_name> -n openshift-machine-api", "machineautoscaler.autoscaling.openshift.io \"compute-us-east-1a\" deleted", "oc get MachineAutoscaler -n openshift-machine-api", "oc get ClusterAutoscaler", "NAME AGE default 42m", "oc get ClusterAutoscaler/default \\ 1 -o yaml> <cluster_autoscaler_backup_name>.yaml 2", "oc delete ClusterAutoscaler/default", "clusterautoscaler.autoscaling.openshift.io \"default\" deleted", "oc get ClusterAutoscaler", "No resources found" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.11/html/machine_management/applying-autoscaling
Introduction
Introduction Welcome to the Reference Guide . The Reference Guide contains useful information about the Red Hat Enterprise Linux system. From fundamental concepts, such as the structure of the file system, to the finer points of system security and authentication control, we hope you find this book to be a valuable resource. This guide is for you if you want to learn a bit more about how the Red Hat Enterprise Linux system works. Topics that you can explore within this manual include the following: The boot process The file system structure The X Window System Network services Security tools 1. Changes To This Manual This manual has been reorganized for clarity and updated for the latest features of Red Hat Enterprise Linux 4.5.0. Some of the changes include: A New Samba Chapter The new Samba chapter explains various Samba daemons and configuration options. Special thanks to John Terpstra for his hard work in helping to complete this chapter. A New SELinux Chapter The new SELinux chapter explains various SELinux files and configuration options. Special thanks to Karsten Wade for his hard work in helping to complete this chapter. An Updated proc File System Chapter The proc file system chapter includes updated information regarding the 2.6 kernel. Special thanks to Arjan van de Ven for his hard work in helping to complete this chapter. An Updated Network File System (NFS) Chapter The Network File System (NFS) chapter has been revised and reorganized to include NFSv4. An Updated The X Window System Chapter The X Window System chapter has been revised to include information on the X11R6.8 release developed by the X.Org team. Before reading this guide, you should be familiar with the contents of the Installation Guide concerning installation issues, the Red Hat Enterprise Linux Introduction to System Administration for basic administration concepts, the System Administrators Guide for general customization instructions, and the Security Guide for security-related instructions. This guide contains information about topics for advanced users.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/ch-intro
Chapter 1. OpenID Connect (OIDC) and OAuth2 client and filters
Chapter 1. OpenID Connect (OIDC) and OAuth2 client and filters You can use Quarkus extensions for OpenID Connect and OAuth 2.0 access token management, focusing on acquiring, refreshing, and propagating tokens. This includes the following: Using quarkus-oidc-client , quarkus-rest-client-oidc-filter and quarkus-resteasy-client-oidc-filter extensions to acquire and refresh access tokens from OpenID Connect and OAuth 2.0 compliant Authorization Servers such as Keycloak . Using quarkus-rest-client-oidc-token-propagation and quarkus-resteasy-client-oidc-token-propagation extensions to propagate the current Bearer or Authorization Code Flow access tokens. The access tokens managed by these extensions can be used as HTTP Authorization Bearer tokens to access the remote services. Also see OpenID Connect client and token propagation quickstart . 1.1. OidcClient Add the following dependency: <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-oidc-client</artifactId> </dependency> The quarkus-oidc-client extension provides a reactive io.quarkus.oidc.client.OidcClient , which can be used to acquire and refresh tokens using SmallRye Mutiny Uni and Vert.x WebClient . OidcClient is initialized at build time with the IDP token endpoint URL, which can be auto-discovered or manually configured. OidcClient uses this endpoint to acquire access tokens by using token grants such as client_credentials or password and refresh the tokens by using a refresh_token grant. 1.1.1. Token endpoint configuration By default, the token endpoint address is discovered by adding a /.well-known/openid-configuration path to the configured quarkus.oidc-client.auth-server-url . For example, given this Keycloak URL: quarkus.oidc-client.auth-server-url=http://localhost:8180/auth/realms/quarkus OidcClient will discover that the token endpoint URL is http://localhost:8180/auth/realms/quarkus/protocol/openid-connect/tokens . Alternatively, if the discovery endpoint is unavailable or you want to save on the discovery endpoint round-trip, you can disable the discovery and configure the token endpoint address with a relative path value. For example: quarkus.oidc-client.auth-server-url=http://localhost:8180/auth/realms/quarkus quarkus.oidc-client.discovery-enabled=false # Token endpoint: http://localhost:8180/auth/realms/quarkus/protocol/openid-connect/tokens quarkus.oidc-client.token-path=/protocol/openid-connect/tokens A more compact way to configure the token endpoint URL without the discovery is to set quarkus.oidc-client.token-path to an absolute URL: quarkus.oidc-client.token-path=http://localhost:8180/auth/realms/quarkus/protocol/openid-connect/tokens Setting quarkus.oidc-client.auth-server-url and quarkus.oidc-client.discovery-enabled is not required in this case. 1.1.2. Supported token grants The main token grants that OidcClient can use to acquire the tokens are the client_credentials (default) and password grants. 1.1.2.1. Client credentials grant Here is how OidcClient can be configured to use the client_credentials grant: quarkus.oidc-client.auth-server-url=http://localhost:8180/auth/realms/quarkus/ quarkus.oidc-client.client-id=quarkus-app quarkus.oidc-client.credentials.secret=secret The client_credentials grant allows setting extra parameters for the token request by using quarkus.oidc-client.grant-options.client.<param-name>=<value> . 
Here is how to set the intended token recipient by using the audience parameter: quarkus.oidc-client.auth-server-url=http://localhost:8180/auth/realms/quarkus/ quarkus.oidc-client.client-id=quarkus-app quarkus.oidc-client.credentials.secret=secret # 'client' is a shortcut for `client_credentials` quarkus.oidc-client.grant.type=client quarkus.oidc-client.grant-options.client.audience=https://example.com/api 1.1.2.2. Password grant Here is how OidcClient can be configured to use the password grant: quarkus.oidc-client.auth-server-url=http://localhost:8180/auth/realms/quarkus/ quarkus.oidc-client.client-id=quarkus-app quarkus.oidc-client.credentials.secret=secret quarkus.oidc-client.grant.type=password quarkus.oidc-client.grant-options.password.username=alice quarkus.oidc-client.grant-options.password.password=alice It can be further customized by using a quarkus.oidc-client.grant-options.password configuration prefix, similar to how the client credentials grant can be customized. 1.1.2.3. Other grants OidcClient can also help acquire the tokens by using grants that require some extra input parameters that cannot be captured in the configuration. These grants are refresh_token (with the external refresh token), authorization_code , and two grants which can be used to exchange the current access token, namely, urn:ietf:params:oauth:grant-type:token-exchange and urn:ietf:params:oauth:grant-type:jwt-bearer . If you need to acquire an access token and have posted an existing refresh token to the current Quarkus endpoint, you must use the refresh_token grant. This grant employs an out-of-band refresh token to obtain a new token set. In this case, configure OidcClient as follows: quarkus.oidc-client.auth-server-url=http://localhost:8180/auth/realms/quarkus/ quarkus.oidc-client.client-id=quarkus-app quarkus.oidc-client.credentials.secret=secret quarkus.oidc-client.grant.type=refresh Then you can use the OidcClient.refreshTokens method with a provided refresh token to get the access token. Using the urn:ietf:params:oauth:grant-type:token-exchange or urn:ietf:params:oauth:grant-type:jwt-bearer grants might be required if you are building a complex microservices application and want to avoid the same Bearer token be propagated to and used by more than one service. See Token Propagation for Quarkus REST and Token Propagation for RESTEasy Classic for more details. Using OidcClient to support the authorization code grant might be required if, for some reason, you cannot use the Quarkus OIDC extension to support Authorization Code Flow. If there is a very good reason for you to implement Authorization Code Flow, then you can configure OidcClient as follows: quarkus.oidc-client.auth-server-url=http://localhost:8180/auth/realms/quarkus/ quarkus.oidc-client.client-id=quarkus-app quarkus.oidc-client.credentials.secret=secret quarkus.oidc-client.grant.type=code Then, you can use the OidcClient.accessTokens method to accept a Map of extra properties and pass the current code and redirect_uri parameters to exchange the authorization code for the tokens. OidcClient also supports the urn:openid:params:grant-type:ciba grant: quarkus.oidc-client.auth-server-url=http://localhost:8180/auth/realms/quarkus/ quarkus.oidc-client.client-id=quarkus-app quarkus.oidc-client.credentials.secret=secret quarkus.oidc-client.grant.type=ciba Then, you can use the OidcClient.accessTokens method to accept a Map of extra properties and pass the auth_req_id parameter to exchange the token authorization code. 1.1.2.4. 
Grant scopes You might need to request that a specific set of scopes be associated with an issued access token. Use a dedicated quarkus.oidc-client.scopes list property, for example: quarkus.oidc-client.scopes=email,phone 1.1.3. Use OidcClient directly You can use OidcClient directly to acquire access tokens and set them in an HTTP Authorization header as a Bearer scheme value. For example, let's assume the Quarkus endpoint has to access a microservice that returns a user name. First, create a REST client: package org.acme.security.openid.connect.client; import org.eclipse.microprofile.rest.client.inject.RegisterRestClient; import io.smallrye.mutiny.Uni; import jakarta.ws.rs.GET; import jakarta.ws.rs.HeaderParam; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; @RegisterRestClient @Path("/") public interface RestClientWithTokenHeaderParam { @GET @Produces("text/plain") @Path("userName") Uni<String> getUserName(@HeaderParam("Authorization") String authorization); } Now, use OidcClient to acquire the tokens and propagate them: package org.acme.security.openid.connect.client; import org.eclipse.microprofile.rest.client.inject.RestClient; import io.quarkus.oidc.client.runtime.TokensHelper; import io.quarkus.oidc.client.OidcClient; import io.smallrye.mutiny.Uni; import jakarta.inject.Inject; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; @Path("/service") public class OidcClientResource { @Inject OidcClient client; TokensHelper tokenHelper = new TokensHelper(); 1 @Inject @RestClient RestClientWithTokenHeaderParam restClient; @GET @Path("user-name") @Produces("text/plain") public Uni<String> getUserName() { return tokenHelper.getTokens(client).onItem() .transformToUni(tokens -> restClient.getUserName("Bearer " + tokens.getAccessToken())); } } 1 io.quarkus.oidc.client.runtime.TokensHelper manages the access token acquisition and refresh. 1.1.4. Inject tokens You can inject Tokens that use OidcClient internally. Tokens can be used to acquire the access tokens and refresh them if necessary: import jakarta.inject.Inject; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import io.quarkus.oidc.client.Tokens; @Path("/service") public class OidcClientResource { @Inject Tokens tokens; @GET public String getResponse() { // Get the access token, which might have been refreshed. String accessToken = tokens.getAccessToken(); // Use the access token to configure MP RestClient Authorization header/etc } } 1.1.5. Use OidcClients io.quarkus.oidc.client.OidcClients is a container of OidcClient s - it includes a default OidcClient and named clients which can be configured like this: quarkus.oidc-client.client-enabled=false quarkus.oidc-client.jwt-secret.auth-server-url=http://localhost:8180/auth/realms/quarkus/ quarkus.oidc-client.jwt-secret.client-id=quarkus-app quarkus.oidc-client.jwt-secret.credentials.jwt.secret=AyM1SysPpbyDfgZld3umj1qzKObwVMkoqQ-EstJQLr_T-1qS0gZH75aKtMN3Yj0iPS4hcgUuTwjAzZr1Z9CAow In this case, the default client is disabled with a client-enabled=false property. 
The jwt-secret client can be accessed like this: import org.eclipse.microprofile.rest.client.inject.RestClient; import jakarta.inject.Inject; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; import io.smallrye.mutiny.Uni; import io.quarkus.oidc.client.OidcClient; import io.quarkus.oidc.client.OidcClients; import io.quarkus.oidc.client.runtime.TokensHelper; @Path("/clients") public class OidcClientResource { @Inject OidcClients clients; TokensHelper tokenHelper = new TokensHelper(); @Inject @RestClient RestClientWithTokenHeaderParam restClient; 1 @GET @Path("user-name") @Produces("text/plain") public Uni<String> getUserName() { OidcClient client = clients.getClient("jwt-secret"); return tokenHelper.getTokens(client).onItem() .transformToUni(tokens -> restClient.getUserName("Bearer " + tokens.getAccessToken())); } } 1 See the RestClientWithTokenHeaderParam declaration in the Use OidcClient directly section. Note If you also use OIDC multitenancy , and each OIDC tenant has its own associated OidcClient , you can use a Vert.x RoutingContext tenant-id attribute. For example: import jakarta.inject.Inject; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import io.quarkus.oidc.client.OidcClient; import io.quarkus.oidc.client.OidcClients; import io.vertx.ext.web.RoutingContext; @Path("/clients") public class OidcClientResource { @Inject OidcClients clients; @Inject RoutingContext context; @GET public String getResponse() { String tenantId = context.get("tenant-id"); // named OIDC tenant and client configurations use the same key: OidcClient client = clients.getClient(tenantId); //Use this client to get the token } } You can also create a new OidcClient programmatically. For example, let's assume you must create it at startup time: package org.acme.security.openid.connect.client; import java.util.Map; import org.eclipse.microprofile.config.inject.ConfigProperty; import io.quarkus.oidc.client.OidcClient; import io.quarkus.oidc.client.OidcClientConfig; import io.quarkus.oidc.client.OidcClientConfig.Grant.Type; import io.quarkus.oidc.client.OidcClients; import io.quarkus.runtime.StartupEvent; import io.smallrye.mutiny.Uni; import jakarta.enterprise.context.ApplicationScoped; import jakarta.enterprise.event.Observes; import jakarta.inject.Inject; @ApplicationScoped public class OidcClientCreator { @Inject OidcClients oidcClients; @ConfigProperty(name = "quarkus.oidc.auth-server-url") String oidcProviderAddress; private volatile OidcClient oidcClient; public void startup(@Observes StartupEvent event) { createOidcClient().subscribe().with(client -> {oidcClient = client;}); } public OidcClient getOidcClient() { return oidcClient; } private Uni<OidcClient> createOidcClient() { OidcClientConfig cfg = new OidcClientConfig(); cfg.setId("myclient"); cfg.setAuthServerUrl(oidcProviderAddress); cfg.setClientId("backend-service"); cfg.getCredentials().setSecret("secret"); cfg.getGrant().setType(Type.PASSWORD); cfg.setGrantOptions(Map.of("password", Map.of("username", "alice", "password", "alice"))); return oidcClients.newClient(cfg); } } Now, you can use this client like this: import org.eclipse.microprofile.rest.client.inject.RestClient; import jakarta.inject.Inject; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; import io.smallrye.mutiny.Uni; import io.quarkus.oidc.client.runtime.TokensHelper; @Path("/clients") public class OidcClientResource { @Inject OidcClientCreator oidcClientCreator; TokensHelper tokenHelper = new TokensHelper(); 
@Inject @RestClient RestClientWithTokenHeaderParam restClient; 1 @GET @Path("user-name") @Produces("text/plain") public Uni<String> getUserName() { return tokenHelper.getTokens(oidcClientCreator.getOidcClient()).onItem() .transformToUni(tokens -> restClient.getUserName("Bearer " + tokens.getAccessToken())); } } 1 See the RestClientWithTokenHeaderParam declaration in the Use OidcClient directly section. 1.1.6. Inject named OidcClient and tokens In case of multiple configured OidcClient objects, you can specify the OidcClient injection target by the extra qualifier @NamedOidcClient instead of working with OidcClients : package org.acme.security.openid.connect.client; import org.eclipse.microprofile.rest.client.inject.RestClient; import jakarta.inject.Inject; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; import io.smallrye.mutiny.Uni; import io.quarkus.oidc.client.NamedOidcClient; import io.quarkus.oidc.client.OidcClient; import io.quarkus.oidc.client.runtime.TokensHelper; @Path("/clients") public class OidcClientResource { @Inject @NamedOidcClient("jwt-secret") OidcClient client; TokensHelper tokenHelper = new TokensHelper(); @Inject @RestClient RestClientWithTokenHeaderParam restClient; 1 @GET @Path("user-name") @Produces("text/plain") public Uni<String> getUserName() { return tokenHelper.getTokens(client).onItem() .transformToUni(tokens -> restClient.getUserName("Bearer " + tokens.getAccessToken())); } } 1 See the RestClientWithTokenHeaderParam declaration in the Use OidcClient directly section. The same qualifier can be used to specify the OidcClient used for a Tokens injection: import java.io.IOException; import jakarta.annotation.Priority; import jakarta.enterprise.context.RequestScoped; import jakarta.inject.Inject; import jakarta.ws.rs.Priorities; import jakarta.ws.rs.client.ClientRequestContext; import jakarta.ws.rs.client.ClientRequestFilter; import jakarta.ws.rs.core.HttpHeaders; import jakarta.ws.rs.ext.Provider; import io.quarkus.oidc.client.NamedOidcClient; import io.quarkus.oidc.client.Tokens; @Provider @Priority(Priorities.AUTHENTICATION) @RequestScoped public class OidcClientRequestCustomFilter implements ClientRequestFilter { @Inject @NamedOidcClient("jwt-secret") Tokens tokens; @Override public void filter(ClientRequestContext requestContext) throws IOException { requestContext.getHeaders().add(HttpHeaders.AUTHORIZATION, "Bearer " + tokens.getAccessToken()); } } 1.1.7. Use OidcClient in RestClient Reactive ClientFilter Add the following Maven Dependency: <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-rest-client-oidc-filter</artifactId> </dependency> Note It will also bring io.quarkus:quarkus-oidc-client . quarkus-rest-client-oidc-filter extension provides io.quarkus.oidc.client.filter.OidcClientRequestReactiveFilter . It works similarly to the way OidcClientRequestFilter does (see Use OidcClient in MicroProfile RestClient client filter ) - it uses OidcClient to acquire the access token, refresh it if needed, and set it as an HTTP Authorization Bearer scheme value. The difference is that it works with Reactive RestClient and implements a non-blocking client filter that does not block the current IO thread when acquiring or refreshing the tokens. OidcClientRequestReactiveFilter delays an initial token acquisition until it is executed to avoid blocking an IO thread. 
You can selectively register OidcClientRequestReactiveFilter by using either io.quarkus.oidc.client.reactive.filter.OidcClientFilter or org.eclipse.microprofile.rest.client.annotation.RegisterProvider annotations: import org.eclipse.microprofile.rest.client.inject.RegisterRestClient; import io.quarkus.oidc.client.filter.OidcClientFilter; import io.smallrye.mutiny.Uni; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; @RegisterRestClient @OidcClientFilter @Path("/") public interface ProtectedResourceService { @GET Uni<String> getUserName(); } or import org.eclipse.microprofile.rest.client.annotation.RegisterProvider; import org.eclipse.microprofile.rest.client.inject.RegisterRestClient; import io.quarkus.oidc.client.reactive.filter.OidcClientRequestReactiveFilter; import io.smallrye.mutiny.Uni; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; @RegisterRestClient @RegisterProvider(OidcClientRequestReactiveFilter.class) @Path("/") public interface ProtectedResourceService { @GET Uni<String> getUserName(); } OidcClientRequestReactiveFilter uses a default OidcClient by default. A named OidcClient can be selected with a quarkus.rest-client-oidc-filter.client-name configuration property. You can also select OidcClient by setting the value attribute of the @OidcClientFilter annotation. The client name set through annotation has higher priority than the quarkus.rest-client-oidc-filter.client-name configuration property. For example, given this jwt-secret named OIDC client declaration, you can refer to this client like this: import org.eclipse.microprofile.rest.client.inject.RegisterRestClient; import io.quarkus.oidc.client.filter.OidcClientFilter; import io.smallrye.mutiny.Uni; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; @RegisterRestClient @OidcClientFilter("jwt-secret") @Path("/") public interface ProtectedResourceService { @GET Uni<String> getUserName(); } 1.1.8. Use OidcClient in RestClient ClientFilter Add the following Maven Dependency: <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-resteasy-client-oidc-filter</artifactId> </dependency> Note It will also bring io.quarkus:quarkus-oidc-client . quarkus-resteasy-client-oidc-filter extension provides io.quarkus.oidc.client.filter.OidcClientRequestFilter Jakarta REST ClientRequestFilter which uses OidcClient to acquire the access token, refresh it if needed, and set it as an HTTP Authorization Bearer scheme value. By default, this filter will get OidcClient to acquire the first pair of access and refresh tokens at its initialization time. If the access tokens are short-lived and refresh tokens are unavailable, then the token acquisition should be delayed with quarkus.oidc-client.early-tokens-acquisition=false . 
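A minimal properties sketch of that delayed-acquisition setting:
quarkus.oidc-client.early-tokens-acquisition=false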
You can selectively register OidcClientRequestFilter by using either io.quarkus.oidc.client.filter.OidcClientFilter or org.eclipse.microprofile.rest.client.annotation.RegisterProvider annotations: import org.eclipse.microprofile.rest.client.inject.RegisterRestClient; import io.quarkus.oidc.client.filter.OidcClientFilter; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; @RegisterRestClient @OidcClientFilter @Path("/") public interface ProtectedResourceService { @GET String getUserName(); } or import org.eclipse.microprofile.rest.client.annotation.RegisterProvider; import org.eclipse.microprofile.rest.client.inject.RegisterRestClient; import io.quarkus.oidc.client.filter.OidcClientRequestFilter; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; @RegisterRestClient @RegisterProvider(OidcClientRequestFilter.class) @Path("/") public interface ProtectedResourceService { @GET String getUserName(); } Alternatively, OidcClientRequestFilter can be registered automatically with all MP Rest or Jakarta REST clients if the quarkus.resteasy-client-oidc-filter.register-filter=true property is set. OidcClientRequestFilter uses a default OidcClient by default. A named OidcClient can be selected with a quarkus.resteasy-client-oidc-filter.client-name configuration property. You can also select OidcClient by setting the value attribute of the @OidcClientFilter annotation. The client name set through annotation has higher priority than the quarkus.resteasy-client-oidc-filter.client-name configuration property. For example, given this jwt-secret named OIDC client declaration, you can refer to this client like this: import org.eclipse.microprofile.rest.client.inject.RegisterRestClient; import io.quarkus.oidc.client.filter.OidcClientFilter; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; @RegisterRestClient @OidcClientFilter("jwt-secret") @Path("/") public interface ProtectedResourceService { @GET String getUserName(); } 1.1.9. Use a custom RestClient ClientFilter If you prefer, you can use your own custom filter and inject Tokens : import java.io.IOException; import jakarta.annotation.Priority; import jakarta.inject.Inject; import jakarta.ws.rs.Priorities; import jakarta.ws.rs.client.ClientRequestContext; import jakarta.ws.rs.client.ClientRequestFilter; import jakarta.ws.rs.core.HttpHeaders; import jakarta.ws.rs.ext.Provider; import io.quarkus.oidc.client.Tokens; @Provider @Priority(Priorities.AUTHENTICATION) public class OidcClientRequestCustomFilter implements ClientRequestFilter { @Inject Tokens tokens; @Override public void filter(ClientRequestContext requestContext) throws IOException { requestContext.getHeaders().add(HttpHeaders.AUTHORIZATION, "Bearer " + tokens.getAccessToken()); } } The Tokens producer will acquire and refresh the tokens, and the custom filter will decide how and when to use the token. You can also inject named Tokens , see Inject named OidcClient and Tokens 1.1.10. Refreshing access tokens OidcClientRequestReactiveFilter , OidcClientRequestFilter and Tokens producers will refresh the current expired access token if the refresh token is available. Additionally, the quarkus.oidc-client.refresh-token-time-skew property can be used for a preemptive access token refreshment to avoid sending nearly expired access tokens that might cause HTTP 401 errors. For example, if this property is set to 3S and the access token will expire in less than 3 seconds, then this token will be auto-refreshed. 
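A minimal properties sketch of the preemptive refresh setting described above:
quarkus.oidc-client.refresh-token-time-skew=3S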
If the access token needs to be refreshed, but no refresh token is available, then an attempt is made to acquire a new token by using a configured grant, such as client_credentials . Some OpenID Connect Providers will not return a refresh token in a client_credentials grant response. For example, starting from Keycloak 12, a refresh token will not be returned by default for client_credentials . The providers might also restrict the number of times a refresh token can be used. 1.1.11. Revoking access tokens If your OpenId Connect provider, such as Keycloak, supports a token revocation endpoint, then OidcClient#revokeAccessToken can be used to revoke the current access token. The revocation endpoint URL will be discovered alongside the token request URI or can be configured with quarkus.oidc-client.revoke-path . You might want to have the access token revoked if using this token with a REST client fails with an HTTP 401 status code or if the access token has already been used for a long time and you would like to refresh it. This can be achieved by requesting a token refresh by using a refresh token. However, if the refresh token is unavailable, you can refresh it by revoking it first and then requesting a new access token. 1.1.12. OidcClient authentication OidcClient has to authenticate to the OpenID Connect Provider for the client_credentials and other grant requests to succeed. All the OIDC Client Authentication options are supported, for example: client_secret_basic : quarkus.oidc-client.auth-server-url=http://localhost:8180/auth/realms/quarkus/ quarkus.oidc-client.client-id=quarkus-app quarkus.oidc-client.credentials.secret=mysecret or quarkus.oidc-client.auth-server-url=http://localhost:8180/auth/realms/quarkus/ quarkus.oidc-client.client-id=quarkus-app quarkus.oidc-client.credentials.client-secret.value=mysecret Or with the secret retrieved from a CredentialsProvider : quarkus.oidc-client.auth-server-url=http://localhost:8180/auth/realms/quarkus/ quarkus.oidc-client.client-id=quarkus-app # This key is used to retrieve a secret from the map of credentials returned from CredentialsProvider quarkus.oidc-client.credentials.client-secret.provider.key=mysecret-key # This is the keyring provided to the CredentialsProvider when looking up the secret, set only if required by the CredentialsProvider implementation quarkus.oidc.credentials.client-secret.provider.keyring-name=oidc # Set it only if more than one CredentialsProvider can be registered quarkus.oidc-client.credentials.client-secret.provider.name=oidc-credentials-provider client_secret_post : quarkus.oidc-client.auth-server-url=http://localhost:8180/auth/realms/quarkus/ quarkus.oidc-client.client-id=quarkus-app quarkus.oidc-client.credentials.client-secret.value=mysecret quarkus.oidc-client.credentials.client-secret.method=post client_secret_jwt , signature algorithm is HS256 : quarkus.oidc-client.auth-server-url=http://localhost:8180/auth/realms/quarkus/ quarkus.oidc-client.client-id=quarkus-app quarkus.oidc-client.credentials.jwt.secret=AyM1SysPpbyDfgZld3umj1qzKObwVMkoqQ-EstJQLr_T-1qS0gZH75aKtMN3Yj0iPS4hcgUuTwjAzZr1Z9CAow Or with the secret retrieved from a CredentialsProvider , signature algorithm is HS256 : quarkus.oidc-client.auth-server-url=http://localhost:8180/auth/realms/quarkus/ quarkus.oidc-client.client-id=quarkus-app # This is a key that will be used to retrieve a secret from the map of credentials returned from CredentialsProvider quarkus.oidc-client.credentials.jwt.secret-provider.key=mysecret-key # This is the keyring 
provided to the CredentialsProvider when looking up the secret, set only if required by the CredentialsProvider implementation quarkus.oidc.credentials.client-secret.provider.keyring-name=oidc # Set it only if more than one CredentialsProvider can be registered quarkus.oidc-client.credentials.jwt.secret-provider.name=oidc-credentials-provider private_key_jwt with the PEM key inlined in application.properties, and where the signature algorithm is RS256 : quarkus.oidc-client.auth-server-url=http://localhost:8180/auth/realms/quarkus/ quarkus.oidc-client.client-id=quarkus-app quarkus.oidc-client.credentials.jwt.key=Base64-encoded private key representation private_key_jwt with the PEM key file, signature algorithm is RS256 : quarkus.oidc-client.auth-server-url=http://localhost:8180/auth/realms/quarkus/ quarkus.oidc-client.client-id=quarkus-app quarkus.oidc-client.credentials.jwt.key-file=privateKey.pem private_key_jwt with the keystore file, signature algorithm is RS256 : quarkus.oidc-client.auth-server-url=http://localhost:8180/auth/realms/quarkus/ quarkus.oidc-client.client-id=quarkus-app quarkus.oidc-client.credentials.jwt.key-store-file=keystore.jks quarkus.oidc-client.credentials.jwt.key-store-password=mypassword quarkus.oidc-client.credentials.jwt.key-password=mykeypassword # Private key alias inside the keystore quarkus.oidc-client.credentials.jwt.key-id=mykeyAlias Using client_secret_jwt or private_key_jwt authentication methods ensures that no client secret goes over the wire. 1.1.12.1. Additional JWT authentication options If either client_secret_jwt or private_key_jwt authentication methods are used, then the JWT signature algorithm, key identifier, audience, subject, and issuer can be customized, for example: # private_key_jwt client authentication quarkus.oidc-client.auth-server-url=http://localhost:8180/auth/realms/quarkus/ quarkus.oidc-client.client-id=quarkus-app quarkus.oidc-client.credentials.jwt.key-file=privateKey.pem # This is a token key identifier 'kid' header - set it if your OpenID Connect provider requires it. # Note that if the key is represented in a JSON Web Key (JWK) format with a `kid` property, then # using 'quarkus.oidc-client.credentials.jwt.token-key-id' is unnecessary. quarkus.oidc-client.credentials.jwt.token-key-id=mykey # Use the RS512 signature algorithm instead of the default RS256 quarkus.oidc-client.credentials.jwt.signature-algorithm=RS512 # The token endpoint URL is the default audience value; use the base address URL instead: quarkus.oidc-client.credentials.jwt.audience=USD{quarkus.oidc-client.auth-server-url} # custom subject instead of the client ID: quarkus.oidc-client.credentials.jwt.subject=custom-subject # custom issuer instead of the client ID: quarkus.oidc-client.credentials.jwt.issuer=custom-issuer 1.1.12.2. JWT Bearer RFC7523 explains how JWT Bearer tokens can be used to authenticate clients; see the Using JWTs for Client Authentication section for more information. It can be enabled as follows: quarkus.oidc-client.auth-server-url=USD{auth-server-url} quarkus.oidc-client.client-id=quarkus-app quarkus.oidc-client.credentials.jwt.source=bearer In this case, the JWT bearer token must be provided as a client_assertion parameter to the OIDC client. You can use OidcClient methods for acquiring or refreshing tokens which accept additional grant parameters, for example, oidcClient.getTokens(Map.of("client_assertion", "ey... ")) . If you work with the OIDC client filters then you must register a custom filter which will provide this assertion.
Here is an example of the Quarkus REST (formerly RESTEasy Reactive) custom filter: package io.quarkus.it.keycloak; import java.util.Map; import io.quarkus.oidc.client.reactive.filter.runtime.AbstractOidcClientRequestReactiveFilter; import io.quarkus.oidc.common.runtime.OidcConstants; import jakarta.annotation.Priority; import jakarta.ws.rs.Priorities; @Priority(Priorities.AUTHENTICATION) public class OidcClientRequestCustomFilter extends AbstractOidcClientRequestReactiveFilter { @Override protected Map<String, String> additionalParameters() { return Map.of(OidcConstants.CLIENT_ASSERTION, "ey..."); } } Here is an example of the RESTEasy Classic custom filter: package io.quarkus.it.keycloak; import java.util.Map; import io.quarkus.oidc.client.filter.runtime.AbstractOidcClientRequestFilter; import io.quarkus.oidc.common.runtime.OidcConstants; import jakarta.annotation.Priority; import jakarta.ws.rs.Priorities; @Priority(Priorities.AUTHENTICATION) public class OidcClientRequestCustomFilter extends AbstractOidcClientRequestFilter { @Override protected Map<String, String> additionalParameters() { return Map.of(OidcConstants.CLIENT_ASSERTION, "ey..."); } } 1.1.12.3. Apple POST JWT Apple OpenID Connect Provider uses a client_secret_post method where a secret is a JWT produced with a private_key_jwt authentication method but with Apple account-specific issuer and subject properties. quarkus-oidc-client supports a non-standard client_secret_post_jwt authentication method, which can be configured as follows: quarkus.oidc-client.auth-server-url=USD{apple.url} quarkus.oidc-client.client-id=USD{apple.client-id} quarkus.oidc-client.credentials.client-secret.method=post-jwt quarkus.oidc-client.credentials.jwt.key-file=ecPrivateKey.pem quarkus.oidc-client.credentials.jwt.signature-algorithm=ES256 quarkus.oidc-client.credentials.jwt.subject=USD{apple.subject} quarkus.oidc-client.credentials.jwt.issuer=USD{apple.issuer} 1.1.12.4. Mutual TLS Some OpenID Connect Providers require that a client is authenticated as part of the mutual TLS ( mTLS ) authentication process. quarkus-oidc-client can be configured as follows to support mTLS : quarkus.oidc-client.tls.verification=certificate-validation # Keystore configuration quarkus.oidc-client.tls.key-store-file=client-keystore.jks quarkus.oidc-client.tls.key-store-password=USD{key-store-password} # Add more keystore properties if needed: #quarkus.oidc-client.tls.key-store-alias=keyAlias #quarkus.oidc-client.tls.key-store-alias-password=keyAliasPassword # Truststore configuration quarkus.oidc-client.tls.trust-store-file=client-truststore.jks quarkus.oidc-client.tls.trust-store-password=USD{trust-store-password} # Add more truststore properties if needed: #quarkus.oidc-client.tls.trust-store-alias=certAlias 1.1.13. Testing Start by adding the following dependencies to your test project: <dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-junit5</artifactId> <scope>test</scope> </dependency> <dependency> <groupId>org.awaitility</groupId> <artifactId>awaitility</artifactId> <scope>test</scope> </dependency> 1.1.13.1. Wiremock Add the following dependencies to your test project: <dependency> <groupId>org.wiremock</groupId> <artifactId>wiremock</artifactId> <scope>test</scope> <version>USD{wiremock.version}</version> 1 </dependency> 1 Use a proper Wiremock version. All available versions can be found here . 
Write a Wiremock-based QuarkusTestResourceLifecycleManager , for example: package io.quarkus.it.keycloak; import static com.github.tomakehurst.wiremock.client.WireMock.matching; import static com.github.tomakehurst.wiremock.core.WireMockConfiguration.wireMockConfig; import java.util.HashMap; import java.util.Map; import com.github.tomakehurst.wiremock.WireMockServer; import com.github.tomakehurst.wiremock.client.WireMock; import com.github.tomakehurst.wiremock.core.Options.ChunkedEncodingPolicy; import io.quarkus.test.common.QuarkusTestResourceLifecycleManager; public class KeycloakRealmResourceManager implements QuarkusTestResourceLifecycleManager { private WireMockServer server; @Override public Map<String, String> start() { server = new WireMockServer(wireMockConfig().dynamicPort().useChunkedTransferEncoding(ChunkedEncodingPolicy.NEVER)); server.start(); server.stubFor(WireMock.post("/tokens") .withRequestBody(matching("grant_type=password&username=alice&password=alice")) .willReturn(WireMock .aResponse() .withHeader("Content-Type", "application/json") .withBody( "{\"access_token\":\"access_token_1\", \"expires_in\":4, \"refresh_token\":\"refresh_token_1\"}"))); server.stubFor(WireMock.post("/tokens") .withRequestBody(matching("grant_type=refresh_token&refresh_token=refresh_token_1")) .willReturn(WireMock .aResponse() .withHeader("Content-Type", "application/json") .withBody( "{\"access_token\":\"access_token_2\", \"expires_in\":4, \"refresh_token\":\"refresh_token_1\"}"))); Map<String, String> conf = new HashMap<>(); conf.put("keycloak.url", server.baseUrl()); return conf; } @Override public synchronized void stop() { if (server != null) { server.stop(); server = null; } } } Prepare the REST test endpoints. You can have the test front-end endpoint, which uses the injected MP REST client with a registered OidcClient filter, call the downstream endpoint, which echoes the token back. For example, see the integration-tests/oidc-client-wiremock module in the main Quarkus repository. Set application.properties , for example: # Use the 'keycloak.url' property set by the test KeycloakRealmResourceManager quarkus.oidc-client.auth-server-url=${keycloak.url:replaced-by-test-resource} quarkus.oidc-client.discovery-enabled=false quarkus.oidc-client.token-path=/tokens quarkus.oidc-client.client-id=quarkus-service-app quarkus.oidc-client.credentials.secret=secret quarkus.oidc-client.grant.type=password quarkus.oidc-client.grant-options.password.username=alice quarkus.oidc-client.grant-options.password.password=alice Finally, write the test code. Given the Wiremock-based resource above, the first test invocation should return the access_token_1 access token, which expires in 4 seconds. Use Awaitility to wait for about 5 seconds; the next invocation should then return the access_token_2 access token, which confirms that the expired access_token_1 token has been refreshed. A minimal test sketch is shown at the end of this section. 1.1.13.2. Keycloak If you work with Keycloak, you can use the same approach described in the OpenID Connect Bearer Token Integration testing Keycloak section.
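Whichever test resource you use, the test class itself can stay small. The following is a minimal sketch of such a test; it assumes a /frontend/user-name endpoint that calls the downstream service through the filtered REST client and returns the echoed token, and it assumes the rest-assured test dependency is on the classpath. The class name and the endpoint path are illustrative only:

package io.quarkus.it.keycloak;

import static org.awaitility.Awaitility.await;
import static org.hamcrest.Matchers.equalTo;

import java.time.Duration;

import org.junit.jupiter.api.Test;

import io.quarkus.test.common.QuarkusTestResource;
import io.quarkus.test.junit.QuarkusTest;
import io.restassured.RestAssured;

@QuarkusTest
@QuarkusTestResource(KeycloakRealmResourceManager.class)
public class OidcClientTest {

    @Test
    public void testAccessTokenIsRefreshed() {
        // The first call is expected to use the initially acquired token
        RestAssured.when().get("/frontend/user-name")
                .then().statusCode(200).body(equalTo("access_token_1"));

        // The stubbed token expires in 4 seconds; keep polling until the
        // refreshed token is returned by the downstream echo endpoint
        await().atMost(Duration.ofSeconds(10)).untilAsserted(() ->
                RestAssured.when().get("/frontend/user-name")
                        .then().statusCode(200).body(equalTo("access_token_2")));
    }
}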
1.1.14. How to check the errors in the logs Enable io.quarkus.oidc.client.runtime.OidcClientImpl TRACE level logging to see more details about the token acquisition and refresh errors: quarkus.log.category."io.quarkus.oidc.client.runtime.OidcClientImpl".level=TRACE quarkus.log.category."io.quarkus.oidc.client.runtime.OidcClientImpl".min-level=TRACE Enable io.quarkus.oidc.client.runtime.OidcClientRecorder TRACE level logging to see more details about the OidcClient initialization errors: quarkus.log.category."io.quarkus.oidc.client.runtime.OidcClientRecorder".level=TRACE quarkus.log.category."io.quarkus.oidc.client.runtime.OidcClientRecorder".min-level=TRACE 1.2. OIDC request filters You can filter OIDC requests made by Quarkus to the OIDC provider by registering one or more OidcRequestFilter implementations, which can update or add new request headers. For example, a filter can analyze the request body and add its digest as a new header value: package io.quarkus.it.keycloak; import jakarta.enterprise.context.ApplicationScoped; import io.quarkus.arc.Unremovable; import io.quarkus.oidc.common.OidcRequestContextProperties; import io.quarkus.oidc.common.OidcRequestFilter; import io.vertx.core.http.HttpMethod; import io.vertx.mutiny.core.buffer.Buffer; import io.vertx.mutiny.ext.web.client.HttpRequest; @ApplicationScoped @Unremovable public class OidcRequestCustomizer implements OidcRequestFilter { @Override public void filter(HttpRequest<Buffer> request, Buffer buffer, OidcRequestContextProperties contextProperties) { HttpMethod method = request.method(); String uri = request.uri(); if (method == HttpMethod.POST && uri.endsWith("/service") && buffer != null) { request.putHeader("Digest", calculateDigest(buffer.toString())); } } private String calculateDigest(String bodyString) { /* Apply the digest algorithm required by the target service; for example, a Base64-encoded SHA-256 digest: */ try { byte[] hash = java.security.MessageDigest.getInstance("SHA-256").digest(bodyString.getBytes(java.nio.charset.StandardCharsets.UTF_8)); return "SHA-256=" + java.util.Base64.getEncoder().encodeToString(hash); } catch (java.security.NoSuchAlgorithmException e) { throw new IllegalStateException(e); } } } 1.3. Token Propagation for Quarkus REST The quarkus-rest-client-oidc-token-propagation extension provides a REST Client filter, io.quarkus.oidc.token.propagation.reactive.AccessTokenRequestReactiveFilter , that simplifies the propagation of authentication information. This filter propagates the bearer token present in the currently active request, or the token acquired from the authorization code flow mechanism, as the HTTP Authorization header's Bearer scheme value. You can selectively register AccessTokenRequestReactiveFilter by using either the io.quarkus.oidc.token.propagation.AccessToken or the org.eclipse.microprofile.rest.client.annotation.RegisterProvider annotation, for example: import org.eclipse.microprofile.rest.client.inject.RegisterRestClient; import io.quarkus.oidc.token.propagation.AccessToken; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; @RegisterRestClient @AccessToken @Path("/") public interface ProtectedResourceService { @GET String getUserName(); } or import org.eclipse.microprofile.rest.client.annotation.RegisterProvider; import org.eclipse.microprofile.rest.client.inject.RegisterRestClient; import io.quarkus.oidc.token.propagation.reactive.AccessTokenRequestReactiveFilter; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; @RegisterRestClient @RegisterProvider(AccessTokenRequestReactiveFilter.class) @Path("/") public interface ProtectedResourceService { @GET String getUserName(); }
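Once one of these interfaces is registered, the client can be injected and used like any other MicroProfile REST client; AccessTokenRequestReactiveFilter adds the Authorization Bearer header automatically. The following is a minimal usage sketch; the resource class name and path are illustrative only:

package io.quarkus.it.keycloak;

import org.eclipse.microprofile.rest.client.inject.RestClient;

import jakarta.inject.Inject;
import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.Produces;

@Path("/frontend")
public class FrontendResource {

    // The current request's bearer token is propagated by the registered filter,
    // so no Authorization header needs to be set manually here
    @Inject
    @RestClient
    ProtectedResourceService protectedResourceService;

    @GET
    @Path("user-name")
    @Produces("text/plain")
    public String getUserName() {
        return protectedResourceService.getUserName();
    }
}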
Additionally, AccessTokenRequestReactiveFilter can support a complex application that needs to exchange the tokens before propagating them. If you work with Keycloak or another OIDC provider that supports a Token Exchange token grant, then you can configure AccessTokenRequestReactiveFilter to exchange the token like this: quarkus.oidc-client.auth-server-url=http://localhost:8180/auth/realms/quarkus quarkus.oidc-client.client-id=quarkus-app quarkus.oidc-client.credentials.secret=secret quarkus.oidc-client.grant.type=exchange quarkus.oidc-client.grant-options.exchange.audience=quarkus-app-exchange quarkus.rest-client-oidc-token-propagation.exchange-token=true 1 1 Please note that the exchange-token configuration property is ignored when the OidcClient name is set with the io.quarkus.oidc.token.propagation.AccessToken#exchangeTokenClient annotation attribute. Note AccessTokenRequestReactiveFilter will use OidcClient to exchange the current token, and you can use quarkus.oidc-client.grant-options.exchange to set the additional exchange properties expected by your OpenID Connect Provider. If you work with providers such as Azure that require using the JWT bearer token grant to exchange the current token, then you can configure AccessTokenRequestReactiveFilter to exchange the token like this: quarkus.oidc-client.auth-server-url=${azure.provider.url} quarkus.oidc-client.client-id=quarkus-app quarkus.oidc-client.credentials.secret=secret quarkus.oidc-client.grant.type=jwt quarkus.oidc-client.grant-options.jwt.requested_token_use=on_behalf_of quarkus.oidc-client.scopes=https://graph.microsoft.com/user.read,offline_access quarkus.rest-client-oidc-token-propagation.exchange-token=true By default, AccessTokenRequestReactiveFilter uses the default OidcClient . A named OidcClient can be selected with the quarkus.rest-client-oidc-token-propagation.client-name configuration property or with the io.quarkus.oidc.token.propagation.AccessToken#exchangeTokenClient annotation attribute. 1.4. Token Propagation for RESTEasy Classic The quarkus-resteasy-client-oidc-token-propagation extension provides two Jakarta REST jakarta.ws.rs.client.ClientRequestFilter implementations that simplify the propagation of authentication information. io.quarkus.oidc.token.propagation.AccessTokenRequestFilter propagates the Bearer token present in the currently active request, or the token acquired from the Authorization code flow mechanism, as the HTTP Authorization header's Bearer scheme value. The io.quarkus.oidc.token.propagation.JsonWebTokenRequestFilter provides the same functionality and, in addition, provides support for JWT tokens. When you need to propagate the current Authorization Code Flow access token, the immediate token propagation works well, because the code flow access tokens (as opposed to ID tokens) are meant to be propagated for the current Quarkus endpoint to access the remote services on behalf of the currently authenticated user. However, direct end-to-end Bearer token propagation should be avoided. For example, consider the call chain Client -> Service A -> Service B , where Service B receives a token sent by Client to Service A . In such cases, Service B cannot distinguish whether the token came from Service A or from Client directly. For Service B to verify the token came from Service A , it should be able to assert new issuer and audience claims. Additionally, a complex application might need to exchange or update the tokens before propagating them. For example, the access context might be different when Service A is accessing Service B . In this case, Service A might be granted a narrower or completely different set of scopes to access Service B .
The following sections show how AccessTokenRequestFilter and JsonWebTokenRequestFilter can help. 1.4.1. RestClient AccessTokenRequestFilter AccessTokenRequestFilter treats all tokens as Strings and, as such, it can work with both JWT and opaque tokens. You can selectively register AccessTokenRequestFilter by using either io.quarkus.oidc.token.propagation.AccessToken or org.eclipse.microprofile.rest.client.annotation.RegisterProvider , for example: import org.eclipse.microprofile.rest.client.inject.RegisterRestClient; import io.quarkus.oidc.token.propagation.AccessToken; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; @RegisterRestClient @AccessToken @Path("/") public interface ProtectedResourceService { @GET String getUserName(); } or import org.eclipse.microprofile.rest.client.annotation.RegisterProvider; import org.eclipse.microprofile.rest.client.inject.RegisterRestClient; import io.quarkus.oidc.token.propagation.AccessTokenRequestFilter; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; @RegisterRestClient @RegisterProvider(AccessTokenRequestFilter.class) @Path("/") public interface ProtectedResourceService { @GET String getUserName(); } Alternatively, AccessTokenRequestFilter can be registered automatically with all MicroProfile REST or Jakarta REST clients if the quarkus.resteasy-client-oidc-token-propagation.register-filter property is set to true and the quarkus.resteasy-client-oidc-token-propagation.json-web-token property is set to false (which is the default value). 1.4.1.1. Exchange token before propagation If the current access token needs to be exchanged before propagation and you work with Keycloak or another OpenID Connect Provider that supports a Token Exchange token grant, then you can configure AccessTokenRequestFilter like this: quarkus.oidc-client.auth-server-url=http://localhost:8180/auth/realms/quarkus quarkus.oidc-client.client-id=quarkus-app quarkus.oidc-client.credentials.secret=secret quarkus.oidc-client.grant.type=exchange quarkus.oidc-client.grant-options.exchange.audience=quarkus-app-exchange quarkus.resteasy-client-oidc-token-propagation.exchange-token=true If you work with providers such as Azure that require using the JWT bearer token grant to exchange the current token, then you can configure AccessTokenRequestFilter to exchange the token like this: quarkus.oidc-client.auth-server-url=${azure.provider.url} quarkus.oidc-client.client-id=quarkus-app quarkus.oidc-client.credentials.secret=secret quarkus.oidc-client.grant.type=jwt quarkus.oidc-client.grant-options.jwt.requested_token_use=on_behalf_of quarkus.oidc-client.scopes=https://graph.microsoft.com/user.read,offline_access quarkus.resteasy-client-oidc-token-propagation.exchange-token=true Note AccessTokenRequestFilter will use OidcClient to exchange the current token, and you can use quarkus.oidc-client.grant-options.exchange to set the additional exchange properties expected by your OpenID Connect Provider. By default, AccessTokenRequestFilter uses the default OidcClient . A named OidcClient can be selected with the quarkus.resteasy-client-oidc-token-propagation.client-name configuration property. 1.4.2. RestClient JsonWebTokenRequestFilter Using JsonWebTokenRequestFilter is recommended if you work with Bearer JWT tokens whose claims, such as issuer and audience, need to be modified and the updated tokens secured (for example, re-signed) again. It expects an injected org.eclipse.microprofile.jwt.JsonWebToken and, therefore, will not work with opaque tokens.
Also, if your OpenID Connect Provider supports a Token Exchange protocol, then it is recommended to use AccessTokenRequestFilter instead - as both JWT and opaque bearer tokens can be securely exchanged with AccessTokenRequestFilter . JsonWebTokenRequestFilter makes it easy for Service A implementations to update the injected org.eclipse.microprofile.jwt.JsonWebToken with the new issuer and audience claim values and secure the updated token again with a new signature. The only difficult step is ensuring that Service A has a signing key which should be provisioned from a secure file system or remote secure storage such as Vault. You can selectively register JsonWebTokenRequestFilter by using either io.quarkus.oidc.token.propagation.JsonWebToken or org.eclipse.microprofile.rest.client.annotation.RegisterProvider , for example: import org.eclipse.microprofile.rest.client.inject.RegisterRestClient; import io.quarkus.oidc.token.propagation.JsonWebToken; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; @RegisterRestClient @JsonWebToken @Path("/") public interface ProtectedResourceService { @GET String getUserName(); } or import org.eclipse.microprofile.rest.client.annotation.RegisterProvider; import org.eclipse.microprofile.rest.client.inject.RegisterRestClient; import io.quarkus.oidc.token.propagation.JsonWebTokenRequestFilter; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; @RegisterRestClient @RegisterProvider(JsonWebTokenRequestFilter.class) @Path("/") public interface ProtectedResourceService { @GET String getUserName(); } Alternatively, JsonWebTokenRequestFilter can be registered automatically with all MicroProfile REST or Jakarta REST clients if both quarkus.resteasy-client-oidc-token-propagation.register-filter and quarkus.resteasy-client-oidc-token-propagation.json-web-token properties are set to true . 1.4.2.1. Update token before propagation If the injected token needs to have its iss (issuer) or aud (audience) claims updated and secured again with a new signature, then you can configure JsonWebTokenRequestFilter like this: quarkus.resteasy-client-oidc-token-propagation.secure-json-web-token=true smallrye.jwt.sign.key.location=/privateKey.pem # Set a new issuer smallrye.jwt.new-token.issuer=http://frontend-resource # Set a new audience smallrye.jwt.new-token.audience=http://downstream-resource # Override the existing token issuer and audience claims if they are already set smallrye.jwt.new-token.override-matching-claims=true As mentioned, use AccessTokenRequestFilter if you work with Keycloak or an OpenID Connect Provider that supports a Token Exchange protocol. 1.4.3. Testing You can generate the tokens as described in OpenID Connect Bearer Token Integration testing section. Prepare the REST test endpoints. You can have the test front-end endpoint, which uses the injected MP REST client with a registered token propagation filter, call the downstream endpoint. For example, see the integration-tests/resteasy-client-oidc-token-propagation in the main Quarkus repository. 1.5. Configuration reference 1.5.1. OIDC client Configuration property fixed at build time - All other configuration properties are overridable at runtime Configuration property Type Default quarkus.oidc-client.enabled If the OIDC client extension is enabled. Environment variable: QUARKUS_OIDC_CLIENT_ENABLED boolean true quarkus.oidc-client.auth-server-url The base URL of the OpenID Connect (OIDC) server, for example, https://host:port/auth . 
Do not set this property if the public key verification ( public-key ) or certificate chain verification only ( certificate-chain ) is required. The OIDC discovery endpoint is called by default by appending a .well-known/openid-configuration path to this URL. For Keycloak, use https://host:port/realms/{realm} , replacing {realm} with the Keycloak realm name. Environment variable: QUARKUS_OIDC_CLIENT_AUTH_SERVER_URL string quarkus.oidc-client.discovery-enabled Discovery of the OIDC endpoints. If not enabled, you must configure the OIDC endpoint URLs individually. Environment variable: QUARKUS_OIDC_CLIENT_DISCOVERY_ENABLED boolean true quarkus.oidc-client.token-path The OIDC token endpoint that issues access and refresh tokens; specified as a relative path or absolute URL. Set if discovery-enabled is false or a discovered token endpoint path must be customized. Environment variable: QUARKUS_OIDC_CLIENT_TOKEN_PATH string quarkus.oidc-client.revoke-path The relative path or absolute URL of the OIDC token revocation endpoint. Environment variable: QUARKUS_OIDC_CLIENT_REVOKE_PATH string quarkus.oidc-client.client-id The client id of the application. Each application has a client id that is used to identify the application. Setting the client id is not required if application-type is service and no token introspection is required. Environment variable: QUARKUS_OIDC_CLIENT_CLIENT_ID string quarkus.oidc-client.client-name The client name of the application. It is meant to represent a human readable description of the application which you may provide when an application (client) is registered in an OpenId Connect provider's dashboard. For example, you can set this property to have more informative log messages which record an activity of the given client. Environment variable: QUARKUS_OIDC_CLIENT_CLIENT_NAME string quarkus.oidc-client.connection-delay The duration to attempt the initial connection to an OIDC server. For example, setting the duration to 20S allows 10 retries, each 2 seconds apart. This property is only effective when the initial OIDC connection is created. For dropped connections, use the connection-retry-count property instead. Environment variable: QUARKUS_OIDC_CLIENT_CONNECTION_DELAY Duration quarkus.oidc-client.connection-retry-count The number of times to retry re-establishing an existing OIDC connection if it is temporarily lost. Different from connection-delay , which applies only to initial connection attempts. For instance, if a request to the OIDC token endpoint fails due to a connection issue, it will be retried as per this setting. Environment variable: QUARKUS_OIDC_CLIENT_CONNECTION_RETRY_COUNT int 3 quarkus.oidc-client.connection-timeout The number of seconds after which the current OIDC connection request times out. Environment variable: QUARKUS_OIDC_CLIENT_CONNECTION_TIMEOUT Duration 10S quarkus.oidc-client.use-blocking-dns-lookup Whether DNS lookup should be performed on the worker thread. Use this option when you can see logged warnings about blocked Vert.x event loop by HTTP requests to OIDC server. Environment variable: QUARKUS_OIDC_CLIENT_USE_BLOCKING_DNS_LOOKUP boolean false quarkus.oidc-client.max-pool-size The maximum size of the connection pool used by the WebClient. Environment variable: QUARKUS_OIDC_CLIENT_MAX_POOL_SIZE int quarkus.oidc-client.credentials.secret The client secret used by the client_secret_basic authentication method. Must be set unless a secret is set in client-secret or jwt client authentication is required. 
You can use client-secret.value instead, but both properties are mutually exclusive. Environment variable: QUARKUS_OIDC_CLIENT_CREDENTIALS_SECRET string quarkus.oidc-client.credentials.client-secret.value The client secret value. This value is ignored if credentials.secret is set. Must be set unless a secret is set in client-secret or jwt client authentication is required. Environment variable: QUARKUS_OIDC_CLIENT_CREDENTIALS_CLIENT_SECRET_VALUE string quarkus.oidc-client.credentials.client-secret.provider.name The CredentialsProvider bean name, which should only be set if more than one CredentialsProvider is registered Environment variable: QUARKUS_OIDC_CLIENT_CREDENTIALS_CLIENT_SECRET_PROVIDER_NAME string quarkus.oidc-client.credentials.client-secret.provider.keyring-name The CredentialsProvider keyring name. The keyring name is only required when the CredentialsProvider being used requires the keyring name to look up the secret, which is often the case when a CredentialsProvider is shared by multiple extensions to retrieve credentials from a more dynamic source like a vault instance or secret manager Environment variable: QUARKUS_OIDC_CLIENT_CREDENTIALS_CLIENT_SECRET_PROVIDER_KEYRING_NAME string quarkus.oidc-client.credentials.client-secret.provider.key The CredentialsProvider client secret key Environment variable: QUARKUS_OIDC_CLIENT_CREDENTIALS_CLIENT_SECRET_PROVIDER_KEY string quarkus.oidc-client.credentials.client-secret.method The authentication method. If the clientSecret.value secret is set, this method is basic by default. Environment variable: QUARKUS_OIDC_CLIENT_CREDENTIALS_CLIENT_SECRET_METHOD basic : client_secret_basic (default): The client id and secret are submitted with the HTTP Authorization Basic scheme. post : client_secret_post : The client id and secret are submitted as the client_id and client_secret form parameters. post-jwt : client_secret_jwt : The client id and generated JWT secret are submitted as the client_id and client_secret form parameters. query : client id and secret are submitted as HTTP query parameters. This option is only supported by the OIDC extension. quarkus.oidc-client.credentials.jwt.source JWT token source: OIDC provider client or an existing JWT bearer token. Environment variable: QUARKUS_OIDC_CLIENT_CREDENTIALS_JWT_SOURCE client , bearer client quarkus.oidc-client.credentials.jwt.secret If provided, indicates that JWT is signed using a secret key. Environment variable: QUARKUS_OIDC_CLIENT_CREDENTIALS_JWT_SECRET string quarkus.oidc-client.credentials.jwt.secret-provider.name The CredentialsProvider bean name, which should only be set if more than one CredentialsProvider is registered Environment variable: QUARKUS_OIDC_CLIENT_CREDENTIALS_JWT_SECRET_PROVIDER_NAME string quarkus.oidc-client.credentials.jwt.secret-provider.keyring-name The CredentialsProvider keyring name. The keyring name is only required when the CredentialsProvider being used requires the keyring name to look up the secret, which is often the case when a CredentialsProvider is shared by multiple extensions to retrieve credentials from a more dynamic source like a vault instance or secret manager Environment variable: QUARKUS_OIDC_CLIENT_CREDENTIALS_JWT_SECRET_PROVIDER_KEYRING_NAME string quarkus.oidc-client.credentials.jwt.secret-provider.key The CredentialsProvider client secret key Environment variable: QUARKUS_OIDC_CLIENT_CREDENTIALS_JWT_SECRET_PROVIDER_KEY string quarkus.oidc-client.credentials.jwt.key String representation of a private key. 
If provided, indicates that JWT is signed using a private key in PEM or JWK format. You can use the signature-algorithm property to override the default key algorithm, RS256 . Environment variable: QUARKUS_OIDC_CLIENT_CREDENTIALS_JWT_KEY string quarkus.oidc-client.credentials.jwt.key-file If provided, indicates that JWT is signed using a private key in PEM or JWK format. You can use the signature-algorithm property to override the default key algorithm, RS256 . Environment variable: QUARKUS_OIDC_CLIENT_CREDENTIALS_JWT_KEY_FILE string quarkus.oidc-client.credentials.jwt.key-store-file If provided, indicates that JWT is signed using a private key from a keystore. Environment variable: QUARKUS_OIDC_CLIENT_CREDENTIALS_JWT_KEY_STORE_FILE string quarkus.oidc-client.credentials.jwt.key-store-password A parameter to specify the password of the keystore file. Environment variable: QUARKUS_OIDC_CLIENT_CREDENTIALS_JWT_KEY_STORE_PASSWORD string quarkus.oidc-client.credentials.jwt.key-id The private key id or alias. Environment variable: QUARKUS_OIDC_CLIENT_CREDENTIALS_JWT_KEY_ID string quarkus.oidc-client.credentials.jwt.key-password The private key password. Environment variable: QUARKUS_OIDC_CLIENT_CREDENTIALS_JWT_KEY_PASSWORD string quarkus.oidc-client.credentials.jwt.audience The JWT audience ( aud ) claim value. By default, the audience is set to the address of the OpenId Connect Provider's token endpoint. Environment variable: QUARKUS_OIDC_CLIENT_CREDENTIALS_JWT_AUDIENCE string quarkus.oidc-client.credentials.jwt.token-key-id The key identifier of the signing key added as a JWT kid header. Environment variable: QUARKUS_OIDC_CLIENT_CREDENTIALS_JWT_TOKEN_KEY_ID string quarkus.oidc-client.credentials.jwt.issuer The issuer of the signing key added as a JWT iss claim. The default value is the client id. Environment variable: QUARKUS_OIDC_CLIENT_CREDENTIALS_JWT_ISSUER string quarkus.oidc-client.credentials.jwt.subject Subject of the signing key added as a JWT sub claim The default value is the client id. Environment variable: QUARKUS_OIDC_CLIENT_CREDENTIALS_JWT_SUBJECT string quarkus.oidc-client.credentials.jwt.claims."claim-name" Additional claims. Environment variable: QUARKUS_OIDC_CLIENT_CREDENTIALS_JWT_CLAIMS__CLAIM_NAME_ Map<String,String> quarkus.oidc-client.credentials.jwt.signature-algorithm The signature algorithm used for the key-file property. Supported values: RS256 (default), RS384 , RS512 , PS256 , PS384 , PS512 , ES256 , ES384 , ES512 , HS256 , HS384 , HS512 . Environment variable: QUARKUS_OIDC_CLIENT_CREDENTIALS_JWT_SIGNATURE_ALGORITHM string quarkus.oidc-client.credentials.jwt.lifespan The JWT lifespan in seconds. This value is added to the time at which the JWT was issued to calculate the expiration time. Environment variable: QUARKUS_OIDC_CLIENT_CREDENTIALS_JWT_LIFESPAN int 10 quarkus.oidc-client.credentials.jwt.assertion If true then the client authentication token is a JWT bearer grant assertion. Instead of producing 'client_assertion' and 'client_assertion_type' form properties, only 'assertion' is produced. This option is only supported by the OIDC client extension. Environment variable: QUARKUS_OIDC_CLIENT_CREDENTIALS_JWT_ASSERTION boolean false quarkus.oidc-client.proxy.host The host name or IP address of the Proxy. Note: If the OIDC adapter requires a Proxy to talk with the OIDC server (Provider), set this value to enable the usage of a Proxy. Environment variable: QUARKUS_OIDC_CLIENT_PROXY_HOST string quarkus.oidc-client.proxy.port The port number of the Proxy. 
The default value is 80 . Environment variable: QUARKUS_OIDC_CLIENT_PROXY_PORT int 80 quarkus.oidc-client.proxy.username The username, if the Proxy needs authentication. Environment variable: QUARKUS_OIDC_CLIENT_PROXY_USERNAME string quarkus.oidc-client.proxy.password The password, if the Proxy needs authentication. Environment variable: QUARKUS_OIDC_CLIENT_PROXY_PASSWORD string quarkus.oidc-client.tls.verification Certificate validation and hostname verification, which can be one of the following Verification values. Default is required . Environment variable: QUARKUS_OIDC_CLIENT_TLS_VERIFICATION required : Certificates are validated and hostname verification is enabled. This is the default value. certificate-validation : Certificates are validated but hostname verification is disabled. none : All certificates are trusted and hostname verification is disabled. quarkus.oidc-client.tls.key-store-file An optional keystore that holds the certificate information instead of specifying separate files. Environment variable: QUARKUS_OIDC_CLIENT_TLS_KEY_STORE_FILE path quarkus.oidc-client.tls.key-store-file-type The type of the keystore file. If not given, the type is automatically detected based on the file name. Environment variable: QUARKUS_OIDC_CLIENT_TLS_KEY_STORE_FILE_TYPE string quarkus.oidc-client.tls.key-store-provider The provider of the keystore file. If not given, the provider is automatically detected based on the keystore file type. Environment variable: QUARKUS_OIDC_CLIENT_TLS_KEY_STORE_PROVIDER string quarkus.oidc-client.tls.key-store-password The password of the keystore file. If not given, the default value, password , is used. Environment variable: QUARKUS_OIDC_CLIENT_TLS_KEY_STORE_PASSWORD string quarkus.oidc-client.tls.key-store-key-alias The alias of a specific key in the keystore. When SNI is disabled, if the keystore contains multiple keys and no alias is specified, the behavior is undefined. Environment variable: QUARKUS_OIDC_CLIENT_TLS_KEY_STORE_KEY_ALIAS string quarkus.oidc-client.tls.key-store-key-password The password of the key, if it is different from the key-store-password . Environment variable: QUARKUS_OIDC_CLIENT_TLS_KEY_STORE_KEY_PASSWORD string quarkus.oidc-client.tls.trust-store-file The truststore that holds the certificate information of the certificates to trust. Environment variable: QUARKUS_OIDC_CLIENT_TLS_TRUST_STORE_FILE path quarkus.oidc-client.tls.trust-store-password The password of the truststore file. Environment variable: QUARKUS_OIDC_CLIENT_TLS_TRUST_STORE_PASSWORD string quarkus.oidc-client.tls.trust-store-cert-alias The alias of the truststore certificate. Environment variable: QUARKUS_OIDC_CLIENT_TLS_TRUST_STORE_CERT_ALIAS string quarkus.oidc-client.tls.trust-store-file-type The type of the truststore file. If not given, the type is automatically detected based on the file name. Environment variable: QUARKUS_OIDC_CLIENT_TLS_TRUST_STORE_FILE_TYPE string quarkus.oidc-client.tls.trust-store-provider The provider of the truststore file. If not given, the provider is automatically detected based on the truststore file type. Environment variable: QUARKUS_OIDC_CLIENT_TLS_TRUST_STORE_PROVIDER string quarkus.oidc-client.id A unique OIDC client identifier. It must be set when OIDC clients are created dynamically and is optional in all other cases. Environment variable: QUARKUS_OIDC_CLIENT_ID string quarkus.oidc-client.client-enabled If this client configuration is enabled. 
Environment variable: QUARKUS_OIDC_CLIENT_CLIENT_ENABLED boolean true quarkus.oidc-client.scopes List of access token scopes Environment variable: QUARKUS_OIDC_CLIENT_SCOPES list of string quarkus.oidc-client.refresh-token-time-skew Refresh token time skew in seconds. If this property is enabled then the configured number of seconds is added to the current time when checking whether the access token should be refreshed. If the sum is greater than this access token's expiration time then a refresh is going to happen. Environment variable: QUARKUS_OIDC_CLIENT_REFRESH_TOKEN_TIME_SKEW Duration quarkus.oidc-client.absolute-expires-in If the access token 'expires_in' property should be checked as an absolute time value as opposed to a duration relative to the current time. Environment variable: QUARKUS_OIDC_CLIENT_ABSOLUTE_EXPIRES_IN boolean false quarkus.oidc-client.grant.type Grant type Environment variable: QUARKUS_OIDC_CLIENT_GRANT_TYPE client : 'client_credentials' grant requiring an OIDC client authentication only password : 'password' grant requiring both OIDC client and user ('username' and 'password') authentications code : 'authorization_code' grant requiring an OIDC client authentication as well as at least 'code' and 'redirect_uri' parameters which must be passed to OidcClient at the token request time. exchange : 'urn:ietf:params:oauth:grant-type:token-exchange' grant requiring an OIDC client authentication as well as at least 'subject_token' parameter which must be passed to OidcClient at the token request time. jwt : 'urn:ietf:params:oauth:grant-type:jwt-bearer' grant requiring an OIDC client authentication as well as at least an 'assertion' parameter which must be passed to OidcClient at the token request time. refresh : 'refresh_token' grant requiring an OIDC client authentication and a refresh token. Note, OidcClient supports this grant by default if an access token acquisition response contained a refresh token. However, in some cases, the refresh token is provided out of band, for example, it can be shared between several of the confidential client's services, etc. If 'quarkus.oidc-client.grant-type' is set to 'refresh' then OidcClient will only support refreshing the tokens. ciba : 'urn:openid:params:grant-type:ciba' grant requiring an OIDC client authentication as well as 'auth_req_id' parameter which must be passed to OidcClient at the token request time. device : 'urn:ietf:params:oauth:grant-type:device_code' grant requiring an OIDC client authentication as well as 'device_code' parameter which must be passed to OidcClient at the token request time. 
client quarkus.oidc-client.grant.access-token-property Access token property name in a token grant response Environment variable: QUARKUS_OIDC_CLIENT_GRANT_ACCESS_TOKEN_PROPERTY string access_token quarkus.oidc-client.grant.refresh-token-property Refresh token property name in a token grant response Environment variable: QUARKUS_OIDC_CLIENT_GRANT_REFRESH_TOKEN_PROPERTY string refresh_token quarkus.oidc-client.grant.expires-in-property Access token expiry property name in a token grant response Environment variable: QUARKUS_OIDC_CLIENT_GRANT_EXPIRES_IN_PROPERTY string expires_in quarkus.oidc-client.grant.refresh-expires-in-property Refresh token expiry property name in a token grant response Environment variable: QUARKUS_OIDC_CLIENT_GRANT_REFRESH_EXPIRES_IN_PROPERTY string refresh_expires_in quarkus.oidc-client.grant-options."grant-name" Grant options Environment variable: QUARKUS_OIDC_CLIENT_GRANT_OPTIONS__GRANT_NAME_ Map<String,Map<String,String>> quarkus.oidc-client.early-tokens-acquisition Requires that all filters which use 'OidcClient' acquire the tokens at the post-construct initialization time, possibly long before these tokens are used. This property should be disabled if the access token may expire before it is used for the first time and no refresh token is available. Environment variable: QUARKUS_OIDC_CLIENT_EARLY_TOKENS_ACQUISITION boolean true quarkus.oidc-client.headers."headers" Custom HTTP headers which have to be sent to the token endpoint Environment variable: QUARKUS_OIDC_CLIENT_HEADERS__HEADERS_ Map<String,String> Additional named clients Type Default quarkus.oidc-client."id".auth-server-url The base URL of the OpenID Connect (OIDC) server, for example, https://host:port/auth . Do not set this property if the public key verification ( public-key ) or certificate chain verification only ( certificate-chain ) is required. The OIDC discovery endpoint is called by default by appending a .well-known/openid-configuration path to this URL. For Keycloak, use https://host:port/realms/{realm} , replacing {realm} with the Keycloak realm name. Environment variable: QUARKUS_OIDC_CLIENT__ID__AUTH_SERVER_URL string quarkus.oidc-client."id".discovery-enabled Discovery of the OIDC endpoints. If not enabled, you must configure the OIDC endpoint URLs individually. Environment variable: QUARKUS_OIDC_CLIENT__ID__DISCOVERY_ENABLED boolean true quarkus.oidc-client."id".token-path The OIDC token endpoint that issues access and refresh tokens; specified as a relative path or absolute URL. Set if discovery-enabled is false or a discovered token endpoint path must be customized. Environment variable: QUARKUS_OIDC_CLIENT__ID__TOKEN_PATH string quarkus.oidc-client."id".revoke-path The relative path or absolute URL of the OIDC token revocation endpoint. Environment variable: QUARKUS_OIDC_CLIENT__ID__REVOKE_PATH string quarkus.oidc-client."id".client-id The client id of the application. Each application has a client id that is used to identify the application. Setting the client id is not required if application-type is service and no token introspection is required. Environment variable: QUARKUS_OIDC_CLIENT__ID__CLIENT_ID string quarkus.oidc-client."id".client-name The client name of the application. It is meant to represent a human readable description of the application which you may provide when an application (client) is registered in an OpenId Connect provider's dashboard. For example, you can set this property to have more informative log messages which record an activity of the given client. 
Environment variable: QUARKUS_OIDC_CLIENT__ID__CLIENT_NAME string quarkus.oidc-client."id".connection-delay The duration to attempt the initial connection to an OIDC server. For example, setting the duration to 20S allows 10 retries, each 2 seconds apart. This property is only effective when the initial OIDC connection is created. For dropped connections, use the connection-retry-count property instead. Environment variable: QUARKUS_OIDC_CLIENT__ID__CONNECTION_DELAY Duration quarkus.oidc-client."id".connection-retry-count The number of times to retry re-establishing an existing OIDC connection if it is temporarily lost. Different from connection-delay , which applies only to initial connection attempts. For instance, if a request to the OIDC token endpoint fails due to a connection issue, it will be retried as per this setting. Environment variable: QUARKUS_OIDC_CLIENT__ID__CONNECTION_RETRY_COUNT int 3 quarkus.oidc-client."id".connection-timeout The number of seconds after which the current OIDC connection request times out. Environment variable: QUARKUS_OIDC_CLIENT__ID__CONNECTION_TIMEOUT Duration 10S quarkus.oidc-client."id".use-blocking-dns-lookup Whether DNS lookup should be performed on the worker thread. Use this option when you can see logged warnings about blocked Vert.x event loop by HTTP requests to OIDC server. Environment variable: QUARKUS_OIDC_CLIENT__ID__USE_BLOCKING_DNS_LOOKUP boolean false quarkus.oidc-client."id".max-pool-size The maximum size of the connection pool used by the WebClient. Environment variable: QUARKUS_OIDC_CLIENT__ID__MAX_POOL_SIZE int quarkus.oidc-client."id".credentials.secret The client secret used by the client_secret_basic authentication method. Must be set unless a secret is set in client-secret or jwt client authentication is required. You can use client-secret.value instead, but both properties are mutually exclusive. Environment variable: QUARKUS_OIDC_CLIENT__ID__CREDENTIALS_SECRET string quarkus.oidc-client."id".credentials.client-secret.value The client secret value. This value is ignored if credentials.secret is set. Must be set unless a secret is set in client-secret or jwt client authentication is required. Environment variable: QUARKUS_OIDC_CLIENT__ID__CREDENTIALS_CLIENT_SECRET_VALUE string quarkus.oidc-client."id".credentials.client-secret.provider.name The CredentialsProvider bean name, which should only be set if more than one CredentialsProvider is registered Environment variable: QUARKUS_OIDC_CLIENT__ID__CREDENTIALS_CLIENT_SECRET_PROVIDER_NAME string quarkus.oidc-client."id".credentials.client-secret.provider.keyring-name The CredentialsProvider keyring name. The keyring name is only required when the CredentialsProvider being used requires the keyring name to look up the secret, which is often the case when a CredentialsProvider is shared by multiple extensions to retrieve credentials from a more dynamic source like a vault instance or secret manager Environment variable: QUARKUS_OIDC_CLIENT__ID__CREDENTIALS_CLIENT_SECRET_PROVIDER_KEYRING_NAME string quarkus.oidc-client."id".credentials.client-secret.provider.key The CredentialsProvider client secret key Environment variable: QUARKUS_OIDC_CLIENT__ID__CREDENTIALS_CLIENT_SECRET_PROVIDER_KEY string quarkus.oidc-client."id".credentials.client-secret.method The authentication method. If the clientSecret.value secret is set, this method is basic by default. 
Environment variable: QUARKUS_OIDC_CLIENT__ID__CREDENTIALS_CLIENT_SECRET_METHOD basic : client_secret_basic (default): The client id and secret are submitted with the HTTP Authorization Basic scheme. post : client_secret_post : The client id and secret are submitted as the client_id and client_secret form parameters. post-jwt : client_secret_jwt : The client id and generated JWT secret are submitted as the client_id and client_secret form parameters. query : client id and secret are submitted as HTTP query parameters. This option is only supported by the OIDC extension. quarkus.oidc-client."id".credentials.jwt.source JWT token source: OIDC provider client or an existing JWT bearer token. Environment variable: QUARKUS_OIDC_CLIENT__ID__CREDENTIALS_JWT_SOURCE client , bearer client quarkus.oidc-client."id".credentials.jwt.secret If provided, indicates that JWT is signed using a secret key. Environment variable: QUARKUS_OIDC_CLIENT__ID__CREDENTIALS_JWT_SECRET string quarkus.oidc-client."id".credentials.jwt.secret-provider.name The CredentialsProvider bean name, which should only be set if more than one CredentialsProvider is registered Environment variable: QUARKUS_OIDC_CLIENT__ID__CREDENTIALS_JWT_SECRET_PROVIDER_NAME string quarkus.oidc-client."id".credentials.jwt.secret-provider.keyring-name The CredentialsProvider keyring name. The keyring name is only required when the CredentialsProvider being used requires the keyring name to look up the secret, which is often the case when a CredentialsProvider is shared by multiple extensions to retrieve credentials from a more dynamic source like a vault instance or secret manager Environment variable: QUARKUS_OIDC_CLIENT__ID__CREDENTIALS_JWT_SECRET_PROVIDER_KEYRING_NAME string quarkus.oidc-client."id".credentials.jwt.secret-provider.key The CredentialsProvider client secret key Environment variable: QUARKUS_OIDC_CLIENT__ID__CREDENTIALS_JWT_SECRET_PROVIDER_KEY string quarkus.oidc-client."id".credentials.jwt.key String representation of a private key. If provided, indicates that JWT is signed using a private key in PEM or JWK format. You can use the signature-algorithm property to override the default key algorithm, RS256 . Environment variable: QUARKUS_OIDC_CLIENT__ID__CREDENTIALS_JWT_KEY string quarkus.oidc-client."id".credentials.jwt.key-file If provided, indicates that JWT is signed using a private key in PEM or JWK format. You can use the signature-algorithm property to override the default key algorithm, RS256 . Environment variable: QUARKUS_OIDC_CLIENT__ID__CREDENTIALS_JWT_KEY_FILE string quarkus.oidc-client."id".credentials.jwt.key-store-file If provided, indicates that JWT is signed using a private key from a keystore. Environment variable: QUARKUS_OIDC_CLIENT__ID__CREDENTIALS_JWT_KEY_STORE_FILE string quarkus.oidc-client."id".credentials.jwt.key-store-password A parameter to specify the password of the keystore file. Environment variable: QUARKUS_OIDC_CLIENT__ID__CREDENTIALS_JWT_KEY_STORE_PASSWORD string quarkus.oidc-client."id".credentials.jwt.key-id The private key id or alias. Environment variable: QUARKUS_OIDC_CLIENT__ID__CREDENTIALS_JWT_KEY_ID string quarkus.oidc-client."id".credentials.jwt.key-password The private key password. Environment variable: QUARKUS_OIDC_CLIENT__ID__CREDENTIALS_JWT_KEY_PASSWORD string quarkus.oidc-client."id".credentials.jwt.audience The JWT audience ( aud ) claim value. By default, the audience is set to the address of the OpenId Connect Provider's token endpoint. 
Environment variable: QUARKUS_OIDC_CLIENT__ID__CREDENTIALS_JWT_AUDIENCE string quarkus.oidc-client."id".credentials.jwt.token-key-id The key identifier of the signing key added as a JWT kid header. Environment variable: QUARKUS_OIDC_CLIENT__ID__CREDENTIALS_JWT_TOKEN_KEY_ID string quarkus.oidc-client."id".credentials.jwt.issuer The issuer of the signing key added as a JWT iss claim. The default value is the client id. Environment variable: QUARKUS_OIDC_CLIENT__ID__CREDENTIALS_JWT_ISSUER string quarkus.oidc-client."id".credentials.jwt.subject Subject of the signing key added as a JWT sub claim The default value is the client id. Environment variable: QUARKUS_OIDC_CLIENT__ID__CREDENTIALS_JWT_SUBJECT string quarkus.oidc-client."id".credentials.jwt.claims."claim-name" Additional claims. Environment variable: QUARKUS_OIDC_CLIENT__ID__CREDENTIALS_JWT_CLAIMS__CLAIM_NAME_ Map<String,String> quarkus.oidc-client."id".credentials.jwt.signature-algorithm The signature algorithm used for the key-file property. Supported values: RS256 (default), RS384 , RS512 , PS256 , PS384 , PS512 , ES256 , ES384 , ES512 , HS256 , HS384 , HS512 . Environment variable: QUARKUS_OIDC_CLIENT__ID__CREDENTIALS_JWT_SIGNATURE_ALGORITHM string quarkus.oidc-client."id".credentials.jwt.lifespan The JWT lifespan in seconds. This value is added to the time at which the JWT was issued to calculate the expiration time. Environment variable: QUARKUS_OIDC_CLIENT__ID__CREDENTIALS_JWT_LIFESPAN int 10 quarkus.oidc-client."id".credentials.jwt.assertion If true then the client authentication token is a JWT bearer grant assertion. Instead of producing 'client_assertion' and 'client_assertion_type' form properties, only 'assertion' is produced. This option is only supported by the OIDC client extension. Environment variable: QUARKUS_OIDC_CLIENT__ID__CREDENTIALS_JWT_ASSERTION boolean false quarkus.oidc-client."id".proxy.host The host name or IP address of the Proxy. Note: If the OIDC adapter requires a Proxy to talk with the OIDC server (Provider), set this value to enable the usage of a Proxy. Environment variable: QUARKUS_OIDC_CLIENT__ID__PROXY_HOST string quarkus.oidc-client."id".proxy.port The port number of the Proxy. The default value is 80 . Environment variable: QUARKUS_OIDC_CLIENT__ID__PROXY_PORT int 80 quarkus.oidc-client."id".proxy.username The username, if the Proxy needs authentication. Environment variable: QUARKUS_OIDC_CLIENT__ID__PROXY_USERNAME string quarkus.oidc-client."id".proxy.password The password, if the Proxy needs authentication. Environment variable: QUARKUS_OIDC_CLIENT__ID__PROXY_PASSWORD string quarkus.oidc-client."id".tls.verification Certificate validation and hostname verification, which can be one of the following Verification values. Default is required . Environment variable: QUARKUS_OIDC_CLIENT__ID__TLS_VERIFICATION required : Certificates are validated and hostname verification is enabled. This is the default value. certificate-validation : Certificates are validated but hostname verification is disabled. none : All certificates are trusted and hostname verification is disabled. quarkus.oidc-client."id".tls.key-store-file An optional keystore that holds the certificate information instead of specifying separate files. Environment variable: QUARKUS_OIDC_CLIENT__ID__TLS_KEY_STORE_FILE path quarkus.oidc-client."id".tls.key-store-file-type The type of the keystore file. If not given, the type is automatically detected based on the file name. 
Environment variable: QUARKUS_OIDC_CLIENT__ID__TLS_KEY_STORE_FILE_TYPE string quarkus.oidc-client."id".tls.key-store-provider The provider of the keystore file. If not given, the provider is automatically detected based on the keystore file type. Environment variable: QUARKUS_OIDC_CLIENT__ID__TLS_KEY_STORE_PROVIDER string quarkus.oidc-client."id".tls.key-store-password The password of the keystore file. If not given, the default value, password , is used. Environment variable: QUARKUS_OIDC_CLIENT__ID__TLS_KEY_STORE_PASSWORD string quarkus.oidc-client."id".tls.key-store-key-alias The alias of a specific key in the keystore. When SNI is disabled, if the keystore contains multiple keys and no alias is specified, the behavior is undefined. Environment variable: QUARKUS_OIDC_CLIENT__ID__TLS_KEY_STORE_KEY_ALIAS string quarkus.oidc-client."id".tls.key-store-key-password The password of the key, if it is different from the key-store-password . Environment variable: QUARKUS_OIDC_CLIENT__ID__TLS_KEY_STORE_KEY_PASSWORD string quarkus.oidc-client."id".tls.trust-store-file The truststore that holds the certificate information of the certificates to trust. Environment variable: QUARKUS_OIDC_CLIENT__ID__TLS_TRUST_STORE_FILE path quarkus.oidc-client."id".tls.trust-store-password The password of the truststore file. Environment variable: QUARKUS_OIDC_CLIENT__ID__TLS_TRUST_STORE_PASSWORD string quarkus.oidc-client."id".tls.trust-store-cert-alias The alias of the truststore certificate. Environment variable: QUARKUS_OIDC_CLIENT__ID__TLS_TRUST_STORE_CERT_ALIAS string quarkus.oidc-client."id".tls.trust-store-file-type The type of the truststore file. If not given, the type is automatically detected based on the file name. Environment variable: QUARKUS_OIDC_CLIENT__ID__TLS_TRUST_STORE_FILE_TYPE string quarkus.oidc-client."id".tls.trust-store-provider The provider of the truststore file. If not given, the provider is automatically detected based on the truststore file type. Environment variable: QUARKUS_OIDC_CLIENT__ID__TLS_TRUST_STORE_PROVIDER string quarkus.oidc-client."id".id A unique OIDC client identifier. It must be set when OIDC clients are created dynamically and is optional in all other cases. Environment variable: QUARKUS_OIDC_CLIENT__ID__ID string quarkus.oidc-client."id".client-enabled If this client configuration is enabled. Environment variable: QUARKUS_OIDC_CLIENT__ID__CLIENT_ENABLED boolean true quarkus.oidc-client."id".scopes List of access token scopes Environment variable: QUARKUS_OIDC_CLIENT__ID__SCOPES list of string quarkus.oidc-client."id".refresh-token-time-skew Refresh token time skew in seconds. If this property is enabled then the configured number of seconds is added to the current time when checking whether the access token should be refreshed. If the sum is greater than this access token's expiration time then a refresh is going to happen. Environment variable: QUARKUS_OIDC_CLIENT__ID__REFRESH_TOKEN_TIME_SKEW Duration quarkus.oidc-client."id".absolute-expires-in If the access token 'expires_in' property should be checked as an absolute time value as opposed to a duration relative to the current time. 
Environment variable: QUARKUS_OIDC_CLIENT__ID__ABSOLUTE_EXPIRES_IN boolean false quarkus.oidc-client."id".grant.type Grant type Environment variable: QUARKUS_OIDC_CLIENT__ID__GRANT_TYPE client : 'client_credentials' grant requiring an OIDC client authentication only password : 'password' grant requiring both OIDC client and user ('username' and 'password') authentications code : 'authorization_code' grant requiring an OIDC client authentication as well as at least 'code' and 'redirect_uri' parameters which must be passed to OidcClient at the token request time. exchange : 'urn:ietf:params:oauth:grant-type:token-exchange' grant requiring an OIDC client authentication as well as at least 'subject_token' parameter which must be passed to OidcClient at the token request time. jwt : 'urn:ietf:params:oauth:grant-type:jwt-bearer' grant requiring an OIDC client authentication as well as at least an 'assertion' parameter which must be passed to OidcClient at the token request time. refresh : 'refresh_token' grant requiring an OIDC client authentication and a refresh token. Note, OidcClient supports this grant by default if an access token acquisition response contained a refresh token. However, in some cases, the refresh token is provided out of band, for example, it can be shared between several of the confidential client's services, etc. If 'quarkus.oidc-client.grant-type' is set to 'refresh' then OidcClient will only support refreshing the tokens. ciba : 'urn:openid:params:grant-type:ciba' grant requiring an OIDC client authentication as well as 'auth_req_id' parameter which must be passed to OidcClient at the token request time. device : 'urn:ietf:params:oauth:grant-type:device_code' grant requiring an OIDC client authentication as well as 'device_code' parameter which must be passed to OidcClient at the token request time. client quarkus.oidc-client."id".grant.access-token-property Access token property name in a token grant response Environment variable: QUARKUS_OIDC_CLIENT__ID__GRANT_ACCESS_TOKEN_PROPERTY string access_token quarkus.oidc-client."id".grant.refresh-token-property Refresh token property name in a token grant response Environment variable: QUARKUS_OIDC_CLIENT__ID__GRANT_REFRESH_TOKEN_PROPERTY string refresh_token quarkus.oidc-client."id".grant.expires-in-property Access token expiry property name in a token grant response Environment variable: QUARKUS_OIDC_CLIENT__ID__GRANT_EXPIRES_IN_PROPERTY string expires_in quarkus.oidc-client."id".grant.refresh-expires-in-property Refresh token expiry property name in a token grant response Environment variable: QUARKUS_OIDC_CLIENT__ID__GRANT_REFRESH_EXPIRES_IN_PROPERTY string refresh_expires_in quarkus.oidc-client."id".grant-options."grant-name" Grant options Environment variable: QUARKUS_OIDC_CLIENT__ID__GRANT_OPTIONS__GRANT_NAME_ Map<String,Map<String,String>> quarkus.oidc-client."id".early-tokens-acquisition Requires that all filters which use 'OidcClient' acquire the tokens at the post-construct initialization time, possibly long before these tokens are used. This property should be disabled if the access token may expire before it is used for the first time and no refresh token is available. 
Environment variable: QUARKUS_OIDC_CLIENT__ID__EARLY_TOKENS_ACQUISITION boolean true quarkus.oidc-client."id".headers."headers" Custom HTTP headers which have to be sent to the token endpoint Environment variable: QUARKUS_OIDC_CLIENT__ID__HEADERS__HEADERS_ Map<String,String> About the Duration format To write duration values, use the standard java.time.Duration format. See the Duration#parse() Java API documentation for more information. You can also use a simplified format, starting with a number: If the value is only a number, it represents time in seconds. If the value is a number followed by ms , it represents time in milliseconds. In other cases, the simplified format is translated to the java.time.Duration format for parsing: If the value is a number followed by h , m , or s , it is prefixed with PT . If the value is a number followed by d , it is prefixed with P . 1.5.2. OIDC token propagation Configuration property fixed at build time - All other configuration properties are overridable at runtime Configuration property Type Default quarkus.rest-client-oidc-token-propagation.enabled If the OIDC Token Reactive Propagation is enabled. Environment variable: QUARKUS_REST_CLIENT_OIDC_TOKEN_PROPAGATION_ENABLED boolean true quarkus.rest-client-oidc-token-propagation.enabled-during-authentication Whether the token propagation is enabled during the SecurityIdentity augmentation. For example, you may need to use a REST client from SecurityIdentityAugmentor to propagate the current token to acquire additional roles for the SecurityIdentity . Note, this feature relies on a duplicated context. More information about Vert.x duplicated context can be found in this guide . Environment variable: QUARKUS_REST_CLIENT_OIDC_TOKEN_PROPAGATION_ENABLED_DURING_AUTHENTICATION boolean false quarkus.rest-client-oidc-token-propagation.exchange-token Exchange the current token with OpenId Connect Provider for a new token using either "urn:ietf:params:oauth:grant-type:token-exchange" or "urn:ietf:params:oauth:grant-type:jwt-bearer" token grant before propagating it. Environment variable: QUARKUS_REST_CLIENT_OIDC_TOKEN_PROPAGATION_EXCHANGE_TOKEN boolean false quarkus.rest-client-oidc-token-propagation.client-name Name of the configured OidcClient. Note this property is only used if the exchangeToken property is enabled. Environment variable: QUARKUS_REST_CLIENT_OIDC_TOKEN_PROPAGATION_CLIENT_NAME string 1.6. References OpenID Connect client and token propagation quickstart . OIDC Bearer token authentication OIDC code flow mechanism for protecting web applications Quarkus Security overview
[ "<dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-oidc-client</artifactId> </dependency>", "quarkus.oidc-client.auth-server-url=http://localhost:8180/auth/realms/quarkus", "quarkus.oidc-client.auth-server-url=http://localhost:8180/auth/realms/quarkus quarkus.oidc-client.discovery-enabled=false Token endpoint: http://localhost:8180/auth/realms/quarkus/protocol/openid-connect/tokens quarkus.oidc-client.token-path=/protocol/openid-connect/tokens", "quarkus.oidc-client.token-path=http://localhost:8180/auth/realms/quarkus/protocol/openid-connect/tokens", "quarkus.oidc-client.auth-server-url=http://localhost:8180/auth/realms/quarkus/ quarkus.oidc-client.client-id=quarkus-app quarkus.oidc-client.credentials.secret=secret", "quarkus.oidc-client.auth-server-url=http://localhost:8180/auth/realms/quarkus/ quarkus.oidc-client.client-id=quarkus-app quarkus.oidc-client.credentials.secret=secret 'client' is a shortcut for `client_credentials` quarkus.oidc-client.grant.type=client quarkus.oidc-client.grant-options.client.audience=https://example.com/api", "quarkus.oidc-client.auth-server-url=http://localhost:8180/auth/realms/quarkus/ quarkus.oidc-client.client-id=quarkus-app quarkus.oidc-client.credentials.secret=secret quarkus.oidc-client.grant.type=password quarkus.oidc-client.grant-options.password.username=alice quarkus.oidc-client.grant-options.password.password=alice", "quarkus.oidc-client.auth-server-url=http://localhost:8180/auth/realms/quarkus/ quarkus.oidc-client.client-id=quarkus-app quarkus.oidc-client.credentials.secret=secret quarkus.oidc-client.grant.type=refresh", "quarkus.oidc-client.auth-server-url=http://localhost:8180/auth/realms/quarkus/ quarkus.oidc-client.client-id=quarkus-app quarkus.oidc-client.credentials.secret=secret quarkus.oidc-client.grant.type=code", "quarkus.oidc-client.auth-server-url=http://localhost:8180/auth/realms/quarkus/ quarkus.oidc-client.client-id=quarkus-app quarkus.oidc-client.credentials.secret=secret quarkus.oidc-client.grant.type=ciba", "package org.acme.security.openid.connect.client; import org.eclipse.microprofile.rest.client.inject.RegisterRestClient; import io.smallrye.mutiny.Uni; import jakarta.ws.rs.GET; import jakarta.ws.rs.HeaderParam; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; @RegisterRestClient @Path(\"/\") public interface RestClientWithTokenHeaderParam { @GET @Produces(\"text/plain\") @Path(\"userName\") Uni<String> getUserName(@HeaderParam(\"Authorization\") String authorization); }", "package org.acme.security.openid.connect.client; import org.eclipse.microprofile.rest.client.inject.RestClient; import io.quarkus.oidc.client.runtime.TokensHelper; import io.quarkus.oidc.client.OidcClient; import io.smallrye.mutiny.Uni; import jakarta.inject.Inject; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; @Path(\"/service\") public class OidcClientResource { @Inject OidcClient client; TokensHelper tokenHelper = new TokensHelper(); 1 @Inject @RestClient RestClientWithTokenHeaderParam restClient; @GET @Path(\"user-name\") @Produces(\"text/plain\") public Uni<String> getUserName() { return tokenHelper.getTokens(client).onItem() .transformToUni(tokens -> restClient.getUserName(\"Bearer \" + tokens.getAccessToken())); } }", "import jakarta.inject.Inject; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import io.quarkus.oidc.client.Tokens; @Path(\"/service\") public class OidcClientResource { @Inject Tokens tokens; @GET public String getResponse() { // Get the access token, which might 
have been refreshed. String accessToken = tokens.getAccessToken(); // Use the access token to configure MP RestClient Authorization header/etc } }", "quarkus.oidc-client.client-enabled=false quarkus.oidc-client.jwt-secret.auth-server-url=http://localhost:8180/auth/realms/quarkus/ quarkus.oidc-client.jwt-secret.client-id=quarkus-app quarkus.oidc-client.jwt-secret.credentials.jwt.secret=AyM1SysPpbyDfgZld3umj1qzKObwVMkoqQ-EstJQLr_T-1qS0gZH75aKtMN3Yj0iPS4hcgUuTwjAzZr1Z9CAow", "import org.eclipse.microprofile.rest.client.inject.RestClient; import jakarta.inject.Inject; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; import io.smallrye.mutiny.Uni; import io.quarkus.oidc.client.OidcClient; import io.quarkus.oidc.client.OidcClients; import io.quarkus.oidc.client.runtime.TokensHelper; @Path(\"/clients\") public class OidcClientResource { @Inject OidcClients clients; TokensHelper tokenHelper = new TokensHelper(); @Inject @RestClient RestClientWithTokenHeaderParam restClient; 1 @GET @Path(\"user-name\") @Produces(\"text/plain\") public Uni<String> getUserName() { OidcClient client = clients.getClient(\"jwt-secret\"); return tokenHelper.getTokens(client).onItem() .transformToUni(tokens -> restClient.getUserName(\"Bearer \" + tokens.getAccessToken())); } }", "import jakarta.inject.Inject; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import io.quarkus.oidc.client.OidcClient; import io.quarkus.oidc.client.OidcClients; import io.vertx.ext.web.RoutingContext; @Path(\"/clients\") public class OidcClientResource { @Inject OidcClients clients; @Inject RoutingContext context; @GET public String getResponse() { String tenantId = context.get(\"tenant-id\"); // named OIDC tenant and client configurations use the same key: OidcClient client = clients.getClient(tenantId); //Use this client to get the token } }", "package org.acme.security.openid.connect.client; import java.util.Map; import org.eclipse.microprofile.config.inject.ConfigProperty; import io.quarkus.oidc.client.OidcClient; import io.quarkus.oidc.client.OidcClientConfig; import io.quarkus.oidc.client.OidcClientConfig.Grant.Type; import io.quarkus.oidc.client.OidcClients; import io.quarkus.runtime.StartupEvent; import io.smallrye.mutiny.Uni; import jakarta.enterprise.context.ApplicationScoped; import jakarta.enterprise.event.Observes; import jakarta.inject.Inject; @ApplicationScoped public class OidcClientCreator { @Inject OidcClients oidcClients; @ConfigProperty(name = \"quarkus.oidc.auth-server-url\") String oidcProviderAddress; private volatile OidcClient oidcClient; public void startup(@Observes StartupEvent event) { createOidcClient().subscribe().with(client -> {oidcClient = client;}); } public OidcClient getOidcClient() { return oidcClient; } private Uni<OidcClient> createOidcClient() { OidcClientConfig cfg = new OidcClientConfig(); cfg.setId(\"myclient\"); cfg.setAuthServerUrl(oidcProviderAddress); cfg.setClientId(\"backend-service\"); cfg.getCredentials().setSecret(\"secret\"); cfg.getGrant().setType(Type.PASSWORD); cfg.setGrantOptions(Map.of(\"password\", Map.of(\"username\", \"alice\", \"password\", \"alice\"))); return oidcClients.newClient(cfg); } }", "import org.eclipse.microprofile.rest.client.inject.RestClient; import jakarta.inject.Inject; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; import io.smallrye.mutiny.Uni; import io.quarkus.oidc.client.runtime.TokensHelper; @Path(\"/clients\") public class OidcClientResource { @Inject OidcClientCreator 
oidcClientCreator; TokensHelper tokenHelper = new TokensHelper(); @Inject @RestClient RestClientWithTokenHeaderParam restClient; 1 @GET @Path(\"user-name\") @Produces(\"text/plain\") public Uni<String> getUserName() { return tokenHelper.getTokens(oidcClientCreator.getOidcClient()).onItem() .transformToUni(tokens -> restClient.getUserName(\"Bearer \" + tokens.getAccessToken())); } }", "package org.acme.security.openid.connect.client; import org.eclipse.microprofile.rest.client.inject.RestClient; import jakarta.inject.Inject; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; import jakarta.ws.rs.Produces; import io.smallrye.mutiny.Uni; import io.quarkus.oidc.client.NamedOidcClient; import io.quarkus.oidc.client.OidcClient; import io.quarkus.oidc.client.runtime.TokensHelper; @Path(\"/clients\") public class OidcClientResource { @Inject @NamedOidcClient(\"jwt-secret\") OidcClient client; TokensHelper tokenHelper = new TokensHelper(); @Inject @RestClient RestClientWithTokenHeaderParam restClient; 1 @GET @Path(\"user-name\") @Produces(\"text/plain\") public Uni<String> getUserName() { return tokenHelper.getTokens(client).onItem() .transformToUni(tokens -> restClient.getUserName(\"Bearer \" + tokens.getAccessToken())); } }", "import java.io.IOException; import jakarta.annotation.Priority; import jakarta.enterprise.context.RequestScoped; import jakarta.inject.Inject; import jakarta.ws.rs.Priorities; import jakarta.ws.rs.client.ClientRequestContext; import jakarta.ws.rs.client.ClientRequestFilter; import jakarta.ws.rs.core.HttpHeaders; import jakarta.ws.rs.ext.Provider; import io.quarkus.oidc.client.NamedOidcClient; import io.quarkus.oidc.client.Tokens; @Provider @Priority(Priorities.AUTHENTICATION) @RequestScoped public class OidcClientRequestCustomFilter implements ClientRequestFilter { @Inject @NamedOidcClient(\"jwt-secret\") Tokens tokens; @Override public void filter(ClientRequestContext requestContext) throws IOException { requestContext.getHeaders().add(HttpHeaders.AUTHORIZATION, \"Bearer \" + tokens.getAccessToken()); } }", "<dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-rest-client-oidc-filter</artifactId> </dependency>", "import org.eclipse.microprofile.rest.client.inject.RegisterRestClient; import io.quarkus.oidc.client.filter.OidcClientFilter; import io.smallrye.mutiny.Uni; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; @RegisterRestClient @OidcClientFilter @Path(\"/\") public interface ProtectedResourceService { @GET Uni<String> getUserName(); }", "import org.eclipse.microprofile.rest.client.annotation.RegisterProvider; import org.eclipse.microprofile.rest.client.inject.RegisterRestClient; import io.quarkus.oidc.client.reactive.filter.OidcClientRequestReactiveFilter; import io.smallrye.mutiny.Uni; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; @RegisterRestClient @RegisterProvider(OidcClientRequestReactiveFilter.class) @Path(\"/\") public interface ProtectedResourceService { @GET Uni<String> getUserName(); }", "import org.eclipse.microprofile.rest.client.inject.RegisterRestClient; import io.quarkus.oidc.client.filter.OidcClientFilter; import io.smallrye.mutiny.Uni; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; @RegisterRestClient @OidcClientFilter(\"jwt-secret\") @Path(\"/\") public interface ProtectedResourceService { @GET Uni<String> getUserName(); }", "<dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-resteasy-client-oidc-filter</artifactId> </dependency>", "import org.eclipse.microprofile.rest.client.inject.RegisterRestClient; 
import io.quarkus.oidc.client.filter.OidcClientFilter; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; @RegisterRestClient @OidcClientFilter @Path(\"/\") public interface ProtectedResourceService { @GET String getUserName(); }", "import org.eclipse.microprofile.rest.client.annotation.RegisterProvider; import org.eclipse.microprofile.rest.client.inject.RegisterRestClient; import io.quarkus.oidc.client.filter.OidcClientRequestFilter; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; @RegisterRestClient @RegisterProvider(OidcClientRequestFilter.class) @Path(\"/\") public interface ProtectedResourceService { @GET String getUserName(); }", "import org.eclipse.microprofile.rest.client.inject.RegisterRestClient; import io.quarkus.oidc.client.filter.OidcClientFilter; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; @RegisterRestClient @OidcClientFilter(\"jwt-secret\") @Path(\"/\") public interface ProtectedResourceService { @GET String getUserName(); }", "import java.io.IOException; import jakarta.annotation.Priority; import jakarta.inject.Inject; import jakarta.ws.rs.Priorities; import jakarta.ws.rs.client.ClientRequestContext; import jakarta.ws.rs.client.ClientRequestFilter; import jakarta.ws.rs.core.HttpHeaders; import jakarta.ws.rs.ext.Provider; import io.quarkus.oidc.client.Tokens; @Provider @Priority(Priorities.AUTHENTICATION) public class OidcClientRequestCustomFilter implements ClientRequestFilter { @Inject Tokens tokens; @Override public void filter(ClientRequestContext requestContext) throws IOException { requestContext.getHeaders().add(HttpHeaders.AUTHORIZATION, \"Bearer \" + tokens.getAccessToken()); } }", "quarkus.oidc-client.auth-server-url=http://localhost:8180/auth/realms/quarkus/ quarkus.oidc-client.client-id=quarkus-app quarkus.oidc-client.credentials.secret=mysecret", "quarkus.oidc-client.auth-server-url=http://localhost:8180/auth/realms/quarkus/ quarkus.oidc-client.client-id=quarkus-app quarkus.oidc-client.credentials.client-secret.value=mysecret", "quarkus.oidc-client.auth-server-url=http://localhost:8180/auth/realms/quarkus/ quarkus.oidc-client.client-id=quarkus-app This key is used to retrieve a secret from the map of credentials returned from CredentialsProvider quarkus.oidc-client.credentials.client-secret.provider.key=mysecret-key This is the keyring provided to the CredentialsProvider when looking up the secret, set only if required by the CredentialsProvider implementation quarkus.oidc.credentials.client-secret.provider.keyring-name=oidc Set it only if more than one CredentialsProvider can be registered quarkus.oidc-client.credentials.client-secret.provider.name=oidc-credentials-provider", "quarkus.oidc-client.auth-server-url=http://localhost:8180/auth/realms/quarkus/ quarkus.oidc-client.client-id=quarkus-app quarkus.oidc-client.credentials.client-secret.value=mysecret quarkus.oidc-client.credentials.client-secret.method=post", "quarkus.oidc-client.auth-server-url=http://localhost:8180/auth/realms/quarkus/ quarkus.oidc-client.client-id=quarkus-app quarkus.oidc-client.credentials.jwt.secret=AyM1SysPpbyDfgZld3umj1qzKObwVMkoqQ-EstJQLr_T-1qS0gZH75aKtMN3Yj0iPS4hcgUuTwjAzZr1Z9CAow", "quarkus.oidc-client.auth-server-url=http://localhost:8180/auth/realms/quarkus/ quarkus.oidc-client.client-id=quarkus-app This is a key that will be used to retrieve a secret from the map of credentials returned from CredentialsProvider quarkus.oidc-client.credentials.jwt.secret-provider.key=mysecret-key This is the keyring provided to the CredentialsProvider when looking up the 
secret, set only if required by the CredentialsProvider implementation quarkus.oidc.credentials.client-secret.provider.keyring-name=oidc Set it only if more than one CredentialsProvider can be registered quarkus.oidc-client.credentials.jwt.secret-provider.name=oidc-credentials-provider", "quarkus.oidc-client.auth-server-url=http://localhost:8180/auth/realms/quarkus/ quarkus.oidc-client.client-id=quarkus-app quarkus.oidc-client.credentials.jwt.key=Base64-encoded private key representation", "quarkus.oidc-client.auth-server-url=http://localhost:8180/auth/realms/quarkus/ quarkus.oidc-client.client-id=quarkus-app quarkus.oidc-client.credentials.jwt.key-file=privateKey.pem", "quarkus.oidc-client.auth-server-url=http://localhost:8180/auth/realms/quarkus/ quarkus.oidc-client.client-id=quarkus-app quarkus.oidc-client.credentials.jwt.key-store-file=keystore.jks quarkus.oidc-client.credentials.jwt.key-store-password=mypassword quarkus.oidc-client.credentials.jwt.key-password=mykeypassword Private key alias inside the keystore quarkus.oidc-client.credentials.jwt.key-id=mykeyAlias", "private_key_jwt client authentication quarkus.oidc-client.auth-server-url=http://localhost:8180/auth/realms/quarkus/ quarkus.oidc-client.client-id=quarkus-app quarkus.oidc-client.credentials.jwt.key-file=privateKey.pem This is a token key identifier 'kid' header - set it if your OpenID Connect provider requires it. Note that if the key is represented in a JSON Web Key (JWK) format with a `kid` property, then using 'quarkus.oidc-client.credentials.jwt.token-key-id' is unnecessary. quarkus.oidc-client.credentials.jwt.token-key-id=mykey Use the RS512 signature algorithm instead of the default RS256 quarkus.oidc-client.credentials.jwt.signature-algorithm=RS512 The token endpoint URL is the default audience value; use the base address URL instead: quarkus.oidc-client.credentials.jwt.audience=USD{quarkus.oidc-client.auth-server-url} custom subject instead of the client ID: quarkus.oidc-client.credentials.jwt.subject=custom-subject custom issuer instead of the client ID: quarkus.oidc-client.credentials.jwt.issuer=custom-issuer", "quarkus.oidc-client.auth-server-url=USD{auth-server-url} quarkus.oidc-client.client-id=quarkus-app quarkus.oidc-client.credentials.jwt.source=bearer", "package io.quarkus.it.keycloak; import java.util.Map; import io.quarkus.oidc.client.reactive.filter.runtime.AbstractOidcClientRequestReactiveFilter; import io.quarkus.oidc.common.runtime.OidcConstants; import jakarta.annotation.Priority; import jakarta.ws.rs.Priorities; @Priority(Priorities.AUTHENTICATION) public class OidcClientRequestCustomFilter extends AbstractOidcClientRequestReactiveFilter { @Override protected Map<String, String> additionalParameters() { return Map.of(OidcConstants.CLIENT_ASSERTION, \"ey...\"); } }", "package io.quarkus.it.keycloak; import java.util.Map; import io.quarkus.oidc.client.filter.runtime.AbstractOidcClientRequestFilter; import io.quarkus.oidc.common.runtime.OidcConstants; import jakarta.annotation.Priority; import jakarta.ws.rs.Priorities; @Priority(Priorities.AUTHENTICATION) public class OidcClientRequestCustomFilter extends AbstractOidcClientRequestFilter { @Override protected Map<String, String> additionalParameters() { return Map.of(OidcConstants.CLIENT_ASSERTION, \"ey...\"); } }", "quarkus.oidc-client.auth-server-url=USD{apple.url} quarkus.oidc-client.client-id=USD{apple.client-id} quarkus.oidc-client.credentials.client-secret.method=post-jwt quarkus.oidc-client.credentials.jwt.key-file=ecPrivateKey.pem 
quarkus.oidc-client.credentials.jwt.signature-algorithm=ES256 quarkus.oidc-client.credentials.jwt.subject=USD{apple.subject} quarkus.oidc-client.credentials.jwt.issuer=USD{apple.issuer}", "quarkus.oidc-client.tls.verification=certificate-validation Keystore configuration quarkus.oidc-client.tls.key-store-file=client-keystore.jks quarkus.oidc-client.tls.key-store-password=USD{key-store-password} Add more keystore properties if needed: #quarkus.oidc-client.tls.key-store-alias=keyAlias #quarkus.oidc-client.tls.key-store-alias-password=keyAliasPassword Truststore configuration quarkus.oidc-client.tls.trust-store-file=client-truststore.jks quarkus.oidc-client.tls.trust-store-password=USD{trust-store-password} Add more truststore properties if needed: #quarkus.oidc-client.tls.trust-store-alias=certAlias", "<dependency> <groupId>io.quarkus</groupId> <artifactId>quarkus-junit5</artifactId> <scope>test</scope> </dependency> <dependency> <groupId>org.awaitility</groupId> <artifactId>awaitility</artifactId> <scope>test</scope> </dependency>", "<dependency> <groupId>org.wiremock</groupId> <artifactId>wiremock</artifactId> <scope>test</scope> <version>USD{wiremock.version}</version> 1 </dependency>", "package io.quarkus.it.keycloak; import static com.github.tomakehurst.wiremock.client.WireMock.matching; import static com.github.tomakehurst.wiremock.core.WireMockConfiguration.wireMockConfig; import java.util.HashMap; import java.util.Map; import com.github.tomakehurst.wiremock.WireMockServer; import com.github.tomakehurst.wiremock.client.WireMock; import com.github.tomakehurst.wiremock.core.Options.ChunkedEncodingPolicy; import io.quarkus.test.common.QuarkusTestResourceLifecycleManager; public class KeycloakRealmResourceManager implements QuarkusTestResourceLifecycleManager { private WireMockServer server; @Override public Map<String, String> start() { server = new WireMockServer(wireMockConfig().dynamicPort().useChunkedTransferEncoding(ChunkedEncodingPolicy.NEVER)); server.start(); server.stubFor(WireMock.post(\"/tokens\") .withRequestBody(matching(\"grant_type=password&username=alice&password=alice\")) .willReturn(WireMock .aResponse() .withHeader(\"Content-Type\", \"application/json\") .withBody( \"{\\\"access_token\\\":\\\"access_token_1\\\", \\\"expires_in\\\":4, \\\"refresh_token\\\":\\\"refresh_token_1\\\"}\"))); server.stubFor(WireMock.post(\"/tokens\") .withRequestBody(matching(\"grant_type=refresh_token&refresh_token=refresh_token_1\")) .willReturn(WireMock .aResponse() .withHeader(\"Content-Type\", \"application/json\") .withBody( \"{\\\"access_token\\\":\\\"access_token_2\\\", \\\"expires_in\\\":4, \\\"refresh_token\\\":\\\"refresh_token_1\\\"}\"))); Map<String, String> conf = new HashMap<>(); conf.put(\"keycloak.url\", server.baseUrl()); return conf; } @Override public synchronized void stop() { if (server != null) { server.stop(); server = null; } } }", "Use the 'keycloak.url' property set by the test KeycloakRealmResourceManager quarkus.oidc-client.auth-server-url=USD{keycloak.url:replaced-by-test-resource} quarkus.oidc-client.discovery-enabled=false quarkus.oidc-client.token-path=/tokens quarkus.oidc-client.client-id=quarkus-service-app quarkus.oidc-client.credentials.secret=secret quarkus.oidc-client.grant.type=password quarkus.oidc-client.grant-options.password.username=alice quarkus.oidc-client.grant-options.password.password=alice", "quarkus.log.category.\"io.quarkus.oidc.client.runtime.OidcClientImpl\".level=TRACE 
quarkus.log.category.\"io.quarkus.oidc.client.runtime.OidcClientImpl\".min-level=TRACE", "quarkus.log.category.\"io.quarkus.oidc.client.runtime.OidcClientRecorder\".level=TRACE quarkus.log.category.\"io.quarkus.oidc.client.runtime.OidcClientRecorder\".min-level=TRACE", "package io.quarkus.it.keycloak; import jakarta.enterprise.context.ApplicationScoped; import io.quarkus.arc.Unremovable; import io.quarkus.oidc.common.OidcRequestContextProperties; import io.quarkus.oidc.common.OidcRequestFilter; import io.vertx.core.http.HttpMethod; import io.vertx.mutiny.core.buffer.Buffer; import io.vertx.mutiny.ext.web.client.HttpRequest; @ApplicationScoped @Unremovable public class OidcRequestCustomizer implements OidcRequestFilter { @Override public void filter(HttpRequest<Buffer> request, Buffer buffer, OidcRequestContextProperties contextProperties) { HttpMethod method = request.method(); String uri = request.uri(); if (method == HttpMethod.POST && uri.endsWith(\"/service\") && buffer != null) { request.putHeader(\"Digest\", calculateDigest(buffer.toString())); } } private String calculateDigest(String bodyString) { // Apply the required digest algorithm to the body string } }", "import org.eclipse.microprofile.rest.client.inject.RegisterRestClient; import io.quarkus.oidc.token.propagation.AccessToken; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; @RegisterRestClient @AccessToken @Path(\"/\") public interface ProtectedResourceService { @GET String getUserName(); }", "import org.eclipse.microprofile.rest.client.annotation.RegisterProvider; import org.eclipse.microprofile.rest.client.inject.RegisterRestClient; import io.quarkus.oidc.token.propagation.reactive.AccessTokenRequestReactiveFilter; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; @RegisterRestClient @RegisterProvider(AccessTokenRequestReactiveFilter.class) @Path(\"/\") public interface ProtectedResourceService { @GET String getUserName(); }", "quarkus.oidc-client.auth-server-url=http://localhost:8180/auth/realms/quarkus quarkus.oidc-client.client-id=quarkus-app quarkus.oidc-client.credentials.secret=secret quarkus.oidc-client.grant.type=exchange quarkus.oidc-client.grant-options.exchange.audience=quarkus-app-exchange quarkus.resteasy-client-oidc-token-propagation.exchange-token=true 1", "quarkus.oidc-client.auth-server-url=USD{azure.provider.url} quarkus.oidc-client.client-id=quarkus-app quarkus.oidc-client.credentials.secret=secret quarkus.oidc-client.grant.type=jwt quarkus.oidc-client.grant-options.jwt.requested_token_use=on_behalf_of quarkus.oidc-client.scopes=https://graph.microsoft.com/user.read,offline_access quarkus.resteasy-client-oidc-token-propagation.exchange-token=true", "import org.eclipse.microprofile.rest.client.inject.RegisterRestClient; import io.quarkus.oidc.token.propagation.AccessToken; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; @RegisterRestClient @AccessToken @Path(\"/\") public interface ProtectedResourceService { @GET String getUserName(); }", "import org.eclipse.microprofile.rest.client.annotation.RegisterProvider; import org.eclipse.microprofile.rest.client.inject.RegisterRestClient; import io.quarkus.oidc.token.propagation.AccessTokenRequestFilter; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; @RegisterRestClient @RegisterProvider(AccessTokenRequestFilter.class) @Path(\"/\") public interface ProtectedResourceService { @GET String getUserName(); }", "quarkus.oidc-client.auth-server-url=http://localhost:8180/auth/realms/quarkus quarkus.oidc-client.client-id=quarkus-app 
quarkus.oidc-client.credentials.secret=secret quarkus.oidc-client.grant.type=exchange quarkus.oidc-client.grant-options.exchange.audience=quarkus-app-exchange quarkus.resteasy-client-oidc-token-propagation.exchange-token=true", "quarkus.oidc-client.auth-server-url=USD{azure.provider.url} quarkus.oidc-client.client-id=quarkus-app quarkus.oidc-client.credentials.secret=secret quarkus.oidc-client.grant.type=jwt quarkus.oidc-client.grant-options.jwt.requested_token_use=on_behalf_of quarkus.oidc-client.scopes=https://graph.microsoft.com/user.read,offline_access quarkus.resteasy-client-oidc-token-propagation.exchange-token=true", "import org.eclipse.microprofile.rest.client.inject.RegisterRestClient; import io.quarkus.oidc.token.propagation.JsonWebToken; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; @RegisterRestClient @JsonWebToken @Path(\"/\") public interface ProtectedResourceService { @GET String getUserName(); }", "import org.eclipse.microprofile.rest.client.annotation.RegisterProvider; import org.eclipse.microprofile.rest.client.inject.RegisterRestClient; import io.quarkus.oidc.token.propagation.JsonWebTokenRequestFilter; import jakarta.ws.rs.GET; import jakarta.ws.rs.Path; @RegisterRestClient @RegisterProvider(JsonWebTokenRequestFilter.class) @Path(\"/\") public interface ProtectedResourceService { @GET String getUserName(); }", "quarkus.resteasy-client-oidc-token-propagation.secure-json-web-token=true smallrye.jwt.sign.key.location=/privateKey.pem Set a new issuer smallrye.jwt.new-token.issuer=http://frontend-resource Set a new audience smallrye.jwt.new-token.audience=http://downstream-resource Override the existing token issuer and audience claims if they are already set smallrye.jwt.new-token.override-matching-claims=true" ]
https://docs.redhat.com/en/documentation/red_hat_build_of_quarkus/3.15/html/openid_connect_oidc_client_and_token_propagation/security-openid-connect-client-reference
Chapter 5. Performance Co-Pilot (PCP)
Chapter 5. Performance Co-Pilot (PCP) 5.1. PCP Overview and Resources Red Hat Enterprise Linux 7 provides support for Performance Co-Pilot ( PCP ), a suite of tools, services, and libraries for monitoring, visualizing, storing, and analyzing system-level performance measurements. Its lightweight distributed architecture makes it particularly well-suited for centralized analysis of complex systems. Performance metrics can be added using the Python, Perl, C++, and C interfaces. Analysis tools can use the client APIs (Python, C++, C) directly, and rich web applications can explore all available performance data using a JSON interface. PCP allows: the monitoring and management of real-time data the logging and retrieval of historical data You can use historical data to analyze patterns and diagnose issues by comparing live results with archived data. The Performance Metric Collection Daemon ( pmcd ) is responsible for collecting performance data on the host system, and various client tools, such as pminfo or pmstat , can be used to retrieve, display, archive, and process this data on the same host or over the network. The pcp package provides the command-line tools and underlying functionality. The graphical tool requires the pcp-gui package. For a list of system services and tools that are distributed with PCP, see Table A.1, "System Services Distributed with Performance Co-Pilot in Red Hat Enterprise Linux 7" and Table A.2, "Tools Distributed with Performance Co-Pilot in Red Hat Enterprise Linux 7" . Resources The manual page named PCPIntro serves as an introduction to Performance Co-Pilot. It provides a list of available tools as well as a description of available configuration options and a list of related manual pages. By default, comprehensive documentation is installed in the /usr/share/doc/pcp-doc/ directory, notably the Performance Co-Pilot User's and Administrator's Guide and Performance Co-Pilot Programmer's Guide . For more information on PCP, see the Index of Performance Co-Pilot (PCP) articles, solutions, tutorials and white papers on the Red Hat Customer Portal. If you need to determine which PCP tool has the functionality of an older tool you are already familiar with, see the Side-by-side comparison of PCP tools with legacy tools Red Hat Knowledgebase article. See the official PCP documentation for an in-depth description of the Performance Co-Pilot and its usage. If you want to start using PCP on Red Hat Enterprise Linux quickly, see the PCP Quick Reference Guide . The official PCP website also contains a list of frequently asked questions .
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/performance_tuning_guide/ch-performance-co-pilot
function::cpu_clock_s
function::cpu_clock_s Name function::cpu_clock_s - Number of seconds on the given cpu's clock Synopsis Arguments cpu Which processor's clock to read Description This function returns the number of seconds on the given cpu's clock. The value is always monotonic when compared on the same cpu, but there may be some drift between cpus (within about a jiffy).
[ "cpu_clock_s:long(cpu:long)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-cpu-clock-s
Chapter 2. Upgrading your broker
Chapter 2. Upgrading your broker 2.1. About upgrades Red Hat releases new versions of AMQ Broker to the Customer Portal . Update your brokers to the newest version to ensure that you have the latest enhancements and fixes. In general, Red Hat releases a new version of AMQ Broker in one of three ways: Major Release A major upgrade or migration is required when an application is transitioned from one major release to the next, for example, from AMQ Broker 6 to AMQ Broker 7. This type of upgrade is not addressed in this guide. Minor Release AMQ Broker periodically provides minor releases, which are updates that include new features, as well as bug and security fixes. If you plan to upgrade from one AMQ Broker minor release to another, for example, from AMQ Broker 7.0 to AMQ Broker 7.1, code changes should not be required for applications that do not use private, unsupported, or tech preview components. Micro Release AMQ Broker also periodically provides micro releases that contain minor enhancements and fixes. Micro releases increment the minor release version by the last digit, for example from 7.0.1 to 7.0.2. A micro release should not require code changes; however, some releases may require configuration changes. 2.2. Upgrading older 7.x versions 2.2.1. Upgrading a broker instance from 7.0.x to 7.0.y The procedure for upgrading AMQ Broker from one version of 7.0 to another is similar to the one for installation: you download an archive from the Customer Portal and then extract it. The following subsections describe how to upgrade a 7.0.x broker for different operating systems. Upgrading from 7.0.x to 7.0.y on Linux Upgrading from 7.0.x to 7.0.y on Windows 2.2.1.1. Upgrading from 7.0.x to 7.0.y on Linux The name of the archive that you download could differ from what is used in the following examples. Prerequisites Before upgrading AMQ Broker, review the release notes for the target release. The release notes describe important enhancements, known issues, and changes to behavior in the target release. For more information, see the AMQ Broker 7.0 Release Notes . Procedure Download the desired archive from the Red Hat Customer Portal by following the instructions provided in Downloading the AMQ Broker archive . Change the owner of the archive to the same user that owns the AMQ Broker installation to be upgraded. Move the archive to the directory created during the original installation of AMQ Broker. In the following example, the directory /opt/redhat is used. As the directory owner, extract the contents of the compressed archive. The archive is kept in a compressed format. In the following example, the user amq-broker extracts the archive by using the unzip command. Stop the broker if it is running. Back up the instance directory of the broker by copying it to the home directory of the current user. (Optional) Note the current version of the broker. After the broker stops, a line similar to the one below is displayed at the end of its log file, which can be found at <broker_instance_dir> /log/artemis.log . Edit the <broker_instance_dir> /etc/artemis.profile configuration file to set the ARTEMIS_HOME property to the new directory created when the archive was extracted. Start the upgraded broker. (Optional) Confirm that the broker is running and that the version has changed. After starting the broker, open the log file <broker_instance_dir> /log/artemis.log and find two lines similar to the ones below. Note the new version number that appears in the log after the broker is live. 2.2.1.2.
Upgrading from 7.0.x to 7.0.y on Windows Prerequisites Before upgrading AMQ Broker, review the release notes for the target release. The release notes describe important enhancements, known issues, and changes to behavior in the target release. For more information, see the AMQ Broker 7.0 Release Notes . Procedure Download the desired archive from the Red Hat Customer Portal by following the instructions provided in Downloading the AMQ Broker archive . Use a file manager to move the archive to the folder you created during the last installation of AMQ Broker. Extract the contents of the archive. Right-click the .zip file and select Extract All . Stop the broker if it is running by entering the following command. Back up the broker by using a file manager. Right-click the <broker_instance_dir> folder and select Copy . Right-click in the same window and select Paste . (Optional) Note the current version of the broker. After the broker stops, a line similar to the one below is displayed at the end of its log file, which can be found at <broker_instance_dir> \log\artemis.log . Edit the <broker_instance_dir> \etc\artemis.profile configuration file to set the ARTEMIS_HOME property to the new directory created when the archive was extracted. Start the upgraded broker. (Optional) Confirm that the broker is running and that the version has changed. After starting the broker, open the log file <broker_instance_dir> \log\artemis.log and find two lines similar to the ones below. Note the new version number that appears in the log after the broker is live. 2.2.2. Upgrading a broker instance from 7.0.x to 7.1.0 AMQ Broker 7.1.0 includes configuration files and settings that were not included with previous versions. Upgrading a broker instance from 7.0.x to 7.1.0 requires adding these new files and settings to your existing 7.0.x broker instances. The following subsections describe how to upgrade a 7.0.x broker instance to 7.1.0 for different operating systems. Important Starting with AMQ Broker 7.1.0, you can access AMQ Management Console only from the local host by default. To learn about configuring remote access to the console, see Configuring local and remote access to AMQ Management Console . Upgrading from 7.0.x to 7.1.0 on Linux Upgrading from 7.0.x to 7.1.0 on Windows 2.2.2.1. Upgrading from 7.0.x to 7.1.0 on Linux Before you can upgrade a 7.0.x broker, you need to install Red Hat AMQ Broker 7.1.0 and create a temporary broker instance. This will generate the 7.1.0 configuration files required to upgrade a 7.0.x broker. Prerequisites Before upgrading AMQ Broker, review the release notes for the target release. The release notes describe important enhancements, known issues, and changes to behavior in the target release. For more information, see the AMQ Broker 7.1 Release Notes . Before upgrading your 7.0.x brokers, you must first install version 7.1. For steps on installing 7.1 on Linux, see Installing AMQ Broker . Procedure If it is running, stop the 7.0.x broker you want to upgrade: Back up the instance directory of the broker by copying it to the home directory of the current user. Open the file artemis.profile in the <broker_instance_dir> /etc/ directory of the 7.0.x broker.
Update the ARTEMIS_HOME property so that its value refers to the installation directory for AMQ Broker 7.1.0: On the line below the one you updated, add the property ARTEMIS_INSTANCE_URI and assign it a value that refers to the 7.0.x broker instance directory: Update the JAVA_ARGS property by adding the jolokia.policyLocation parameter and assigning it the following value: Create a 7.1.0 broker instance. The creation procedure generates the configuration files required to upgrade from 7.0.x to 7.1.0. In the following example, note that the instance is created in the directory upgrade_tmp : Copy configuration files from the etc directory of the temporary 7.1.0 instance into the <broker_instance_dir> /etc/ directory of the 7.0.x broker. Copy the management.xml file: Copy the jolokia-access.xml file: Open up the bootstrap.xml file in the <broker_instance_dir> /etc/ directory of the 7.0.x broker. Comment out or delete the following two lines: Add the following to replace the two lines removed in the previous step: Start the broker that you upgraded: Additional Resources For more information about creating an instance of the broker, see Creating a broker instance . 2.2.2.2. Upgrading from 7.0.x to 7.1.0 on Windows Before you can upgrade a 7.0.x broker, you need to install Red Hat AMQ Broker 7.1.0 and create a temporary broker instance. This will generate the 7.1.0 configuration files required to upgrade a 7.0.x broker. Prerequisites Before upgrading AMQ Broker, review the release notes for the target release. The release notes describe important enhancements, known issues, and changes to behavior in the target release. For more information, see the AMQ Broker 7.1 Release Notes . Before upgrading your 7.0.x brokers, you must first install version 7.1. For steps on installing 7.1 on Windows, see Installing AMQ Broker . Procedure If it is running, stop the 7.0.x broker you want to upgrade: Back up the instance directory of the broker by using a file manager. Right-click the <broker_instance_dir> folder and select Copy . Right-click in the same window and select Paste . Open the file artemis.profile in the <broker_instance_dir> /etc/ directory of the 7.0.x broker. Update the ARTEMIS_HOME property so that its value refers to the installation directory for AMQ Broker 7.1.0: On the line below the one you updated, add the property ARTEMIS_INSTANCE_URI and assign it a value that refers to the 7.0.x broker instance directory: Update the JAVA_ARGS property by adding the jolokia.policyLocation parameter and assigning it the following value: Create a 7.1.0 broker instance. The creation procedure generates the configuration files required to upgrade from 7.0.x to 7.1.0. In the following example, note that the instance is created in the directory upgrade_tmp : Copy configuration files from the etc directory of the temporary 7.1.0 instance into the <broker_instance_dir> /etc/ directory of the 7.0.x broker. Copy the management.xml file: Copy the jolokia-access.xml file: Open up the bootstrap.xml file in the <broker_instance_dir> /etc/ directory of the 7.0.x broker. Comment out or delete the following two lines: Add the following to replace the two lines removed in the previous step: Start the broker that you upgraded: Additional Resources For more information about creating an instance of the broker, see Creating a broker instance . 2.2.3. Upgrading a broker instance from 7.1.x to 7.2.0 AMQ Broker 7.2.0 includes configuration files and settings that were not included with 7.0.x versions.
If you are running 7.0.x instances, you must first upgrade those broker instances from 7.0.x to 7.1.0 before upgrading to 7.2.0. The following subsections describe how to upgrade a 7.1.x broker instance to 7.2.0 for different operating systems. Important Starting with AMQ Broker 7.1.0, you can access AMQ Management Console only from the local host by default. To learn about configuring remote access to the console, see Configuring local and remote access to AMQ Management Console . Upgrading from 7.1.x to 7.2.0 on Linux Upgrading from 7.1.x to 7.2.0 on Windows 2.2.3.1. Upgrading from 7.1.x to 7.2.0 on Linux Note The name of the archive that you download could differ from what is used in the following examples. Procedure Download the desired archive from the Red Hat Customer Portal by following the instructions provided in Downloading the AMQ Broker archive . Change the owner of the archive to the same user that owns the AMQ Broker installation to be upgraded. Move the archive to the directory created during the original installation of AMQ Broker. In the following example, the directory /opt/redhat is used. As the directory owner, extract the contents of the compressed archive. In the following example, the user amq-broker extracts the archive by using the unzip command. Stop the broker if it is running. Back up the instance directory of the broker by copying it to the home directory of the current user. (Optional) Note the current version of the broker. After the broker stops, a line similar to the one below is displayed at the end of its log file, which can be found at <broker_instance_dir> /log/artemis.log . Edit the <broker_instance_dir> /etc/artemis.profile configuration file to set the ARTEMIS_HOME property to the new directory created when the archive was extracted. Start the upgraded broker. (Optional) Confirm that the broker is running and that the version has changed. After starting the broker, open the log file <broker_instance_dir> /log/artemis.log and find two lines similar to the ones below. Note the new version number that appears in the log after the broker is live. Additional Resources For more information about creating an instance of the broker, see Creating a broker instance . You can now store a broker instance's configuration files and data in any custom directory, including locations outside of the broker instance's directory. In the <broker_instance_dir> /etc/artemis.profile file, update the ARTEMIS_INSTANCE_ETC_URI property by specifying the location of the custom directory after creating the broker instance. Previously, these configuration files and data could only be stored in the etc/ and data/ directories within the broker instance's directory. 2.2.3.2. Upgrading from 7.1.x to 7.2.0 on Windows Procedure Download the desired archive from the Red Hat Customer Portal by following the instructions provided in Downloading the AMQ Broker archive . Use a file manager to move the archive to the folder you created during the last installation of AMQ Broker. Extract the contents of the archive. Right-click the .zip file and select Extract All . Stop the broker if it is running by entering the following command. Back up the broker by using a file manager. Right-click the <broker_instance_dir> folder and select Copy . Right-click in the same window and select Paste . (Optional) Note the current version of the broker. 
After the broker stops, a line similar to the one below is displayed at the end of its log file, which can be found at <broker_instance_dir> \log\artemis.log . Edit the <broker_instance_dir> \etc\artemis.profile.cmd and <broker_instance_dir> \bin\artemis-service.xml configuration files to set the ARTEMIS_HOME property to the new directory created when the archive was extracted. Start the upgraded broker. (Optional) Confirm that the broker is running and that the version has changed. After starting the broker, open the log file <broker_instance_dir> \log\artemis.log and find two lines similar to the ones below. Note the new version number that appears in the log after the broker is live. Additional Resources For more information about creating an instance of the broker, see Creating a broker instance . You can now store a broker instance's configuration files and data in any custom directory, including locations outside of the broker instance's directory. In the <broker_instance_dir> \etc\artemis.profile file, update the ARTEMIS_INSTANCE_ETC_URI property by specifying the location of the custom directory after creating the broker instance. Previously, these configuration files and data could only be stored in the \etc and \data directories within the broker instance's directory. 2.2.4. Upgrading a broker instance from 7.2.x to 7.3.0 The following subsections describe how to upgrade a 7.2.x broker instance to 7.3.0 for different operating systems. 2.2.4.1. Resolve exception due to deprecated dispatch console Starting in version 7.3.0, AMQ Broker no longer ships with the Hawtio dispatch console plugin dispatch-hawtio-console.war . Previously, the dispatch console was used to manage AMQ Interconnect. However, AMQ Interconnect now uses its own, standalone web console. This change affects the upgrade procedures in the sections that follow. If you take no further action before upgrading your broker instance to 7.3.0, the upgrade process produces an exception that looks like the following: You can safely ignore the preceding exception without affecting the success of your upgrade. However, if you would prefer not to see this exception during your upgrade, you must first remove a reference to the Hawtio dispatch console plugin in the bootstrap.xml file of your existing broker instance. The bootstrap.xml file is in the {instance_directory}/etc/ directory of your broker instance. The following example shows some of the contents of the bootstrap.xml file for a AMQ Broker 7.2.4 instance: To avoid an exception when upgrading AMQ Broker to version 7.3.0, delete the line <app url="dispatch-hawtio-console" war="dispatch-hawtio-console.war"/> , as shown in the preceding example. Then, save the modified bootstrap file and start the upgrade process, as described in the sections that follow. Important Starting with AMQ Broker 7.1.0, you can access AMQ Management Console only from the local host by default. To learn about configuring remote access to the console, see Configuring local and remote access to AMQ Management Console . Upgrading from 7.2.x to 7.3.0 on Linux Upgrading from 7.2.x to 7.3.0 on Windows 2.2.4.2. Upgrading from 7.2.x to 7.3.0 on Linux Note The name of the archive that you download could differ from what is used in the following examples. Procedure Download the desired archive from the Red Hat Customer Portal by following the instructions provided in Downloading the AMQ Broker archive . Change the owner of the archive to the same user that owns the AMQ Broker installation to be upgraded. 
Move the archive to the directory created during the original installation of AMQ Broker. In the following example, the directory /opt/redhat is used. As the directory owner, extract the contents of the compressed archive. In the following example, the user amq-broker extracts the archive by using the unzip command. Stop the broker if it is running. Back up the instance directory of the broker by copying it to the home directory of the current user. (Optional) Note the current version of the broker. After the broker stops, a line similar to the one below is displayed at the end of its log file, which can be found at <broker_instance_dir> /log/artemis.log . Edit the <broker_instance_dir> /etc/artemis.profile configuration file to set the ARTEMIS_HOME property to the new directory created when the archive was extracted. Start the upgraded broker. (Optional) Confirm that the broker is running and that the version has changed. After starting the broker, open the log file <broker_instance_dir> /log/artemis.log and find two lines similar to the ones below. Note the new version number that appears in the log after the broker is live. Additional Resources For more information about creating an instance of the broker, see Creating a broker instance . You can now store a broker instance's configuration files and data in any custom directory, including locations outside of the broker instance's directory. In the <broker_instance_dir> /etc/artemis.profile file, update the ARTEMIS_INSTANCE_ETC_URI property by specifying the location of the custom directory after creating the broker instance. Previously, these configuration files and data could only be stored in the etc/ and data/ directories within the broker instance's directory. 2.2.4.3. Upgrading from 7.2.x to 7.3.0 on Windows Procedure Download the desired archive from the Red Hat Customer Portal by following the instructions provided in Downloading the AMQ Broker archive . Use a file manager to move the archive to the folder you created during the last installation of AMQ Broker. Extract the contents of the archive. Right-click the .zip file and select Extract All . Stop the broker if it is running by entering the following command. Back up the broker by using a file manager. Right-click the <broker_instance_dir> folder and select Copy . Right-click in the same window and select Paste . (Optional) Note the current version of the broker. After the broker stops, a line similar to the one below is displayed at the end of its log file, which can be found at <broker_instance_dir> \log\artemis.log . Edit the <broker_instance_dir> \etc\artemis.profile.cmd and <broker_instance_dir> \bin\artemis-service.xml configuration files to set the ARTEMIS_HOME property to the new directory created when the archive was extracted. Edit the <broker_instance_dir> \etc\artemis.profile.cmd configuration file to set the JAVA_ARGS environment variable to reference the correct log manager version. Edit the <broker_instance_dir> \bin\artemis-service.xml configuration file to set the bootstrap class path start argument to reference the correct log manager version. Start the upgraded broker. (Optional) Confirm that the broker is running and that the version has changed. After starting the broker, open the log file <broker_instance_dir> \log\artemis.log and find two lines similar to the ones below. Note the new version number that appears in the log after the broker is live. 
Additional Resources For more information about creating an instance of the broker, see Creating a broker instance . You can now store a broker instance's configuration files and data in any custom directory, including locations outside of the broker instance's directory. In the <broker_instance_dir> \etc\artemis.profile file, update the ARTEMIS_INSTANCE_ETC_URI property by specifying the location of the custom directory after creating the broker instance. Previously, these configuration files and data could only be stored in the \etc and \data directories within the broker instance's directory. 2.2.5. Upgrading a broker instance from 7.3.0 to 7.4.0 The following subsections describe how to upgrade a 7.3.0 broker instance to 7.4.0 for different operating systems. Important Starting with AMQ Broker 7.1.0, you can access AMQ Management Console only from the local host by default. To learn about configuring remote access to the console, see Configuring local and remote access to AMQ Management Console . Upgrading from 7.3.0 to 7.4.0 on Linux Upgrading from 7.3.0 to 7.4.0 on Windows 2.2.5.1. Upgrading from 7.3.0 to 7.4.0 on Linux Note The name of the archive that you download could differ from what is used in the following examples. Procedure Download the desired archive from the Red Hat Customer Portal. Follow the instructions provided in Downloading the AMQ Broker archive . Change the owner of the archive to the same user that owns the AMQ Broker installation to be upgraded. The following example shows a user called amq-broker . Move the archive to the directory created during the original installation of AMQ Broker. The following example uses /opt/redhat . As the directory owner, extract the contents of the compressed archive. In the following example, the user amq-broker extracts the archive using the unzip command. If the broker is running, stop it. Back up the instance directory of the broker by copying it to the home directory of the current user. (Optional) Note the current version of the broker. After the broker stops, you see a line similar to the one below at the end of the <broker_instance_dir> /log/artemis.log file. Edit the <broker_instance_dir> /etc/artemis.profile configuration file. Set the ARTEMIS_HOME property to the new directory created when the archive was extracted. Edit the JAVA_ARGS property. Add the bootstrap class path argument, which references a dependent file for the log manager. Edit the <broker_instance_dir> /etc/bootstrap.xml configuration file. In the <web> configuration element, add a reference to the metrics plugin file for AMQ Broker. Start the upgraded broker. (Optional) Confirm that the broker is running and that the version has changed. After starting the broker, open the <broker_instance_dir> /log/artemis.log file. Find two lines similar to the ones below. Note the new version number that appears in the log when the broker is live. Additional Resources For more information about creating an instance of the broker, see Creating a broker instance . You can now store a broker instance's configuration files and data in any custom directory, including locations outside of the broker instance's directory. In the <broker_instance_dir> /etc/artemis.profile file, update the ARTEMIS_INSTANCE_ETC_URI property by specifying the location of the custom directory after creating the broker instance. Previously, these configuration files and data could only be stored in the etc/ and data/ directories within the broker instance's directory. 2.2.5.2. 
Upgrading from 7.3.0 to 7.4.0 on Windows Procedure Download the desired archive from the Red Hat Customer Portal. Follow the instructions provided in Downloading the AMQ Broker archive . Use a file manager to move the archive to the folder you created during the last installation of AMQ Broker. Extract the contents of the archive. Right-click the .zip file and select Extract All . If the broker is running, stop it. Back up the broker using a file manager. Right-click the <broker_instance_dir> folder and select Copy . Right-click in the same window and select Paste . (Optional) Note the current version of the broker. After the broker stops, you see a line similar to the one below at the end of the <broker_instance_dir> \log\artemis.log file. Edit the <broker_instance_dir> \etc\artemis.profile.cmd and <broker_instance_dir> \bin\artemis-service.xml configuration files. Set the ARTEMIS_HOME property to the new directory created when the archive was extracted. Edit the <broker_instance_dir> \etc\artemis.profile.cmd configuration file. Set the JAVA_ARGS environment variable to reference the correct log manager version and dependent file. Edit the <broker_instance_dir> \bin\artemis-service.xml configuration file. Set the bootstrap class path start argument to reference the correct log manager version and dependent file. Edit the <broker_instance_dir> \etc\bootstrap.xml configuration file. In the <web> configuration element, add a reference to the metrics plugin file for AMQ Broker. Start the upgraded broker. (Optional) Confirm that the broker is running and that the version has changed. After starting the broker, open the <broker_instance_dir> \log\artemis.log file. Find two lines similar to the ones below. Note the new version number that appears in the log when the broker is live. Additional Resources For more information about creating an instance of the broker, see Creating a broker instance . You can now store a broker instance's configuration files and data in any custom directory, including locations outside of the broker instance's directory. In the <broker_instance_dir> \etc\artemis.profile file, update the ARTEMIS_INSTANCE_ETC_URI property by specifying the location of the custom directory after creating the broker instance. Previously, these configuration files and data could only be stored in the \etc and \data directories within the broker instance's directory. 2.3. Upgrading a broker instance from 7.4.0 to 7.4.x Important AMQ Broker 7.4 has been designated as a Long Term Support (LTS) release version. Bug fixes and security advisories will be made available for AMQ Broker 7.4 in a series of micro releases (7.4.1, 7.4.2, and so on) for a period of at least 12 months. This means that you will be able to get recent bug fixes and security advisories for AMQ Broker without having to upgrade to a new minor release. For more information, see Long Term Support for AMQ Broker . Important Starting with AMQ Broker 7.1.0, you can access AMQ Management Console only from the local host by default. To learn about configuring remote access to the console, see Configuring local and remote access to AMQ Management Console . The following subsections describe how to upgrade a 7.4.0 broker instance to 7.4.x for different operating systems. Upgrading from 7.4.0 to 7.4.x on Linux Upgrading from 7.4.0 to 7.4.x on Windows 2.3.1. Upgrading from 7.4.0 to 7.4.x on Linux Note The name of the archive that you download could differ from what is used in the following examples. 
Procedure Download the desired archive from the Red Hat Customer Portal. Follow the instructions provided in Downloading the AMQ Broker archive . Change the owner of the archive to the same user that owns the AMQ Broker installation to be upgraded. The following example shows a user called amq-broker . Move the archive to the directory created during the original installation of AMQ Broker. The following example uses /opt/redhat . As the directory owner, extract the contents of the compressed archive. In the following example, the user amq-broker extracts the archive using the unzip command. If the broker is running, stop it. Back up the instance directory of the broker by copying it to the home directory of the current user. (Optional) Note the current version of the broker. After the broker stops, you see a line similar to the one below at the end of the <broker_instance_dir> /log/artemis.log file. Edit the <broker_instance_dir> /etc/artemis.profile configuration file. Set the ARTEMIS_HOME property to the new directory created when the archive was extracted. Start the upgraded broker. (Optional) Confirm that the broker is running and that the version has changed. After starting the broker, open the <broker_instance_dir> /log/artemis.log file. Find two lines similar to the ones below. Note the new version number that appears in the log when the broker is live. Additional Resources For more information about creating an instance of the broker, see Creating a broker instance . You can now store a broker instance's configuration files and data in any custom directory, including locations outside of the broker instance's directory. In the <broker_instance_dir> /etc/artemis.profile file, update the ARTEMIS_INSTANCE_ETC_URI property by specifying the location of the custom directory after creating the broker instance. Previously, these configuration files and data could only be stored in the etc/ and data/ directories within the broker instance's directory. 2.3.2. Upgrading from 7.4.0 to 7.4.x on Windows Procedure Download the desired archive from the Red Hat Customer Portal. Follow the instructions provided in Downloading the AMQ Broker archive . Use a file manager to move the archive to the folder you created during the last installation of AMQ Broker. Extract the contents of the archive. Right-click the .zip file and select Extract All . If the broker is running, stop it. Back up the broker using a file manager. Right-click the <broker_instance_dir> folder and select Copy . Right-click in the same window and select Paste . (Optional) Note the current version of the broker. After the broker stops, you see a line similar to the one below at the end of the <broker_instance_dir> \log\artemis.log file. Edit the <broker_instance_dir> \etc\artemis.profile.cmd and <broker_instance_dir> \bin\artemis-service.xml configuration files. Set the ARTEMIS_HOME property to the new directory created when the archive was extracted. Start the upgraded broker. (Optional) Confirm that the broker is running and that the version has changed. After starting the broker, open the <broker_instance_dir> \log\artemis.log file. Find two lines similar to the ones below. Note the new version number that appears in the log when the broker is live. Additional Resources For more information about creating an instance of the broker, see Creating a broker instance . You can now store a broker instance's configuration files and data in any custom directory, including locations outside of the broker instance's directory. 
In the <broker_instance_dir> \etc\artemis.profile file, update the ARTEMIS_INSTANCE_ETC_URI property by specifying the location of the custom directory after creating the broker instance. Previously, these configuration files and data could only be stored in the \etc and \data directories within the broker instance's directory. 2.4. Upgrading a broker instance from 7.4.x to 7.5.0 The following subsections describe how to upgrade a 7.4.x broker instance to 7.5.0 for different operating systems. Important Starting with AMQ Broker 7.1.0, you can access AMQ Management Console only from the local host by default. To learn about configuring remote access to the console, see Configuring local and remote access to AMQ Management Console . Upgrading from 7.4.x to 7.5.0 on Linux Upgrading from 7.4.x to 7.5.0 on Windows 2.4.1. Upgrading from 7.4.x to 7.5.0 on Linux Note The name of the archive that you download could differ from what is used in the following examples. Procedure Download the desired archive from the Red Hat Customer Portal. Follow the instructions provided in Downloading the AMQ Broker archive . Change the owner of the archive to the same user that owns the AMQ Broker installation to be upgraded. The following example shows a user called amq-broker . Move the archive to the directory created during the original installation of AMQ Broker. The following example uses /opt/redhat . As the directory owner, extract the contents of the compressed archive. In the following example, the user amq-broker extracts the archive using the unzip command. If the broker is running, stop it. Back up the instance directory of the broker by copying it to the home directory of the current user. (Optional) Note the current version of the broker. After the broker stops, you see a line similar to the one below at the end of the <broker_instance_dir> /log/artemis.log file. Edit the <broker_instance_dir> /etc/artemis.profile configuration file. Set the ARTEMIS_HOME property to the new directory created when the archive was extracted. Edit the JAVA_ARGS property. Add the bootstrap class path argument, which references a dependent file for the log manager. Start the upgraded broker. (Optional) Confirm that the broker is running and that the version has changed. After starting the broker, open the <broker_instance_dir> /log/artemis.log file. Find two lines similar to the ones below. Note the new version number that appears in the log when the broker is live. Additional Resources For more information about creating an instance of the broker, see Creating a broker instance . You can now store a broker instance's configuration files and data in any custom directory, including locations outside of the broker instance's directory. In the <broker_instance_dir> /etc/artemis.profile file, update the ARTEMIS_INSTANCE_ETC_URI property by specifying the location of the custom directory after creating the broker instance. Previously, these configuration files and data could only be stored in the etc/ and data/ directories within the broker instance's directory. 2.4.2. Upgrading from 7.4.x to 7.5.0 on Windows Procedure Download the desired archive from the Red Hat Customer Portal. Follow the instructions provided in Downloading the AMQ Broker archive . Use a file manager to move the archive to the folder you created during the last installation of AMQ Broker. Extract the contents of the archive. Right-click the .zip file and select Extract All . If the broker is running, stop it. Back up the broker using a file manager. 
Right-click the <broker_instance_dir> folder and select Copy . Right-click in the same window and select Paste . (Optional) Note the current version of the broker. After the broker stops, you see a line similar to the one below at the end of the <broker_instance_dir> \log\artemis.log file. Edit the <broker_instance_dir> \etc\artemis.profile.cmd and <broker_instance_dir> \bin\artemis-service.xml configuration files. Set the ARTEMIS_HOME property to the new directory created when the archive was extracted. Edit the <broker_instance_dir> \etc\artemis.profile.cmd configuration file. Set the JAVA_ARGS environment variable to reference the correct log manager version and dependent file. Edit the <broker_instance_dir> \bin\artemis-service.xml configuration file. Set the bootstrap class path start argument to reference the correct log manager version and dependent file. Start the upgraded broker. (Optional) Confirm that the broker is running and that the version has changed. After starting the broker, open the <broker_instance_dir> \log\artemis.log file. Find two lines similar to the ones below. Note the new version number that appears in the log when the broker is live. Additional Resources For more information about creating an instance of the broker, see Creating a broker instance . You can now store a broker instance's configuration files and data in any custom directory, including locations outside of the broker instance's directory. In the <broker_instance_dir> \etc\artemis.profile file, update the ARTEMIS_INSTANCE_ETC_URI property by specifying the location of the custom directory after creating the broker instance. Previously, these configuration files and data could only be stored in the \etc and \data directories within the broker instance's directory. 2.5. Upgrading a broker instance from 7.5.0 to 7.6.0 The following subsections describe how to upgrade a 7.5.0 broker instance to 7.6.0 for different operating systems. Important Starting with AMQ Broker 7.1.0, you can access AMQ Management Console only from the local host by default. To learn about configuring remote access to the console, see Configuring local and remote access to AMQ Management Console . Upgrading from 7.5.0 to 7.6.0 on Linux Upgrading from 7.5.0 to 7.6.0 on Windows 2.5.1. Upgrading from 7.5.0 to 7.6.0 on Linux Note The name of the archive that you download could differ from what is used in the following examples. Procedure Download the desired archive from the Red Hat Customer Portal. Follow the instructions provided in Downloading the AMQ Broker archive . Change the owner of the archive to the same user that owns the AMQ Broker installation to be upgraded. The following example shows a user called amq-broker . Move the archive to the directory created during the original installation of AMQ Broker. The following example uses /opt/redhat . As the directory owner, extract the contents of the compressed archive. In the following example, the user amq-broker extracts the archive using the unzip command. If the broker is running, stop it. Back up the instance directory of the broker by copying it to the home directory of the current user. (Optional) Note the current version of the broker. After the broker stops, you see a line similar to the one below at the end of the <broker_instance_dir> /log/artemis.log file. Edit the <broker_instance_dir> /etc/artemis.profile configuration file. Set the ARTEMIS_HOME property to the new directory created when the archive was extracted. Edit the JAVA_ARGS property. 
Add the bootstrap class path argument, which references a dependent file for the log manager. Start the upgraded broker. (Optional) Confirm that the broker is running and that the version has changed. After starting the broker, open the <broker_instance_dir> /log/artemis.log file. Find two lines similar to the ones below. Note the new version number that appears in the log when the broker is live. Additional Resources For more information about creating an instance of the broker, see Creating a broker instance . You can now store a broker instance's configuration files and data in any custom directory, including locations outside of the broker instance's directory. In the <broker_instance_dir> /etc/artemis.profile file, update the ARTEMIS_INSTANCE_ETC_URI property by specifying the location of the custom directory after creating the broker instance. Previously, these configuration files and data could only be stored in the etc/ and data/ directories within the broker instance's directory. 2.5.2. Upgrading from 7.5.0 to 7.6.0 on Windows Procedure Download the desired archive from the Red Hat Customer Portal. Follow the instructions provided in Downloading the AMQ Broker archive . Use a file manager to move the archive to the folder you created during the last installation of AMQ Broker. Extract the contents of the archive. Right-click the .zip file and select Extract All . If the broker is running, stop it. Back up the broker using a file manager. Right-click the <broker_instance_dir> folder and select Copy . Right-click in the same window and select Paste . (Optional) Note the current version of the broker. After the broker stops, you see a line similar to the one below at the end of the <broker_instance_dir> \log\artemis.log file. Edit the <broker_instance_dir> \etc\artemis.profile.cmd and <broker_instance_dir> \bin\artemis-service.xml configuration files. Set the ARTEMIS_HOME property to the new directory created when the archive was extracted. Edit the <broker_instance_dir> \etc\artemis.profile.cmd configuration file. Set the JAVA_ARGS environment variable to reference the correct log manager version and dependent file. Edit the <broker_instance_dir> \bin\artemis-service.xml configuration file. Set the bootstrap class path start argument to reference the correct log manager version and dependent file. Start the upgraded broker. (Optional) Confirm that the broker is running and that the version has changed. After starting the broker, open the <broker_instance_dir> \log\artemis.log file. Find two lines similar to the ones below. Note the new version number that appears in the log when the broker is live. Additional Resources For more information about creating an instance of the broker, see Creating a broker instance . You can now store a broker instance's configuration files and data in any custom directory, including locations outside of the broker instance's directory. In the <broker_instance_dir> \etc\artemis.profile file, update the ARTEMIS_INSTANCE_ETC_URI property by specifying the location of the custom directory after creating the broker instance. Previously, these configuration files and data could only be stored in the \etc and \data directories within the broker instance's directory. 2.6. Upgrading a broker instance from 7.6.0 to 7.7.0 The following subsections describe how to upgrade a 7.6.0 broker instance to 7.7.0 for different operating systems. Important Starting with AMQ Broker 7.1.0, you can access AMQ Management Console only from the local host by default. 
To learn about configuring remote access to the console, see Configuring local and remote access to AMQ Management Console . Upgrading from 7.6.0 to 7.7.0 on Linux Upgrading from 7.6.0 to 7.7.0 on Windows 2.6.1. Upgrading from 7.6.0 to 7.7.0 on Linux Note The name of the archive that you download could differ from what is used in the following examples. Procedure Download the desired archive from the Red Hat Customer Portal. Follow the instructions provided in Downloading the AMQ Broker archive . Change the owner of the archive to the same user that owns the AMQ Broker installation to be upgraded. The following example shows a user called amq-broker . Move the archive to the directory created during the original installation of AMQ Broker. The following example uses /opt/redhat . As the directory owner, extract the contents of the compressed archive. In the following example, the user amq-broker extracts the archive using the unzip command. If the broker is running, stop it. Back up the instance directory of the broker by copying it to the home directory of the current user. (Optional) Note the current version of the broker. After the broker stops, you see a line similar to the one below at the end of the <broker_instance_dir> /log/artemis.log file. Edit the <broker_instance_dir> /etc/artemis.profile configuration file. Set the ARTEMIS_HOME property to the new directory created when the archive was extracted. For example: Locate the JAVA_ARGS property. Ensure that the bootstrap class path argument references the required version of a dependent file for the log manager, as shown below. Edit the <broker_instance_dir> /etc/logging.properties configuration file. On the list of additional loggers to be configured, include the org.apache.activemq.audit.resource resource logger that was added in AMQ Broker 7.7.0. loggers=org.eclipse.jetty,org.jboss.logging,org.apache.activemq.artemis.core.server,org.apache.activemq.artemis.utils,org.apache.activemq.artemis.journal,org.apache.activemq.artemis.jms.server,org.apache.activemq.artemis.integration.bootstrap,org.apache.activemq.audit.base,org.apache.activemq.audit.message, org.apache.activemq.audit.resource Before the Console handler configuration section, add a default configuration for the resource logger. .. logger.org.apache.activemq.audit.resource.level=ERROR logger.org.apache.activemq.audit.resource.handlers=AUDIT_FILE logger.org.apache.activemq.audit.resource.useParentHandlers=false # Console handler configuration .. Start the upgraded broker. (Optional) Confirm that the broker is running and that the version has changed. After starting the broker, open the <broker_instance_dir> /log/artemis.log file. Find two lines similar to the ones below. Note the new version number that appears in the log when the broker is live. Additional Resources For more information about creating an instance of the broker, see Creating a broker instance . You can now store a broker instance's configuration files and data in any custom directory, including locations outside of the broker instance's directory. In the <broker_instance_dir> /etc/artemis.profile file, update the ARTEMIS_INSTANCE_ETC_URI property by specifying the location of the custom directory after creating the broker instance. Previously, these configuration files and data could only be stored in the etc/ and data/ directories within the broker instance's directory. 2.6.2. Upgrading from 7.6.0 to 7.7.0 on Windows Procedure Download the desired archive from the Red Hat Customer Portal. 
Follow the instructions provided in Downloading the AMQ Broker archive . Use a file manager to move the archive to the folder you created during the last installation of AMQ Broker. Extract the contents of the archive. Right-click the .zip file and select Extract All . If the broker is running, stop it. Back up the broker using a file manager. Right-click the <broker_instance_dir> folder and select Copy . Right-click in the same window and select Paste . (Optional) Note the current version of the broker. After the broker stops, you see a line similar to the one below at the end of the <broker_instance_dir> \log\artemis.log file. Edit the <broker_instance_dir> \etc\artemis.profile.cmd and <broker_instance_dir> \bin\artemis-service.xml configuration files. Set the ARTEMIS_HOME property to the new directory created when the archive was extracted. Edit the <broker_instance_dir> \etc\artemis.profile.cmd configuration file. Ensure that the JAVA_ARGS environment variable references the correct versions for the log manager and dependent file, as shown below. Edit the <broker_instance_dir> \bin\artemis-service.xml configuration file. Ensure that the bootstrap class path start argument references the correct versions for the log manager and dependent file, as shown below. Edit the <broker_instance_dir> \etc\logging.properties configuration file. On the list of additional loggers to be configured, include the org.apache.activemq.audit.resource resource logger that was added in AMQ Broker 7.7.0. loggers=org.eclipse.jetty,org.jboss.logging,org.apache.activemq.artemis.core.server,org.apache.activemq.artemis.utils,org.apache.activemq.artemis.journal,org.apache.activemq.artemis.jms.server,org.apache.activemq.artemis.integration.bootstrap,org.apache.activemq.audit.base,org.apache.activemq.audit.message, org.apache.activemq.audit.resource Before the Console handler configuration section, add a default configuration for the resource logger. .. logger.org.apache.activemq.audit.resource.level=ERROR logger.org.apache.activemq.audit.resource.handlers=AUDIT_FILE logger.org.apache.activemq.audit.resource.useParentHandlers=false # Console handler configuration .. Start the upgraded broker. (Optional) Confirm that the broker is running and that the version has changed. After starting the broker, open the <broker_instance_dir> \log\artemis.log file. Find two lines similar to the ones below. Note the new version number that appears in the log when the broker is live. Additional Resources For more information about creating an instance of the broker, see Creating a broker instance . You can now store a broker instance's configuration files and data in any custom directory, including locations outside of the broker instance's directory. In the <broker_instance_dir> \etc\artemis.profile file, update the ARTEMIS_INSTANCE_ETC_URI property by specifying the location of the custom directory after creating the broker instance. Previously, these configuration files and data could only be stored in the \etc and \data directories within the broker instance's directory. 2.7. Upgrading a broker instance from 7.7.0 to 7.8.0 The following subsections describe how to upgrade a 7.7.0 broker instance to 7.8.0 for different operating systems. Important Starting with AMQ Broker 7.1.0, you can access AMQ Management Console only from the local host by default. To learn about configuring remote access to the console, see Configuring local and remote access to AMQ Management Console . 
Upgrading from 7.7.0 to 7.8.0 on Linux Upgrading from 7.7.0 to 7.8.0 on Windows 2.7.1. Upgrading from 7.7.0 to 7.8.0 on Linux Note The name of the archive that you download could differ from what is used in the following examples. Procedure Download the desired archive from the Red Hat Customer Portal. Follow the instructions provided in Downloading the AMQ Broker archive . Change the owner of the archive to the same user that owns the AMQ Broker installation to be upgraded. The following example shows a user called amq-broker . Move the archive to the directory created during the original installation of AMQ Broker. The following example uses /opt/redhat . As the directory owner, extract the contents of the compressed archive. In the following example, the user amq-broker extracts the archive using the unzip command. If the broker is running, stop it. Back up the instance directory of the broker by copying it to the home directory of the current user. (Optional) Note the current version of the broker. After the broker stops, you see a line similar to the one below at the end of the <broker_instance_dir> /log/artemis.log file. Edit the <broker_instance_dir> /etc/artemis.profile configuration file. Set the ARTEMIS_HOME property to the new directory created when the archive was extracted. For example: Locate the JAVA_ARGS property. Ensure that the bootstrap class path argument references the required version of a dependent file for the log manager, as shown below. Edit the <broker_instance_dir> /etc/bootstrap.xml configuration file. In the web element, update the name of the .war file required by AMQ Management Console in 7.8. <web bind="http://localhost:8161" path="web"> ... <app url="console" war="hawtio.war"/> ... </web> Start the upgraded broker. (Optional) Confirm that the broker is running and that the version has changed. After starting the broker, open the <broker_instance_dir> /log/artemis.log file. Find two lines similar to the ones below. Note the new version number that appears in the log when the broker is live. Additional Resources For more information about creating an instance of the broker, see Creating a broker instance . You can now store a broker instance's configuration files and data in any custom directory, including locations outside of the broker instance's directory. In the <broker_instance_dir> /etc/artemis.profile file, update the ARTEMIS_INSTANCE_ETC_URI property by specifying the location of the custom directory after creating the broker instance. Previously, these configuration files and data could only be stored in the etc/ and data/ directories within the broker instance's directory. 2.7.2. Upgrading from 7.7.0 to 7.8.0 on Windows Procedure Download the desired archive from the Red Hat Customer Portal. Follow the instructions provided in Downloading the AMQ Broker archive . Use a file manager to move the archive to the folder you created during the last installation of AMQ Broker. Extract the contents of the archive. Right-click the .zip file and select Extract All . If the broker is running, stop it. Back up the broker using a file manager. Right-click the <broker_instance_dir> folder and select Copy . Right-click in the same window and select Paste . (Optional) Note the current version of the broker. After the broker stops, you see a line similar to the one below at the end of the <broker_instance_dir> \log\artemis.log file. Edit the <broker_instance_dir> \etc\artemis.profile.cmd and <broker_instance_dir> \bin\artemis-service.xml configuration files. 
Set the ARTEMIS_HOME property to the new directory created when the archive was extracted. Edit the <broker_instance_dir> \etc\artemis.profile.cmd configuration file. Ensure that the JAVA_ARGS environment variable references the correct versions for the log manager and dependent file, as shown below. Edit the <broker_instance_dir> \bin\artemis-service.xml configuration file. Ensure that the bootstrap class path start argument references the correct versions for the log manager and dependent file, as shown below. Edit the <broker_instance_dir> \etc\bootstrap.xml configuration file. In the web element, update the name of the .war file required by AMQ Management Console in 7.8. <web bind="http://localhost:8161" path="web"> ... <app url="console" war="hawtio.war"/> ... </web> Start the upgraded broker. (Optional) Confirm that the broker is running and that the version has changed. After starting the broker, open the <broker_instance_dir> \log\artemis.log file. Find two lines similar to the ones below. Note the new version number that appears in the log when the broker is live. Additional Resources For more information about creating an instance of the broker, see Creating a broker instance . You can now store a broker instance's configuration files and data in any custom directory, including locations outside of the broker instance's directory. In the <broker_instance_dir> \etc\artemis.profile file, update the ARTEMIS_INSTANCE_ETC_URI property by specifying the location of the custom directory after creating the broker instance. Previously, these configuration files and data could only be stored in the \etc and \data directories within the broker instance's directory. 2.8. Upgrading a broker instance from 7.8.x to 7.9.x The following subsections describe how to upgrade a 7.8.x broker instance to 7.9.x for different operating systems. Important Starting with AMQ Broker 7.1.0, you can access AMQ Management Console only from the local host by default. To learn about configuring remote access to the console, see Configuring local and remote access to AMQ Management Console . Upgrading from 7.8.x to 7.9.x on Linux Upgrading from 7.8.x to 7.9.x on Windows 2.8.1. Upgrading from 7.8.x to 7.9.x on Linux Note The name of the archive that you download could differ from what is used in the following examples. Procedure Download the desired archive from the Red Hat Customer Portal. Follow the instructions provided in Downloading the AMQ Broker archive . Change the owner of the archive to the same user that owns the AMQ Broker installation to be upgraded. The following example shows a user called amq-broker . Move the archive to the directory created during the original installation of AMQ Broker. The following example uses /opt/redhat . As the directory owner, extract the contents of the compressed archive. In the following example, the user amq-broker extracts the archive using the unzip command. If the broker is running, stop it. Back up the instance directory of the broker by copying it to the home directory of the current user. (Optional) Note the current version of the broker. After the broker stops, you see a line similar to the one below at the end of the <broker_instance_dir> /log/artemis.log file. Edit the <broker_instance_dir> /etc/artemis.profile configuration file. Set the ARTEMIS_HOME property to the new directory created when the archive was extracted. For example: Locate the JAVA_ARGS property. 
Ensure that the bootstrap class path argument references the required version of a dependent file for the log manager, as shown below. Edit the <broker_instance_dir> /etc/bootstrap.xml configuration file. In the web element, update the name of the .war file required by AMQ Management Console in 7.9. <web bind="http://localhost:8161" path="web"> ... <app url="console" war="hawtio.war"/> ... </web> Start the upgraded broker. (Optional) Confirm that the broker is running and that the version has changed. After starting the broker, open the <broker_instance_dir> /log/artemis.log file. Find two lines similar to the ones below. Note the new version number that appears in the log when the broker is live. Additional Resources For more information about creating an instance of the broker, see Creating a broker instance . You can now store a broker instance's configuration files and data in any custom directory, including locations outside of the broker instance's directory. In the <broker_instance_dir> /etc/artemis.profile file, update the ARTEMIS_INSTANCE_ETC_URI property by specifying the location of the custom directory after creating the broker instance. Previously, these configuration files and data could only be stored in the etc/ and data/ directories within the broker instance's directory. 2.8.2. Upgrading from 7.8.x to 7.9.x on Windows Procedure Download the desired archive from the Red Hat Customer Portal. Follow the instructions provided in Downloading the AMQ Broker archive . Use a file manager to move the archive to the folder you created during the last installation of AMQ Broker. Extract the contents of the archive. Right-click the .zip file and select Extract All . If the broker is running, stop it. Back up the broker using a file manager. Right-click the <broker_instance_dir> folder and select Copy . Right-click in the same window and select Paste . (Optional) Note the current version of the broker. After the broker stops, you see a line similar to the one below at the end of the <broker_instance_dir> \log\artemis.log file. Edit the <broker_instance_dir> \etc\artemis.profile.cmd and <broker_instance_dir> \bin\artemis-service.xml configuration files. Set the ARTEMIS_HOME property to the new directory created when the archive was extracted. Edit the <broker_instance_dir> \etc\artemis.profile.cmd configuration file. Ensure that the JAVA_ARGS environment variable references the correct versions for the log manager and dependent file, as shown below. Edit the <broker_instance_dir> \bin\artemis-service.xml configuration file. Ensure that the bootstrap class path start argument references the correct versions for the log manager and dependent file, as shown below. Edit the <broker_instance_dir> \etc\bootstrap.xml configuration file. In the web element, update the name of the .war file required by AMQ Management Console in 7.9. <web bind="http://localhost:8161" path="web"> ... <app url="console" war="hawtio.war"/> ... </web> Start the upgraded broker. (Optional) Confirm that the broker is running and that the version has changed. After starting the broker, open the <broker_instance_dir> \log\artemis.log file. Find two lines similar to the ones below. Note the new version number that appears in the log when the broker is live. Additional Resources For more information about creating an instance of the broker, see Creating a broker instance .
You can now store a broker instance's configuration files and data in any custom directory, including locations outside of the broker instance's directory. In the <broker_instance_dir> \etc\artemis.profile file, update the ARTEMIS_INSTANCE_ETC_URI property by specifying the location of the custom directory after creating the broker instance. Previously, these configuration files and data could only be stored in the \etc and \data directories within the broker instance's directory. 2.9. Upgrading a broker instance from 7.9.x to 7.10.x The following subsections describe how to upgrade a 7.9.x broker instance to 7.10.x for different operating systems. Important Starting with AMQ Broker 7.1.0, you can access AMQ Management Console only from the local host by default. To learn about configuring remote access to the console, see Configuring local and remote access to AMQ Management Console . Upgrading from 7.9.x to 7.10.x on Linux Upgrading from 7.9.x to 7.10.x on Windows 2.9.1. Upgrading from 7.9.x to 7.10.x on Linux Note The name of the archive that you download could differ from what is used in the following examples. Prerequisites At a minimum, AMQ Broker 7.10 requires Java version 11 to run. Ensure that each AMQ Broker host is running Java version 11 or higher. For more information on supported configurations, see Red Hat AMQ Broker 7 Supported Configurations . If AMQ Broker 7.9 is configured to persist message data in a database, the data type of the HOLDER_EXPIRATION_TIME column is timestamp in the node manager database table. In AMQ Broker 7.10, the data type of the column changed to number . Before you upgrade to AMQ Broker 7.10, you must drop the node manager table, that is, remove it from the database. After you drop the table, it is recreated with the new schema when you restart the upgraded broker. In a shared store high availability (HA) configuration, the node manager table is shared between brokers. Therefore, you must ensure that all brokers that share the table are stopped before you drop the table. The following example drops a node manager table called NODE_MANAGER_TABLE : Procedure Download the desired archive from the Red Hat Customer Portal. Follow the instructions provided in Downloading the AMQ Broker archive . Change the owner of the archive to the same user that owns the AMQ Broker installation to be upgraded. The following example shows a user called amq-broker . Move the archive to the directory created during the original installation of AMQ Broker. The following example uses /opt/redhat . As the directory owner, extract the contents of the compressed archive. In the following example, the user amq-broker extracts the archive using the unzip command. If the broker is running, stop it. Back up the instance directory of the broker by copying it to the home directory of the current user. (Optional) Note the current version of the broker. After the broker stops, you see a line similar to the one below at the end of the <broker_instance_dir> /log/artemis.log file. Edit the <broker_instance_dir> /etc/artemis.profile configuration file. Set the ARTEMIS_HOME property to the new directory created when the archive was extracted. For example: Edit the <broker_instance_dir> /etc/bootstrap.xml configuration file. In the web element, update the name of the .war file required by AMQ Management Console in 7.10. <web path="web"> <binding uri="https://localhost:8161" ... <app url="console" war="hawtio.war"/> ... 
</web> In the broker xmlns element, change the schema value from "http://activemq.org/schema" to "http://activemq.apache.org/schema" . <broker xmlns="http://activemq.apache.org/schema"> Edit the <broker_instance_dir> /etc/management.xml file. In the management-context xmlns element, change the schema value from "http://activemq.org/schema" to "http://activemq.apache.org/schema" . <management-context xmlns="http://activemq.apache.org/schema"> Start the upgraded broker. (Optional) Confirm that the broker is running and that the version has changed. After starting the broker, open the <broker_instance_dir> /log/artemis.log file. Find two lines similar to the ones below. Note the new version number that appears in the log when the broker is live. Additional Resources For more information about creating an instance of the broker, see Creating a broker instance . You can now store a broker instance's configuration files and data in any custom directory, including locations outside of the broker instance's directory. In the <broker_instance_dir> /etc/artemis.profile file, update the ARTEMIS_INSTANCE_ETC_URI property by specifying the location of the custom directory after creating the broker instance. Previously, these configuration files and data could only be stored in the etc/ and data/ directories within the broker instance's directory. 2.9.2. Upgrading from 7.9.x to 7.10.x on Windows Prerequisites At a minimum, AMQ Broker 7.10 requires Java version 11 to run. Ensure that each AMQ Broker host is running Java version 11 or higher. For more information on supported configurations, see Red Hat AMQ Broker 7 Supported Configurations . If AMQ Broker 7.9 is configured to persist message data in a database, the data type of the HOLDER_EXPIRATION_TIME column is timestamp in the node manager database table. In AMQ Broker 7.10, the data type of the column changed to number . Before you upgrade to AMQ Broker 7.10, you must drop the node manager table, that is, remove it from the database. After you drop the table, it is recreated with the new schema when you restart the upgraded broker. In a shared store high availability (HA) configuration, the node manager table is shared between brokers. Therefore, you must ensure that all brokers that share the table are stopped before you drop the table. The following example drops a node manager table called NODE_MANAGER_TABLE : Procedure Download the desired archive from the Red Hat Customer Portal. Follow the instructions provided in Downloading the AMQ Broker archive . Use a file manager to move the archive to the folder you created during the last installation of AMQ Broker. Extract the contents of the archive. Right-click the .zip file and select Extract All . If the broker is running, stop it. Back up the broker using a file manager. Right-click the <broker_instance_dir> folder and select Copy . Right-click in the same window and select Paste . (Optional) Note the current version of the broker. After the broker stops, you see a line similar to the one below at the end of the <broker_instance_dir> \log\artemis.log file. Edit the <broker_instance_dir> \etc\artemis.profile.cmd and <broker_instance_dir> \bin\artemis-service.xml configuration files. Set the ARTEMIS_HOME property to the new directory created when the archive was extracted. Edit the <broker_instance_dir> \etc\artemis.profile.cmd configuration file. Ensure that the JAVA_ARGS environment variable references the correct versions for the log manager and dependent file, as shown below. 
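For example, the JAVA_ARGS bootstrap class path entry might look like the following (a sketch; the exact JAR versions depend on the archive you extracted):
JAVA_ARGS=-Xbootclasspath/a:%ARTEMIS_HOME%\lib\jboss-logmanager-2.1.10.Final-redhat-00001.jar;%ARTEMIS_HOME%\lib\wildfly-common-1.5.2.Final-redhat-00002.jar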
Edit the <broker_instance_dir> \bin\artemis-service.xml configuration file. Ensure that the bootstrap class path start argument references the correct versions for the log manager and dependent file, as shown below. Edit the <broker_instance_dir> \etc\bootstrap.xml configuration file. In the web element, update the name of the .war file required by AMQ Management Console in 7.10. <web path="web"> <binding uri="https://localhost:8161" ... <app url="console" war="hawtio.war"/> ... </web> In the broker xmlns element, change the schema value from "http://activemq.org/schema" to "http://activemq.apache.org/schema" . <broker xmlns="http://activemq.apache.org/schema"> Edit the <broker_instance_dir> /etc/management.xml file. In the management-context xmlns element, change the schema value from "http://activemq.org/schema" to "http://activemq.apache.org/schema" . <management-context xmlns="http://activemq.apache.org/schema"> Start the upgraded broker. (Optional) Confirm that the broker is running and that the version has changed. After starting the broker, open the <broker_instance_dir> \log\artemis.log file. Find two lines similar to the ones below. Note the new version number that appears in the log when the broker is live. Additional Resources For more information about creating an instance of the broker, see Creating a broker instance . You can now store a broker instance's configuration files and data in any custom directory, including locations outside of the broker instance's directory. In the <broker_instance_dir> \etc\artemis.profile file, update the ARTEMIS_INSTANCE_ETC_URI property by specifying the location of the custom directory after creating the broker instance. Previously, these configuration files and data could only be stored in the \etc and \data directories within the broker instance's directory.
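As a quick check after any of these upgrades, you can search the broker log for the AMQ221001 version banner once the broker restarts. For example, on Linux:
grep AMQ221001 <broker_instance_dir>/log/artemis.log
and on Windows:
findstr AMQ221001 <broker_instance_dir>\log\artemis.log
The command names and paths here are given for illustration only; the message code is the one shown in the log excerpts referenced throughout this chapter.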
[ "sudo chown amq-broker:amq-broker jboss-amq-7.x.x.redhat-1.zip", "sudo mv jboss-amq-7.x.x.redhat-1.zip /opt/redhat", "su - amq-broker cd /opt/redhat unzip jboss-amq-7.x.x.redhat-1.zip", "<broker_instance_dir> /bin/artemis stop", "cp -r <broker_instance_dir> ~/", "INFO [org.apache.activemq.artemis.core.server] AMQ221002: Apache ActiveMQ Artemis Message Broker version 2.0.0.amq-700005-redhat-1 [4782d50d-47a2-11e7-a160-9801a793ea45] stopped, uptime 28 minutes", "ARTEMIS_HOME='/opt/redhat/jboss-amq-7.x.x-redhat-1'", "<broker_instance_dir> /bin/artemis run", "INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.1.0.amq-700005-redhat-1 [0.0.0.0, nodeID=4782d50d-47a2-11e7-a160-9801a793ea45]", "<broker_instance_dir> \\bin\\artemis-service.exe stop", "INFO [org.apache.activemq.artemis.core.server] AMQ221002: Apache ActiveMQ Artemis Message Broker version 2.0.0.amq-700005-redhat-1 [4782d50d-47a2-11e7-a160-9801a793ea45] stopped, uptime 28 minutes", "ARTEMIS_HOME= <install_dir>", "<broker_instance_dir> \\bin\\artemis-service.exe start", "INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.1.0.amq-700005-redhat-1 [0.0.0.0, nodeID=4782d50d-47a2-11e7-a160-9801a793ea45]", "<broker_instance_dir> /bin/artemis stop", "cp -r <broker_instance_dir> ~/", "ARTEMIS_HOME=\" <7.1.0_install_dir> \"", "ARTEMIS_INSTANCE_URI=\"file:// <7.0.x_broker_instance_dir> \"", "-Djolokia.policyLocation=USD{ARTEMIS_INSTANCE_URI}/etc/jolokia-access.xml", "<7.1.0_install_dir> /bin/artemis create --allow-anonymous --user admin --password admin upgrade_tmp", "cp <temporary_7.1.0_broker_instance_dir> /etc/management.xml <7.0_broker_instance_dir> /etc/", "cp <temporary_7.1.0_broker_instance_dir> /etc/jolokia-access.xml <7.0_broker_instance_dir> /etc/", "<app url=\"jolokia\" war=\"jolokia.war\"/> <app url=\"hawtio\" war=\"hawtio-no-slf4j.war\"/>", "<app url=\"console\" war=\"console.war\"/>", "<broker_instance_dir> /bin/artemis run", "> <broker_instance_dir> \\bin\\artemis-service.exe stop", "ARTEMIS_HOME=\" <7.1.0_install_dir> \"", "ARTEMIS_INSTANCE_URI=\"file:// <7.0.x_broker_instance_dir> \"", "-Djolokia.policyLocation=USD{ARTEMIS_INSTANCE_URI}/etc/jolokia-access.xml", "> <7.1.0_install_dir> /bin/artemis create --allow-anonymous --user admin --password admin upgrade_tmp", "> cp <temporary_7.1.0_broker_instance_dir> /etc/management.xml <7.0_broker_instance_dir> /etc/", "> cp <temporary_7.1.0_broker_instance_dir> /etc/jolokia-access.xml <7.0_broker_instance_dir> /etc/", "<app url=\"jolokia\" war=\"jolokia.war\"/> <app url=\"hawtio\" war=\"hawtio-no-slf4j.war\"/>", "<app url=\"console\" war=\"console.war\"/>", "> <broker_instance_dir> \\bin\\artemis-service.exe start", "sudo chown amq-broker:amq-broker amq-7.x.x.redhat-1.zip", "sudo mv amq-7.x.x.redhat-1.zip /opt/redhat", "su - amq-broker cd /opt/redhat unzip jboss-amq-7.x.x.redhat-1.zip", "<broker_instance_dir> /bin/artemis stop", "cp -r <broker_instance_dir> ~/", "INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.5.0.amq-720001-redhat-1 [0.0.0.0, nodeID=554cce00-63d9-11e8-9808-54ee759954c4]", "ARTEMIS_HOME='/opt/redhat/amq-7.x.x-redhat-1'", "<broker_instance_dir> /bin/artemis run", "INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live INFO 
[org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.5.0.amq-720001-redhat-1 [0.0.0.0, nodeID=554cce00-63d9-11e8-9808-54ee759954c4]", "<broker_instance_dir> \\bin\\artemis-service.exe stop", "INFO [org.apache.activemq.artemis.core.server] AMQ221002: Apache ActiveMQ Artemis Message Broker version 2.0.0.amq-700005-redhat-1 [4782d50d-47a2-11e7-a160-9801a793ea45] stopped, uptime 28 minutes", "ARTEMIS_HOME= <install_dir>", "<broker_instance_dir> \\bin\\artemis-service.exe start", "INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.5.0.amq-720001-redhat-1 [0.0.0.0, nodeID=554cce00-63d9-11e8-9808-54ee759954c4]", "2019-04-11 18:00:41,334 WARN [org.eclipse.jetty.webapp.WebAppContext] Failed startup of context o.e.j.w.WebAppContext@1ef3efa8{/dispatch-hawtio-console,null,null}{/opt/amqbroker/amq-broker-7.3.0/web/dispatch-hawtio-console.war}: java.io.FileNotFoundException: /opt/amqbroker/amq-broker-7.3.0/web/dispatch-hawtio-console.war.", "<broker xmlns=\"http://activemq.org/schema\"> . <!-- The web server is only bound to localhost by default --> <web bind=\"http://localhost:8161\" path=\"web\"> <app url=\"redhat-branding\" war=\"redhat-branding.war\"/> <app url=\"artemis-plugin\" war=\"artemis-plugin.war\"/> <app url=\"dispatch-hawtio-console\" war=\"dispatch-hawtio-console.war\"/> <app url=\"console\" war=\"console.war\"/> </web> </broker>", "sudo chown amq-broker:amq-broker amq-7.x.x.redhat-1.zip", "sudo mv amq-7.x.x.redhat-1.zip /opt/redhat", "su - amq-broker cd /opt/redhat unzip jboss-amq-7.x.x.redhat-1.zip", "<broker_instance_dir> /bin/artemis stop", "cp -r <broker_instance_dir> ~/", "INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.6.3.amq-720001-redhat-1 [0.0.0.0, nodeID=554cce00-63d9-11e8-9808-54ee759954c4]", "ARTEMIS_HOME='/opt/redhat/amq-7.x.x-redhat-1'", "<broker_instance_dir> /bin/artemis run", "INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.7.0.redhat-00054 [0.0.0.0, nodeID=554cce00-63d9-11e8-9808-54ee759954c4]", "<broker_instance_dir> \\bin\\artemis-service.exe stop", "INFO [org.apache.activemq.artemis.core.server] AMQ221002: Apache ActiveMQ Artemis Message Broker version 2.6.3.amq-720001-redhat-1 [4782d50d-47a2-11e7-a160-9801a793ea45] stopped, uptime 28 minutes", "ARTEMIS_HOME= <install_dir>", "JAVA_ARGS= <install_dir> \\lib\\jboss-logmanager-2.0.3.Final-redhat-1.jar", "<startargument>Xbootclasspath/a:%ARTEMIS_HOME%\\lib\\jboss-logmanager-2.0.3.Final-redhat-1.jar</startargument>", "<broker_instance_dir> \\bin\\artemis-service.exe start", "INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.7.0.redhat-00054 [0.0.0.0, nodeID=554cce00-63d9-11e8-9808-54ee759954c4]", "sudo chown amq-broker:amq-broker amq-broker-7.x.x.redhat-1.zip", "sudo mv amq-broker-7.x.x.redhat-1.zip /opt/redhat", "su - amq-broker cd /opt/redhat unzip amq-broker-7.x.x.redhat-1.zip", "<broker_instance_dir> /bin/artemis stop", "cp -r <broker_instance_dir> ~/", "INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.7.0.redhat-00054 [0.0.0.0, 
nodeID=554cce00-63d9-11e8-9808-54ee759954c4]", "ARTEMIS_HOME='/opt/redhat/amq-broker-7.x.x-redhat-1'", "-Xbootclasspath/a:USDARTEMIS_HOME/lib/wildfly-common-1.5.1.Final-redhat-00001.jar", "<app url=\"metrics\" war=\"metrics.war\"/>", "<broker_instance_dir> /bin/artemis run", "INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.9.0.redhat-00001 [0.0.0.0, nodeID=554cce00-63d9-11e8-9808-54ee759954c4]", "<broker_instance_dir> \\bin\\artemis-service.exe stop", "INFO [org.apache.activemq.artemis.core.server] AMQ221002: Apache ActiveMQ Artemis Message Broker version 2.7.0.redhat-00054 [4782d50d-47a2-11e7-a160-9801a793ea45] stopped, uptime 28 minutes", "ARTEMIS_HOME= <install_dir>", "JAVA_ARGS= -Xbootclasspath/%ARTEMIS_HOME%\\lib\\jboss-logmanager-2.1.10.Final-redhat-00001.jar;%ARTEMIS_HOME%\\lib\\wildfly-common-1.5.1.Final-redhat-00001.jar", "<startargument>-Xbootclasspath/a:%ARTEMIS_HOME%\\lib\\jboss-logmanager-2.1.10.Final-redhat-00001.jar;%ARTEMIS_HOME%\\lib\\wildfly-common-1.5.1.Final-redhat-00001.jar</startargument>", "<app url=\"metrics\" war=\"metrics.war\"/>", "<broker_instance_dir> \\bin\\artemis-service.exe start", "INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.9.0.redhat-00001 [0.0.0.0, nodeID=554cce00-63d9-11e8-9808-54ee759954c4]", "sudo chown amq-broker:amq-broker amq-broker-7.4.x.redhat-1.zip", "sudo mv amq-broker-7.4.x.redhat-1.zip /opt/redhat", "su - amq-broker cd /opt/redhat unzip amq-broker-7.4.x.redhat-1.zip", "<broker_instance_dir> /bin/artemis stop", "cp -r <broker_instance_dir> ~/", "INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.7.0.redhat-00054 [0.0.0.0, nodeID=554cce00-63d9-11e8-9808-54ee759954c4]", "ARTEMIS_HOME='/opt/redhat/amq-broker-7.4.x-redhat-1'", "<broker_instance_dir> /bin/artemis run", "INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.9.0.redhat-00001 [0.0.0.0, nodeID=554cce00-63d9-11e8-9808-54ee759954c4]", "<broker_instance_dir> \\bin\\artemis-service.exe stop", "INFO [org.apache.activemq.artemis.core.server] AMQ221002: Apache ActiveMQ Artemis Message Broker version 2.7.0.redhat-00054 [4782d50d-47a2-11e7-a160-9801a793ea45] stopped, uptime 28 minutes", "ARTEMIS_HOME= <install_dir>", "<broker_instance_dir> \\bin\\artemis-service.exe start", "INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.9.0.redhat-00001 [0.0.0.0, nodeID=554cce00-63d9-11e8-9808-54ee759954c4]", "sudo chown amq-broker:amq-broker amq-broker-7.5.0.redhat-1.zip", "sudo mv amq-broker-7.5.0.redhat-1.zip /opt/redhat", "su - amq-broker cd /opt/redhat unzip amq-broker-7.5.0.redhat-1.zip", "<broker_instance_dir> /bin/artemis stop", "cp -r <broker_instance_dir> ~/", "INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.7.0.redhat-00054 [0.0.0.0, nodeID=554cce00-63d9-11e8-9808-54ee759954c4]", "ARTEMIS_HOME='/opt/redhat/amq-broker-7.5.0-redhat-1'", "-Xbootclasspath/a:USDARTEMIS_HOME/lib/wildfly-common-1.5.2.Final-redhat-00001.jar", "<broker_instance_dir> /bin/artemis 
run", "INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.9.0.redhat-00001 [0.0.0.0, nodeID=554cce00-63d9-11e8-9808-54ee759954c4]", "<broker_instance_dir> \\bin\\artemis-service.exe stop", "INFO [org.apache.activemq.artemis.core.server] AMQ221002: Apache ActiveMQ Artemis Message Broker version 2.7.0.redhat-00054 [4782d50d-47a2-11e7-a160-9801a793ea45] stopped, uptime 28 minutes", "ARTEMIS_HOME= <install_dir>", "JAVA_ARGS=-Xbootclasspath/%ARTEMIS_HOME%\\lib\\jboss-logmanager-2.1.10.Final-redhat-00001.jar;%ARTEMIS_HOME%\\lib\\wildfly-common-1.5.2.Final-redhat-00001.jar", "<startargument>-Xbootclasspath/a:%ARTEMIS_HOME%\\lib\\jboss-logmanager-2.1.10.Final-redhat-00001.jar;%ARTEMIS_HOME%\\lib\\wildfly-common-1.5.2.Final-redhat-00001.jar</startargument>", "<broker_instance_dir> \\bin\\artemis-service.exe start", "INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.9.0.redhat-00001 [0.0.0.0, nodeID=554cce00-63d9-11e8-9808-54ee759954c4]", "sudo chown amq-broker:amq-broker amq-broker-7.6.0.redhat-1.zip", "sudo mv amq-broker-7.6.0.redhat-1.zip /opt/redhat", "su - amq-broker cd /opt/redhat unzip amq-broker-7.6.0.redhat-1.zip", "<broker_instance_dir> /bin/artemis stop", "cp -r <broker_instance_dir> ~/", "INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.9.0.redhat-00054 [0.0.0.0, nodeID=554cce00-63d9-11e8-9808-54ee759954c4]", "ARTEMIS_HOME='/opt/redhat/amq-broker-7.6.0-redhat-1'", "-Xbootclasspath/a:USDARTEMIS_HOME/lib/wildfly-common-1.5.2.Final-redhat-00002.jar", "<broker_instance_dir> /bin/artemis run", "INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.11.0.redhat-00001 [0.0.0.0, nodeID=554cce00-63d9-11e8-9808-54ee759954c4]", "<broker_instance_dir> \\bin\\artemis-service.exe stop", "INFO [org.apache.activemq.artemis.core.server] AMQ221002: Apache ActiveMQ Artemis Message Broker version 2.9.0.redhat-00054 [4782d50d-47a2-11e7-a160-9801a793ea45] stopped, uptime 28 minutes", "ARTEMIS_HOME= <install_dir>", "JAVA_ARGS=-Xbootclasspath/%ARTEMIS_HOME%\\lib\\jboss-logmanager-2.1.10.Final-redhat-00001.jar;%ARTEMIS_HOME%\\lib\\wildfly-common-1.5.2.Final-redhat-00002.jar", "<startargument>-Xbootclasspath/a:%ARTEMIS_HOME%\\lib\\jboss-logmanager-2.1.10.Final-redhat-00001.jar;%ARTEMIS_HOME%\\lib\\wildfly-common-1.5.2.Final-redhat-00002.jar</startargument>", "<broker_instance_dir> \\bin\\artemis-service.exe start", "INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.11.0.redhat-00001 [0.0.0.0, nodeID=554cce00-63d9-11e8-9808-54ee759954c4]", "sudo chown amq-broker:amq-broker amq-broker-7.7.0.redhat-1.zip", "sudo mv amq-broker-7.7.0.redhat-1.zip /opt/redhat", "su - amq-broker cd /opt/redhat unzip amq-broker-7.7.0.redhat-1.zip", "<broker_instance_dir> /bin/artemis stop", "cp -r <broker_instance_dir> ~/", "INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.11.0.redhat-00001 [0.0.0.0, nodeID=554cce00-63d9-11e8-9808-54ee759954c4]", "ARTEMIS_HOME='/opt/redhat/amq-broker-7.7.0-redhat-1'", 
"-Xbootclasspath/a:USDARTEMIS_HOME/lib/wildfly-common-1.5.2.Final-redhat-00002.jar", "loggers=org.eclipse.jetty,org.jboss.logging,org.apache.activemq.artemis.core.server,org.apache.activemq.artemis.utils,org.apache.activemq.artemis.journal,org.apache.activemq.artemis.jms.server,org.apache.activemq.artemis.integration.bootstrap,org.apache.activemq.audit.base,org.apache.activemq.audit.message, org.apache.activemq.audit.resource", ".. logger.org.apache.activemq.audit.resource.level=ERROR logger.org.apache.activemq.audit.resource.handlers=AUDIT_FILE logger.org.apache.activemq.audit.resource.useParentHandlers=false Console handler configuration ..", "<broker_instance_dir> /bin/artemis run", "INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Mesq.audit.resource.handlers=AUDIT_FILE logger.org.apache.activemq.audit.resource.useParentHandlers=false sage Broker version 2.13.0.redhat-00003 [0.0.0.0, nodeID=554cce00-63d9-11e8-9808-54ee759954c4]", "<broker_instance_dir> \\bin\\artemis-service.exe stop", "INFO [org.apache.activemq.artemis.core.server] AMQ221002: Apache ActiveMQ Artemis Message Broker version 2.11.0.redhat-00001 [4782d50d-47a2-11e7-a160-9801a793ea45] stopped, uptime 28 minutes", "ARTEMIS_HOME= <install_dir>", "JAVA_ARGS=-Xbootclasspath/%ARTEMIS_HOME%\\lib\\jboss-logmanager-2.1.10.Final-redhat-00001.jar;%ARTEMIS_HOME%\\lib\\wildfly-common-1.5.2.Final-redhat-00002.jar", "<startargument>-Xbootclasspath/a:%ARTEMIS_HOME%\\lib\\jboss-logmanager-2.1.10.Final-redhat-00001.jar;%ARTEMIS_HOME%\\lib\\wildfly-common-1.5.2.Final-redhat-00002.jar</startargument>", "loggers=org.eclipse.jetty,org.jboss.logging,org.apache.activemq.artemis.core.server,org.apache.activemq.artemis.utils,org.apache.activemq.artemis.journal,org.apache.activemq.artemis.jms.server,org.apache.activemq.artemis.integration.bootstrap,org.apache.activemq.audit.base,org.apache.activemq.audit.message, org.apache.activemq.audit.resource", ".. 
logger.org.apache.activemq.audit.resource.level=ERROR logger.org.apache.activemq.audit.resource.handlers=AUDIT_FILE logger.org.apache.activemq.audit.resource.useParentHandlers=false Console handler configuration ..", "<broker_instance_dir> \\bin\\artemis-service.exe start", "INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.13.0.redhat-00003 [0.0.0.0, nodeID=554cce00-63d9-11e8-9808-54ee759954c4]", "sudo chown amq-broker:amq-broker amq-broker-7.8.0.redhat-1.zip", "sudo mv amq-broker-7.8.0.redhat-1.zip /opt/redhat", "su - amq-broker cd /opt/redhat unzip amq-broker-7.8.0.redhat-1.zip", "<broker_instance_dir> /bin/artemis stop", "cp -r <broker_instance_dir> ~/", "INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.13.0.redhat-00003 [0.0.0.0, nodeID=554cce00-63d9-11e8-9808-54ee759954c4]", "ARTEMIS_HOME='/opt/redhat/amq-broker-7.8.0-redhat-1'", "-Xbootclasspath/a:USDARTEMIS_HOME/lib/wildfly-common-1.5.2.Final-redhat-00002.jar", "<web bind=\"http://localhost:8161\" path=\"web\"> <app url=\"console\" war=\"hawtio.war\"/> </web>", "<broker_instance_dir> /bin/artemis run", "INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Mesq.audit.resource.handlers=AUDIT_FILE logger.org.apache.activemq.audit.resource.useParentHandlers=false sage Broker version 2.16.0.redhat-00007 [0.0.0.0, nodeID=554cce00-63d9-11e8-9808-54ee759954c4]", "<broker_instance_dir> \\bin\\artemis-service.exe stop", "INFO [org.apache.activemq.artemis.core.server] AMQ221002: Apache ActiveMQ Artemis Message Broker version 2.13.0.redhat-00003 [4782d50d-47a2-11e7-a160-9801a793ea45] stopped, uptime 28 minutes", "ARTEMIS_HOME= <install_dir>", "JAVA_ARGS=-Xbootclasspath/%ARTEMIS_HOME%\\lib\\jboss-logmanager-2.1.10.Final-redhat-00001.jar;%ARTEMIS_HOME%\\lib\\wildfly-common-1.5.2.Final-redhat-00002.jar", "<startargument>-Xbootclasspath/a:%ARTEMIS_HOME%\\lib\\jboss-logmanager-2.1.10.Final-redhat-00001.jar;%ARTEMIS_HOME%\\lib\\wildfly-common-1.5.2.Final-redhat-00002.jar</startargument>", "<web bind=\"http://localhost:8161\" path=\"web\"> <app url=\"console\" war=\"hawtio.war\"/> </web>", "<broker_instance_dir> \\bin\\artemis-service.exe start", "INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.16.0.redhat-00007 [0.0.0.0, nodeID=554cce00-63d9-11e8-9808-54ee759954c4]", "sudo chown amq-broker:amq-broker amq-broker-7.x.x-bin.zip", "sudo mv amq-broker-7.x.x-bin.zip /opt/redhat", "su - amq-broker cd /opt/redhat unzip amq-broker-7.x.x-bin.zip", "<broker_instance_dir> /bin/artemis stop", "cp -r <broker_instance_dir> ~/", "INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.13.0.redhat-00003 [0.0.0.0, nodeID=554cce00-63d9-11e8-9808-54ee759954c4]", "ARTEMIS_HOME='/opt/redhat/amq-broker-7.x.x-bin'", "-Xbootclasspath/a:USDARTEMIS_HOME/lib/wildfly-common-1.5.2.Final-redhat-00002.jar", "<web bind=\"http://localhost:8161\" path=\"web\"> <app url=\"console\" war=\"hawtio.war\"/> </web>", "<broker_instance_dir> /bin/artemis run", "INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Mes INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now 
live sage Broker version 2.18.0.redhat-00010 [0.0.0.0, nodeID=554cce00-63d9-11e8-9808-54ee759954c4]", "<broker_instance_dir> \\bin\\artemis-service.exe stop", "INFO [org.apache.activemq.artemis.core.server] AMQ221002: Apache ActiveMQ Artemis Message Broker version 2.13.0.redhat-00003 [4782d50d-47a2-11e7-a160-9801a793ea45] stopped, uptime 28 minutes", "ARTEMIS_HOME= <install_dir>", "JAVA_ARGS=-Xbootclasspath/%ARTEMIS_HOME%\\lib\\jboss-logmanager-2.1.10.Final-redhat-00001.jar;%ARTEMIS_HOME%\\lib\\wildfly-common-1.5.2.Final-redhat-00002.jar", "<startargument>-Xbootclasspath/a:%ARTEMIS_HOME%\\lib\\jboss-logmanager-2.1.10.Final-redhat-00001.jar;%ARTEMIS_HOME%\\lib\\wildfly-common-1.5.2.Final-redhat-00002.jar</startargument>", "<web bind=\"http://localhost:8161\" path=\"web\"> <app url=\"console\" war=\"hawtio.war\"/> </web>", "<broker_instance_dir> \\bin\\artemis-service.exe start", "INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.18.0.redhat-00010 [0.0.0.0, nodeID=554cce00-63d9-11e8-9808-54ee759954c4]", "DROP TABLE NODE_MANAGER_TABLE", "sudo chown amq-broker:amq-broker amq-broker-7.x.x-bin.zip", "sudo mv amq-broker-7.x.x-bin.zip /opt/redhat", "su - amq-broker cd /opt/redhat unzip amq-broker-7.x.x-bin.zip", "<broker_instance_dir> /bin/artemis stop", "cp -r <broker_instance_dir> ~/", "INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.18.0.redhat-00010 [0.0.0.0, nodeID=554cce00-63d9-11e8-9808-54ee759954c4]", "ARTEMIS_HOME='/opt/redhat/amq-broker-7.x.x-bin'", "<web path=\"web\"> <binding uri=\"https://localhost:8161\" <app url=\"console\" war=\"hawtio.war\"/> </web>", "<broker xmlns=\"http://activemq.apache.org/schema\">", "<management-context xmlns=\"http://activemq.apache.org/schema\">", "<broker_instance_dir> /bin/artemis run", "INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Mes INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live sage Broker version 2.21.0.redhat-00025 [0.0.0.0, nodeID=554cce00-63d9-11e8-9808-54ee759954c4]", "DROP TABLE NODE_MANAGER_TABLE", "<broker_instance_dir> \\bin\\artemis-service.exe stop", "INFO [org.apache.activemq.artemis.core.server] AMQ221002: Apache ActiveMQ Artemis Message Broker version 2.18.0.redhat-00010[4782d50d-47a2-11e7-a160-9801a793ea45] stopped, uptime 28 minutes", "ARTEMIS_HOME= <install_dir>", "JAVA_ARGS=-Xbootclasspath/%ARTEMIS_HOME%\\lib\\jboss-logmanager-2.1.10.Final-redhat-00001.jar;%ARTEMIS_HOME%\\lib\\wildfly-common-1.5.2.Final-redhat-00002.jar", "<startargument>-Xbootclasspath/a:%ARTEMIS_HOME%\\lib\\jboss-logmanager-2.1.10.Final-redhat-00001.jar;%ARTEMIS_HOME%\\lib\\wildfly-common-1.5.2.Final-redhat-00002.jar</startargument>", "<web path=\"web\"> <binding uri=\"https://localhost:8161\" <app url=\"console\" war=\"hawtio.war\"/> </web>", "<broker xmlns=\"http://activemq.apache.org/schema\">", "<management-context xmlns=\"http://activemq.apache.org/schema\">", "<broker_instance_dir> \\bin\\artemis-service.exe start", "INFO [org.apache.activemq.artemis.core.server] AMQ221007: Server is now live INFO [org.apache.activemq.artemis.core.server] AMQ221001: Apache ActiveMQ Artemis Message Broker version 2.21.0.redhat-00025 [0.0.0.0, nodeID=554cce00-63d9-11e8-9808-54ee759954c4]" ]
https://docs.redhat.com/en/documentation/red_hat_amq_broker/7.10/html/managing_amq_broker/patching
function::usymname
function::usymname Name function::usymname - Return the symbol of an address in the current task. Synopsis Arguments addr The address to translate. Description Returns the (function) symbol name associated with the given address if known. If the symbol is not known, it returns the hex string representation of addr.
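For illustration only, a minimal SystemTap one-liner that uses this function (the target binary and probe point are hypothetical examples, and symbol data for the executable must be available for the name to resolve):
# Print the symbol name for the current user-space address when main() in /bin/ls is entered
stap -e 'probe process("/bin/ls").function("main") { printf("entered %s\n", usymname(uaddr())) }' -c /bin/ls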
[ "usymname:string(addr:long)" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-usymname
probe::nfsd.createv3
probe::nfsd.createv3 Name probe::nfsd.createv3 - NFS server creating a regular file or setting file attributes for a client Synopsis nfsd.createv3 Values iap_mode file access mode filename file name client_ip the IP address of the client fh file handle (the first part is the length of the file handle) createmode create mode. The possible values are: NFS3_CREATE_EXCLUSIVE, NFS3_CREATE_UNCHECKED, or NFS3_CREATE_GUARDED filelen the length of the file name iap_valid Attribute flags verifier file attributes (atime, mtime, mode). It is used to reset file attributes for CREATE_EXCLUSIVE truncp truncate argument; indicates whether the file should be truncated Description This probe point is only called by nfsd3_proc_create and by nfsd4_open when op_claim_type is NFS4_OPEN_CLAIM_NULL.
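As an illustrative sketch (not part of the original page), a SystemTap one-liner that reports each NFSv3 create handled by the server, assuming the nfsd tapset is available on the system:
# Log the file name and name length for every nfsd.createv3 hit
stap -e 'probe nfsd.createv3 { printf("createv3: %s (filelen=%d)\n", filename, filelen) }'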
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-nfsd-createv3
1.5. Know Your Users
1.5. Know Your Users Although some people bristle at the term "users" (perhaps due to some system administrators' use of the term in a derogatory manner), it is used here with no such connotation implied. Users are those people that use the systems and resources for which you are responsible -- no more, and no less. As such, they are central to your ability to successfully administer your systems; without understanding your users, how can you understand the system resources they require? For example, consider a bank teller. A bank teller uses a strictly-defined set of applications and requires little in the way of system resources. A software engineer, on the other hand, may use many different applications and always welcomes more system resources (for faster build times). Two entirely different users with two entirely different needs. Make sure you learn as much about your users as you can.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/introduction_to_system_administration/s1-philosophy-users
Chapter 11. Controlling access to the Admin Console
Chapter 11. Controlling access to the Admin Console Each realm created on the Red Hat build of Keycloak has a dedicated Admin Console from which that realm can be managed. The master realm is a special realm that allows admins to manage more than one realm on the system. This chapter goes over all the scenarios for this. 11.1. Master realm access control The master realm in Red Hat build of Keycloak is a special realm and treated differently than other realms. Users in the Red Hat build of Keycloak master realm can be granted permission to manage zero or more realms that are deployed on the Red Hat build of Keycloak server. When a realm is created, Red Hat build of Keycloak automatically creates various roles that grant fine-grain permissions to access that new realm. Access to The Admin Console and Admin REST endpoints can be controlled by mapping these roles to users in the master realm. It's possible to create multiple superusers, as well as users that can only manage specific realms. 11.1.1. Global roles There are two realm-level roles in the master realm. These are: admin create-realm Users with the admin role are superusers and have full access to manage any realm on the server. Users with the create-realm role are allowed to create new realms. They will be granted full access to any new realm they create. 11.1.2. Realm specific roles Admin users within the master realm can be granted management privileges to one or more other realms in the system. Each realm in Red Hat build of Keycloak is represented by a client in the master realm. The name of the client is <realm name>-realm . These clients each have client-level roles defined which define varying level of access to manage an individual realm. The roles available are: view-realm view-users view-clients view-events manage-realm manage-users create-client manage-clients manage-events view-identity-providers manage-identity-providers impersonation Assign the roles you want to your users and they will only be able to use that specific part of the administration console. Important Admins with the manage-users role will only be able to assign admin roles to users that they themselves have. So, if an admin has the manage-users role but doesn't have the manage-realm role, they will not be able to assign this role. 11.2. Dedicated realm admin consoles Each realm has a dedicated Admin Console that can be accessed by going to the url /admin/{realm-name}/console . Users within that realm can be granted realm management permissions by assigning specific user role mappings. Each realm has a built-in client called realm-management . You can view this client by going to the Clients left menu item of your realm. This client defines client-level roles that specify permissions that can be granted to manage the realm. view-realm view-users view-clients view-events manage-realm manage-users create-client manage-clients manage-events view-identity-providers manage-identity-providers impersonation Assign the roles you want to your users and they will only be able to use that specific part of the administration console.
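As a hedged example of granting one of these realm-specific roles from the command line with the Admin CLI (the realm name and username below are hypothetical, and you must authenticate with kcadm.sh config credentials first):
# Grant the manage-users role of the realm-management client to a user in myrealm
bin/kcadm.sh add-roles -r myrealm --uusername realm-operator --cclientid realm-management --rolename manage-users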
null
https://docs.redhat.com/en/documentation/red_hat_build_of_keycloak/22.0/html/server_administration_guide/admin_permissions
Data Grid downloads
Data Grid downloads Access the Data Grid Software Downloads on the Red Hat customer portal. Note You must have a Red Hat account to access and download Data Grid software.
null
https://docs.redhat.com/en/documentation/red_hat_data_grid/8.5/html/data_grid_code_tutorials/rhdg-downloads_datagrid
Chapter 2. Eviction [policy/v1]
Chapter 2. Eviction [policy/v1] Description Eviction evicts a pod from its node subject to certain policies and safety constraints. This is a subresource of Pod. A request to cause such an eviction is created by POSTing to ... /pods/<pod name>/evictions. Type object 2.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources deleteOptions DeleteOptions DeleteOptions may be provided kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta ObjectMeta describes the pod that is being evicted. 2.2. API endpoints The following API endpoints are available: /api/v1/namespaces/{namespace}/pods/{name}/eviction POST : create eviction of a Pod 2.2.1. /api/v1/namespaces/{namespace}/pods/{name}/eviction Table 2.1. Global path parameters Parameter Type Description name string name of the Eviction namespace string object name and auth scope, such as for teams and projects Table 2.2. Global query parameters Parameter Type Description dryRun string When present, indicates that modifications should not be persisted. An invalid or unrecognized dryRun directive will result in an error response and no further processing of the request. Valid values are: - All: all dry run stages will be processed fieldManager string fieldManager is a name associated with the actor or entity that is making these changes. The value must be less than or 128 characters long, and only contain printable characters, as defined by https://golang.org/pkg/unicode/#IsPrint . fieldValidation string fieldValidation instructs the server on how to handle objects in the request (POST/PUT/PATCH) containing unknown or duplicate fields, provided that the ServerSideFieldValidation feature gate is also enabled. Valid values are: - Ignore: This will ignore any unknown fields that are silently dropped from the object, and will ignore all but the last duplicate field that the decoder encounters. This is the default behavior prior to v1.23 and is the default behavior when the ServerSideFieldValidation feature gate is disabled. - Warn: This will send a warning via the standard warning response header for each unknown field that is dropped from the object, and for each duplicate field that is encountered. The request will still succeed if there are no other errors, and will only persist the last of any duplicate fields. This is the default when the ServerSideFieldValidation feature gate is enabled. - Strict: This will fail the request with a BadRequest error if any unknown fields would be dropped from the object, or if any duplicate fields are present. The error returned from the server will contain all unknown and duplicate fields encountered. pretty string If 'true', then the output is pretty printed. HTTP method POST Description create eviction of a Pod Table 2.3. Body parameters Parameter Type Description body Eviction schema Table 2.4. HTTP responses HTTP code Reponse body 200 - OK Eviction schema 201 - Created Eviction schema 202 - Accepted Eviction schema 401 - Unauthorized Empty
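For illustration, one way to POST such an Eviction from the CLI is sketched below; the pod name web-1 and namespace demo are hypothetical placeholders:
# Write a minimal Eviction body and POST it to the pod's eviction subresource
cat > eviction.json <<'EOF'
{ "apiVersion": "policy/v1", "kind": "Eviction", "metadata": { "name": "web-1", "namespace": "demo" } }
EOF
oc create --raw /api/v1/namespaces/demo/pods/web-1/eviction -f eviction.json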
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.13/html/policy_apis/eviction-policy-v1
3.5. Configuring FTP
3.5. Configuring FTP File Transfer Protocol (FTP) is an old and complex multi-port protocol that presents a distinct set of challenges to a Load Balancer Add-On environment. To understand the nature of these challenges, you must first understand some key things about how FTP works. 3.5.1. How FTP Works With most other server-client relationships, the client machine opens up a connection to the server on a particular port and the server then responds to the client on that port. When an FTP client connects to an FTP server, it opens a connection to the FTP control port 21. Then the client tells the FTP server whether to establish an active or passive connection. The type of connection chosen by the client determines how the server responds and on what ports transactions will occur. The two types of data connections are: Active Connections When an active connection is established, the server opens a data connection to the client from port 20 to a high range port on the client machine. All data from the server is then passed over this connection. Passive Connections When a passive connection is established, the client asks the FTP server to establish a passive connection port, which can be on any port higher than 10,000. The server then binds to this high-numbered port for this particular session and relays that port number back to the client. The client then opens the newly bound port for the data connection. Each data request the client makes results in a separate data connection. Most modern FTP clients attempt to establish a passive connection when requesting data from servers. Note The client determines the type of connection, not the server. This means that, to effectively cluster FTP, you must configure the LVS routers to handle both active and passive connections. The FTP client-server relationship can potentially open a large number of ports that the Piranha Configuration Tool and IPVS do not know about.
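A hedged sketch of the firewall-mark approach this implies is shown below; it is not part of the original section, n.n.n.n stands for the virtual server address, the mark value 21 is only an example, and the passive range must match whatever range your real FTP servers are configured to use:
# Tag FTP control traffic and an example passive data range with the same firewall mark
iptables -t mangle -A PREROUTING -p tcp -d n.n.n.n/32 --dport 21 -j MARK --set-mark 21
iptables -t mangle -A PREROUTING -p tcp -d n.n.n.n/32 --dport 10000:20000 -j MARK --set-mark 21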
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/load_balancer_administration/s1-lvs-ftp-VSA
Chapter 4. ImageStreamImage [image.openshift.io/v1]
Chapter 4. ImageStreamImage [image.openshift.io/v1] Description ImageStreamImage represents an Image that is retrieved by image name from an ImageStream. User interfaces and regular users can use this resource to access the metadata details of a tagged image in the image stream history for viewing, since Image resources are not directly accessible to end users. A not found error will be returned if no such image is referenced by a tag within the ImageStream. Images are created when spec tags are set on an image stream that represent an image in an external registry, when pushing to the integrated registry, or when tagging an existing image from one image stream to another. The name of an image stream image is in the form "<STREAM>@<DIGEST>", where the digest is the content addressible identifier for the image (sha256:xxxxx... ). You can use ImageStreamImages as the from.kind of an image stream spec tag to reference an image exactly. The only operations supported on the imagestreamimage endpoint are retrieving the image. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required image 4.1. Specification Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources image object Image is an immutable representation of a container image and metadata at a point in time. Images are named by taking a hash of their contents (metadata and content) and any change in format, content, or metadata results in a new name. The images resource is primarily for use by cluster administrators and integrations like the cluster image registry - end users instead access images via the imagestreamtags or imagestreamimages resources. While image metadata is stored in the API, any integration that implements the container image registry API must provide its own storage for the raw manifest data, image config, and layer contents. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata 4.1.1. .image Description Image is an immutable representation of a container image and metadata at a point in time. Images are named by taking a hash of their contents (metadata and content) and any change in format, content, or metadata results in a new name. The images resource is primarily for use by cluster administrators and integrations like the cluster image registry - end users instead access images via the imagestreamtags or imagestreamimages resources. While image metadata is stored in the API, any integration that implements the container image registry API must provide its own storage for the raw manifest data, image config, and layer contents. 
Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources dockerImageConfig string DockerImageConfig is a JSON blob that the runtime uses to set up the container. This is a part of manifest schema v2. Will not be set when the image represents a manifest list. dockerImageLayers array DockerImageLayers represents the layers in the image. May not be set if the image does not define that data or if the image represents a manifest list. dockerImageLayers[] object ImageLayer represents a single layer of the image. Some images may have multiple layers. Some may have none. dockerImageManifest string DockerImageManifest is the raw JSON of the manifest dockerImageManifestMediaType string DockerImageManifestMediaType specifies the mediaType of manifest. This is a part of manifest schema v2. dockerImageManifests array DockerImageManifests holds information about sub-manifests when the image represents a manifest list. When this field is present, no DockerImageLayers should be specified. dockerImageManifests[] object ImageManifest represents sub-manifests of a manifest list. The Digest field points to a regular Image object. dockerImageMetadata RawExtension DockerImageMetadata contains metadata about this image dockerImageMetadataVersion string DockerImageMetadataVersion conveys the version of the object, which if empty defaults to "1.0" dockerImageReference string DockerImageReference is the string that can be used to pull this image. dockerImageSignatures array (string) DockerImageSignatures provides the signatures as opaque blobs. This is a part of manifest schema v1. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata signatures array Signatures holds all signatures of the image. signatures[] object ImageSignature holds a signature of an image. It allows to verify image identity and possibly other claims as long as the signature is trusted. Based on this information it is possible to restrict runnable images to those matching cluster-wide policy. Mandatory fields should be parsed by clients doing image verification. The others are parsed from signature's content by the server. They serve just an informative purpose. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). 4.1.2. .image.dockerImageLayers Description DockerImageLayers represents the layers in the image. May not be set if the image does not define that data or if the image represents a manifest list. Type array 4.1.3. .image.dockerImageLayers[] Description ImageLayer represents a single layer of the image. Some images may have multiple layers. Some may have none. 
Type object Required name size mediaType Property Type Description mediaType string MediaType of the referenced object. name string Name of the layer as defined by the underlying store. size integer Size of the layer in bytes as defined by the underlying store. 4.1.4. .image.dockerImageManifests Description DockerImageManifests holds information about sub-manifests when the image represents a manifest list. When this field is present, no DockerImageLayers should be specified. Type array 4.1.5. .image.dockerImageManifests[] Description ImageManifest represents sub-manifests of a manifest list. The Digest field points to a regular Image object. Type object Required digest mediaType manifestSize architecture os Property Type Description architecture string Architecture specifies the supported CPU architecture, for example amd64 or ppc64le . digest string Digest is the unique identifier for the manifest. It refers to an Image object. manifestSize integer ManifestSize represents the size of the raw object contents, in bytes. mediaType string MediaType defines the type of the manifest, possible values are application/vnd.oci.image.manifest.v1+json, application/vnd.docker.distribution.manifest.v2+json or application/vnd.docker.distribution.manifest.v1+json. os string OS specifies the operating system, for example linux . variant string Variant is an optional field repreenting a variant of the CPU, for example v6 to specify a particular CPU variant of the ARM CPU. 4.1.6. .image.signatures Description Signatures holds all signatures of the image. Type array 4.1.7. .image.signatures[] Description ImageSignature holds a signature of an image. It allows to verify image identity and possibly other claims as long as the signature is trusted. Based on this information it is possible to restrict runnable images to those matching cluster-wide policy. Mandatory fields should be parsed by clients doing image verification. The others are parsed from signature's content by the server. They serve just an informative purpose. Compatibility level 1: Stable within a major release for a minimum of 12 months or 3 minor releases (whichever is longer). Type object Required type content Property Type Description apiVersion string APIVersion defines the versioned schema of this representation of an object. Servers should convert recognized schemas to the latest internal value, and may reject unrecognized values. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources conditions array Conditions represent the latest available observations of a signature's current state. conditions[] object SignatureCondition describes an image signature condition of particular kind at particular probe time. content string Required: An opaque binary string which is an image's signature. created Time If specified, it is the time of signature's creation. imageIdentity string A human readable string representing image's identity. It could be a product name and version, or an image pull spec (e.g. "registry.access.redhat.com/rhel7/rhel:7.2"). issuedBy object SignatureIssuer holds information about an issuer of signing certificate or key. issuedTo object SignatureSubject holds information about a person or entity who created the signature. kind string Kind is a string value representing the REST resource this object represents. Servers may infer this from the endpoint the client submits requests to. Cannot be updated. In CamelCase. 
More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds metadata ObjectMeta metadata is the standard object's metadata. More info: https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#metadata signedClaims object (string) Contains claims from the signature. type string Required: Describes a type of stored blob. 4.1.8. .image.signatures[].conditions Description Conditions represent the latest available observations of a signature's current state. Type array 4.1.9. .image.signatures[].conditions[] Description SignatureCondition describes an image signature condition of particular kind at particular probe time. Type object Required type status Property Type Description lastProbeTime Time Last time the condition was checked. lastTransitionTime Time Last time the condition transit from one status to another. message string Human readable message indicating details about last transition. reason string (brief) reason for the condition's last transition. status string Status of the condition, one of True, False, Unknown. type string Type of signature condition, Complete or Failed. 4.1.10. .image.signatures[].issuedBy Description SignatureIssuer holds information about an issuer of signing certificate or key. Type object Property Type Description commonName string Common name (e.g. openshift-signing-service). organization string Organization name. 4.1.11. .image.signatures[].issuedTo Description SignatureSubject holds information about a person or entity who created the signature. Type object Required publicKeyID Property Type Description commonName string Common name (e.g. openshift-signing-service). organization string Organization name. publicKeyID string If present, it is a human readable key id of public key belonging to the subject used to verify image signature. It should contain at least 64 lowest bits of public key's fingerprint (e.g. 0x685ebe62bf278440). 4.2. API endpoints The following API endpoints are available: /apis/image.openshift.io/v1/namespaces/{namespace}/imagestreamimages/{name} GET : read the specified ImageStreamImage 4.2.1. /apis/image.openshift.io/v1/namespaces/{namespace}/imagestreamimages/{name} Table 4.1. Global path parameters Parameter Type Description name string name of the ImageStreamImage namespace string object name and auth scope, such as for teams and projects Table 4.2. Global query parameters Parameter Type Description pretty string If 'true', then the output is pretty printed. HTTP method GET Description read the specified ImageStreamImage Table 4.3. HTTP responses HTTP code Reponse body 200 - OK ImageStreamImage schema 401 - Unauthorized Empty
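For illustration, the usual way to read one of these objects with the CLI is sketched below; the stream name, digest, and namespace are placeholders:
# Retrieve the image metadata for a specific digest referenced by an image stream
oc get imagestreamimage <stream>@sha256:<digest> -n <namespace> -o yaml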
null
https://docs.redhat.com/en/documentation/openshift_container_platform/4.14/html/image_apis/imagestreamimage-image-openshift-io-v1
4.7. Synchronizing Configuration Files
4.7. Synchronizing Configuration Files After configuring the primary LVS router, there are several configuration files that must be copied to the backup LVS router before you start the Load Balancer Add-On. These files include: /etc/sysconfig/ha/lvs.cf - the configuration file for the LVS routers. /etc/sysctl.conf - the configuration file that, among other things, turns on packet forwarding in the kernel. /etc/sysconfig/iptables - If you are using firewall marks, you should synchronize one of these files based on which network packet filter you are using. Important The /etc/sysctl.conf and /etc/sysconfig/iptables files do not change when you configure the Load Balancer Add-On using the Piranha Configuration Tool. 4.7.1. Synchronizing lvs.cf Anytime the LVS configuration file, /etc/sysconfig/ha/lvs.cf , is created or updated, you must copy it to the backup LVS router node. Warning Both the active and backup LVS router nodes must have identical lvs.cf files. Mismatched LVS configuration files between the LVS router nodes can prevent failover. The best way to do this is to use the scp command. Important To use scp , the sshd service must be running on the backup router; see Section 2.1, "Configuring Services on the LVS Router" for details on how to properly configure the necessary services on the LVS routers. Issue the following command as the root user from the primary LVS router to sync the lvs.cf files between the router nodes: scp /etc/sysconfig/ha/lvs.cf n.n.n.n :/etc/sysconfig/ha/lvs.cf In the command, replace n.n.n.n with the real IP address of the backup LVS router.
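A brief sketch of copying all three files in one pass from the primary router follows; replace n.n.n.n with the backup router's real IP address, and copy the iptables file only if you actually use firewall marks:
# Copy the LVS configuration, sysctl settings, and iptables rules to the backup router
scp /etc/sysconfig/ha/lvs.cf n.n.n.n:/etc/sysconfig/ha/lvs.cf
scp /etc/sysctl.conf n.n.n.n:/etc/sysctl.conf
scp /etc/sysconfig/iptables n.n.n.n:/etc/sysconfig/iptables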
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/load_balancer_administration/s1-lvs-sync-VSA
Part V. Developing Applications Using JAX-WS
Part V. Developing Applications Using JAX-WS This guide describes how to develop Web services using the standard JAX-WS APIs.
null
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_cxf_development_guide/jaxwsguide
Chapter 9. Deploying Features
Chapter 9. Deploying Features Abstract Because applications and other tools typically consist of multiple OSGi bundles, it is often convenient to aggregate inter-dependent or related bundles into a larger unit of deployment. Red Hat Fuse therefore provides a scalable unit of deployment, the feature , which enables you to deploy multiple bundles (and, optionally, dependencies on other features) in a single step. 9.1. Creating a Feature 9.1.1. Overview Essentially, a feature is created by adding a new feature element to a special kind of XML file, known as a feature repository . To create a feature, perform the following steps: Section 9.2, "Create a custom feature repository" . Section 9.3, "Add a feature to the custom feature repository" . Section 9.4, "Add the local repository URL to the features service" . Section 9.5, "Add dependent features to the feature" . Section 9.6, "Add OSGi configurations to the feature" . 9.2. Create a custom feature repository If you have not already defined a custom feature repository, you can create one as follows. Choose a convenient location for the feature repository on your file system-for example, C:\Projects\features.xml -and use your favorite text editor to add the following lines to it: Where you must specify a name for the repository, CustomRepository , by setting the name attribute. Note In contrast to a Maven repository or an OBR, a feature repository does not provide a storage location for bundles. A feature repository merely stores an aggregate of references to bundles. The bundles themselves are stored elsewhere (for example, in the file system or in a Maven repository). 9.3. Add a feature to the custom feature repository To add a feature to the custom feature repository, insert a new feature element as a child of the root features element. You must give the feature a name and you can list any number of bundles belonging to the feature, by inserting bundle child elements. For example, to add a feature named example-camel-bundle containing the single bundle, C:\Projects\camel-bundle\target\camel-bundle-1.0-SNAPSHOT.jar , add a feature element as follows: The contents of the bundle element can be any valid URL, giving the location of a bundle (see Chapter 15, URL Handlers ). You can optionally specify a version attribute on the feature element, to assign a non-zero version to the feature (you can then specify the version as an optional argument to the features:install command). To check whether the features service successfully parses the new feature entry, enter the following pair of console commands: The features:list command typically produces a rather long listing of features, but you should be able to find the entry for your new feature (in this case, example-camel-bundle ) by scrolling back through the listing. The features:refreshurl command forces the kernel to reread all the feature repositories: if you did not issue this command, the kernel would not be aware of any recent changes that you made to any of the repositories (in particular, the new feature would not appear in the listing). To avoid scrolling through the long list of features, you can grep for the example-camel-bundle feature as follows: Where the grep command (a standard UNIX pattern matching utility) is built into the shell, so this command also works on Windows platforms. 9.4. 
Add the local repository URL to the features service In order to make the new feature repository available to Apache Karaf, you must add the feature repository using the features:addurl console command. For example, to make the contents of the repository, C:\Projects\features.xml , available to the kernel, you would enter the following console command: Where the argument to features:addurl can be specified using any of the supported URL formats (see Chapter 15, URL Handlers ). You can check that the repository's URL is registered correctly by entering the features:listUrl console command, to get a complete listing of all registered feature repository URLs, as follows: 9.5. Add dependent features to the feature If your feature depends on other features, you can specify these dependencies by adding feature elements as children of the original feature element. Each child feature element contains the name of a feature on which the current feature depends. When you deploy a feature with dependent features, the dependency mechanism checks whether or not the dependent features are installed in the container. If not, the dependency mechanism automatically installs the missing dependencies (and any recursive dependencies). For example, for the custom Apache Camel feature, example-camel-bundle , you can specify explicitly which standard Apache Camel features it depends on. This has the advantage that the application could now be successfully deployed and run, even if the OSGi container does not have the required features pre-deployed. For example, you can define the example-camel-bundle feature with Apache Camel dependencies as follows: Specifying the version attribute is optional. When present, it enables you to select the specified version of the feature. 9.6. Add OSGi configurations to the feature If your application uses the OSGi Configuration Admin service, you can specify configuration settings for this service using the config child element of your feature definition. For example, to specify that the prefix property has the value, MyTransform , add the following config child element to your feature's configuration: Where the name attribute of the config element specifies the persistent ID of the property settings (where the persistent ID acts effectively as a name scope for the property names). The content of the config element is parsed in the same way as a Java properties file . The settings in the config element can optionally be overridden by the settings in the Java properties file located in the InstallDir /etc directory, which is named after the persistent ID, as follows: As an example of how the preceding configuration properties can be used in practice, consider the following Blueprint XML file that accesses the OSGi configuration properties: When this Blueprint XML file is deployed in the example-camel-bundle bundle, the property reference, USD{prefix} , is replaced by the value, MyTransform , which is specified by the config element in the feature repository. 9.7. Automatically deploy an OSGi configuration By adding a configfile element to a feature, you can ensure that an OSGi configuration file gets added to the InstallDir /etc directory at the same time that the feature is installed. This means that you can conveniently install a feature and its associated configuration at the same time. 
For example, given that the org.fusesource.fuseesb.example.cfg configuration file is archived in a Maven repository at mvn:org.fusesource.fuseesb.example/configadmin/1.0/cfg , you could deploy the configuration file by adding the following element to the feature:
[ "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <features name=\" CustomRepository \"> </features>", "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <features name=\"MyFeaturesRepo\"> <feature name=\" example-camel-bundle \"> <bundle>file:C:/Projects/camel-bundle/target/camel-bundle-1.0-SNAPSHOT.jar</bundle> </feature> </features>", "JBossFuse:karaf@root> features:refreshurl JBossFuse:karaf@root> features:list [uninstalled] [0.0.0 ] example-camel-bundle MyFeaturesRepo", "JBossFuse:karaf@root> features:list | grep example-camel-bundle [uninstalled] [0.0.0 ] example-camel-bundle MyFeaturesRepo", "features:addurl file:C:/Projects/features.xml", "JBossFuse:karaf@root> features:listUrl file:C:/Projects/features.xml mvn:org.apache.ode/ode-jbi-karaf/1.3.3-fuse-01-00/xml/features mvn:org.apache.felix.karaf/apache-felix-karaf/1.2.0-fuse-01-00/xml/features", "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <features name=\"MyFeaturesRepo\"> <feature name=\" example-camel-bundle \"> <bundle>file:C:/Projects/camel-bundle/target/camel-bundle-1.0-SNAPSHOT.jar</bundle> <feature version=\"7.13.0.fuse-7_13_0-00012-redhat-00001\">camel-core</feature> <feature version=\"7.13.0.fuse-7_13_0-00012-redhat-00001\">camel-spring-osgi</feature> </feature> </features>", "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <features name=\"MyFeaturesRepo\"> <feature name=\"example-camel-bundle\"> <config name=\" org.fusesource.fuseesb.example \"> prefix=MyTransform </config> </feature> </features>", "InstallDir /etc/org.fusesource.fuseesb.example.cfg", "<?xml version=\"1.0\" encoding=\"UTF-8\"?> <blueprint xmlns=\"http://www.osgi.org/xmlns/blueprint/v1.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" xmlns:cm=\"http://aries.apache.org/blueprint/xmlns/blueprint-cm/v1.1.0\" > <!-- osgi blueprint property placeholder --> <cm:property-placeholder id=\"placeholder\" persistent-id=\" org.fusesource.fuseesb.example \"> <cm:default-properties> <cm:property name=\"prefix\" value=\"DefaultValue\"/> </cm:default-properties> </cm:property-placeholder> <bean id=\"myTransform\" class=\"org.fusesource.fuseesb.example.MyTransform\"> <property name=\"prefix\" value=\" USD{prefix} \"/> </bean> </blueprint>", "<configfile finalname=\"etc/org.fusesource.fuseesb.example.cfg\"> mvn:org.fusesource.fuseesb.example/configadmin/1.0/cfg </configfile>" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/deploying_into_apache_karaf/deployfeatures
Providing feedback on Red Hat build of OpenJDK documentation
Providing feedback on Red Hat build of OpenJDK documentation To report an error or to improve our documentation, log in to your Red Hat Jira account and submit an issue. If you do not have a Red Hat Jira account, then you will be prompted to create an account. Procedure Click the following link to create a ticket . Enter a brief description of the issue in the Summary . Provide a detailed description of the issue or enhancement in the Description . Include a URL to where the issue occurs in the documentation. Clicking Create creates and routes the issue to the appropriate documentation team.
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/21/html/release_notes_for_eclipse_temurin_21.0.6/providing-direct-documentation-feedback_openjdk
Chapter 9. Consoles and logging during installation
Chapter 9. Consoles and logging during installation The Red Hat Enterprise Linux installer uses the tmux terminal multiplexer to display and control several windows in addition to the main interface. Each of these windows serves a different purpose; they display several different logs, which can be used to troubleshoot issues during the installation process. One of the windows provides an interactive shell prompt with root privileges, unless this prompt was specifically disabled using a boot option or a Kickstart command. The terminal multiplexer is running in virtual console 1. To switch from the actual installation environment to tmux , press Ctrl + Alt + F1 . To go back to the main installation interface, which runs in virtual console 6, press Ctrl + Alt + F6 . During a text mode installation, you start in virtual console 1 ( tmux ), and switching to console 6 opens a shell prompt instead of a graphical interface. The console running tmux has five available windows; their contents are described in the following table, along with keyboard shortcuts. Note that the keyboard shortcuts are two-part: first press Ctrl + b , then release both keys, and press the number key for the window you want to use. You can also use Ctrl + b n , Alt + Tab , and Ctrl + b p to switch to the next or previous tmux window. Table 9.1. Available tmux windows Shortcut Contents Ctrl + b 1 Main installation program window. Contains text-based prompts (during text mode installation or if you use VNC direct mode), and also some debugging information. Ctrl + b 2 Interactive shell prompt with root privileges. Ctrl + b 3 Installation log; displays messages stored in /tmp/anaconda.log . Ctrl + b 4 Storage log; displays messages related to storage devices and configuration, stored in /tmp/storage.log . Ctrl + b 5 Program log; displays messages from utilities executed during the installation process, stored in /tmp/program.log .
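For example, from the root shell in window 2 you could follow all three log files at once (a minimal sketch using the paths from the table above):
# Follow the installation, storage, and program logs from the interactive shell
tail -f /tmp/anaconda.log /tmp/storage.log /tmp/program.log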
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/interactively_installing_rhel_from_installation_media/consoles-logging-during-install_rhel-installer
9.6. Utilization and Placement Strategy
9.6. Utilization and Placement Strategy Pacemaker decides where to place a resource according to the resource allocation scores on every node. The resource will be allocated to the node where the resource has the highest score. This allocation score is derived from a combination of factors, including resource constraints, resource-stickiness settings, prior failure history of a resource on each node, and utilization of each node. If the resource allocation scores on all the nodes are equal, by the default placement strategy Pacemaker will choose a node with the least number of allocated resources for balancing the load. If the number of resources on each node is equal, the first eligible node listed in the CIB will be chosen to run the resource. Often, however, different resources use significantly different proportions of a node's capacities (such as memory or I/O). You cannot always balance the load ideally by taking into account only the number of resources allocated to a node. In addition, if resources are placed such that their combined requirements exceed the provided capacity, they may fail to start completely or they may run run with degraded performance. To take these factors into account, Pacemaker allows you to configure the following components: the capacity a particular node provides the capacity a particular resource requires an overall strategy for placement of resources The following sections describe how to configure these components. 9.6.1. Utilization Attributes To configure the capacity that a node provides or a resource requires, you can use utilization attributes for nodes and resources. You do this by setting a utilization variable for a resource and assigning a value to that variable to indicate what the resource requires, and then setting that same utilization variable for a node and assigning a value to that variable to indicate what that node provides. You can name utilization attributes according to your preferences and define as many name and value pairs as your configuration needs. The values of utilization attributes must be integers. As of Red Hat Enterprise Linux 7.3, you can set utilization attributes with the pcs command. The following example configures a utilization attribute of CPU capacity for two nodes, naming the attribute cpu . It also configures a utilization attribute of RAM capacity, naming the attribute memory . In this example: Node 1 is defined as providing a CPU capacity of two and a RAM capacity of 2048 Node 2 is defined as providing a CPU capacity of four and a RAM capacity of 2048 The following example specifies the same utilization attributes that three different resources require. In this example: resource dummy-small requires a CPU capacity of 1 and a RAM capacity of 1024 resource dummy-medium requires a CPU capacity of 2 and a RAM capacity of 2048 resource dummy-large requires a CPU capacity of 1 and a RAM capacity of 3072 A node is considered eligible for a resource if it has sufficient free capacity to satisfy the resource's requirements, as defined by the utilization attributes. 9.6.2. Placement Strategy After you have configured the capacities your nodes provide and the capacities your resources require, you need to set the placement-strategy cluster property, otherwise the capacity configurations have no effect. For information on setting cluster properties, see Chapter 12, Pacemaker Cluster Properties . 
Four values are available for the placement-strategy cluster property: default - Utilization values are not taken into account at all. Resources are allocated according to allocation scores. If scores are equal, resources are evenly distributed across nodes. utilization - Utilization values are taken into account only when deciding whether a node is considered eligible (that is, whether it has sufficient free capacity to satisfy the resource's requirements). Load-balancing is still done based on the number of resources allocated to a node. balanced - Utilization values are taken into account when deciding whether a node is eligible to serve a resource and when load-balancing, so an attempt is made to spread the resources in a way that optimizes resource performance. minimal - Utilization values are taken into account only when deciding whether a node is eligible to serve a resource. For load-balancing, an attempt is made to concentrate the resources on as few nodes as possible, thereby enabling possible power savings on the remaining nodes. The following example command sets the value of placement-strategy to balanced . After running this command, Pacemaker will ensure the load from your resources will be distributed evenly throughout the cluster, without the need for complicated sets of colocation constraints. 9.6.3. Resource Allocation The following subsections summarize how Pacemaker allocates resources. 9.6.3.1. Node Preference Pacemaker determines which node is preferred when allocating resources according to the following strategy. The node with the highest node weight gets consumed first. Node weight is a score maintained by the cluster to represent node health. If multiple nodes have the same node weight: If the placement-strategy cluster property is default or utilization : The node that has the least number of allocated resources gets consumed first. If the numbers of allocated resources are equal, the first eligible node listed in the CIB gets consumed first. If the placement-strategy cluster property is balanced : The node that has the most free capacity gets consumed first. If the free capacities of the nodes are equal, the node that has the least number of allocated resources gets consumed first. If the free capacities of the nodes are equal and the number of allocated resources is equal, the first eligible node listed in the CIB gets consumed first. If the placement-strategy cluster property is minimal , the first eligible node listed in the CIB gets consumed first. 9.6.3.2. Node Capacity Pacemaker determines which node has the most free capacity according to the following strategy. If only one type of utilization attribute has been defined, free capacity is a simple numeric comparison. If multiple types of utilization attributes have been defined, then the node that is numerically highest in the most attribute types has the most free capacity. For example: If NodeA has more free CPUs, and NodeB has more free memory, then their free capacities are equal. If NodeA has more free CPUs, while NodeB has more free memory and storage, then NodeB has more free capacity. 9.6.3.3. Resource Allocation Preference Pacemaker determines which resource is allocated first according to the following strategy. The resource that has the highest priority gets allocated first. For information on setting priority for a resource, see Table 6.3, "Resource Meta Options" . 
If the priorities of the resources are equal, the resource that has the highest score on the node where it is running gets allocated first, to prevent resource shuffling. If the resource scores on the nodes where the resources are running are equal or the resources are not running, the resource that has the highest score on the preferred node gets allocated first. If the resource scores on the preferred node are equal in this case, the first runnable resource listed in the CIB gets allocated first. 9.6.4. Resource Placement Strategy Guidelines To ensure that Pacemaker's placement strategy for resources works most effectively, you should take the following considerations into account when configuring your system. Make sure that you have sufficient physical capacity. If the physical capacity of your nodes is being used to near maximum under normal conditions, then problems could occur during failover. Even without the utilization feature, you may start to experience timeouts and secondary failures. Build some buffer into the capabilities you configure for the nodes. Advertise slightly more node resources than you physically have, on the assumption the that a Pacemaker resource will not use 100% of the configured amount of CPU, memory, and so forth all the time. This practice is sometimes called overcommit. Specify resource priorities. If the cluster is going to sacrifice services, it should be the ones you care about least. Ensure that resource priorities are properly set so that your most important resources are scheduled first. For information on setting resource priorities, see Table 6.3, "Resource Meta Options" . 9.6.5. The NodeUtilization Resource Agent (Red Hat Enterprise Linux 7.4 and later) Red Hat Enterprise Linux 7.4 supports the NodeUtilization resource agent. The NodeUtilization agent can detect the system parameters of available CPU, host memory availability, and hypervisor memory availability and add these parameters into the CIB. You can run the agent as a clone resource to have it automatically populate these parameters on each node. For information on the NodeUtilization resource agent and the resource options for this agent, run the pcs resource describe NodeUtilization command.
[ "pcs node utilization node1 cpu=2 memory=2048 pcs node utilization node2 cpu=4 memory=2048", "pcs resource utilization dummy-small cpu=1 memory=1024 pcs resource utilization dummy-medium cpu=2 memory=2048 pcs resource utilization dummy-large cpu=3 memory=3072", "pcs property set placement-strategy=balanced" ]
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/high_availability_add-on_reference/s1-utilization-HAAR
Chapter 6. Post-installation network configuration
Chapter 6. Post-installation network configuration After installing OpenShift Container Platform, you can further expand and customize your network to your requirements. 6.1. Cluster Network Operator configuration The configuration for the cluster network is specified as part of the Cluster Network Operator (CNO) configuration and stored in a custom resource (CR) object that is named cluster . The CR specifies the fields for the Network API in the operator.openshift.io API group. The CNO configuration inherits the following fields during cluster installation from the Network API in the Network.config.openshift.io API group and these fields cannot be changed: clusterNetwork IP address pools from which pod IP addresses are allocated. serviceNetwork IP address pool for services. defaultNetwork.type Cluster network provider, such as OpenShift SDN or OVN-Kubernetes. Note After cluster installation, you cannot modify the fields listed in the section. 6.2. Enabling the cluster-wide proxy The Proxy object is used to manage the cluster-wide egress proxy. When a cluster is installed or upgraded without the proxy configured, a Proxy object is still generated but it will have a nil spec . For example: apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: trustedCA: name: "" status: A cluster administrator can configure the proxy for OpenShift Container Platform by modifying this cluster Proxy object. Note Only the Proxy object named cluster is supported, and no additional proxies can be created. Prerequisites Cluster administrator permissions OpenShift Container Platform oc CLI tool installed Procedure Create a config map that contains any additional CA certificates required for proxying HTTPS connections. Note You can skip this step if the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Create a file called user-ca-bundle.yaml with the following contents, and provide the values of your PEM-encoded certificates: apiVersion: v1 data: ca-bundle.crt: | 1 <MY_PEM_ENCODED_CERTS> 2 kind: ConfigMap metadata: name: user-ca-bundle 3 namespace: openshift-config 4 1 This data key must be named ca-bundle.crt . 2 One or more PEM-encoded X.509 certificates used to sign the proxy's identity certificate. 3 The config map name that will be referenced from the Proxy object. 4 The config map must be in the openshift-config namespace. Create the config map from this file: USD oc create -f user-ca-bundle.yaml Use the oc edit command to modify the Proxy object: USD oc edit proxy/cluster Configure the necessary fields for the proxy: apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 readinessEndpoints: - http://www.google.com 4 - https://www.google.com trustedCA: name: user-ca-bundle 5 1 A proxy URL to use for creating HTTP connections outside the cluster. The URL scheme must be http . 2 A proxy URL to use for creating HTTPS connections outside the cluster. The URL scheme must be either http or https . Specify a URL for the proxy that supports the URL scheme. For example, most proxies will report an error if they are configured to use https but they only support http . This failure message may not propagate to the logs and can appear to be a network connection failure instead. 
If using a proxy that listens for https connections from the cluster, you may need to configure the cluster to accept the CAs and certificates that the proxy uses. 3 A comma-separated list of destination domain names, domains, IP addresses or other network CIDRs to exclude proxying. Preface a domain with . to match subdomains only. For example, .y.com matches x.y.com , but not y.com . Use * to bypass proxy for all destinations. If you scale up workers that are not included in the network defined by the networking.machineNetwork[].cidr field from the installation configuration, you must add them to this list to prevent connection issues. This field is ignored if neither the httpProxy or httpsProxy fields are set. 4 One or more URLs external to the cluster to use to perform a readiness check before writing the httpProxy and httpsProxy values to status. 5 A reference to the config map in the openshift-config namespace that contains additional CA certificates required for proxying HTTPS connections. Note that the config map must already exist before referencing it here. This field is required unless the proxy's identity certificate is signed by an authority from the RHCOS trust bundle. Save the file to apply the changes. 6.3. Setting DNS to private After you deploy a cluster, you can modify its DNS to use only a private zone. Procedure Review the DNS custom resource for your cluster: USD oc get dnses.config.openshift.io/cluster -o yaml Example output apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: "2019-10-25T18:27:09Z" generation: 2 name: cluster resourceVersion: "37966" selfLink: /apis/config.openshift.io/v1/dnses/cluster uid: 0e714746-f755-11f9-9cb1-02ff55d8f976 spec: baseDomain: <base_domain> privateZone: tags: Name: <infrastructure_id>-int kubernetes.io/cluster/<infrastructure_id>: owned publicZone: id: Z2XXXXXXXXXXA4 status: {} Note that the spec section contains both a private and a public zone. Patch the DNS custom resource to remove the public zone: USD oc patch dnses.config.openshift.io/cluster --type=merge --patch='{"spec": {"publicZone": null}}' dns.config.openshift.io/cluster patched Because the Ingress Controller consults the DNS definition when it creates Ingress objects, when you create or modify Ingress objects, only private records are created. Important DNS records for the existing Ingress objects are not modified when you remove the public zone. Optional: Review the DNS custom resource for your cluster and confirm that the public zone was removed: USD oc get dnses.config.openshift.io/cluster -o yaml Example output apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: "2019-10-25T18:27:09Z" generation: 2 name: cluster resourceVersion: "37966" selfLink: /apis/config.openshift.io/v1/dnses/cluster uid: 0e714746-f755-11f9-9cb1-02ff55d8f976 spec: baseDomain: <base_domain> privateZone: tags: Name: <infrastructure_id>-int kubernetes.io/cluster/<infrastructure_id>-wfpg4: owned status: {} 6.4. Configuring ingress cluster traffic OpenShift Container Platform provides the following methods for communicating from outside the cluster with services running in the cluster: If you have HTTP/HTTPS, use an Ingress Controller. If you have a TLS-encrypted protocol other than HTTPS, such as TLS with the SNI header, use an Ingress Controller. Otherwise, use a load balancer, an external IP, or a node port. 
Method Purpose Use an Ingress Controller Allows access to HTTP/HTTPS traffic and TLS-encrypted protocols other than HTTPS, such as TLS with the SNI header. Automatically assign an external IP by using a load balancer service Allows traffic to non-standard ports through an IP address assigned from a pool. Manually assign an external IP to a service Allows traffic to non-standard ports through a specific IP address. Configure a NodePort Expose a service on all nodes in the cluster. 6.5. Configuring the node port service range As a cluster administrator, you can expand the available node port range. If your cluster uses of a large number of node ports, you might need to increase the number of available ports. The default port range is 30000-32767 . You can never reduce the port range, even if you first expand it beyond the default range. 6.5.1. Prerequisites Your cluster infrastructure must allow access to the ports that you specify within the expanded range. For example, if you expand the node port range to 30000-32900 , the inclusive port range of 32768-32900 must be allowed by your firewall or packet filtering configuration. 6.5.1.1. Expanding the node port range You can expand the node port range for the cluster. Prerequisites Install the OpenShift CLI ( oc ). Log in to the cluster with a user with cluster-admin privileges. Procedure To expand the node port range, enter the following command. Replace <port> with the largest port number in the new range. USD oc patch network.config.openshift.io cluster --type=merge -p \ '{ "spec": { "serviceNodePortRange": "30000-<port>" } }' Tip You can alternatively apply the following YAML to update the node port range: apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: serviceNodePortRange: "30000-<port>" Example output network.config.openshift.io/cluster patched To confirm that the configuration is active, enter the following command. It can take several minutes for the update to apply. USD oc get configmaps -n openshift-kube-apiserver config \ -o jsonpath="{.data['config\.yaml']}" | \ grep -Eo '"service-node-port-range":["[[:digit:]]+-[[:digit:]]+"]' Example output "service-node-port-range":["30000-33000"] 6.6. Configuring network policy As a cluster administrator or project administrator, you can configure network policies for a project. 6.6.1. About network policy In a cluster using a Kubernetes Container Network Interface (CNI) plugin that supports Kubernetes network policy, network isolation is controlled entirely by NetworkPolicy objects. In OpenShift Container Platform 4.9, OpenShift SDN supports using network policy in its default network isolation mode. Note When using the OpenShift SDN cluster network provider, the following limitations apply regarding network policies: Network policy egress as specified by the egress field is not supported. Egress firewall is also known as egress network policy in OpenShift SDN. This is not the same as network policy egress. IPBlock is supported by network policy, but without support for except clauses. If you create a policy with an IPBlock section that includes an except clause, the SDN pods log warnings and the entire IPBlock section of that policy is ignored. Warning Network policy does not apply to the host network namespace. Pods with host networking enabled are unaffected by network policy rules. By default, all pods in a project are accessible from other pods and network endpoints. 
To isolate one or more pods in a project, you can create NetworkPolicy objects in that project to indicate the allowed incoming connections. Project administrators can create and delete NetworkPolicy objects within their own project. If a pod is matched by selectors in one or more NetworkPolicy objects, then the pod will accept only connections that are allowed by at least one of those NetworkPolicy objects. A pod that is not selected by any NetworkPolicy objects is fully accessible. The following example NetworkPolicy objects demonstrate supporting different scenarios: Deny all traffic: To make a project deny by default, add a NetworkPolicy object that matches all pods but accepts no traffic: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: deny-by-default spec: podSelector: {} ingress: [] Only allow connections from the OpenShift Container Platform Ingress Controller: To make a project allow only connections from the OpenShift Container Platform Ingress Controller, add the following NetworkPolicy object. apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-ingress spec: ingress: - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: ingress podSelector: {} policyTypes: - Ingress Only accept connections from pods within a project: To make pods accept connections from other pods in the same project, but reject all other connections from pods in other projects, add the following NetworkPolicy object: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-same-namespace spec: podSelector: {} ingress: - from: - podSelector: {} Only allow HTTP and HTTPS traffic based on pod labels: To enable only HTTP and HTTPS access to the pods with a specific label ( role=frontend in the following example), add a NetworkPolicy object similar to the following: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-http-and-https spec: podSelector: matchLabels: role: frontend ingress: - ports: - protocol: TCP port: 80 - protocol: TCP port: 443 Accept connections by using both namespace and pod selectors: To match network traffic by combining namespace and pod selectors, you can use a NetworkPolicy object similar to the following: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-pod-and-namespace-both spec: podSelector: matchLabels: name: test-pods ingress: - from: - namespaceSelector: matchLabels: project: project_name podSelector: matchLabels: name: test-pods NetworkPolicy objects are additive, which means you can combine multiple NetworkPolicy objects together to satisfy complex network requirements. For example, for the NetworkPolicy objects defined in the preceding samples, you can define both allow-same-namespace and allow-http-and-https policies within the same project. This allows the pods with the label role=frontend to accept any connection allowed by each policy. That is, connections on any port from pods in the same namespace, and connections on ports 80 and 443 from pods in any namespace. 6.6.2. Example NetworkPolicy object The following annotates an example NetworkPolicy object: kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-27107 1 spec: podSelector: 2 matchLabels: app: mongodb ingress: - from: - podSelector: 3 matchLabels: app: app ports: 4 - protocol: TCP port: 27017 1 The name of the NetworkPolicy object. 2 A selector that describes the pods to which the policy applies. 
The policy object can only select pods in the project that defines the NetworkPolicy object. 3 A selector that matches the pods from which the policy object allows ingress traffic. The selector matches pods in the same namespace as the NetworkPolicy. 4 A list of one or more destination ports on which to accept traffic. 6.6.3. Creating a network policy To define granular rules describing ingress or egress network traffic allowed for namespaces in your cluster, you can create a network policy. Note If you log in with a user with the cluster-admin role, then you can create a network policy in any namespace in the cluster. Prerequisites Your cluster uses a cluster network provider that supports NetworkPolicy objects, such as the OVN-Kubernetes network provider or the OpenShift SDN network provider with mode: NetworkPolicy set. This mode is the default for OpenShift SDN. You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with admin privileges. You are working in the namespace that the network policy applies to. Procedure Create a policy rule: Create a <policy_name>.yaml file: USD touch <policy_name>.yaml where: <policy_name> Specifies the network policy file name. Define a network policy in the file that you just created, such as in the following examples: Deny ingress from all pods in all namespaces kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: deny-by-default spec: podSelector: ingress: [] Allow ingress from all pods in the same namespace kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-same-namespace spec: podSelector: ingress: - from: - podSelector: {} To create the network policy object, enter the following command: USD oc apply -f <policy_name>.yaml -n <namespace> where: <policy_name> Specifies the network policy file name. <namespace> Optional: Specifies the namespace if the object is defined in a different namespace than the current namespace. Example output networkpolicy.networking.k8s.io/default-deny created Note If you log in with a user with the cluster-admin role in the console, then you have a choice of creating a network policy in any namespace in the cluster directly from the YAML view or from a form in the web console. 6.6.4. Configuring multitenant isolation by using network policy You can configure your project to isolate it from pods and services in other project namespaces. Prerequisites Your cluster uses a cluster network provider that supports NetworkPolicy objects, such as the OVN-Kubernetes network provider or the OpenShift SDN network provider with mode: NetworkPolicy set. This mode is the default for OpenShift SDN. You installed the OpenShift CLI ( oc ). You are logged in to the cluster with a user with admin privileges. Procedure Create the following NetworkPolicy objects: A policy named allow-from-openshift-ingress . USD cat << EOF| oc create -f - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-ingress spec: ingress: - from: - namespaceSelector: matchLabels: policy-group.network.openshift.io/ingress: "" podSelector: {} policyTypes: - Ingress EOF Note policy-group.network.openshift.io/ingress: "" is the preferred namespace selector label for OpenShift SDN. You can use the network.openshift.io/policy-group: ingress namespace selector label, but this is a legacy label. 
A policy named allow-from-openshift-monitoring : USD cat << EOF| oc create -f - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-monitoring spec: ingress: - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: monitoring podSelector: {} policyTypes: - Ingress EOF A policy named allow-same-namespace : USD cat << EOF| oc create -f - kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-same-namespace spec: podSelector: ingress: - from: - podSelector: {} EOF Optional: To confirm that the network policies exist in your current project, enter the following command: USD oc describe networkpolicy Example output Name: allow-from-openshift-ingress Namespace: example1 Created on: 2020-06-09 00:28:17 -0400 EDT Labels: <none> Annotations: <none> Spec: PodSelector: <none> (Allowing the specific traffic to all pods in this namespace) Allowing ingress traffic: To Port: <any> (traffic allowed to all ports) From: NamespaceSelector: network.openshift.io/policy-group: ingress Not affecting egress traffic Policy Types: Ingress Name: allow-from-openshift-monitoring Namespace: example1 Created on: 2020-06-09 00:29:57 -0400 EDT Labels: <none> Annotations: <none> Spec: PodSelector: <none> (Allowing the specific traffic to all pods in this namespace) Allowing ingress traffic: To Port: <any> (traffic allowed to all ports) From: NamespaceSelector: network.openshift.io/policy-group: monitoring Not affecting egress traffic Policy Types: Ingress 6.6.5. Creating default network policies for a new project As a cluster administrator, you can modify the new project template to automatically include NetworkPolicy objects when you create a new project. 6.6.6. Modifying the template for new projects As a cluster administrator, you can modify the default project template so that new projects are created using your custom requirements. To create your own custom project template: Procedure Log in as a user with cluster-admin privileges. Generate the default project template: USD oc adm create-bootstrap-project-template -o yaml > template.yaml Use a text editor to modify the generated template.yaml file by adding objects or modifying existing objects. The project template must be created in the openshift-config namespace. Load your modified template: USD oc create -f template.yaml -n openshift-config Edit the project configuration resource using the web console or CLI. Using the web console: Navigate to the Administration Cluster Settings page. Click Configuration to view all configuration resources. Find the entry for Project and click Edit YAML . Using the CLI: Edit the project.config.openshift.io/cluster resource: USD oc edit project.config.openshift.io/cluster Update the spec section to include the projectRequestTemplate and name parameters, and set the name of your uploaded project template. The default name is project-request . Project configuration resource with custom project template apiVersion: config.openshift.io/v1 kind: Project metadata: ... spec: projectRequestTemplate: name: <template_name> After you save your changes, create a new project to verify that your changes were successfully applied. 6.6.6.1. Adding network policies to the new project template As a cluster administrator, you can add network policies to the default template for new projects. OpenShift Container Platform will automatically create all the NetworkPolicy objects specified in the template in the project. 
Prerequisites Your cluster uses a default CNI network provider that supports NetworkPolicy objects, such as the OpenShift SDN network provider with mode: NetworkPolicy set. This mode is the default for OpenShift SDN. You installed the OpenShift CLI ( oc ). You must log in to the cluster with a user with cluster-admin privileges. You must have created a custom default project template for new projects. Procedure Edit the default template for a new project by running the following command: USD oc edit template <project_template> -n openshift-config Replace <project_template> with the name of the default template that you configured for your cluster. The default template name is project-request . In the template, add each NetworkPolicy object as an element to the objects parameter. The objects parameter accepts a collection of one or more objects. In the following example, the objects parameter collection includes several NetworkPolicy objects. objects: - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-same-namespace spec: podSelector: {} ingress: - from: - podSelector: {} - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-ingress spec: ingress: - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: ingress podSelector: {} policyTypes: - Ingress ... Optional: Create a new project to confirm that your network policy objects are created successfully by running the following commands: Create a new project: USD oc new-project <project> 1 1 Replace <project> with the name for the project you are creating. Confirm that the network policy objects in the new project template exist in the new project: USD oc get networkpolicy NAME POD-SELECTOR AGE allow-from-openshift-ingress <none> 7s allow-from-same-namespace <none> 7s 6.7. Supported configurations The following configurations are supported for the current release of Red Hat OpenShift Service Mesh. 6.7.1. Supported platforms The Red Hat OpenShift Service Mesh Operator supports multiple versions of the ServiceMeshControlPlane resource. Version 2.3 Service Mesh control planes are supported on the following platform versions: Red Hat OpenShift Container Platform version 4.9 or later. Red Hat OpenShift Dedicated version 4. Azure Red Hat OpenShift (ARO) version 4. Red Hat OpenShift Service on AWS (ROSA). 6.7.2. Unsupported configurations Explicitly unsupported cases include: OpenShift Online is not supported for Red Hat OpenShift Service Mesh. Red Hat OpenShift Service Mesh does not support the management of microservices outside the cluster where Service Mesh is running. 6.7.3. Supported network configurations Red Hat OpenShift Service Mesh supports the following network configurations. OpenShift-SDN OVN-Kubernetes is supported on OpenShift Container Platform 4.7.32+, OpenShift Container Platform 4.8.12+, and OpenShift Container Platform 4.9+. Third-Party Container Network Interface (CNI) plugins that have been certified on OpenShift Container Platform and passed Service Mesh conformance testing. See Certified OpenShift CNI Plug-ins for more information. 6.7.4. Supported configurations for Service Mesh This release of Red Hat OpenShift Service Mesh is only available on OpenShift Container Platform x86_64, IBM Z, and IBM Power Systems. IBM Z is only supported on OpenShift Container Platform 4.6 and later. IBM Power Systems is only supported on OpenShift Container Platform 4.6 and later. 
Configurations where all Service Mesh components are contained within a single OpenShift Container Platform cluster. Configurations that do not integrate external services such as virtual machines. Red Hat OpenShift Service Mesh does not support EnvoyFilter configuration except where explicitly documented. 6.7.5. Supported configurations for Kiali The Kiali console is only supported on the two most recent releases of the Chrome, Edge, Firefox, or Safari browsers. 6.7.6. Supported configurations for Distributed Tracing Jaeger agent as a sidecar is the only supported configuration for Jaeger. Jaeger as a daemonset is not supported for multitenant installations or OpenShift Dedicated. 6.7.7. Supported WebAssembly module 3scale WebAssembly is the only provided WebAssembly module. You can create custom WebAssembly modules. 6.7.8. Operator overview Red Hat OpenShift Service Mesh requires the following four Operators: OpenShift Elasticsearch - (Optional) Provides database storage for tracing and logging with the distributed tracing platform. It is based on the open source Elasticsearch project. Red Hat OpenShift distributed tracing platform - Provides distributed tracing to monitor and troubleshoot transactions in complex distributed systems. It is based on the open source Jaeger project. Kiali - Provides observability for your service mesh. Allows you to view configurations, monitor traffic, and analyze traces in a single console. It is based on the open source Kiali project. Red Hat OpenShift Service Mesh - Allows you to connect, secure, control, and observe the microservices that comprise your applications. The Service Mesh Operator defines and monitors the ServiceMeshControlPlane resources that manage the deployment, updating, and deletion of the Service Mesh components. It is based on the open source Istio project. Next steps Install Red Hat OpenShift Service Mesh in your OpenShift Container Platform environment. 6.8. Optimizing routing The OpenShift Container Platform HAProxy router can be scaled or configured to optimize performance. 6.8.1. Baseline Ingress Controller (router) performance The OpenShift Container Platform Ingress Controller, or router, is the ingress point for ingress traffic for applications and services that are configured using routes and ingresses. When evaluating the performance of a single HAProxy router in terms of HTTP requests handled per second, the performance varies depending on many factors. In particular: HTTP keep-alive/close mode Route type TLS session resumption client support Number of concurrent connections per target route Number of target routes Back end server page size Underlying infrastructure (network/SDN solution, CPU, and so on) While performance in your specific environment will vary, Red Hat lab tests were performed on a public cloud instance of size 4 vCPU/16GB RAM. A single HAProxy router handling 100 routes terminated by backends serving 1kB static pages is able to handle the following number of transactions per second. In HTTP keep-alive mode scenarios: Encryption LoadBalancerService HostNetwork none 21515 29622 edge 16743 22913 passthrough 36786 53295 re-encrypt 21583 25198 In HTTP close (no keep-alive) scenarios: Encryption LoadBalancerService HostNetwork none 5719 8273 edge 2729 4069 passthrough 4121 5344 re-encrypt 2320 2941 The default Ingress Controller configuration was used with the spec.tuningOptions.threadCount field set to 4 . Two different endpoint publishing strategies were tested: Load Balancer Service and Host Network. 
TLS session resumption was used for encrypted routes. With HTTP keep-alive, a single HAProxy router is capable of saturating a 1 Gbit NIC at page sizes as small as 8 kB. When running on bare metal with modern processors, you can expect roughly twice the performance of the public cloud instance above. This overhead is introduced by the virtualization layer in place on public clouds and holds mostly true for private cloud-based virtualization as well. The following table is a guide to how many applications to use behind the router: Number of applications Application type 5-10 static file/web server or caching proxy 100-1000 applications generating dynamic content In general, HAProxy can support routes for up to 1000 applications, depending on the technology in use. Ingress Controller performance might be limited by the capabilities and performance of the applications behind it, such as language or static versus dynamic content. Ingress, or router, sharding should be used to serve more routes towards applications and help horizontally scale the routing tier. 6.9. Post-installation RHOSP network configuration You can configure some aspects of an OpenShift Container Platform on Red Hat OpenStack Platform (RHOSP) cluster after installation. 6.9.1. Configuring application access with floating IP addresses After you install OpenShift Container Platform, configure Red Hat OpenStack Platform (RHOSP) to allow application network traffic. Note You do not need to perform this procedure if you provided values for platform.openstack.apiFloatingIP and platform.openstack.ingressFloatingIP in the install-config.yaml file, or os_api_fip and os_ingress_fip in the inventory.yaml playbook, during installation. The floating IP addresses are already set. Prerequisites OpenShift Container Platform cluster must be installed Floating IP addresses are enabled as described in the OpenShift Container Platform on RHOSP installation documentation. Procedure After you install the OpenShift Container Platform cluster, attach a floating IP address to the ingress port: Show the port: USD openstack port show <cluster_name>-<cluster_ID>-ingress-port Attach the port to the IP address: USD openstack floating ip set --port <ingress_port_ID> <apps_FIP> Add a wildcard A record for *apps. to your DNS file: *.apps.<cluster_name>.<base_domain> IN A <apps_FIP> Note If you do not control the DNS server but want to enable application access for non-production purposes, you can add these hostnames to /etc/hosts : <apps_FIP> console-openshift-console.apps.<cluster name>.<base domain> <apps_FIP> integrated-oauth-server-openshift-authentication.apps.<cluster name>.<base domain> <apps_FIP> oauth-openshift.apps.<cluster name>.<base domain> <apps_FIP> prometheus-k8s-openshift-monitoring.apps.<cluster name>.<base domain> <apps_FIP> grafana-openshift-monitoring.apps.<cluster name>.<base domain> <apps_FIP> <app name>.apps.<cluster name>.<base domain> 6.9.2. Kuryr ports pools A Kuryr ports pool maintains a number of ports on standby for pod creation. Keeping ports on standby minimizes pod creation time. Without ports pools, Kuryr must explicitly request port creation or deletion whenever a pod is created or deleted. The Neutron ports that Kuryr uses are created in subnets that are tied to namespaces. These pod ports are also added as subports to the primary port of OpenShift Container Platform cluster nodes. Because Kuryr keeps each namespace in a separate subnet, a separate ports pool is maintained for each namespace-worker pair. 
Prior to installing a cluster, you can set the following parameters in the cluster-network-03-config.yml manifest file to configure ports pool behavior: The enablePortPoolsPrepopulation parameter controls pool prepopulation, which forces Kuryr to add ports to the pool when it is created, such as when a new host is added, or a new namespace is created. The default value is false . The poolMinPorts parameter is the minimum number of free ports that are kept in the pool. The default value is 1 . The poolMaxPorts parameter is the maximum number of free ports that are kept in the pool. A value of 0 disables that upper bound. This is the default setting. If your OpenStack port quota is low, or you have a limited number of IP addresses on the pod network, consider setting this option to ensure that unneeded ports are deleted. The poolBatchPorts parameter defines the maximum number of Neutron ports that can be created at once. The default value is 3 . 6.9.3. Adjusting Kuryr ports pool settings in active deployments on RHOSP You can use a custom resource (CR) to configure how Kuryr manages Red Hat OpenStack Platform (RHOSP) Neutron ports to control the speed and efficiency of pod creation on a deployed cluster. Procedure From a command line, open the Cluster Network Operator (CNO) CR for editing: USD oc edit networks.operator.openshift.io cluster Edit the settings to meet your requirements. The following file is provided as an example: apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: clusterNetwork: - cidr: 10.128.0.0/14 hostPrefix: 23 serviceNetwork: - 172.30.0.0/16 defaultNetwork: type: Kuryr kuryrConfig: enablePortPoolsPrepopulation: false 1 poolMinPorts: 1 2 poolBatchPorts: 3 3 poolMaxPorts: 5 4 1 Set enablePortPoolsPrepopulation to true to make Kuryr create new Neutron ports after a namespace is created or a new node is added to the cluster. This setting raises the Neutron ports quota but can reduce the time that is required to spawn pods. The default value is false . 2 Kuryr creates new ports for a pool if the number of free ports in that pool is lower than the value of poolMinPorts . The default value is 1 . 3 poolBatchPorts controls the number of new ports that are created if the number of free ports is lower than the value of poolMinPorts . The default value is 3 . 4 If the number of free ports in a pool is higher than the value of poolMaxPorts , Kuryr deletes them until the number matches that value. Setting the value to 0 disables this upper bound, preventing pools from shrinking. The default value is 0 . Save your changes and quit the text editor to commit your changes. Important Modifying these options on a running cluster forces the kuryr-controller and kuryr-cni pods to restart. As a result, the creation of new pods and services will be delayed.
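For reference, the pre-installation manifest mentioned earlier uses the same kuryrConfig fields as the CR shown above. The following is a minimal sketch of what a cluster-network-03-config.yml might contain, assuming it is placed in the manifests/ directory produced by openshift-install create manifests; the values and the file placement are illustrative assumptions, not a definitive configuration:

apiVersion: operator.openshift.io/v1
kind: Network
metadata:
  name: cluster
spec:
  defaultNetwork:
    type: Kuryr
    kuryrConfig:
      # Illustrative: create ports in advance when a namespace is created or a node is added
      enablePortPoolsPrepopulation: true
      # Illustrative: top up a pool whenever fewer than five free ports remain
      poolMinPorts: 5
      # Illustrative: request up to three Neutron ports per bulk creation
      poolBatchPorts: 3
      # 0 keeps the default behavior of not capping the pool size
      poolMaxPorts: 0

Setting these values at installation time avoids the kuryr-controller and kuryr-cni pod restarts that, as noted above, are triggered when the same options are modified on a running cluster.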
[ "apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: trustedCA: name: \"\" status:", "apiVersion: v1 data: ca-bundle.crt: | 1 <MY_PEM_ENCODED_CERTS> 2 kind: ConfigMap metadata: name: user-ca-bundle 3 namespace: openshift-config 4", "oc create -f user-ca-bundle.yaml", "oc edit proxy/cluster", "apiVersion: config.openshift.io/v1 kind: Proxy metadata: name: cluster spec: httpProxy: http://<username>:<pswd>@<ip>:<port> 1 httpsProxy: https://<username>:<pswd>@<ip>:<port> 2 noProxy: example.com 3 readinessEndpoints: - http://www.google.com 4 - https://www.google.com trustedCA: name: user-ca-bundle 5", "oc get dnses.config.openshift.io/cluster -o yaml", "apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: \"2019-10-25T18:27:09Z\" generation: 2 name: cluster resourceVersion: \"37966\" selfLink: /apis/config.openshift.io/v1/dnses/cluster uid: 0e714746-f755-11f9-9cb1-02ff55d8f976 spec: baseDomain: <base_domain> privateZone: tags: Name: <infrastructure_id>-int kubernetes.io/cluster/<infrastructure_id>: owned publicZone: id: Z2XXXXXXXXXXA4 status: {}", "oc patch dnses.config.openshift.io/cluster --type=merge --patch='{\"spec\": {\"publicZone\": null}}' dns.config.openshift.io/cluster patched", "oc get dnses.config.openshift.io/cluster -o yaml", "apiVersion: config.openshift.io/v1 kind: DNS metadata: creationTimestamp: \"2019-10-25T18:27:09Z\" generation: 2 name: cluster resourceVersion: \"37966\" selfLink: /apis/config.openshift.io/v1/dnses/cluster uid: 0e714746-f755-11f9-9cb1-02ff55d8f976 spec: baseDomain: <base_domain> privateZone: tags: Name: <infrastructure_id>-int kubernetes.io/cluster/<infrastructure_id>-wfpg4: owned status: {}", "oc patch network.config.openshift.io cluster --type=merge -p '{ \"spec\": { \"serviceNodePortRange\": \"30000-<port>\" } }'", "apiVersion: config.openshift.io/v1 kind: Network metadata: name: cluster spec: serviceNodePortRange: \"30000-<port>\"", "network.config.openshift.io/cluster patched", "oc get configmaps -n openshift-kube-apiserver config -o jsonpath=\"{.data['config\\.yaml']}\" | grep -Eo '\"service-node-port-range\":[\"[[:digit:]]+-[[:digit:]]+\"]'", "\"service-node-port-range\":[\"30000-33000\"]", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: deny-by-default spec: podSelector: {} ingress: []", "apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-ingress spec: ingress: - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: ingress podSelector: {} policyTypes: - Ingress", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-same-namespace spec: podSelector: {} ingress: - from: - podSelector: {}", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-http-and-https spec: podSelector: matchLabels: role: frontend ingress: - ports: - protocol: TCP port: 80 - protocol: TCP port: 443", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-pod-and-namespace-both spec: podSelector: matchLabels: name: test-pods ingress: - from: - namespaceSelector: matchLabels: project: project_name podSelector: matchLabels: name: test-pods", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-27107 1 spec: podSelector: 2 matchLabels: app: mongodb ingress: - from: - podSelector: 3 matchLabels: app: app ports: 4 - protocol: TCP port: 27017", "touch <policy_name>.yaml", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: deny-by-default spec: 
podSelector: ingress: []", "kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-same-namespace spec: podSelector: ingress: - from: - podSelector: {}", "oc apply -f <policy_name>.yaml -n <namespace>", "networkpolicy.networking.k8s.io/default-deny created", "cat << EOF| oc create -f - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-ingress spec: ingress: - from: - namespaceSelector: matchLabels: policy-group.network.openshift.io/ingress: \"\" podSelector: {} policyTypes: - Ingress EOF", "cat << EOF| oc create -f - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-monitoring spec: ingress: - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: monitoring podSelector: {} policyTypes: - Ingress EOF", "cat << EOF| oc create -f - kind: NetworkPolicy apiVersion: networking.k8s.io/v1 metadata: name: allow-same-namespace spec: podSelector: ingress: - from: - podSelector: {} EOF", "oc describe networkpolicy", "Name: allow-from-openshift-ingress Namespace: example1 Created on: 2020-06-09 00:28:17 -0400 EDT Labels: <none> Annotations: <none> Spec: PodSelector: <none> (Allowing the specific traffic to all pods in this namespace) Allowing ingress traffic: To Port: <any> (traffic allowed to all ports) From: NamespaceSelector: network.openshift.io/policy-group: ingress Not affecting egress traffic Policy Types: Ingress Name: allow-from-openshift-monitoring Namespace: example1 Created on: 2020-06-09 00:29:57 -0400 EDT Labels: <none> Annotations: <none> Spec: PodSelector: <none> (Allowing the specific traffic to all pods in this namespace) Allowing ingress traffic: To Port: <any> (traffic allowed to all ports) From: NamespaceSelector: network.openshift.io/policy-group: monitoring Not affecting egress traffic Policy Types: Ingress", "oc adm create-bootstrap-project-template -o yaml > template.yaml", "oc create -f template.yaml -n openshift-config", "oc edit project.config.openshift.io/cluster", "apiVersion: config.openshift.io/v1 kind: Project metadata: spec: projectRequestTemplate: name: <template_name>", "oc edit template <project_template> -n openshift-config", "objects: - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-same-namespace spec: podSelector: {} ingress: - from: - podSelector: {} - apiVersion: networking.k8s.io/v1 kind: NetworkPolicy metadata: name: allow-from-openshift-ingress spec: ingress: - from: - namespaceSelector: matchLabels: network.openshift.io/policy-group: ingress podSelector: {} policyTypes: - Ingress", "oc new-project <project> 1", "oc get networkpolicy NAME POD-SELECTOR AGE allow-from-openshift-ingress <none> 7s allow-from-same-namespace <none> 7s", "openstack port show <cluster_name>-<cluster_ID>-ingress-port", "openstack floating ip set --port <ingress_port_ID> <apps_FIP>", "*.apps.<cluster_name>.<base_domain> IN A <apps_FIP>", "<apps_FIP> console-openshift-console.apps.<cluster name>.<base domain> <apps_FIP> integrated-oauth-server-openshift-authentication.apps.<cluster name>.<base domain> <apps_FIP> oauth-openshift.apps.<cluster name>.<base domain> <apps_FIP> prometheus-k8s-openshift-monitoring.apps.<cluster name>.<base domain> <apps_FIP> grafana-openshift-monitoring.apps.<cluster name>.<base domain> <apps_FIP> <app name>.apps.<cluster name>.<base domain>", "oc edit networks.operator.openshift.io cluster", "apiVersion: operator.openshift.io/v1 kind: Network metadata: name: cluster spec: clusterNetwork: - cidr: 
10.128.0.0/14 hostPrefix: 23 serviceNetwork: - 172.30.0.0/16 defaultNetwork: type: Kuryr kuryrConfig: enablePortPoolsPrepopulation: false 1 poolMinPorts: 1 2 poolBatchPorts: 3 3 poolMaxPorts: 5 4" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.9/html/post-installation_configuration/post-install-network-configuration
1.2. Supported Virtual Machine Operating Systems
1.2. Supported Virtual Machine Operating Systems See Certified Guest Operating Systems in Red Hat OpenStack Platform, Red Hat Virtualization and OpenShift Virtualization for a current list of supported operating systems. For information on customizing the operating systems, see Configuring operating systems with osinfo .
null
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.4/html/virtual_machine_management_guide/supported_virtual_machines
File System Guide
File System Guide Red Hat Ceph Storage 8 Configuring and Mounting Ceph File Systems Red Hat Ceph Storage Documentation Team
[ "cephadm shell", "ceph fs volume create FILESYSTEM_NAME --placement=\" NUMBER_OF_DAEMONS HOST_NAME_1 HOST_NAME_2 HOST_NAME_3 \"", "ceph fs volume create test --placement=\"2 host01 host02\"", "ceph osd pool create DATA_POOL [ PG_NUM ] ceph osd pool create METADATA_POOL [ PG_NUM ]", "ceph osd pool create cephfs_data 64 ceph osd pool create cephfs_metadata 64", "ceph fs new FILESYSTEM_NAME METADATA_POOL DATA_POOL", "ceph fs new test cephfs_metadata cephfs_data", "ceph orch apply mds FILESYSTEM_NAME --placement=\" NUMBER_OF_DAEMONS HOST_NAME_1 HOST_NAME_2 HOST_NAME_3 \"", "ceph orch apply mds test --placement=\"2 host01 host02\"", "ceph orch ls", "ceph fs ls ceph fs status", "ceph orch ps --daemon_type= DAEMON_NAME", "ceph orch ps --daemon_type=mds", "touch mds.yaml", "service_type: mds service_id: FILESYSTEM_NAME placement: hosts: - HOST_NAME_1 - HOST_NAME_2 - HOST_NAME_3", "service_type: mds service_id: fs_name placement: hosts: - host01 - host02", "cephadm shell --mount mds.yaml:/var/lib/ceph/mds/mds.yaml", "cd /var/lib/ceph/mds/", "cephadm shell", "cd /var/lib/ceph/mds/", "ceph orch apply -i FILE_NAME .yaml", "ceph orch apply -i mds.yaml", "ceph fs new CEPHFS_NAME METADATA_POOL DATA_POOL", "ceph fs new test metadata_pool data_pool", "ceph orch ls", "ceph orch ps --daemon_type= DAEMON_NAME", "ceph orch ps --daemon_type=mds", "cephadm shell", "ceph config set mon mon_allow_pool_delete true", "ceph fs volume rm FILESYSTEM_NAME --yes-i-really-mean-it", "ceph fs volume rm cephfs-new --yes-i-really-mean-it", "ceph orch ls", "ceph orch rm SERVICE_NAME", "ceph orch rm mds.test", "ceph orch ps", "ceph orch ps", "ceph fs dump dumped fsmap epoch 399 Filesystem 'cephfs01' (27) e399 max_mds 1 in 0 up {0=20384} failed damaged stopped [mds.a{0:20384} state up:active seq 239 addr [v2:127.0.0.1:6854/966242805,v1:127.0.0.1:6855/966242805]] Standby daemons: [mds.b{-1:10420} state up:standby seq 2 addr [v2:127.0.0.1:6856/2745199145,v1:127.0.0.1:6857/2745199145]]", "ceph config set STANDBY_DAEMON mds_join_fs FILE_SYSTEM_NAME", "ceph config set mds.b mds_join_fs cephfs01", "ceph fs dump dumped fsmap epoch 405 e405 Filesystem 'cephfs01' (27) max_mds 1 in 0 up {0=10420} failed damaged stopped [mds.b{0:10420} state up:active seq 274 join_fscid=27 addr [v2:127.0.0.1:6856/2745199145,v1:127.0.0.1:6857/2745199145]] 1 Standby daemons: [mds.a{-1:10720} state up:standby seq 2 addr [v2:127.0.0.1:6854/1340357658,v1:127.0.0.1:6855/1340357658]]", "ceph fs set NAME max_mds NUMBER", "ceph fs set cephfs max_mds 2", "ceph fs status NAME", "ceph fs status cephfs cephfs - 0 clients ====== +------+--------+-------+---------------+-------+-------+--------+--------+ | RANK | STATE | MDS | ACTIVITY | DNS | INOS | DIRS | CAPS | +------+--------+-------+---------------+-------+-------+--------+--------+ | 0 | active | node1 | Reqs: 0 /s | 10 | 12 | 12 | 0 | | 1 | active | node2 | Reqs: 0 /s | 10 | 12 | 12 | 0 | +------+--------+-------+---------------+-------+-------+--------+--------+ +-----------------+----------+-------+-------+ | POOL | TYPE | USED | AVAIL | +-----------------+----------+-------+-------+ | cephfs_metadata | metadata | 4638 | 26.7G | | cephfs_data | data | 0 | 26.7G | +-----------------+----------+-------+-------+ +-------------+ | STANDBY MDS | +-------------+ | node3 | +-------------+", "ceph fs set FS_NAME standby_count_wanted NUMBER", "ceph fs set cephfs standby_count_wanted 2", "ceph fs set FS_NAME allow_standby_replay 1", "ceph fs set cephfs allow_standby_replay 1", "setfattr -n ceph.dir.pin.distributed -v 1 
DIRECTORY_PATH", "setfattr -n ceph.dir.pin.distributed -v 1 dir1/", "setfattr -n ceph.dir.pin.random -v PERCENTAGE_IN_DECIMAL DIRECTORY_PATH", "setfattr -n ceph.dir.pin.random -v 0.01 dir1/", "getfattr -n ceph.dir.pin.random DIRECTORY_PATH getfattr -n ceph.dir.pin.distributed DIRECTORY_PATH", "getfattr -n ceph.dir.pin.distributed dir1/ file: dir1/ ceph.dir.pin.distributed=\"1\" getfattr -n ceph.dir.pin.random dir1/ file: dir1/ ceph.dir.pin.random=\"0.01\"", "ceph tell mds.a get subtrees | jq '.[] | [.dir.path, .auth_first, .export_pin]'", "setfattr -n ceph.dir.pin.distributed -v 0 DIRECTORY_PATH", "setfattr -n ceph.dir.pin.distributed -v 0 dir1/", "getfattr -n ceph.dir.pin.distributed DIRECTORY_PATH", "getfattr -n ceph.dir.pin.distributed dir1/", "setfattr -n ceph.dir.pin -v -1 DIRECTORY_PATH", "setfattr -n ceph.dir.pin -v -1 dir1/", "mkdir -p a/b 1 setfattr -n ceph.dir.pin -v 1 a/ 2 setfattr -n ceph.dir.pin -v 0 a/b 3", "setfattr -n ceph.dir.pin -v RANK PATH_TO_DIRECTORY", "setfattr -n ceph.dir.pin -v 2 cephfs/home", "ceph fs status NAME", "ceph fs status cephfs cephfs - 0 clients +------+--------+-------+---------------+-------+-------+--------+--------+ | RANK | STATE | MDS | ACTIVITY | DNS | INOS | DIRS | CAPS | +------+--------+-------+---------------+-------+-------+--------+--------+ | 0 | active | node1 | Reqs: 0 /s | 10 | 12 | 12 | 0 | | 1 | active | node2 | Reqs: 0 /s | 10 | 12 | 12 | 0 | +------+--------+-------+---------------+-------+-------+--------+--------+ +-----------------+----------+-------+-------+ | POOL | TYPE | USED | AVAIL | +-----------------+----------+-------+-------+ | cephfs_metadata | metadata | 4638 | 26.7G | | cephfs_data | data | 0 | 26.7G | +-----------------+----------+-------+-------+ +-------------+ | Standby MDS | +-------------+ | node3 | +-------------+", "ceph fs set NAME max_mds NUMBER", "ceph fs set cephfs max_mds 1", "ceph fs status NAME", "ceph fs status cephfs cephfs - 0 clients +------+--------+-------+---------------+-------+-------+--------+--------+ | RANK | STATE | MDS | ACTIVITY | DNS | INOS | DIRS | CAPS | +------+--------+-------+---------------+-------+-------+--------+--------+ | 0 | active | node1 | Reqs: 0 /s | 10 | 12 | 12 | 0 | +------+--------+-------+---------------+-------+-------+--------|--------+ +-----------------+----------+-------+-------+ | POOl | TYPE | USED | AVAIL | +-----------------+----------+-------+-------+ | cephfs_metadata | metadata | 4638 | 26.7G | | cephfs_data | data | 0 | 26.7G | +-----------------+----------+-------+-------+ +-------------+ | Standby MDS | +-------------+ | node3 | | node2 | +-------------+", "ceph orch ps | grep mds", "ceph tell MDS_SERVICE_NAME counter dump", "ceph tell mds.cephfs.ceph2-hk-n-0mfqao-node4.isztbk counter dump [ { \"key\": \"mds_client_metrics\", \"value\": [ { \"labels\": { \"fs_name\": \"cephfs\", \"id\": \"24379\" }, \"counters\": { \"num_clients\": 4 } } ] }, { \"key\": \"mds_client_metrics-cephfs\", \"value\": [ { \"labels\": { \"client\": \"client.24413\", \"rank\": \"0\" }, \"counters\": { \"cap_hits\": 56, \"cap_miss\": 9, \"avg_read_latency\": 0E-9, \"avg_write_latency\": 0E-9, \"avg_metadata_latency\": 0E-9, \"dentry_lease_hits\": 2, \"dentry_lease_miss\": 12, \"opened_files\": 0, \"opened_inodes\": 9, \"pinned_icaps\": 4, \"total_inodes\": 9, \"total_read_ops\": 0, \"total_read_size\": 0, \"total_write_ops\": 0, \"total_write_size\": 0 } }, { \"labels\": { \"client\": \"client.24502\", \"rank\": \"0\" }, \"counters\": { \"cap_hits\": 921403, \"cap_miss\": 
102382, \"avg_read_latency\": 0E-9, \"avg_write_latency\": 0E-9, \"avg_metadata_latency\": 0E-9, \"dentry_lease_hits\": 17117, \"dentry_lease_miss\": 204710, \"opened_files\": 0, \"opened_inodes\": 9, \"pinned_icaps\": 7, \"total_inodes\": 9, \"total_read_ops\": 0, \"total_read_size\": 0, \"total_write_ops\": 1, \"total_write_size\": 132 } }, { \"labels\": { \"client\": \"client.24508\", \"rank\": \"0\" }, \"counters\": { \"cap_hits\": 928694, \"cap_miss\": 103183, \"avg_read_latency\": 0E-9, \"avg_write_latency\": 0E-9, \"avg_metadata_latency\": 0E-9, \"dentry_lease_hits\": 17217, \"dentry_lease_miss\": 206348, \"opened_files\": 0, \"opened_inodes\": 9, \"pinned_icaps\": 7, \"total_inodes\": 9, \"total_read_ops\": 0, \"total_read_size\": 0, \"total_write_ops\": 1, \"total_write_size\": 132 } }, { \"labels\": { \"client\": \"client.24520\", \"rank\": \"0\" }, \"counters\": { \"cap_hits\": 56, \"cap_miss\": 9, \"avg_read_latency\": 0E-9, \"avg_write_latency\": 0E-9, \"avg_metadata_latency\": 0E-9, \"dentry_lease_hits\": 2, \"dentry_lease_miss\": 12, \"opened_files\": 0, \"opened_inodes\": 9, \"pinned_icaps\": 4, \"total_inodes\": 9, \"total_read_ops\": 0, \"total_read_size\": 0, \"total_write_ops\": 0, \"total_write_size\": 0 } } ] } ]", "client.0 key: AQAz7EVWygILFRAAdIcuJ10opU/JKyfFmxhuaw== caps: [mds] allow rwp caps: [mon] allow r caps: [osd] allow rw tag cephfs data=cephfs_a client.1 key: AQAz7EVWygILFRAAdIcuJ11opU/JKyfFmxhuaw== caps: [mds] allow rw caps: [mon] allow r caps: [osd] allow rw tag cephfs data=cephfs_a", "client.0 key: AQAz7EVWygILFRAAdIcuJ10opU/JKyfFmxhuaw== caps: [mds] allow rw, allow rws path=/temp caps: [mon] allow r caps: [osd] allow rw tag cephfs data=cephfs_a", "client.0 key: AQAz7EVWygILFRAAdIcuJ10opU/JKyfFmxhuaw== caps: [mds] allow r network 10.0.0.0/8, allow rw path=/bar network 10.0.0.0/8 caps: [mon] allow r network 10.0.0.0/8 caps: [osd] allow rw tag cephfs data=cephfs_a network 10.0.0.0/8", "subscription-manager repos --enable=rhceph-6-tools-for-rhel-8-x86_64-rpms", "subscription-manager repos --enable=rhceph-6-tools-for-rhel-9-x86_64-rpms", "dnf install ceph-fuse", "scp root@ MONITOR_NODE_NAME :/etc/ceph/ KEYRING_FILE /etc/ceph/", "scp [email protected]:/etc/ceph/ceph.client.1.keyring /etc/ceph/", "scp root@ MONITOR_NODE_NAME :/etc/ceph/ceph.conf /etc/ceph/ceph.conf", "scp [email protected]:/etc/ceph/ceph.conf /etc/ceph/ceph.conf", "chmod 644 /etc/ceph/ceph.conf", "ceph fs volume create FILE_SYSTEM_NAME", "ceph fs volume create cephfs01", "ceph fs authorize FILE_SYSTEM_NAME CLIENT_NAME DIRECTORY PERMISSIONS", "ceph fs authorize cephfs01 client.1 / rw [client.1] key = BQAmthpf81M+JhAAiHDYQkMiCq3x+J0n9e8REK== ceph auth get client.1 exported keyring for client.1 [client.1] key = BQAmthpf81M+JhAAiHDYQkMiCq3x+J0n9e8REK== caps mds = \"allow rw fsname=cephfs01\" caps mon = \"allow r fsname=cephfs01\" caps osd = \"allow rw tag cephfs data=cephfs01\"", "ceph fs authorize cephfs01 client.1 / rw root_squash /volumes rw [client.1] key = BQAmthpf81M+JhAAiHDYQkMiCq3x+J0n9e8REK== ceph auth get client.1 [client.1] key = BQAmthpf81M+JhAAiHDYQkMiCq3x+J0n9e8REK== caps mds = \"allow rw fsname=cephfs01 root_squash, allow rw fsname=cephfs01 path=/volumes\" caps mon = \"allow r fsname=cephfs01\" caps osd = \"allow rw tag cephfs data=cephfs01\"", "ceph auth get CLIENT_NAME > OUTPUT_FILE_NAME scp OUTPUT_FILE_NAME TARGET_NODE_NAME :/etc/ceph", "ceph auth get client.1 > ceph.client.1.keyring exported keyring for client.1 scp ceph.client.1.keyring client:/etc/ceph root@client's password: 
ceph.client.1.keyring 100% 178 333.0KB/s 00:00", "mkdir PATH_TO_NEW_DIRECTORY_NAME", "mkdir /mnt/mycephfs", "ceph-fuse PATH_TO_NEW_DIRECTORY_NAME -n CEPH_USER_NAME --client-fs=_FILE_SYSTEM_NAME", "ceph-fuse /mnt/mycephfs/ -n client.1 --client-fs=cephfs01 ceph-fuse[555001]: starting ceph client 2022-05-09T07:33:27.158+0000 7f11feb81200 -1 init, newargv = 0x55fc4269d5d0 newargc=15 ceph-fuse[555001]: starting fuse", "ceph osd pool create DATA_POOL_NAME erasure", "ceph osd pool create cephfs-data-ec01 erasure pool 'cephfs-data-ec01' created", "ceph osd lspools", "ceph osd pool set DATA_POOL_NAME allow_ec_overwrites true", "ceph osd pool set cephfs-data-ec01 allow_ec_overwrites true set pool 15 allow_ec_overwrites to true", "ceph fs status FILE_SYSTEM_NAME", "ceph fs status cephfs-ec cephfs-ec - 14 clients ========= RANK STATE MDS ACTIVITY DNS INOS DIRS CAPS 0 active cephfs-ec.example.ooymyq Reqs: 0 /s 8231 8233 891 921 POOL TYPE USED AVAIL cephfs-metadata-ec metadata 787M 8274G cephfs-data-ec data 2360G 12.1T STANDBY MDS cephfs-ec.example.irsrql cephfs-ec.example.cauuaj", "ceph fs add_data_pool FILE_SYSTEM_NAME DATA_POOL_NAME", "ceph fs add_data_pool cephfs-ec cephfs-data-ec01", "ceph fs status FILE_SYSTEM_NAME", "ceph fs status cephfs-ec cephfs-ec - 14 clients ========= RANK STATE MDS ACTIVITY DNS INOS DIRS CAPS 0 active cephfs-ec.example.ooymyq Reqs: 0 /s 8231 8233 891 921 POOL TYPE USED AVAIL cephfs-metadata-ec metadata 787M 8274G cephfs-data-ec data 2360G 12.1T cephfs-data-ec01 data 0 12.1T STANDBY MDS cephfs-ec.example.irsrql cephfs-ec.example.cauuaj", "mkdir PATH_TO_DIRECTORY setfattr -n ceph.dir.layout.pool -v DATA_POOL_NAME PATH_TO_DIRECTORY", "mkdir /mnt/cephfs/newdir setfattr -n ceph.dir.layout.pool -v cephfs-data-ec01 /mnt/cephfs/newdir", "cephadm shell", "ceph fs authorize FILE_SYSTEM_NAME client. CLIENT_NAME / DIRECTORY CAPABILITY [/ DIRECTORY CAPABILITY ] PERMISSIONS", "ceph fs authorize cephfs_a client.1 / r /temp rw client.1 key = AQBSdFhcGZFUDRAAcKhG9Cl2HPiDMMRv4DC43A==", "ceph fs authorize cephfs_a client.1 /temp rw", "ceph auth get client. ID", "ceph auth get client.1 client.1 key = AQBSdFhcGZFUDRAAcKhG9Cl2HPiDMMRv4DC43A== caps mds = \"allow r, allow rw path=/temp\" caps mon = \"allow r\" caps osd = \"allow rw tag cephfs data=cephfs_a\"", "ceph auth get client. ID -o ceph.client. ID .keyring", "ceph auth get client.1 -o ceph.client.1.keyring exported keyring for client.1", "scp /ceph.client. ID .keyring root@ CLIENT_NODE_NAME :/etc/ceph/ceph.client. ID .keyring", "scp /ceph.client.1.keyring root@client01:/etc/ceph/ceph.client.1.keyring", "chmod 644 ceph.client. ID .keyring", "chmod 644 /etc/ceph/ceph.client.1.keyring", "subscription-manager repos --enable=rhceph-6-tools-for-rhel-9-x86_64-rpms", "dnf install ceph-common", "cephadm shell", "scp /ceph.client. ID .keyring root@ CLIENT_NODE_NAME :/etc/ceph/ceph.client. 
ID .keyring", "scp /ceph.client.1.keyring root@client01:/etc/ceph/ceph.client.1.keyring", "scp /etc/ceph/ceph.conf root@ CLIENT_NODE_NAME :/etc/ceph/ceph.conf", "scp /etc/ceph/ceph.conf root@client01:/etc/ceph/ceph.conf", "chmod 644 /etc/ceph/ceph.conf", "mkdir -p MOUNT_POINT", "mkdir -p /mnt/cephfs", "mount -t ceph MONITOR-1_NAME :6789, MONITOR-2_NAME :6789, MONITOR-3_NAME :6789:/ MOUNT_POINT -o name= CLIENT_ID ,fs= FILE_SYSTEM_NAME", "mount -t ceph mon1:6789,mon2:6789,mon3:6789:/ /mnt/cephfs -o name=1,fs=cephfs01", "mount -t ceph mon1:6789,mon2:6789,mon3:6789:/ /mnt/cephfs -o nowsync,name=1,fs=cephfs01", "stat -f MOUNT_POINT", "stat -f /mnt/cephfs", "mkdir -p MOUNT_POINT", "mkdir -p /mnt/cephfs", "#DEVICE PATH TYPE OPTIONS MON_0_HOST : PORT , MOUNT_POINT ceph name= CLIENT_ID , MON_1_HOST : PORT , ceph.client_mountpoint=/ VOL / SUB_VOL_GROUP / SUB_VOL / UID_SUB_VOL , fs= FILE_SYSTEM_NAME , MON_2_HOST : PORT :/q[_VOL_]/ SUB_VOL / UID_SUB_VOL , [ ADDITIONAL_OPTIONS ]", "#DEVICE PATH TYPE OPTIONS DUMP FSCK mon1:6789, /mnt/cephfs ceph name=1, 0 0 mon2:6789, ceph.client_mountpoint=/my_vol/my_sub_vol_group/my_sub_vol/0, mon3:6789:/ fs=cephfs01, _netdev,noatime", "subscription-manager repos --enable=6-tools-for-rhel-8-x86_64-rpms", "subscription-manager repos --enable=6-tools-for-rhel-9-x86_64-rpms", "dnf install ceph-fuse", "cephadm shell", "scp /ceph.client. ID .keyring root@ CLIENT_NODE_NAME :/etc/ceph/ceph.client. ID .keyring", "scp /ceph.client.1.keyring root@client01:/etc/ceph/ceph.client.1.keyring", "scp /etc/ceph/ceph.conf root@ CLIENT_NODE_NAME :/etc/ceph/ceph.conf", "scp /etc/ceph/ceph.conf root@client01:/etc/ceph/ceph.conf", "chmod 644 /etc/ceph/ceph.conf", "mkdir PATH_TO_MOUNT_POINT", "mkdir /mnt/mycephfs", "ceph-fuse -n client. CLIENT_ID --client_fs FILE_SYSTEM_NAME MOUNT_POINT", "ceph-fuse -n client.1 --client_fs cephfs01 /mnt/mycephfs", "ceph-fuse -n client.1 --keyring=/etc/ceph/client.1.keyring /mnt/mycephfs", "ceph-fuse -n client. 
CLIENT_ID MOUNT_POINT -r PATH", "ceph-fuse -n client.1 /mnt/cephfs -r /home/cephfs", "ceph-fuse -n client.1 /mnt/cephfs --client_reconnect_stale=true", "stat -f MOUNT_POINT", "stat -f /mnt/cephfs", "mkdir PATH_TO_MOUNT_POINT", "mkdir /mnt/mycephfs", "#DEVICE PATH TYPE OPTIONS DUMP FSCK HOST_NAME : PORT , MOUNT_POINT fuse.ceph ceph.id= CLIENT_ID , 0 0 HOST_NAME : PORT , ceph.client_mountpoint=/ VOL / SUB_VOL_GROUP / SUB_VOL / UID_SUB_VOL , HOST_NAME : PORT :/ ceph.client_fs= FILE_SYSTEM_NAME ,ceph.name= USERNAME ,ceph.keyring=/etc/ceph/ KEYRING_FILE , [ ADDITIONAL_OPTIONS ]", "#DEVICE PATH TYPE OPTIONS DUMP FSCK mon1:6789, /mnt/mycephfs fuse.ceph ceph.id=1, 0 0 mon2:6789, ceph.client_mountpoint=/my_vol/my_sub_vol_group/my_sub_vol/0, mon3:6789:/ ceph.client_fs=cephfs01,ceph.name=client.1,ceph.keyring=/etc/ceph/client1.keyring, _netdev,defaults", "ceph fs volume create VOLUME_NAME", "ceph fs volume create cephfs", "ceph fs volume ls", "ceph fs volume info VOLUME_NAME", "ceph fs volume info cephfs { \"mon_addrs\": [ \"192.168.1.7:40977\", ], \"pending_subvolume_deletions\": 0, \"pools\": { \"data\": [ { \"avail\": 106288709632, \"name\": \"cephfs.cephfs.data\", \"used\": 4096 } ], \"metadata\": [ { \"avail\": 106288709632, \"name\": \"cephfs.cephfs.meta\", \"used\": 155648 } ] }, \"used_size\": 0 }", "ceph config set mon mon_allow_pool_delete true", "ceph fs volume rm VOLUME_NAME [--yes-i-really-mean-it]", "ceph fs volume rm cephfs --yes-i-really-mean-it", "ceph fs subvolumegroup create VOLUME_NAME GROUP_NAME [--pool_layout DATA_POOL_NAME --uid UID --gid GID --mode OCTAL_MODE ]", "ceph fs subvolumegroup create cephfs subgroup0", "ceph fs subvolumegroup create VOLUME_NAME GROUP_NAME [--size SIZE_IN_BYTES ] [--pool_layout DATA_POOL_NAME ] [--uid UID ] [--gid GID ] [--mode OCTAL_MODE ]", "ceph fs subvolumegroup create cephfs subvolgroup_2 10737418240", "ceph fs subvolumegroup resize VOLUME_NAME GROUP_NAME new_size [--no_shrink]", "ceph fs subvolumegroup resize cephfs subvolgroup_2 20737418240 [ { \"bytes_used\": 10768679044 }, { \"bytes_quota\": 20737418240 }, { \"bytes_pcent\": \"51.93\" } ]", "ceph fs subvolumegroup info VOLUME_NAME GROUP_NAME", "ceph fs subvolumegroup info cephfs subvolgroup_2 { \"atime\": \"2022-10-05 18:00:39\", \"bytes_pcent\": \"51.85\", \"bytes_quota\": 20768679043, \"bytes_used\": 10768679044, \"created_at\": \"2022-10-05 18:00:39\", \"ctime\": \"2022-10-05 18:21:26\", \"data_pool\": \"cephfs.cephfs.data\", \"gid\": 0, \"mode\": 16877, \"mon_addrs\": [ \"60.221.178.236:1221\", \"205.64.75.112:1221\", \"20.209.241.242:1221\" ], \"mtime\": \"2022-10-05 18:01:25\", \"uid\": 0 }", "ceph fs subvolumegroup ls VOLUME_NAME", "ceph fs subvolumegroup ls cephfs", "ceph fs subvolumegroup getpath VOLUME_NAME GROUP_NAME", "ceph fs subvolumegroup getpath cephfs subgroup0", "ceph fs subvolumegroup snapshot ls VOLUME_NAME GROUP_NAME", "ceph fs subvolumegroup snapshot ls cephfs subgroup0", "ceph fs subvolumegroup snapshot rm VOLUME_NAME GROUP_NAME SNAP_NAME [--force]", "ceph fs subvolumegroup snapshot rm cephfs subgroup0 snap0 --force", "ceph fs subvolumegroup rm VOLUME_NAME GROUP_NAME [--force]", "ceph fs subvolumegroup rm cephfs subgroup0 --force", "ceph fs subvolume create VOLUME_NAME SUBVOLUME_NAME [--size SIZE_IN_BYTES --group_name SUBVOLUME_GROUP_NAME --pool_layout DATA_POOL_NAME --uid _UID --gid GID --mode OCTAL_MODE ] [--namespace-isolated]", "ceph fs subvolume create cephfs sub0 --group_name subgroup0 --namespace-isolated", "ceph fs subvolume ls VOLUME_NAME [--group_name 
SUBVOLUME_GROUP_NAME ]", "ceph fs subvolume ls cephfs --group_name subgroup0", "ceph fs subvolume resize VOLUME_NAME SUBVOLUME_NAME NEW_SIZE [--group_name SUBVOLUME_GROUP_NAME ] [--no_shrink]", "ceph fs subvolume resize cephfs sub0 1024000000 --group_name subgroup0 --no_shrink", "ceph fs subvolume getpath VOLUME_NAME SUBVOLUME_NAME [--group_name _SUBVOLUME_GROUP_NAME ]", "ceph fs subvolume getpath cephfs sub0 --group_name subgroup0", "ceph fs subvolume info VOLUME_NAME SUBVOLUME_NAME [--group_name SUBVOLUME_GROUP_NAME ]", "ceph fs subvolume info cephfs sub0 --group_name subgroup0", "ceph fs subvolume info cephfs sub0 { \"atime\": \"2023-07-14 08:52:46\", \"bytes_pcent\": \"0.00\", \"bytes_quota\": 1024000000, \"bytes_used\": 0, \"created_at\": \"2023-07-14 08:52:46\", \"ctime\": \"2023-07-14 08:53:54\", \"data_pool\": \"cephfs.cephfs.data\", \"features\": [ \"snapshot-clone\", \"snapshot-autoprotect\", \"snapshot-retention\" ], \"flavor\": \"2\", \"gid\": 0, \"mode\": 16877, \"mon_addrs\": [ \"10.0.208.172:6789\", \"10.0.211.197:6789\", \"10.0.209.212:6789\" ], \"mtime\": \"2023-07-14 08:52:46\", \"path\": \"/volumes/_nogroup/sub0/834c5cbc-f5db-4481-80a3-aca92ff0e7f3\", \"pool_namespace\": \"\", \"state\": \"complete\", \"type\": \"subvolume\", \"uid\": 0 }", "ceph auth get CLIENT_NAME", "ceph auth get client.0 [client.0] key = AQAz7EVWygILFRAAdIcuJ12opU/JKyfFmxhuaw== caps mds = \"allow rw, allow rws path=/bar\" 1 caps mon = \"allow r\" caps osd = \"allow rw tag cephfs data=cephfs_a\" 2", "ceph fs subvolume snapshot create VOLUME_NAME SUBVOLUME_NAME SNAP_NAME [--group_name GROUP_NAME ]", "ceph fs subvolume snapshot create cephfs sub0 snap0 --group_name subgroup0", "CLIENT_NAME key = AQAz7EVWygILFRAAdIcuJ12opU/JKyfFmxhuaw== caps mds = allow rw, allow rws path= DIRECTORY_PATH caps mon = allow r caps osd = allow rw tag cephfs data= DIRECTORY_NAME", "[client.0] key = AQAz7EVWygILFRAAdIcuJ12opU/JKyfFmxhuaw== caps mds = \"allow rw, allow rws path=/bar\" caps mon = \"allow r\" caps osd = \"allow rw tag cephfs data=cephfs_a\"", "ceph fs volume create VOLUME_NAME", "ceph fs volume create cephfs", "ceph fs subvolumegroup create VOLUME_NAME GROUP_NAME [--pool_layout DATA_POOL_NAME --uid UID --gid GID --mode OCTAL_MODE ]", "ceph fs subvolumegroup create cephfs subgroup0", "ceph fs subvolume create VOLUME_NAME SUBVOLUME_NAME [--size SIZE_IN_BYTES --group_name SUBVOLUME_GROUP_NAME --pool_layout DATA_POOL_NAME --uid _UID --gid GID --mode OCTAL_MODE ]", "ceph fs subvolume create cephfs sub0 --group_name subgroup0", "ceph fs subvolume snapshot create VOLUME_NAME _SUBVOLUME_NAME SNAP_NAME [--group_name SUBVOLUME_GROUP_NAME ]", "ceph fs subvolume snapshot create cephfs sub0 snap0 --group_name subgroup0", "ceph fs subvolume snapshot clone VOLUME_NAME SUBVOLUME_NAME SNAP_NAME TARGET_CLONE_NAME", "ceph fs subvolume snapshot clone cephfs sub0 snap0 clone0", "ceph fs subvolume snapshot clone VOLUME_NAME SUBVOLUME_NAME SNAP_NAME TARGET_CLONE_NAME --group_name SUBVOLUME_GROUP_NAME", "ceph fs subvolume snapshot clone cephfs sub0 snap0 clone0 --group_name subgroup0", "ceph fs subvolume snapshot clone VOLUME_NAME SUBVOLUME_NAME SNAP_NAME TARGET_CLONE_NAME --target_group_name SUBVOLUME_GROUP_NAME", "ceph fs subvolume snapshot clone cephfs sub0 snap0 clone0 --target_group_name subgroup1", "ceph fs clone status VOLUME_NAME CLONE_NAME [--group_name TARGET_GROUP_NAME ]", "ceph fs clone status cephfs clone0 --group_name subgroup1 { \"status\": { \"state\": \"complete\" } }", "ceph fs subvolume snapshot ls VOLUME_NAME 
SUBVOLUME_NAME [--group_name SUBVOLUME_GROUP_NAME ]", "ceph fs subvolume snapshot ls cephfs sub0 --group_name subgroup0", "ceph fs subvolume snapshot info VOLUME_NAME SUBVOLUME_NAME SNAP_NAME [--group_name SUBVOLUME_GROUP_NAME ]", "ceph fs subvolume snapshot info cephfs sub0 snap0 --group_name subgroup0", "{ \"created_at\": \"2022-05-09 06:18:47.330682\", \"data_pool\": \"cephfs_data\", \"has_pending_clones\": \"no\", \"size\": 0 }", "ceph fs subvolume rm VOLUME_NAME SUBVOLUME_NAME [--group_name SUBVOLUME_GROUP_NAME ] [--force] [--retain-snapshots]", "ceph fs subvolume rm cephfs sub0 --group_name subgroup0 --retain-snapshots", "ceph fs subvolume snapshot clone VOLUME_NAME DELETED_SUBVOLUME RETAINED_SNAPSHOT NEW_SUBVOLUME --group_name SUBVOLUME_GROUP_NAME --target_group_name SUBVOLUME_TARGET_GROUP_NAME", "ceph fs subvolume snapshot clone cephfs sub0 snap0 sub1 --group_name subgroup0 --target_group_name subgroup0", "ceph fs subvolume snapshot rm VOLUME_NAME SUBVOLUME_NAME SNAP_NAME [--group_name GROUP_NAME --force]", "ceph fs subvolume snapshot rm cephfs sub0 snap0 --group_name subgroup0 --force", "ceph fs subvolume metadata set VOLUME_NAME SUBVOLUME_NAME KEY_NAME VALUE [--group_name SUBVOLUME_GROUP_NAME ]", "ceph fs subvolume metadata set cephfs sub0 test_meta cluster --group_name subgroup0", "ceph fs subvolume metadata set cephfs sub0 \"test meta\" cluster --group_name subgroup0", "ceph fs subvolume metadata set cephfs sub0 \"test_meta\" cluster2 --group_name subgroup0", "ceph fs subvolume metadata get VOLUME_NAME SUBVOLUME_NAME KEY_NAME [--group_name SUBVOLUME_GROUP_NAME ]", "ceph fs subvolume metadata get cephfs sub0 test_meta --group_name subgroup0 cluster", "ceph fs subvolume metadata ls VOLUME_NAME SUBVOLUME_NAME [--group_name SUBVOLUME_GROUP_NAME ]", "ceph fs subvolume metadata ls cephfs sub0 { \"test_meta\": \"cluster\" }", "ceph fs subvolume metadata rm VOLUME_NAME SUBVOLUME_NAME KEY_NAME [--group_name SUBVOLUME_GROUP_NAME ]", "ceph fs subvolume metadata rm cephfs sub0 test_meta --group_name subgroup0", "ceph fs subvolume metadata ls cephfs sub0 {}", "subscription-manager repos --enable=rhceph-8-tools-for-rhel-9-x86_64-rpms", "dnf install cephfs-top", "ceph mgr module enable stats", "ceph auth get-or-create client.fstop mon 'allow r' mds 'allow r' osd 'allow r' mgr 'allow r' > /etc/ceph/ceph.client.fstop.keyring", "cephfs-top cephfs-top - Wed Nov 30 15:26:05 2022 All Filesystem Info Total Client(s): 4 - 3 FUSE, 1 kclient, 0 libcephfs COMMANDS: m - select a filesystem | s - sort menu | l - limit number of clients | r - reset to default | q - quit client_id mount_root chit(%) dlease(%) ofiles oicaps oinodes rtio(MB) raio(MB) rsp(MB/s) wtio(MB) waio(MB) wsp(MB/s) rlatavg(ms) rlatsd(ms) wlatavg(ms) wlatsd(ms) mlatavg(ms) mlatsd(ms) mount_point@host/addr Filesystem: cephfs1 - 2 client(s) 4500 / 100.0 100.0 0 751 0 0.0 0.0 0.0 578.13 0.03 0.0 N/A N/A N/A N/A N/A N/A N/A@example/192.168.1.4 4501 / 100.0 0.0 0 1 0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.41 0.0 /mnt/cephfs2@example/192.168.1.4 Filesystem: cephfs2 - 2 client(s) 4512 / 100.0 0.0 0 1 0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.4 0.0 /mnt/cephfs3@example/192.168.1.4 4518 / 100.0 0.0 0 1 0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.0 0.52 0.0 /mnt/cephfs4@example/192.168.1.4", "m Filesystems Press \"q\" to go back to home (all filesystem info) screen cephfs01 cephfs02 q cephfs-top - Thu Oct 20 07:29:35 2022 Total Client(s): 3 - 2 FUSE, 1 kclient, 0 libcephfs", "cephfs-top --selftest selftest ok", "ceph mgr module enable 
mds_autoscaler", "umount MOUNT_POINT", "umount /mnt/cephfs", "fusermount -u MOUNT_POINT", "fusermount -u /mnt/cephfs", "ceph fs authorize FILE_SYSTEM_NAME client. CLIENT_NAME / DIRECTORY CAPABILITY [/ DIRECTORY CAPABILITY ]", "[user@client ~]USD ceph fs authorize cephfs_a client.1 /temp rwp client.1 key: AQBSdFhcGZFUDRAAcKhG9Cl2HPiDMMRv4DC43A== caps: [mds] allow r, allow rwp path=/temp caps: [mon] allow r caps: [osd] allow rw tag cephfs data=cephfs_a", "setfattr -n ceph.dir.pin -v RANK DIRECTORY", "[user@client ~]USD setfattr -n ceph.dir.pin -v 2 /temp", "setfattr -n ceph.dir.pin -v -1 DIRECTORY", "[user@client ~]USD setfattr -n ceph.dir.pin -v -1 /home/ceph-user", "ceph osd pool create POOL_NAME", "ceph osd pool create cephfs_data_ssd pool 'cephfs_data_ssd' created", "ceph fs add_data_pool FS_NAME POOL_NAME", "ceph fs add_data_pool cephfs cephfs_data_ssd added data pool 6 to fsmap", "ceph fs ls name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs_data cephfs_data_ssd]", "ceph fs rm_data_pool FS_NAME POOL_NAME", "ceph fs rm_data_pool cephfs cephfs_data_ssd removed data pool 6 from fsmap", "ceph fs ls name: cephfs, metadata pool: cephfs_metadata, data pools: [cephfs.cephfs.data]", "ceph fs set FS_NAME down true", "ceph fs set cephfs down true", "ceph fs set FS_NAME down false", "ceph fs set cephfs down false", "ceph fs fail FS_NAME", "ceph fs fail cephfs", "ceph fs set FS_NAME joinable true", "ceph fs set cephfs joinable true cephfs marked joinable; MDS may join as newly active.", "ceph fs set FS_NAME down true", "ceph fs set cephfs down true cephfs marked down.", "ceph fs status", "ceph fs status cephfs - 0 clients ====== +-------------------+----------+-------+-------+ | POOL | TYPE | USED | AVAIL | +-----------------+------------+-------+-------+ |cephfs.cephfs.meta | metadata | 31.5M | 52.6G| |cephfs.cephfs.data | data | 0 | 52.6G| +-----------------+----------+-------+---------+ STANDBY MDS cephfs.ceph-host01 cephfs.ceph-host02 cephfs.ceph-host03", "ceph fs rm FS_NAME --yes-i-really-mean-it", "ceph fs rm cephfs --yes-i-really-mean-it", "ceph fs ls", "ceph mds fail MDS_NAME", "ceph mds fail example01", "fs required_client_features FILE_SYSTEM_NAME add FEATURE_NAME fs required_client_features FILE_SYSTEM_NAME rm FEATURE_NAME", "ceph tell DAEMON_NAME client ls", "ceph tell mds.0 client ls [ { \"id\": 4305, \"num_leases\": 0, \"num_caps\": 3, \"state\": \"open\", \"replay_requests\": 0, \"completed_requests\": 0, \"reconnecting\": false, \"inst\": \"client.4305 172.21.9.34:0/422650892\", \"client_metadata\": { \"ceph_sha1\": \"79f0367338897c8c6d9805eb8c9ad24af0dcd9c7\", \"ceph_version\": \"ceph version 16.2.8-65.el8cp (79f0367338897c8c6d9805eb8c9ad24af0dcd9c7)\", \"entity_id\": \"0\", \"hostname\": \"senta04\", \"mount_point\": \"/tmp/tmpcMpF1b/mnt.0\", \"pid\": \"29377\", \"root\": \"/\" } } ]", "ceph tell DAEMON_NAME client evict id= ID_NUMBER", "ceph tell mds.0 client evict id=4305", "ceph osd blocklist ls listed 1 entries 127.0.0.1:0/3710147553 2022-05-09 11:32:24.716146", "ceph osd blocklist rm CLIENT_NAME_OR_IP_ADDR", "ceph osd blocklist rm 127.0.0.1:0/3710147553 un-blocklisting 127.0.0.1:0/3710147553", "recover_session=clean", "client_reconnect_stale=true", "getfattr -n ceph.quota.max_bytes DIRECTORY", "getfattr -n ceph.quota.max_bytes /mnt/cephfs/ getfattr: Removing leading '/' from absolute path names file: mnt/cephfs/ ceph.quota.max_bytes=\"100000000\"", "getfattr -n ceph.quota.max_files DIRECTORY", "getfattr -n ceph.quota.max_files /mnt/cephfs/ getfattr: Removing 
leading '/' from absolute path names file: mnt/cephfs/ ceph.quota.max_files=\"10000\"", "setfattr -n ceph.quota.max_bytes -v LIMIT_VALUE DIRECTORY", "setfattr -n ceph.quota.max_bytes -v 2T /cephfs/", "setfattr -n ceph.quota.max_files -v LIMIT_VALUE DIRECTORY", "setfattr -n ceph.quota.max_files -v 10000 /cephfs/", "setfattr -n ceph.quota.max_bytes -v 0 DIRECTORY", "setfattr -n ceph.quota.max_bytes -v 0 /mnt/cephfs/", "setfattr -n ceph.quota.max_files -v 0 DIRECTORY", "setfattr -n ceph.quota.max_files -v 0 /mnt/cephfs/", "setfattr -n ceph. TYPE .layout. FIELD -v VALUE PATH", "setfattr -n ceph.file.layout.stripe_unit -v 1048576 test", "getfattr -n ceph. TYPE .layout PATH", "getfattr -n ceph.dir.layout /home/test ceph.dir.layout=\"stripe_unit=4194304 stripe_count=2 object_size=4194304 pool=cephfs_data\"", "getfattr -n ceph. TYPE .layout. FIELD _PATH", "getfattr -n ceph.file.layout.pool test ceph.file.layout.pool=\"cephfs_data\"", "setfattr -x ceph.dir.layout DIRECTORY_PATH", "[user@client ~]USD setfattr -x ceph.dir.layout /home/cephfs", "setfattr -x ceph.dir.layout.pool_namespace DIRECTORY_PATH", "[user@client ~]USD setfattr -x ceph.dir.layout.pool_namespace /home/cephfs", "cephadm shell", "ceph fs set FILE_SYSTEM_NAME allow_new_snaps true", "ceph fs set cephfs01 allow_new_snaps true", "mkdir NEW_DIRECTORY_PATH", "mkdir /.snap/new-snaps", "rmdir NEW_DIRECTORY_PATH", "rmdir /.snap/new-snaps", "cephadm shell", "ceph mgr module enable snap_schedule", "cephadm shell", "ceph fs snap-schedule add FILE_SYSTEM_VOLUME_PATH REPEAT_INTERVAL [ START_TIME ] --fs CEPH_FILE_SYSTEM_NAME", "ceph fs snap-schedule add /cephfs_kernelf739cwtus2/pmo9axbwsi 1h 2022-06-27T21:50:00 --fs mycephfs", "ceph fs snap-schedule retention add FILE_SYSTEM_VOLUME_PATH [ COUNT_TIME_PERIOD_PAIR ] TIME_PERIOD COUNT", "ceph fs snap-schedule retention add /cephfs h 14 1 ceph fs snap-schedule retention add /cephfs d 4 2 ceph fs snap-schedule retention add /cephfs 14h4w 3", "ceph fs snap-schedule list FILE_SYSTEM_VOLUME_PATH [--format=plain|json] [--recursive=true]", "ceph fs snap-schedule list /cephfs --recursive=true", "ceph fs snap-schedule status FILE_SYSTEM_VOLUME_PATH [--format=plain|json]", "ceph fs snap-schedule status /cephfs --format=json", "ceph fs subvolume getpath VOLUME_NAME SUBVOLUME_NAME SUBVOLUME_GROUP_NAME", "ceph fs subvolume getpath cephfs subvol_1 subvolgroup_1", "ceph fs snap-schedule add SUBVOLUME_DIR_PATH SNAP_SCHEDULE [ START_TIME ] --fs CEPH_FILE_SYSTEM_NAME --subvol SUBVOLUME_NAME", "ceph fs snap-schedule add /cephfs_kernelf739cwtus2/pmo9axbwsi 1h 2022-06-27T21:50:00 --fs cephfs --subvol subvol_1 Schedule set for path /..", "ceph fs snap-schedule add /.. SNAP_SCHEDULE [ START_TIME] --fs CEPH_FILE_SYSTEM_NAME --subvol _SUBVOLUME_NAME", "ceph fs snap-schedule add - 2M --subvol sv_non_def_1", "ceph fs snap-schedule add /.. SNAP_SCHEDULE [ START_TIME] --fs CEPH_FILE_SYSTEM_NAME --subvol _SUBVOLUME_NAME --group NON_DEFAULT_SUBVOLGROUP_NAME", "ceph fs snap-schedule add - 2M --fs cephfs --subvol sv_non_def_1 --group svg1", "ceph fs snap-schedule retention add SUBVOLUME_DIR_PATH [ COUNT_TIME_PERIOD_PAIR ] TIME_PERIOD COUNT", "ceph fs snap-schedule retention add /volumes/_nogroup/subvol_1/85a615da-e8fa-46c1-afc3-0eb8ae64a954/.. h 14 1 ceph fs snap-schedule retention add /volumes/_nogroup/subvol_1/85a615da-e8fa-46c1-afc3-0eb8ae64a954/.. d 4 2 ceph fs snap-schedule retention add /volumes/_nogroup/subvol_1/85a615da-e8fa-46c1-afc3-0eb8ae64a954/.. 
14h4w 3 Retention added to path /volumes/_nogroup/subvol_1/85a615da-e8fa-46c1-afc3-0eb8ae64a954/..", "ceph fs snap-schedule retention add / [ COUNT_TIME_PERIOD_PAIR ] TIME_PERIOD_COUNT --fs CEPH_FILE_SYSTEM_NAME --subvol SUBVOLUME_NAME", "ceph fs snap-schedule retention add / 5h --fs cephfs --subvol sv_sched Retention added to path /volumes/sv_sched/e704342a-ff07-4763-bb0b-a46d9dda6f27/..", "ceph fs snap-schedule retention add / [ COUNT_TIME_PERIOD_PAIR ] TIME_PERIOD_COUNT --fs CEPH_FILE_SYSTEM_NAME --subvol SUBVOLUME_NAME --group NON_DEFAULT_SUBVOLGROUP_NAME", "ceph fs snap-schedule retention add / 5h --fs cephfs --subvol sv_sched --group subvolgroup_cg Retention added to path /volumes/subvolgroup_cg/sv_sched/e704342a-ff07-4763-bb0b-a54j0dda7f16/..", "ceph fs snap-schedule list SUBVOLUME_VOLUME_PATH [--format=plain|json] [--recursive=true]", "ceph fs snap-schedule list / --recursive=true /volumes/_nogroup/subv1/85a615da-e8fa-46c1-afc3-0eb8ae64a954/.. 4h", "ceph fs snap-schedule status SUBVOLUME_DIR_PATH [--format=plain|json]", "ceph fs snap-schedule status /volumes/_nogroup/subv1/85a615da-e8fa-46c1-afc3-0eb8ae64a954/.. --format=json {\"fs\": \"cephfs\", \"subvol\": \"subvol_1\", \"path\": \"/volumes/_nogroup/subvol_1/85a615da-e8fa-46c1-afc3-0eb8ae64a954/..\", \"rel_path\": \"/..\", \"schedule\": \"4h\", \"retention\": {\"h\": 14}, \"start\": \"2022-05-16T14:00:00\", \"created\": \"2023-03-20T08:47:18\", \"first\": null, \"last\": null, \"last_pruned\": null, \"created_count\": 0, \"pruned_count\": 0, \"active\": true}", "ceph fs snap-schedule status --fs CEPH_FILE_SYSTEM_NAME --subvol SUBVOLUME_NAME", "ceph fs snap-schedule status --fs cephfs --subvol sv_sched {\"fs\": \"cephfs\", \"subvol\": \"sv_sched\", \"group\": \"subvolgroup_cg\", \"path\": \"/volumes/subvolgroup_cg/sv_sched/e704342a-ff07-4763-bb0b-a46d9dda6f27/..\", \"rel_path\": \"/volumes/subvolgroup_cg/sv_sched/e704342a-ff07-4763-bb0b-a46d9dda6f27/..\", \"schedule\": \"1h\", \"retention\": {\"h\": 5}, \"start\": \"2024-05-21T00:00:00\", \"created\": \"2024-05-21T09:18:58\", \"first\": null, \"last\": null, \"last_pruned\": null, \"created_count\": 0, \"pruned_count\": 0, \"active\": true}", "ceph fs snap-schedule status --fs _CEPH_FILE_SYSTEM_NAME_ --subvol _SUBVOLUME_NAME_ --group _NON-DEFAULT_SUBVOLGROUP_NAME_", "ceph fs snap-schedule status --fs cephfs --subvol sv_sched --group subvolgroup_cg {\"fs\": \"cephfs\", \"subvol\": \"sv_sched\", \"group\": \"subvolgroup_cg\", \"path\": \"/volumes/subvolgroup_cg/sv_sched/e564329a-kj87-4763-gh0y-b56c8sev7t23/..\", \"rel_path\": \"/volumes/subvolgroup_cg/sv_sched/e704342a-ff07-4763-bb0b-a46d9dda6f27/..\", \"schedule\": \"1h\", \"retention\": {\"h\": 5}, \"start\": \"2024-05-21T00:00:00\", \"created\": \"2024-05-21T09:18:58\", \"first\": null, \"last\": null, \"last_pruned\": null, \"created_count\": 0, \"pruned_count\": 0, \"active\": true}", "ceph fs snap-schedule activate FILE_SYSTEM_VOLUME_PATH [ REPEAT_INTERVAL ]", "ceph fs snap-schedule activate /cephfs", "ceph fs snap-schedule activate SUBVOL_DIR_PATH [ REPEAT_INTERVAL ]", "ceph fs snap-schedule activate /volumes/_nogroup/subvol_1/85a615da-e8fa-46c1-afc3-0eb8ae64a954/..", "ceph fs snap-schedule activate /.. REPEAT_INTERVAL --fs CEPH_FILE_SYSTEM_NAME --subvol SUBVOLUME_NAME", "ceph fs snap-schedule activate / --fs cephfs --subvol sv_sched Schedule activated for path /volumes/subvolgroup_cg/sv_sched/e704342a-ff07-4763-bb0b-a46d9dda6f27/..", "ceph fs snap-schedule activate /.. 
[ REPEAT_INTERVAL ] --fs CEPH_FILE_SYSTEM_NAME --subvol SUBVOLUME_NAME --group NON-DEFAULT_GROUP_NAME", "ceph fs snap-schedule activate / --fs cephfs --subvol sv_sched --group subvolgroup_cg Schedule activated for path /volumes/subvolgroup_cg/sv_sched/e704342a-ff07-4763-bb0b-a46d9dda6f27/..", "ceph fs snap-schedule deactivate FILE_SYSTEM_VOLUME_PATH [ REPEAT_INTERVAL ]", "ceph fs snap-schedule deactivate /cephfs 1d", "ceph fs snap-schedule deactivate SUBVOL_DIR_PATH [ REPEAT_INTERVAL ]", "ceph fs snap-schedule deactivate /volumes/_nogroup/subvol_1/85a615da-e8fa-46c1-afc3-0eb8ae64a954/.. 1d", "ceph fs snap-schedule deactivate / REPEAT_INTERVAL --fs CEPH_FILE_SYSTEM_NAME --subvol SUBVOLUME_NAME", "ceph fs snap-schedule deactivate / --fs cephfs --subvol sv_sched Schedule deactivated for path /volumes/subvolgroup_cg/sv_sched/e704342a-ff07-4763-bb0b-a46d9dda6f27/..", "ceph fs snap-schedule deactivate / REPEAT_INTERVAL --fs CEPH_FILE_SYSTEM_NAME --subvol SUBVOLUME_NAME --group NON-DEFAULT_GROUP_NAME", "ceph fs snap-schedule deactivate / --fs cephfs --subvol sv_sched --group subvolgroup_cg Schedule deactivated for path /volumes/subvolgroup_cg/sv_sched/e704342a-ff07-4763-bb0b-a46d9dda6f27/..", "ceph fs snap-schedule remove FILE_SYSTEM_VOLUME_PATH [ REPEAT_INTERVAL ] [ START_TIME ]", "ceph fs snap-schedule remove /cephfs 4h 2022-05-16T14:00:00", "ceph fs snap-schedule remove FILE_SYSTEM_VOLUME_PATH", "ceph fs snap-schedule remove /cephfs", "ceph fs snap-schedule remove SUBVOL_DIR_PATH [ REPEAT_INTERVAL ] [ START_TIME ]", "ceph fs snap-schedule remove /volumes/_nogroup/subvol_1/85a615da-e8fa-46c1-afc3-0eb8ae64a954/.. 4h 2022-05-16T14:00:00", "ceph fs snap-schedule remove / --fs CEPH_FILESYSTEM_NAME --subvol SUBVOLUME_NAME", "ceph fs snap-schedule remove / --fs cephfs --subvol sv_sched Schedule removed for path /volumes/subvolgroup_cg/sv_sched/e704342a-ff07-4763-bb0b-a46d9dda6f27/..", "ceph fs snap-schedule remove / --fs CEPH_FILESYSTEM_NAME --subvol SUBVOLUME_NAME --group NON-DEFAULT_GROUP_NAME", "ceph fs snap-schedule remove / --fs cephfs --subvol sv_sched --group subvolgroup_cg Schedule removed for path /volumes/subvolgroup_cg/sv_sched/e704342a-ff07-4763-bb0b-a46d9dda6f27/..", "ceph fs snap-schedule retention remove FILE_SYSTEM_VOLUME_PATH [ COUNT_TIME_PERIOD_PAIR ] TIME_PERIOD COUNT", "ceph fs snap-schedule retention remove /cephfs h 4 1 ceph fs snap-schedule retention remove /cephfs 14d4w 2", "ceph fs snap-schedule retention remove SUBVOL_DIR_PATH [ COUNT_TIME_PERIOD_PAIR ] TIME_PERIOD COUNT", "ceph fs snap-schedule retention remove /volumes/_nogroup/subvol_1/85a615da-e8fa-46c1-afc3-0eb8ae64a954/.. h 4 1 ceph fs snap-schedule retention remove /volumes/_nogroup/subvol_1/85a615da-e8fa-46c1-afc3-0eb8ae64a954/.. 
14d4w 2", "ceph fs snap-schedule retention remove / TIME_PERIOD_PAIR TIME_PERIOD COUNT --fs CEPH_FILESYSTEM_NAME --subvol SUBVOLUME_NAME", "ceph fs snap-schedule retention remove / 5h --fs cephfs --subvol sv_sched --group subvolgroup_cg Retention removed from path /volumes/subvolgroup_cg/sv_sched/e704342a-ff07-4763-bb0b-a46d9dda6f27/..", "ceph fs snap-schedule retention remove / TIME_PERIOD_PAIR TIME_PERIOD COUNT --fs CEPH_FILESYSTEM_NAME --subvol SUBVOLUME_NAME --group NON-DEFAULT_GROUP_NAME", "ceph fs snap-schedule retention remove / 5h --fs cephfs --subvol sv_sched --group subvolgroup_cg Retention removed from path /volumes/subvolgroup_cg/sv_sched/e704342a-ff07-4763-bb0b-a46d9dda6f27/..", "cephadm shell", "ceph orch apply cephfs-mirror [\" NODE_NAME \"]", "ceph orch apply cephfs-mirror \"node1.example.com\" Scheduled cephfs-mirror update", "ceph orch apply cephfs-mirror --placement=\" PLACEMENT_SPECIFICATION \"", "ceph orch apply cephfs-mirror --placement=\"3 host1 host2 host3\" Scheduled cephfs-mirror update", "Error EINVAL: name component must include only a-z, 0-9, and -", "ceph fs authorize FILE_SYSTEM_NAME CLIENT_NAME / rwps", "ceph fs authorize cephfs client.mirror_remote / rwps [client.mirror_remote] key = AQCjZ5Jg739AAxAAxduIKoTZbiFJ0lgose8luQ==", "ceph mgr module enable mirroring", "ceph fs snapshot mirror enable FILE_SYSTEM_NAME", "ceph fs snapshot mirror enable cephfs", "ceph fs snapshot mirror disable FILE_SYSTEM_NAME", "ceph fs snapshot mirror disable cephfs", "ceph mgr module enable mirroring", "ceph fs snapshot mirror peer_bootstrap create FILE_SYSTEM_NAME CLIENT_NAME SITE_NAME", "ceph fs snapshot mirror peer_bootstrap create cephfs client.mirror_remote remote-site {\"token\": \"eyJmc2lkIjogIjBkZjE3MjE3LWRmY2QtNDAzMC05MDc5LTM2Nzk4NTVkNDJlZiIsICJmaWxlc3lzdGVtIjogImJhY2t1cF9mcyIsICJ1c2VyIjogImNsaWVudC5taXJyb3JfcGVlcl9ib290c3RyYXAiLCAic2l0ZV9uYW1lIjogInNpdGUtcmVtb3RlIiwgImtleSI6ICJBUUFhcDBCZ0xtRmpOeEFBVnNyZXozai9YYUV0T2UrbUJEZlJDZz09IiwgIm1vbl9ob3N0IjogIlt2MjoxOTIuMTY4LjAuNTo0MDkxOCx2MToxOTIuMTY4LjAuNTo0MDkxOV0ifQ==\"}", "ceph fs snapshot mirror peer_bootstrap import FILE_SYSTEM_NAME TOKEN", "ceph fs snapshot mirror peer_bootstrap import cephfs eyJmc2lkIjogIjBkZjE3MjE3LWRmY2QtNDAzMC05MDc5LTM2Nzk4NTVkNDJlZiIsICJmaWxlc3lzdGVtIjogImJhY2t1cF9mcyIsICJ1c2VyIjogImNsaWVudC5taXJyb3JfcGVlcl9ib290c3RyYXAiLCAic2l0ZV9uYW1lIjogInNpdGUtcmVtb3RlIiwgImtleSI6ICJBUUFhcDBCZ0xtRmpOeEFBVnNyZXozai9YYUV0T2UrbUJEZlJDZz09IiwgIm1vbl9ob3N0IjogIlt2MjoxOTIuMTY4LjAuNTo0MDkxOCx2MToxOTIuMTY4LjAuNTo0MDkxOV0ifQ==", "ceph fs snapshot mirror peer_list FILE_SYSTEM_NAME", "ceph fs snapshot mirror peer_list cephfs {\"e5ecb883-097d-492d-b026-a585d1d7da79\": {\"client_name\": \"client.mirror_remote\", \"site_name\": \"remote-site\", \"fs_name\": \"cephfs\", \"mon_host\": \"[v2:10.0.211.54:3300/0,v1:10.0.211.54:6789/0] [v2:10.0.210.56:3300/0,v1:10.0.210.56:6789/0] [v2:10.0.210.65:3300/0,v1:10.0.210.65:6789/0]\"}}", "ceph fs snapshot mirror peer_remove FILE_SYSTEM_NAME PEER_UUID", "ceph fs snapshot mirror peer_remove cephfs e5ecb883-097d-492d-b026-a585d1d7da79", "ceph fs snapshot mirror add FILE_SYSTEM_NAME PATH", "ceph fs snapshot mirror add cephfs /volumes/_nogroup/subvol_1", "ceph fs snapshot mirror remove FILE_SYSTEM_NAME PATH", "ceph fs snapshot mirror remove cephfs /home/user1", "cephadm shell", "ceph fs snapshot mirror daemon status", "ceph fs snapshot mirror daemon status [ { \"daemon_id\": 15594, \"filesystems\": [ { \"filesystem_id\": 1, \"name\": \"cephfs\", \"directory_count\": 1, \"peers\": [ { \"uuid\": 
\"e5ecb883-097d-492d-b026-a585d1d7da79\", \"remote\": { \"client_name\": \"client.mirror_remote\", \"cluster_name\": \"remote-site\", \"fs_name\": \"cephfs\" }, \"stats\": { \"failure_count\": 1, \"recovery_count\": 0 } } ] } ] } ]", "ceph --admin-daemon PATH_TO_THE_ASOK_FILE help", "ceph --admin-daemon /var/run/ceph/1011435c-9e30-4db6-b720-5bf482006e0e/ceph-client.cephfs-mirror.node1.bndvox.asok help { \"fs mirror peer status cephfs@11 1011435c-9e30-4db6-b720-5bf482006e0e\": \"get peer mirror status\", \"fs mirror status cephfs@11\": \"get filesystem mirror status\", }", "ceph --admin-daemon PATH_TO_THE_ASOK_FILE fs mirror status FILE_SYSTEM_NAME @_FILE_SYSTEM_ID", "ceph --admin-daemon /var/run/ceph/1011435c-9e30-4db6-b720-5bf482006e0e/ceph-client.cephfs-mirror.node1.bndvox.asok fs mirror status cephfs@11 { \"rados_inst\": \"192.168.0.5:0/1476644347\", \"peers\": { \"1011435c-9e30-4db6-b720-5bf482006e0e\": { 1 \"remote\": { \"client_name\": \"client.mirror_remote\", \"cluster_name\": \"remote-site\", \"fs_name\": \"cephfs\" } } }, \"snap_dirs\": { \"dir_count\": 1 } }", "ceph --admin-daemon PATH_TO_ADMIN_SOCKET fs mirror status FILE_SYSTEM_NAME @ FILE_SYSTEM_ID PEER_UUID", "ceph --admin-daemon /var/run/ceph/cephfs-mirror.asok fs mirror peer status cephfs@11 1011435c-9e30-4db6-b720-5bf482006e0e { \"/home/user1\": { \"state\": \"idle\", 1 \"last_synced_snap\": { \"id\": 120, \"name\": \"snap1\", \"sync_duration\": 0.079997898999999997, \"sync_time_stamp\": \"274900.558797s\" }, \"snaps_synced\": 2, 2 \"snaps_deleted\": 0, 3 \"snaps_renamed\": 0 } }", "ceph fs snapshot mirror dirmap FILE_SYSTEM_NAME PATH", "ceph fs snapshot mirror dirmap cephfs /volumes/_nogroup/subvol_1 { \"instance_id\": \"25184\", 1 \"last_shuffled\": 1661162007.012663, \"state\": \"mapped\" }", "ceph fs snapshot mirror dirmap cephfs /volumes/_nogroup/subvol_1 { \"reason\": \"no mirror daemons running\", \"state\": \"stalled\" 1 }", "ceph --admin-daemon ASOK_FILE_NAME counter dump", "ceph --admin-daemon ceph-client.cephfs-mirror.ceph1-hk-n-0mfqao-node7.pnbrlu.2.93909288073464.asok counter dump [ { \"key\": \"cephfs_mirror\", \"value\": [ { \"labels\": {}, \"counters\": { \"mirrored_filesystems\": 1, \"mirror_enable_failures\": 0 } } ] }, { \"key\": \"cephfs_mirror_mirrored_filesystems\", \"value\": [ { \"labels\": { \"filesystem\": \"cephfs\" }, \"counters\": { \"mirroring_peers\": 1, \"directory_count\": 1 } } ] }, { \"key\": \"cephfs_mirror_peers\", \"value\": [ { \"labels\": { \"peer_cluster_filesystem\": \"cephfs\", \"peer_cluster_name\": \"remote_site\", \"source_filesystem\": \"cephfs\", \"source_fscid\": \"1\" }, \"counters\": { \"snaps_synced\": 1, \"snaps_deleted\": 0, \"snaps_renamed\": 0, \"sync_failures\": 0, \"avg_sync_time\": { \"avgcount\": 1, \"sum\": 4.216959457, \"avgtime\": 4.216959457 }, \"sync_bytes\": 132 } } ] } ]" ]
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html-single/file_system_guide/mounting-the-ceph-file-system-as-a-kernel-client_fs
Security hardening
Security hardening Red Hat Enterprise Linux 9 Enhancing security of Red Hat Enterprise Linux 9 systems Red Hat Customer Content Services
[ "dnf update", "systemctl start firewalld systemctl enable firewalld", "systemctl disable cups", "systemctl list-units | grep service", "fips-mode-setup --check FIPS mode is enabled.", "fips-mode-setup --enable Kernel initramdisks are being regenerated. This might take some time. Setting system policy to FIPS Note: System-wide crypto policies are applied on application start-up. It is recommended to restart the system for the change of policies to fully take place. FIPS mode will be enabled. Please reboot the system for the setting to take effect.", "reboot", "fips-mode-setup --check FIPS mode is enabled.", "update-crypto-policies --show DEFAULT", "update-crypto-policies --set <POLICY> <POLICY>", "reboot", "update-crypto-policies --show <POLICY>", "update-crypto-policies --set LEGACY Setting system policy to LEGACY", "update-crypto-policies --set DEFAULT:SHA1 Setting system policy to DEFAULT:SHA1 Note: System-wide crypto policies are applied on application start-up. It is recommended to restart the system for the change of policies to fully take place.", "reboot", "update-crypto-policies --show DEFAULT:SHA1", "wget --secure-protocol= TLSv1_1 --ciphers=\" SECURE128 \" https://example.com", "curl https://example.com --ciphers '@SECLEVEL=0:DES-CBC3-SHA:RSA-DES-CBC3-SHA'", "cd /etc/crypto-policies/policies/modules/", "touch MYCRYPTO-1 .pmod touch SCOPES-AND-WILDCARDS .pmod", "vi MYCRYPTO-1 .pmod", "min_rsa_size = 3072 hash = SHA2-384 SHA2-512 SHA3-384 SHA3-512", "vi SCOPES-AND-WILDCARDS .pmod", "Disable the AES-128 cipher, all modes cipher = -AES-128-* Disable CHACHA20-POLY1305 for the TLS protocol (OpenSSL, GnuTLS, NSS, and OpenJDK) cipher@TLS = -CHACHA20-POLY1305 Allow using the FFDHE-1024 group with the SSH protocol (libssh and OpenSSH) group@SSH = FFDHE-1024+ Disable all CBC mode ciphers for the SSH protocol (libssh and OpenSSH) cipher@SSH = -*-CBC Allow the AES-256-CBC cipher in applications using libssh cipher@libssh = AES-256-CBC+", "update-crypto-policies --set DEFAULT: MYCRYPTO-1 : SCOPES-AND-WILDCARDS", "reboot", "cat /etc/crypto-policies/state/CURRENT.pol | grep rsa_size min_rsa_size = 3072", "cd /etc/crypto-policies/policies/ touch MYPOLICY .pol", "cp /usr/share/crypto-policies/policies/ DEFAULT .pol /etc/crypto-policies/policies/ MYPOLICY .pol", "vi /etc/crypto-policies/policies/ MYPOLICY .pol", "update-crypto-policies --set MYPOLICY", "reboot", "--- - name: Configure cryptographic policies hosts: managed-node-01.example.com tasks: - name: Configure the FUTURE cryptographic security policy on the managed node ansible.builtin.include_role: name: rhel-system-roles.crypto_policies vars: - crypto_policies_policy: FUTURE - crypto_policies_reboot_ok: true", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "--- - name: Verification hosts: managed-node-01.example.com tasks: - name: Verify active cryptographic policy ansible.builtin.include_role: name: rhel-system-roles.crypto_policies - name: Display the currently active cryptographic policy ansible.builtin.debug: var: crypto_policies_active", "ansible-playbook --syntax-check ~/verify_playbook.yml", "ansible-playbook ~/verify_playbook.yml TASK [debug] ************************** ok: [host] => { \"crypto_policies_active\": \"FUTURE\" }", "cat /usr/share/p11-kit/modules/opensc.module module: opensc-pkcs11.so", "ssh-keygen -D pkcs11: > keys.pub", "ssh-copy-id -f -i keys.pub <[email protected]>", "ssh -i \"pkcs11:id=%01?module-path=/usr/lib64/pkcs11/opensc-pkcs11.so\" <ssh-server-example.com> Enter PIN for 
'SSH key': [ssh-server-example.com] USD", "ssh -i \"pkcs11:id=%01\" <ssh-server-example.com> Enter PIN for 'SSH key': [ssh-server-example.com] USD", "ssh -i pkcs11: <ssh-server-example.com> Enter PIN for 'SSH key': [ssh-server-example.com] USD", "cat ~/.ssh/config IdentityFile \"pkcs11:id=%01?module-path=/usr/lib64/pkcs11/opensc-pkcs11.so\" ssh <ssh-server-example.com> Enter PIN for 'SSH key': [ssh-server-example.com] USD", "wget --private-key 'pkcs11:token=softhsm;id=%01;type=private?pin-value=111111' --certificate 'pkcs11:token=softhsm;id=%01;type=cert' https://example.com/", "curl --key 'pkcs11:token=softhsm;id=%01;type=private?pin-value=111111' --cert 'pkcs11:token=softhsm;id=%01;type=cert' https://example.com/", "SSLCertificateFile \"pkcs11:id=%01;token=softhsm;type=cert\" SSLCertificateKeyFile \"pkcs11:id=%01;token=softhsm;type=private?pin-value=111111\"", "ssl_certificate /path/to/cert.pem ssl_certificate_key \"engine:pkcs11:pkcs11:token=softhsm;id=%01;type=private?pin-value=111111\";", "Authentication is required to access the PC/SC daemon", "journalctl -b | grep pcsc Process 3087 (user: 1001) is NOT authorized for action: access_pcsc", "journalctl -u polkit polkitd[NNN]: Error compiling script /etc/polkit-1/rules.d/00-debug-pcscd.rules polkitd[NNN]: Operator of unix-session:c2 FAILED to authenticate to gain authorization for action org.debian.pcsc-lite.access_pcsc for unix-process:4800:14441 [/usr/libexec/gsd-smartcard] (owned by unix-user:group)", "#!/bin/bash cd /proc for p in [0-9]* do if grep libpcsclite.so.1.0.0 USDp/maps &> /dev/null then echo -n \"process: \" cat USDp/cmdline echo \" (USDp)\" fi done", "./pcsc-apps.sh process: /usr/libexec/gsd-smartcard (3048) enable-sync --auto-ssl-client-auth --enable-crashpad (4828)", "touch /etc/polkit-1/rules.d/00-test.rules", "vi /etc/polkit-1/rules.d/00-test.rules", "polkit.addRule(function(action, subject) { if (action.id == \"org.debian.pcsc-lite.access_pcsc\" || action.id == \"org.debian.pcsc-lite.access_card\") { polkit.log(\"action=\" + action); polkit.log(\"subject=\" + subject); } });", "systemctl restart pcscd.service pcscd.socket polkit.service", "journalctl -u polkit --since \"1 hour ago\" polkitd[1224]: <no filename>:4: action=[Action id='org.debian.pcsc-lite.access_pcsc'] polkitd[1224]: <no filename>:5: subject=[Subject pid=2020481 user=user' groups=user,wheel,mock,wireshark seat=null session=null local=true active=true]", "wget -O - https://www.redhat.com/security/data/oval/v2/RHEL9/rhel-9.oval.xml.bz2 | bzip2 --decompress > rhel-9.oval.xml", "oscap oval eval --report vulnerability.html rhel-9.oval.xml", "firefox vulnerability.html &", "wget -O - https://www.redhat.com/security/data/oval/v2/RHEL9/rhel-9.oval.xml.bz2 | bzip2 --decompress > rhel-9.oval.xml", "oscap-ssh <username> @ <hostname> <port> oval eval --report <scan-report.html> rhel-9.oval.xml", "Data stream ├── xccdf | ├── benchmark | ├── profile | | ├──rule reference | | └──variable | ├── rule | ├── human readable data | ├── oval reference ├── oval ├── ocil reference ├── ocil ├── cpe reference └── cpe └── remediation", "ls /usr/share/xml/scap/ssg/content/ ssg-rhel9-ds.xml", "oscap info /usr/share/xml/scap/ssg/content/ssg-rhel9-ds.xml Profiles: ... 
Title: Australian Cyber Security Centre (ACSC) Essential Eight Id: xccdf_org.ssgproject.content_profile_e8 Title: Health Insurance Portability and Accountability Act (HIPAA) Id: xccdf_org.ssgproject.content_profile_hipaa Title: PCI-DSS v3.2.1 Control Baseline for Red Hat Enterprise Linux 9 Id: xccdf_org.ssgproject.content_profile_pci-dss ...", "oscap info --profile hipaa /usr/share/xml/scap/ssg/content/ssg-rhel9-ds.xml ... Profile Title: Health Insurance Portability and Accountability Act (HIPAA) Description: The HIPAA Security Rule establishes U.S. national standards to protect individuals' electronic personal health information that is created, received, used, or maintained by a covered entity. ...", "oscap xccdf eval --report <scan-report.html> --profile <profileID> /usr/share/xml/scap/ssg/content/ssg-rhel9-ds.xml", "oscap-ssh <username> @ <hostname> <port> xccdf eval --report <scan-report.html> --profile <profileID> /usr/share/xml/scap/ssg/content/ssg-rhel9-ds.xml", "oscap xccdf eval --profile <profileID> --remediate /usr/share/xml/scap/ssg/content/ssg-rhel9-ds.xml", "oscap xccdf eval --report <scan-report.html> --profile <profileID> /usr/share/xml/scap/ssg/content/ssg-rhel9-ds.xml", "ansible-playbook -i localhost, -c local /usr/share/scap-security-guide/ansible/rhel9-playbook-hipaa.yml", "oscap xccdf eval --profile hipaa --report <scan-report.html> /usr/share/xml/scap/ssg/content/ssg-rhel9-ds.xml", "oscap xccdf eval --profile hipaa --results <hipaa-results.xml> /usr/share/xml/scap/ssg/content/ssg-rhel9-ds.xml", "oscap info <hipaa-results.xml>", "oscap xccdf generate fix --fix-type ansible --result-id <xccdf_org.open-scap_testresult_xccdf_org.ssgproject.content_profile_hipaa> --output <hipaa-remediations.yml> <hipaa-results.xml>", "oscap xccdf eval --profile hipaa --results <hipaa-results.xml> /usr/share/xml/scap/ssg/content/ssg-rhel9-ds.xml", "oscap info <hipaa-results.xml>", "oscap xccdf generate fix --fix-type bash --result-id <xccdf_org.open-scap_testresult_xccdf_org.ssgproject.content_profile_hipaa> --output <hipaa-remediations.sh> <hipaa-results.xml>", "scap-workbench &", "oscap xccdf eval --profile ospp --report eval_postinstall_report.html /usr/share/xml/scap/ssg/content/ssg-rhel9-ds.xml", "oscap xccdf eval --profile ospp --report eval_postinstall_report.html /usr/share/xml/scap/ssg/content/ssg-rhel9-ds.xml", "wget -O - https://www.redhat.com/security/data/oval/v2/RHEL9/rhel-9.oval.xml.bz2 | bzip2 --decompress > rhel-9.oval.xml", "podman images REPOSITORY TAG IMAGE ID CREATED SIZE registry.access.redhat.com/ubi9/ubi latest 096cae65a207 7 weeks ago 239 MB", "oscap-podman 096cae65a207 oval eval --report vulnerability.html rhel-9.oval.xml", "firefox vulnerability.html &", "oscap-podman <ID> xccdf eval --report <scan-report.html> --profile <profileID> /usr/share/xml/scap/ssg/content/ssg-rhel9-ds.xml", "firefox <scan-report.html> &", "dnf install keylime-verifier", "[verifier] ip = <verifier_IP_address>", "[verifier] database_url = <protocol>://<name>:<password>@<ip_address_or_hostname>/<properties>", "[verifier] tls_dir = /var/lib/keylime/cv_ca server_key = </path/to/server_key> server_key_password = <passphrase1> server_cert = </path/to/server_cert> trusted_client_ca = [' </path/to/ca/cert1> ', ' </path/to/ca/cert2> '] client_key = </path/to/client_key> client_key_password = <passphrase2> client_cert = </path/to/client_cert> trusted_server_ca = [' </path/to/ca/cert3> ', ' </path/to/ca/cert4> ']", "firewall-cmd --add-port 8881/tcp firewall-cmd --runtime-to-permanent",
"systemctl enable --now keylime_verifier", "systemctl status keylime_verifier ● keylime_verifier.service - The Keylime verifier Loaded: loaded (/usr/lib/systemd/system/keylime_verifier.service; disabled; vendor preset: disabled) Active: active (running) since Wed 2022-11-09 10:10:08 EST; 1min 45s ago", "dnf install keylime-verifier", "[verifier] ip = *", "[verifier] database_url = <protocol>://<name>:<password>@<ip_address_or_hostname>/<properties>", "[verifier] tls_dir = /var/lib/keylime/cv_ca server_key = </path/to/server_key> server_cert = </path/to/server_cert> trusted_client_ca = [' </path/to/ca/cert1> ', ' </path/to/ca/cert2> '] client_key = </path/to/client_key> client_cert = </path/to/client_cert> trusted_server_ca = [' </path/to/ca/cert3> ', ' </path/to/ca/cert4> ']", "firewall-cmd --add-port 8881/tcp firewall-cmd --runtime-to-permanent", "podman run --name keylime-verifier -p 8881:8881 -v /etc/keylime/verifier.conf.d:/etc/keylime/verifier.conf.d:Z -v /var/lib/keylime/cv_ca:/var/lib/keylime/cv_ca:Z -d -e KEYLIME_VERIFIER_SERVER_KEY_PASSWORD= <passphrase1> -e KEYLIME_VERIFIER_CLIENT_KEY_PASSWORD= <passphrase2> registry.access.redhat.com/rhel9/keylime-verifier", "podman ps -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 80b6b9dbf57c registry.access.redhat.com/rhel9/keylime-verifier:latest keylime_verifier 14 seconds ago Up 14 seconds 0.0.0.0:8881->8881/tcp keylime-verifier", "dnf install keylime-registrar", "[registrar] ip = <registrar_IP_address>", "[registrar] database_url = <protocol>://<name>:<password>@<ip_address_or_hostname>/<properties>", "[registrar] tls_dir = /var/lib/keylime/reg_ca server_key = </path/to/server_key> server_key_password = <passphrase1> server_cert = </path/to/server_cert> trusted_client_ca = [' </path/to/ca/cert1> ', ' </path/to/ca/cert2> ']", "firewall-cmd --add-port 8890/tcp --add-port 8891/tcp firewall-cmd --runtime-to-permanent", "systemctl enable --now keylime_registrar", "systemctl status keylime_registrar ● keylime_registrar.service - The Keylime registrar service Loaded: loaded (/usr/lib/systemd/system/keylime_registrar.service; disabled; vendor preset: disabled) Active: active (running) since Wed 2022-11-09 10:10:17 EST; 1min 42s ago", "dnf install keylime-registrar", "[registrar] ip = *", "[registrar] database_url = &lt;protocol&gt;://&lt;name&gt;:&lt;password&gt;@&lt;ip_address_or_hostname&gt;/&lt;properties&gt;", "[registrar] tls_dir = /var/lib/keylime/reg_ca server_key = &lt;/path/to/server_key&gt; server_cert = &lt;/path/to/server_cert&gt; trusted_client_ca = [' &lt;/path/to/ca/cert1&gt; ', ' &lt;/path/to/ca/cert2&gt; ']", "firewall-cmd --add-port 8890/tcp --add-port 8891/tcp firewall-cmd --runtime-to-permanent", "podman run --name keylime-registrar -p 8890:8890 -p 8891:8891 -v /etc/keylime/registrar.conf.d:/etc/keylime/registrar.conf.d:Z -v /var/lib/keylime/reg_ca:/var/lib/keylime/reg_ca:Z -d -e KEYLIME_REGISTRAR_SERVER_KEY_PASSWORD= &lt;passphrase1&gt; registry.access.redhat.com/rhel9/keylime-registrar", "podman ps -a CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES 07d4b4bff1b6 localhost/keylime-registrar:latest keylime_registrar 12 seconds ago Up 12 seconds 0.0.0.0:8881->8881/tcp, 0.0.0.0:8891->8891/tcp keylime-registrar", "vi keylime-playbook.yml", "--- - name: Manage keylime servers hosts: all vars: keylime_server_verifier_ip: \"{{ ansible_host }}\" keylime_server_registrar_ip: \"{{ ansible_host }}\" keylime_server_verifier_tls_dir: <ver_tls_directory > keylime_server_verifier_server_cert: <ver_server_certfile > 
keylime_server_verifier_server_key: <ver_server_key > keylime_server_verifier_server_key_passphrase: <ver_server_key_passphrase > keylime_server_verifier_trusted_client_ca: <ver_trusted_client_ca_list > keylime_server_verifier_client_cert: <ver_client_certfile > keylime_server_verifier_client_key: <ver_client_key > keylime_server_verifier_client_key_passphrase: <ver_client_key_passphrase > keylime_server_verifier_trusted_server_ca: <ver_trusted_server_ca_list > keylime_server_registrar_tls_dir: <reg_tls_directory > keylime_server_registrar_server_cert: <reg_server_certfile > keylime_server_registrar_server_key: <reg_server_key > keylime_server_registrar_server_key_passphrase: <reg_server_key_passphrase > keylime_server_registrar_trusted_client_ca: <reg_trusted_client_ca_list > roles: - rhel-system-roles.keylime_server", "ansible-playbook <keylime-playbook.yml>", "systemctl status keylime_verifier ● keylime_verifier.service - The Keylime verifier Loaded: loaded (/usr/lib/systemd/system/keylime_verifier.service; disabled; vendor preset: disabled) Active: active (running) since Wed 2022-11-09 10:10:08 EST; 1min 45s ago", "systemctl status keylime_registrar ● keylime_registrar.service - The Keylime registrar service Loaded: loaded (/usr/lib/systemd/system/keylime_registrar.service; disabled; vendor preset: disabled) Active: active (running) since Wed 2022-11-09 10:10:17 EST; 1min 42s ago", "dnf install keylime-tenant", "[tenant] verifier_ip = <verifier_ip>", "[tenant] registrar_ip = <registrar_ip>", "[tenant] tls_dir = /var/lib/keylime/cv_ca client_key = tenant-key.pem client_key_password = <passphrase1> client_cert = tenant-cert.pem trusted_server_ca = [' </path/to/ca/cert> ']", "keylime_tenant -c cvstatus Reading configuration from ['/etc/keylime/logging.conf'] 2022-10-14 12:56:08.155 - keylime.tpm - INFO - TPM2-TOOLS Version: 5.2 Reading configuration from ['/etc/keylime/tenant.conf'] 2022-10-14 12:56:08.157 - keylime.tenant - INFO - Setting up client TLS 2022-10-14 12:56:08.158 - keylime.tenant - INFO - Using default client_cert option for tenant 2022-10-14 12:56:08.158 - keylime.tenant - INFO - Using default client_key option for tenant 2022-10-14 12:56:08.178 - keylime.tenant - INFO - TLS is enabled. 2022-10-14 12:56:08.178 - keylime.tenant - WARNING - Using default UUID d432fbb3-d2f1-4a97-9ef7-75bd81c00000 2022-10-14 12:56:08.221 - keylime.tenant - INFO - Verifier at 127.0.0.1 with Port 8881 does not have agent d432fbb3-d2f1-4a97-9ef7-75bd81c00000.", "keylime_tenant -c regstatus Reading configuration from ['/etc/keylime/logging.conf'] 2022-10-14 12:56:02.114 - keylime.tpm - INFO - TPM2-TOOLS Version: 5.2 Reading configuration from ['/etc/keylime/tenant.conf'] 2022-10-14 12:56:02.116 - keylime.tenant - INFO - Setting up client TLS 2022-10-14 12:56:02.116 - keylime.tenant - INFO - Using default client_cert option for tenant 2022-10-14 12:56:02.116 - keylime.tenant - INFO - Using default client_key option for tenant 2022-10-14 12:56:02.137 - keylime.tenant - INFO - TLS is enabled. 
2022-10-14 12:56:02.137 - keylime.tenant - WARNING - Using default UUID d432fbb3-d2f1-4a97-9ef7-75bd81c00000 2022-10-14 12:56:02.171 - keylime.registrar_client - CRITICAL - Error: could not get agent d432fbb3-d2f1-4a97-9ef7-75bd81c00000 data from Registrar Server: 404 2022-10-14 12:56:02.172 - keylime.registrar_client - CRITICAL - Response code 404: agent d432fbb3-d2f1-4a97-9ef7-75bd81c00000 not found 2022-10-14 12:56:02.172 - keylime.tenant - INFO - Agent d432fbb3-d2f1-4a97-9ef7-75bd81c00000 does not exist on the registrar. Please register the agent with the registrar. 2022-10-14 12:56:02.172 - keylime.tenant - INFO - {\"code\": 404, \"status\": \"Agent d432fbb3-d2f1-4a97-9ef7-75bd81c00000 does not exist on registrar 127.0.0.1 port 8891.\", \"results\": {}}", "dnf install keylime-agent", "[agent] ip = ' <agent_ip> '", "[agent] registrar_ip = ' <registrar_IP_address> '", "[agent] uuid = ' <agent_UUID> '", "[agent] server_key = ' </path/to/server_key> ' server_key_password = ' <passphrase1> ' server_cert = ' </path/to/server_cert> ' trusted_client_ca = '[ </path/to/ca/cert3> , </path/to/ca/cert4> ]'", "firewall-cmd --add-port 9002/tcp firewall-cmd --runtime-to-permanent", "systemctl enable --now keylime_agent", "keylime_tenant -c regstatus --uuid <agent_uuid> Reading configuration from ['/etc/keylime/logging.conf'] ==\\n-----END CERTIFICATE-----\\n\", \"ip\": \"127.0.0.1\", \"port\": 9002, \"regcount\": 1, \"operational_state\": \"Registered\"}}}", "PROC_SUPER_MAGIC = 0x9fa0 dont_measure fsmagic=0x9fa0 SYSFS_MAGIC = 0x62656572 dont_measure fsmagic=0x62656572 DEBUGFS_MAGIC = 0x64626720 dont_measure fsmagic=0x64626720 TMPFS_MAGIC = 0x01021994 dont_measure fsmagic=0x1021994 RAMFS_MAGIC dont_measure fsmagic=0x858458f6 DEVPTS_SUPER_MAGIC=0x1cd1 dont_measure fsmagic=0x1cd1 BINFMTFS_MAGIC=0x42494e4d dont_measure fsmagic=0x42494e4d SECURITYFS_MAGIC=0x73636673 dont_measure fsmagic=0x73636673 SELINUX_MAGIC=0xf97cff8c dont_measure fsmagic=0xf97cff8c SMACK_MAGIC=0x43415d53 dont_measure fsmagic=0x43415d53 NSFS_MAGIC=0x6e736673 dont_measure fsmagic=0x6e736673 EFIVARFS_MAGIC dont_measure fsmagic=0xde5e81e4 CGROUP_SUPER_MAGIC=0x27e0eb dont_measure fsmagic=0x27e0eb CGROUP2_SUPER_MAGIC=0x63677270 dont_measure fsmagic=0x63677270 OVERLAYFS_MAGIC when containers are used we almost always want to ignore them dont_measure fsmagic=0x794c7630 MEASUREMENTS measure func=BPRM_CHECK measure func=FILE_MMAP mask=MAY_EXEC measure func=MODULE_CHECK uid=0", "grubby --update-kernel DEFAULT --args 'ima_appraise=fix ima_canonical_fmt ima_policy=tcb ima_template=ima-ng'", "systemctl status keylime_agent ● keylime_agent.service - The Keylime compute agent Loaded: loaded (/usr/lib/systemd/system/keylime_agent.service; enabled; preset: disabled) Active: active (running) since", "/usr/share/keylime/scripts/create_allowlist.sh -o <allowlist.txt> -h sha256sum", "scp <allowlist.txt> root@ <tenant . ip> :/root/ <allowlist.txt>", "keylime_create_policy -a <allowlist.txt> -e <excludelist.txt> -o <policy.json>", "keylime_tenant -c add -t <agent_ip> -u <agent_uuid> --runtime-policy <policy.json> --cert default", "keylime_tenant -c add -t 127.0.0.1 -u d432fbb3-d2f1-4a97-9ef7-75bd81c00000 --runtime-policy policy.json --cert default", "keylime_tenant -c cvstatus -u <agent.uuid> {\" <agent.uuid> \": {\"operational_state\": \"Get Quote\"...\"attestation_count\": 5", "{\" <agent.uuid> \": {\"operational_state\": \"Invalid Quote\", ... 
\"ima.validation.ima-ng.not_in_allowlist\", \"attestation_count\": 5, \"last_received_quote\": 1684150329, \"last_successful_attestation\": 1684150327}}", "journalctl -u keylime_verifier keylime.tpm - INFO - Checking IMA measurement list keylime.ima - WARNING - File not found in allowlist: /root/bad-script.sh keylime.ima - ERROR - IMA ERRORS: template-hash 0 fnf 1 hash 0 good 781 keylime.cloudverifier - WARNING - agent D432FBB3-D2F1-4A97-9EF7-75BD81C00000 failed, stopping polling", "dnf -y install python3-keylime", "/usr/share/keylime/scripts/create_mb_refstate /sys/kernel/security/tpm0/binary_bios_measurements <./measured_boot_reference_state.json>", "scp root@ <agent_ip> : <./measured_boot_reference_state.json> <./measured_boot_reference_state.json>", "keylime_tenant -c add -t <agent_ip> -u <agent_uuid> --mb_refstate <./measured_boot_reference_state.json> --cert default", "keylime_tenant -c cvstatus -u <agent_uuid> {\" <agent.uuid> \": {\"operational_state\": \"Get Quote\"...\"attestation_count\": 5", "{\" <agent.uuid> \": {\"operational_state\": \"Invalid Quote\", ... \"ima.validation.ima-ng.not_in_allowlist\", \"attestation_count\": 5, \"last_received_quote\": 1684150329, \"last_successful_attestation\": 1684150327}}", "journalctl -u keylime_verifier {\"d432fbb3-d2f1-4a97-9ef7-75bd81c00000\": {\"operational_state\": \"Tenant Quote Failed\", ... \"last_event_id\": \"measured_boot.invalid_pcr_0\", \"attestation_count\": 0, \"last_received_quote\": 1684487093, \"last_successful_attestation\": 0}}", "KEYLIME _<SECTION>_<ENVIRONMENT_VARIABLE> = <value>", "dnf install aide", "aide --init Start timestamp: 2024-07-08 10:39:23 -0400 (AIDE 0.16) AIDE initialized database at /var/lib/aide/aide.db.new.gz Number of entries: 55856 --------------------------------------------------- The attributes of the (uncompressed) database(s): --------------------------------------------------- /var/lib/aide/aide.db.new.gz ... SHA512 : mZaWoGzL2m6ZcyyZ/AXTIowliEXWSZqx IFYImY4f7id4u+Bq8WeuSE2jasZur/A4 FPBFaBkoCFHdoE/FW/V94Q==", "mv /var/lib/aide/aide.db.new.gz /var/lib/aide/aide.db.gz", "aide --check Start timestamp: 2024-07-08 10:43:46 -0400 (AIDE 0.16) AIDE found differences between database and filesystem!! Summary: Total number of entries: 55856 Added entries: 0 Removed entries: 0 Changed entries: 1 --------------------------------------------------- Changed entries: --------------------------------------------------- f ... 
..S : /root/.viminfo --------------------------------------------------- Detailed information about changes: --------------------------------------------------- File: /root/.viminfo SELinux : system_u:object_r:admin_home_t:s | unconfined_u:object_r:admin_home 0 | _t:s0 ...", "05 4 * * * root /usr/sbin/aide --check", "aide --update", "umount /dev/mapper/vg00-lv00", "lvextend -L+ 32M /dev/mapper/vg00-lv00", "cryptsetup reencrypt --encrypt --init-only --reduce-device-size 32M /dev/mapper/ vg00-lv00 lv00_encrypted /dev/mapper/ lv00_encrypted is now active and ready for online encryption.", "mount /dev/mapper/ lv00_encrypted /mnt/lv00_encrypted", "cryptsetup luksUUID /dev/mapper/ vg00-lv00 a52e2cc9-a5be-47b8-a95d-6bdf4f2d9325", "vi /etc/crypttab lv00_encrypted UUID= a52e2cc9-a5be-47b8-a95d-6bdf4f2d9325 none", "dracut -f --regenerate-all", "blkid -p /dev/mapper/ lv00_encrypted /dev/mapper/ lv00-encrypted : UUID=\" 37bc2492-d8fa-4969-9d9b-bb64d3685aa9 \" BLOCK_SIZE=\"4096\" TYPE=\"xfs\" USAGE=\"filesystem\"", "vi /etc/fstab UUID= 37bc2492-d8fa-4969-9d9b-bb64d3685aa9 /home auto rw,user,auto 0", "cryptsetup reencrypt --resume-only /dev/mapper/ vg00-lv00 Enter passphrase for /dev/mapper/ vg00-lv00 : Auto-detected active dm device ' lv00_encrypted ' for data device /dev/mapper/ vg00-lv00 . Finished, time 00:31.130, 10272 MiB written, speed 330.0 MiB/s", "cryptsetup luksDump /dev/mapper/ vg00-lv00 LUKS header information Version: 2 Epoch: 4 Metadata area: 16384 [bytes] Keyslots area: 16744448 [bytes] UUID: a52e2cc9-a5be-47b8-a95d-6bdf4f2d9325 Label: (no label) Subsystem: (no subsystem) Flags: (no flags) Data segments: 0: crypt offset: 33554432 [bytes] length: (whole device) cipher: aes-xts-plain64 [...]", "cryptsetup status lv00_encrypted /dev/mapper/ lv00_encrypted is active and is in use. type: LUKS2 cipher: aes-xts-plain64 keysize: 512 bits key location: keyring device: /dev/mapper/ vg00-lv00", "umount /dev/ nvme0n1p1", "cryptsetup reencrypt --encrypt --init-only --header /home/header /dev/ nvme0n1p1 nvme_encrypted WARNING! ======== Header file does not exist, do you want to create it? Are you sure? (Type 'yes' in capital letters): YES Enter passphrase for /home/header : Verify passphrase: /dev/mapper/ nvme_encrypted is now active and ready for online encryption.", "mount /dev/mapper/ nvme_encrypted /mnt/nvme_encrypted", "cryptsetup reencrypt --resume-only --header /home/header /dev/ nvme0n1p1 Enter passphrase for /dev/ nvme0n1p1 : Auto-detected active dm device 'nvme_encrypted' for data device /dev/ nvme0n1p1 . Finished, time 00m51s, 10 GiB written, speed 198.2 MiB/s", "cryptsetup luksDump /home/header LUKS header information Version: 2 Epoch: 88 Metadata area: 16384 [bytes] Keyslots area: 16744448 [bytes] UUID: c4f5d274-f4c0-41e3-ac36-22a917ab0386 Label: (no label) Subsystem: (no subsystem) Flags: (no flags) Data segments: 0: crypt offset: 0 [bytes] length: (whole device) cipher: aes-xts-plain64 sector: 512 [bytes] [...]", "cryptsetup status nvme_encrypted /dev/mapper/ nvme_encrypted is active and is in use. type: LUKS2 cipher: aes-xts-plain64 keysize: 512 bits key location: keyring device: /dev/ nvme0n1p1", "cryptsetup luksFormat /dev/ nvme0n1p1 WARNING! ======== This will overwrite data on /dev/nvme0n1p1 irrevocably. Are you sure? 
(Type 'yes' in capital letters): YES Enter passphrase for /dev/ nvme0n1p1 : Verify passphrase:", "cryptsetup open /dev/ nvme0n1p1 nvme0n1p1_encrypted Enter passphrase for /dev/ nvme0n1p1 :", "mkfs -t ext4 /dev/mapper/ nvme0n1p1_encrypted", "mount /dev/mapper/ nvme0n1p1_encrypted mount-point", "cryptsetup luksDump /dev/ nvme0n1p1 LUKS header information Version: 2 Epoch: 3 Metadata area: 16384 [bytes] Keyslots area: 16744448 [bytes] UUID: 34ce4870-ffdf-467c-9a9e-345a53ed8a25 Label: (no label) Subsystem: (no subsystem) Flags: (no flags) Data segments: 0: crypt offset: 16777216 [bytes] length: (whole device) cipher: aes-xts-plain64 sector: 512 [bytes] [...]", "cryptsetup status nvme0n1p1_encrypted /dev/mapper/ nvme0n1p1_encrypted is active and is in use. type: LUKS2 cipher: aes-xts-plain64 keysize: 512 bits key location: keyring device: /dev/ nvme0n1p1 sector size: 512 offset: 32768 sectors size: 20938752 sectors mode: read/write", "ansible-vault create vault.yml New Vault password: <vault_password> Confirm New Vault password: <vault_password>", "luks_password: <password>", "--- - name: Manage local storage hosts: managed-node-01.example.com vars_files: - vault.yml tasks: - name: Create and configure a volume encrypted with LUKS ansible.builtin.include_role: name: rhel-system-roles.storage vars: storage_volumes: - name: barefs type: disk disks: - sdb fs_type: xfs fs_label: <label> mount_point: /mnt/data encryption: true encryption_password: \"{{ luks_password }}\"", "ansible-playbook --ask-vault-pass --syntax-check ~/playbook.yml", "ansible-playbook --ask-vault-pass ~/playbook.yml", "ansible managed-node-01.example.com -m command -a 'cryptsetup luksUUID /dev/sdb' 4e4e7970-1822-470e-b55a-e91efe5d0f5c", "ansible managed-node-01.example.com -m command -a 'cryptsetup status luks-4e4e7970-1822-470e-b55a-e91efe5d0f5c' /dev/mapper/luks-4e4e7970-1822-470e-b55a-e91efe5d0f5c is active and is in use. type: LUKS2 cipher: aes-xts-plain64 keysize: 512 bits key location: keyring device: /dev/sdb", "ansible managed-node-01.example.com -m command -a 'cryptsetup luksDump /dev/sdb' LUKS header information Version: 2 Epoch: 3 Metadata area: 16384 [bytes] Keyslots area: 16744448 [bytes] UUID: 4e4e7970-1822-470e-b55a-e91efe5d0f5c Label: (no label) Subsystem: (no subsystem) Flags: (no flags) Data segments: 0: crypt offset: 16777216 [bytes] length: (whole device) cipher: aes-xts-plain64 sector: 512 [bytes]", "dnf install tang", "semanage port -a -t tangd_port_t -p tcp 7500", "firewall-cmd --add-port= 7500 /tcp firewall-cmd --runtime-to-permanent", "systemctl enable tangd.socket", "systemctl edit tangd.socket", "[Socket] ListenStream= ListenStream= 7500", "systemctl daemon-reload", "systemctl show tangd.socket -p Listen Listen=[::]:7500 (Stream)", "systemctl restart tangd.socket", "echo test | clevis encrypt tang '{\"url\":\" <tang.server.example.com:7500> \"}' -y | clevis decrypt test", "cd /var/db/tang ls -l -rw-r--r--. 1 root root 349 Feb 7 14:55 UV6dqXSwe1bRKG3KbJmdiR020hY.jwk -rw-r--r--. 
1 root root 354 Feb 7 14:55 y9hxLTQSiSB5jSEGWnjhY8fDTJU.jwk mv UV6dqXSwe1bRKG3KbJmdiR020hY.jwk .UV6dqXSwe1bRKG3KbJmdiR020hY.jwk mv y9hxLTQSiSB5jSEGWnjhY8fDTJU.jwk .y9hxLTQSiSB5jSEGWnjhY8fDTJU.jwk", "ls -l total 0", "/usr/libexec/tangd-keygen /var/db/tang ls /var/db/tang 3ZWS6-cDrCG61UPJS2BMmPU4I54.jwk zyLuX6hijUy_PSeUEFDi7hi38.jwk", "tang-show-keys 7500 3ZWS6-cDrCG61UPJS2BMmPU4I54", "clevis luks list -d /dev/sda2 1: tang '{\"url\":\" http://tang.srv \"}' clevis luks report -d /dev/sda2 -s 1 Report detected that some keys were rotated. Do you want to regenerate luks metadata with \"clevis luks regen -d /dev/sda2 -s 1\"? [ynYN]", "clevis luks regen -d /dev/sda2 -s 1", "cd /var/db/tang rm .*.jwk", "tang-show-keys 7500 x100_1k6GPiDOaMlL3WbpCjHOy9ul1bSfdhI3M08wO0", "lsinitrd | grep clevis-luks lrwxrwxrwx 1 root root 48 Jan 4 02:56 etc/systemd/system/cryptsetup.target.wants/clevis-luks-askpass.path -> /usr/lib/systemd/system/clevis-luks-askpass.path ...", "clevis encrypt tang '{\"url\":\" http://tang.srv:port \"}' < input-plain.txt > secret.jwe The advertisement contains the following signing keys: _OsIk0T-E2l6qjfdDiwVmidoZjA Do you wish to trust these keys? [ynYN] y", "curl -sfg http://tang.srv:port /adv -o adv.jws", "echo 'hello' | clevis encrypt tang '{\"url\":\" http://tang.srv:port \",\"adv\":\" adv.jws \"}'", "clevis decrypt < secret.jwe > output-plain.txt", "clevis encrypt tpm2 '{}' < input-plain.txt > secret.jwe", "clevis encrypt tpm2 '{\"hash\":\"sha256\",\"key\":\"rsa\"}' < input-plain.txt > secret.jwe", "clevis decrypt < secret.jwe > output-plain.txt", "clevis encrypt tpm2 '{\"pcr_bank\":\"sha256\",\"pcr_ids\":\"0,7\"}' < input-plain.txt > secret.jwe", "clevis encrypt tang Usage: clevis encrypt tang CONFIG < PLAINTEXT > JWE", "dnf install clevis-luks", "lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 12G 0 disk ├─sda1 8:1 0 1G 0 part /boot └─sda2 8:2 0 11G 0 part └─luks-40e20552-2ade-4954-9d56-565aa7994fb6 253:0 0 11G 0 crypt ├─rhel-root 253:0 0 9.8G 0 lvm / └─rhel-swap 253:1 0 1.2G 0 lvm [SWAP]", "clevis luks bind -d /dev/sda2 tang '{\"url\":\" http://tang.srv \"}' The advertisement contains the following signing keys: _OsIk0T-E2l6qjfdDiwVmidoZjA Do you wish to trust these keys? [ynYN] y You are about to initialize a LUKS device for metadata storage. Attempting to initialize it may result in data loss if data was already written into the LUKS header gap in a different format. A backup is advised before initialization is performed. Do you wish to initialize /dev/sda2? 
[yn] y Enter existing LUKS password:", "dnf install clevis-dracut", "dracut -fv --regenerate-all --hostonly-cmdline", "echo \"hostonly_cmdline=yes\" > /etc/dracut.conf.d/clevis.conf dracut -fv --regenerate-all", "grubby --update-kernel=ALL --args=\"rd.neednet=1\"", "clevis luks list -d /dev/sda2 1: tang '{\"url\":\"http://tang.srv:port\"}'", "lsinitrd | grep clevis-luks lrwxrwxrwx 1 root root 48 Jan 4 02:56 etc/systemd/system/cryptsetup.target.wants/clevis-luks-askpass.path -> /usr/lib/systemd/system/clevis-luks-askpass.path ...", "dracut -fv --regenerate-all --kernel-cmdline \"ip=192.0.2.10::192.0.2.1:255.255.255.0::ens3:none nameserver=192.0.2.100\"", "cat /etc/dracut.conf.d/static_ip.conf kernel_cmdline=\"ip=192.0.2.10::192.0.2.1:255.255.255.0::ens3:none nameserver=192.0.2.100\" dracut -fv --regenerate-all", "dnf install clevis-luks", "lsblk NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT sda 8:0 0 12G 0 disk ├─sda1 8:1 0 1G 0 part /boot └─sda2 8:2 0 11G 0 part └─luks-40e20552-2ade-4954-9d56-565aa7994fb6 253:0 0 11G 0 crypt ├─rhel-root 253:0 0 9.8G 0 lvm / └─rhel-swap 253:1 0 1.2G 0 lvm [SWAP]", "clevis luks bind -d /dev/sda2 tpm2 '{\"hash\":\"sha256\",\"key\":\"rsa\"}' Do you wish to initialize /dev/sda2? [yn] y Enter existing LUKS password:", "clevis luks bind -d /dev/sda2 tpm2 '{\"hash\":\"sha256\",\"key\":\"rsa\",\"pcr_bank\":\"sha256\",\"pcr_ids\":\"0,1\"}'", "dnf install clevis-dracut dracut -fv --regenerate-all", "clevis luks list -d /dev/sda2 1: tpm2 '{\"hash\":\"sha256\",\"key\":\"rsa\"}'", "clevis luks unbind -d /dev/sda2 -s 1", "cryptsetup luksDump /dev/sda2 LUKS header information Version: 2 Keyslots: 0: luks2 1: luks2 Key: 512 bits Priority: normal Cipher: aes-xts-plain64 Tokens: 0: clevis Keyslot: 1", "cryptsetup token remove --token-id 0 /dev/sda2", "luksmeta wipe -d /dev/sda2 -s 1", "cryptsetup luksKillSlot /dev/sda2 1", "part /boot --fstype=\"xfs\" --ondisk=vda --size=256 part / --fstype=\"xfs\" --ondisk=vda --grow --encrypted --passphrase=temppass", "part /boot --fstype=\"xfs\" --ondisk=vda --size=256 part / --fstype=\"xfs\" --ondisk=vda --size=2048 --encrypted --passphrase=temppass part /var --fstype=\"xfs\" --ondisk=vda --size=1024 --encrypted --passphrase=temppass part /tmp --fstype=\"xfs\" --ondisk=vda --size=1024 --encrypted --passphrase=temppass part /home --fstype=\"xfs\" --ondisk=vda --size=2048 --grow --encrypted --passphrase=temppass part /var/log --fstype=\"xfs\" --ondisk=vda --size=1024 --encrypted --passphrase=temppass part /var/log/audit --fstype=\"xfs\" --ondisk=vda --size=1024 --encrypted --passphrase=temppass", "%packages clevis-dracut clevis-luks clevis-systemd %end", "%post clevis luks bind -y -k - -d /dev/vda2 tang '{\"url\":\"http://tang.srv\"}' <<< \"temppass\" cryptsetup luksRemoveKey /dev/vda2 <<< \"temppass\" dracut -fv --regenerate-all %end", "%post curl -sfg http://tang.srv/adv -o adv.jws clevis luks bind -f -k - -d /dev/vda2 tang '{\"url\":\"http://tang.srv\",\"adv\":\"adv.jws\"}' <<< \"temppass\" cryptsetup luksRemoveKey /dev/vda2 <<< \"temppass\" dracut -fv --regenerate-all %end", "dnf install clevis-udisks2", "clevis luks bind -d /dev/sdb1 tang '{\"url\":\" http://tang.srv \"}'", "clevis luks unlock -d /dev/sdb1", "clevis luks bind -d /dev/sda1 sss '{\"t\":1,\"pins\":{\"tang\":[{\"url\":\" http://tang1.srv \"},{\"url\":\" http://tang2.srv \"}]}}'", "{ \"t\":1, \"pins\":{ \"tang\":[ { \"url\":\"http://tang1.srv\" }, { \"url\":\"http://tang2.srv\" } ] } }", "clevis luks bind -d /dev/sda1 sss '{\"t\":2,\"pins\":{\"tang\":[{\"url\":\" 
http://tang1.srv \"}], \"tpm2\": {\"pcr_ids\":\"0,7\"}}}'", "{ \"t\":2, \"pins\":{ \"tang\":[ { \"url\":\"http://tang1.srv\" } ], \"tpm2\":{ \"pcr_ids\":\"0,7\" } } }", "podman pull registry.redhat.io/rhel9/tang", "podman run -d -p 7500:7500 -v tang-keys:/var/db/tang --name tang registry.redhat.io/rhel9/tang", "podman run --rm -v tang-keys:/var/db/tang registry.redhat.io/rhel9/tang tangd-rotate-keys -v -d /var/db/tang Rotated key 'rZAMKAseaXBe0rcKXL1hCCIq-DY.jwk' -> .'rZAMKAseaXBe0rcKXL1hCCIq-DY.jwk' Rotated key 'x1AIpc6WmnCU-CabD8_4q18vDuw.jwk' -> .'x1AIpc6WmnCU-CabD8_4q18vDuw.jwk' Created new key GrMMX_WfdqomIU_4RyjpcdlXb0E.jwk Created new key _dTTfn17sZZqVAp80u3ygFDHtjk.jwk Keys rotated successfully.", "echo test | clevis encrypt tang '{\"url\":\"http://localhost:7500\"}' | clevis decrypt The advertisement contains the following signing keys: x1AIpc6WmnCU-CabD8_4q18vDuw Do you wish to trust these keys? [ynYN] y test", "--- - name: Deploy a Tang server hosts: tang.server.example.com tasks: - name: Install and configure periodic key rotation ansible.builtin.include_role: name: rhel-system-roles.nbde_server vars: nbde_server_rotate_keys: yes nbde_server_manage_firewall: true nbde_server_manage_selinux: true", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "ansible managed-node-01.example.com -m command -a 'echo test | clevis encrypt tang '{\"url\":\" <tang.server.example.com> \"}' -y | clevis decrypt' test", "--- - name: Configure clients for unlocking of encrypted volumes by Tang servers hosts: managed-node-01.example.com tasks: - name: Create NBDE client bindings ansible.builtin.include_role: name: rhel-system-roles.nbde_client vars: nbde_client_bindings: - device: /dev/rhel/root encryption_key_src: /etc/luks/keyfile nbde_client_early_boot: true state: present servers: - http://server1.example.com - http://server2.example.com - device: /dev/rhel/swap encryption_key_src: /etc/luks/keyfile servers: - http://server1.example.com - http://server2.example.com", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "ansible managed-node-01.example.com -m command -a 'clevis luks list -d /dev/rhel/root' 1: tang '{\"url\":\" <http://server1.example.com/> \"}' 2: tang '{\"url\":\" <http://server2.example.com/> \"}'", "ansible managed-node-01.example.com -m command -a 'lsinitrd | grep clevis-luks' lrwxrwxrwx 1 root root 48 Jan 4 02:56 etc/systemd/system/cryptsetup.target.wants/clevis-luks-askpass.path -> /usr/lib/systemd/system/clevis-luks-askpass.path ...", "clients: managed-node-01.example.com: ip_v4: 192.0.2.1 gateway_v4: 192.0.2.254 netmask_v4: 255.255.255.0 interface: enp1s0 managed-node-02.example.com: ip_v4: 192.0.2.2 gateway_v4: 192.0.2.254 netmask_v4: 255.255.255.0 interface: enp1s0", "- name: Configure clients for unlocking of encrypted volumes by Tang servers hosts: managed-node-01.example.com,managed-node-02.example.com vars_files: - ~/static-ip-settings-clients.yml tasks: - name: Create NBDE client bindings ansible.builtin.include_role: name: rhel-system-roles.network vars: nbde_client_bindings: - device: /dev/rhel/root encryption_key_src: /etc/luks/keyfile servers: - http://server1.example.com - http://server2.example.com - device: /dev/rhel/swap encryption_key_src: /etc/luks/keyfile servers: - http://server1.example.com - http://server2.example.com - name: Configure a Clevis client with static IP address during early boot ansible.builtin.include_role: name: rhel-system-roles.bootloader vars: bootloader_settings: - 
kernel: ALL options: - name: ip value: \"{{ clients[inventory_hostname]['ip_v4'] }}::{{ clients[inventory_hostname]['gateway_v4'] }}:{{ clients[inventory_hostname]['netmask_v4'] }}::{{ clients[inventory_hostname]['interface'] }}:none\"", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "# service auditd start", "# systemctl enable auditd", "auditctl -w /etc/ssh/sshd_config -p warx -k sshd_config", "USD cat /etc/ssh/sshd_config", "type=SYSCALL msg=audit(1364481363.243:24287): arch=c000003e syscall=2 success=no exit=-13 a0=7fffd19c5592 a1=0 a2=7fffd19c4b50 a3=a items=1 ppid=2686 pid=3538 auid=1000 uid=1000 gid=1000 euid=1000 suid=1000 fsuid=1000 egid=1000 sgid=1000 fsgid=1000 tty=pts0 ses=1 comm=\"cat\" exe=\"/bin/cat\" subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 key=\"sshd_config\" type=CWD msg=audit(1364481363.243:24287): cwd=\"/home/shadowman\" type=PATH msg=audit(1364481363.243:24287): item=0 name=\"/etc/ssh/sshd_config\" inode=409248 dev=fd:00 mode=0100600 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:etc_t:s0 nametype=NORMAL cap_fp=none cap_fi=none cap_fe=0 cap_fver=0 type=PROCTITLE msg=audit(1364481363.243:24287) : proctitle=636174002F6574632F7373682F737368645F636F6E666967", "# ausearch --interpret --exit -13", "# find / -inum 409248 -print /etc/ssh/sshd_config", "auditctl -w /etc/passwd -p wa -k passwd_changes", "auditctl -w /etc/selinux/ -p wa -k selinux_changes", "auditctl -a always,exit -F arch=b64 -S adjtimex -S settimeofday -k time_change", "auditctl -a always,exit -S unlink -S unlinkat -S rename -S renameat -F auid>=1000 -F auid!=4294967295 -k delete", "auditctl -a always,exit -F exe=/bin/id -F arch=b64 -S execve -k execution_bin_id", "auditctl -R /usr/share/audit/sample-rules/30-stig.rules", "cd /usr/share/audit/sample-rules/ cp 10-base-config.rules 30-stig.rules 31-privileged.rules 99-finalize.rules /etc/audit/rules.d/ augenrules --load", "augenrules --load /sbin/augenrules: No change No rules enabled 1 failure 1 pid 742 rate_limit 0", "cp -f /usr/lib/systemd/system/auditd.service /etc/systemd/system/", "vi /etc/systemd/system/auditd.service", "#ExecStartPost=-/sbin/augenrules --load ExecStartPost=-/sbin/auditctl -R /etc/audit/audit.rules", "systemctl daemon-reload", "service auditd restart", "cp /usr/share/audit/sample-rules/44-installers.rules /etc/audit/rules.d/", "augenrules --load", "auditctl -l -p x-w /usr/bin/dnf-3 -k software-installer -p x-w /usr/bin/yum -k software-installer -p x-w /usr/bin/pip -k software-installer -p x-w /usr/bin/npm -k software-installer -p x-w /usr/bin/cpan -k software-installer -p x-w /usr/bin/gem -k software-installer -p x-w /usr/bin/luarocks -k software-installer", "dnf reinstall -y vim-enhanced", "ausearch -ts recent -k software-installer ---- time->Thu Dec 16 10:33:46 2021 type=PROCTITLE msg=audit(1639668826.074:298): proctitle=2F7573722F6C6962657865632F706C6174666F726D2D707974686F6E002F7573722F62696E2F646E66007265696E7374616C6C002D790076696D2D656E68616E636564 type=PATH msg=audit(1639668826.074:298): item=2 name=\"/lib64/ld-linux-x86-64.so.2\" inode=10092 dev=fd:01 mode=0100755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:ld_so_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 type=PATH msg=audit(1639668826.074:298): item=1 name=\"/usr/libexec/platform-python\" inode=4618433 dev=fd:01 mode=0100755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:bin_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 type=PATH msg=audit(1639668826.074:298): 
item=0 name=\"/usr/bin/dnf\" inode=6886099 dev=fd:01 mode=0100755 ouid=0 ogid=0 rdev=00:00 obj=system_u:object_r:rpm_exec_t:s0 nametype=NORMAL cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0 type=CWD msg=audit(1639668826.074:298): cwd=\"/root\" type=EXECVE msg=audit(1639668826.074:298): argc=5 a0=\"/usr/libexec/platform-python\" a1=\"/usr/bin/dnf\" a2=\"reinstall\" a3=\"-y\" a4=\"vim-enhanced\" type=SYSCALL msg=audit(1639668826.074:298): arch=c000003e syscall=59 success=yes exit=0 a0=55c437f22b20 a1=55c437f2c9d0 a2=55c437f2aeb0 a3=8 items=3 ppid=5256 pid=5375 auid=0 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=pts0 ses=3 comm=\"dnf\" exe=\"/usr/libexec/platform-python3.6\" subj=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023 key=\"software-installer\"", "ausearch -m USER_LOGIN -ts ' 12/02/2020 ' ' 18:00:00 ' -sv no time->Mon Nov 22 07:33:22 2021 type=USER_LOGIN msg=audit(1637584402.416:92): pid=1939 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:sshd_t:s0-s0:c0.c1023 msg='op=login acct=\"(unknown)\" exe=\"/usr/sbin/sshd\" hostname=? addr=10.37.128.108 terminal=ssh res=failed'", "ausearch --raw | aulast --stdin root ssh 10.37.128.108 Mon Nov 22 07:33 - 07:33 (00:00) root ssh 10.37.128.108 Mon Nov 22 07:33 - 07:33 (00:00) root ssh 10.22.16.106 Mon Nov 22 07:40 - 07:40 (00:00) reboot system boot 4.18.0-348.6.el8 Mon Nov 22 07:33", "aureport --login -i Login Report ============================================ date time auid host term exe success event ============================================ 1. 11/16/2021 13:11:30 root 10.40.192.190 ssh /usr/sbin/sshd yes 6920 2. 11/16/2021 13:11:31 root 10.40.192.190 ssh /usr/sbin/sshd yes 6925 3. 11/16/2021 13:11:31 root 10.40.192.190 ssh /usr/sbin/sshd yes 6930 4. 11/16/2021 13:11:31 root 10.40.192.190 ssh /usr/sbin/sshd yes 6935 5. 11/16/2021 13:11:33 root 10.40.192.190 ssh /usr/sbin/sshd yes 6940 6. 11/16/2021 13:11:33 root 10.40.192.190 /dev/pts/0 /usr/sbin/sshd yes 6945", "dnf install fapolicyd", "vi /etc/fapolicyd/fapolicyd.conf", "permissive = 1", "systemctl enable --now fapolicyd", "auditctl -w /etc/fapolicyd/ -p wa -k fapolicyd_changes service try-restart auditd", "ausearch -ts recent -m fanotify", "systemctl restart fapolicyd", "systemctl status fapolicyd ● fapolicyd.service - File Access Policy Daemon Loaded: loaded (/usr/lib/systemd/system/fapolicyd.service; enabled; preset: disabled) Active: active (running) since Tue 2024-10-08 05:53:50 EDT; 11s ago ... 
Oct 08 05:53:51 machine1.example.com fapolicyd[4974]: Loading trust data from rpmdb backend Oct 08 05:53:51 machine1.example.com fapolicyd[4974]: Loading trust data from file backend Oct 08 05:53:51 machine1.example.com fapolicyd[4974]: Starting to listen for events", "cp /bin/ls /tmp /tmp/ls bash: /tmp/ls: Operation not permitted", "cp /bin/ls /tmp /tmp/ls bash: /tmp/ls: Operation not permitted", "fapolicyd-cli --file add /tmp/ls --trust-file myapp", "fapolicyd-cli --update", "/tmp/ls ls", "cp /bin/ls /tmp /tmp/ls bash: /tmp/ls: Operation not permitted", "systemctl stop fapolicyd", "fapolicyd --debug-deny 2> fapolicy.output & [1] 51341", "/tmp/ls bash: /tmp/ls: Operation not permitted", "fg fapolicyd --debug 2> fapolicy.output ^C", "kill 51341", "cat fapolicy.output | grep 'deny_audit' rule=13 dec=deny_audit perm=execute auid=0 pid=6855 exe=/usr/bin/bash : path=/tmp/ls ftype=application/x-executable trust=0", "ls /etc/fapolicyd/rules.d/ 10-languages.rules 40-bad-elf.rules 72-shell.rules 20-dracut.rules 41-shared-obj.rules 90-deny-execute.rules 21-updaters.rules 42-trusted-elf.rules 95-allow-open.rules 30-patterns.rules 70-trusted-lang.rules cat /etc/fapolicyd/rules.d/90-deny-execute.rules Deny execution for anything untrusted deny_audit perm=execute all : all", "touch /etc/fapolicyd/rules.d/80-myapps.rules vi /etc/fapolicyd/rules.d/80-myapps.rules", "allow perm=execute exe=/usr/bin/bash trust=1 : path=/tmp/ls ftype=application/x-executable trust=0", "allow perm=execute exe=/usr/bin/bash trust=1 : dir=/tmp/ trust=0", "sha256sum /tmp/ls 780b75c90b2d41ea41679fcb358c892b1251b68d1927c80fbc0d9d148b25e836 ls", "allow perm=execute exe=/usr/bin/bash trust=1 : sha256hash= 780b75c90b2d41ea41679fcb358c892b1251b68d1927c80fbc0d9d148b25e836", "fagenrules --check /usr/sbin/fagenrules: Rules have changed and should be updated fagenrules --load", "fapolicyd-cli --list 13. allow perm=execute exe=/usr/bin/bash trust=1 : path=/tmp/ls ftype=application/x-executable trust=0 14. deny_audit perm=execute all : all", "systemctl start fapolicyd", "/tmp/ls ls", "vi /etc/fapolicyd/fapolicyd.conf", "integrity = sha256", "systemctl restart fapolicyd", "cp /bin/more /bin/more.bak", "cat /bin/less > /bin/more", "su example.user /bin/more /etc/redhat-release bash: /bin/more: Operation not permitted", "mv -f /bin/more.bak /bin/more", "rpm -i application .rpm", "fapolicyd-cli --update", "systemctl status fapolicyd", "fapolicyd-cli --check-config Daemon config is OK fapolicyd-cli --check-trustdb /etc/selinux/targeted/contexts/files/file_contexts miscompares: size sha256 /etc/selinux/targeted/policy/policy.31 miscompares: size sha256", "fapolicyd-cli --list 9. allow perm=execute all : trust=1 10. allow perm=open all : ftype=%languages trust=1 11. deny_audit perm=any all : ftype=%languages 12. allow perm=any all : ftype=text/x-shellscript 13. 
deny_audit perm=execute all : all", "systemctl stop fapolicyd", "fapolicyd --debug", "fapolicyd --debug 2> fapolicy.output", "fapolicyd --debug-deny", "fapolicyd --debug-deny --permissive", "systemctl stop fapolicyd fapolicyd-cli --delete-db", "fapolicyd-cli --dump-db", "rm -f /var/run/fapolicyd/fapolicyd.fifo", "--- - name: Configuring fapolicyd hosts: managed-node-01.example.com tasks: - name: Allow only executables installed from RPM database and specific files ansible.builtin.include_role: name: rhel-system-roles.fapolicyd vars: fapolicyd_setup_permissive: false fapolicyd_setup_integrity: sha256 fapolicyd_setup_trust: rpmdb,file fapolicyd_add_trusted_file: - <path_to_allowed_command> - <path_to_allowed_service>", "ansible-playbook ~/playbook.yml --syntax-check", "ansible-playbook ~/playbook.yml", "ansible managed-node-01.example.com -m command -a 'su -c \"/bin/not_authorized_application \" <user_name> ' bash: line 1: /bin/not_authorized_application: Operation not permitted non-zero return code", "dnf install usbguard", "usbguard generate-policy > /etc/usbguard/rules.conf", "systemctl enable --now usbguard", "systemctl status usbguard ● usbguard.service - USBGuard daemon Loaded: loaded (/usr/lib/systemd/system/usbguard.service; enabled; vendor preset: disabled) Active: active (running) since Thu 2019-11-07 09:44:07 CET; 3min 16s ago Docs: man:usbguard-daemon(8) Main PID: 6122 (usbguard-daemon) Tasks: 3 (limit: 11493) Memory: 1.2M CGroup: /system.slice/usbguard.service └─6122 /usr/sbin/usbguard-daemon -f -s -c /etc/usbguard/usbguard-daemon.conf Nov 07 09:44:06 localhost.localdomain systemd[1]: Starting USBGuard daemon Nov 07 09:44:07 localhost.localdomain systemd[1]: Started USBGuard daemon.", "usbguard list-devices 4: allow id 1d6b:0002 serial \"0000:02:00.0\" name \"xHCI Host Controller\" hash", "usbguard list-devices 1: allow id 1d6b:0002 serial \"0000:00:06.7\" name \"EHCI Host Controller\" hash \"JDOb0BiktYs2ct3mSQKopnOOV2h9MGYADwhT+oUtF2s=\" parent-hash \"4PHGcaDKWtPjKDwYpIRG722cB9SlGz9l9Iea93+Gt9c=\" via-port \"usb1\" with-interface 09:00:00 6: block id 1b1c:1ab1 serial \"000024937962\" name \"Voyager\" hash \"CrXgiaWIf2bZAU+5WkzOE7y0rdSO82XMzubn7HDb95Q=\" parent-hash \"JDOb0BiktYs2ct3mSQKopnOOV2h9MGYADwhT+oUtF2s=\" via-port \"1-3\" with-interface 08:06:50", "usbguard allow-device <6>", "usbguard reject-device <6>", "usbguard block-device <6>", "semanage boolean -l | grep usbguard usbguard_daemon_write_conf (off , off) Allow usbguard to daemon write conf usbguard_daemon_write_rules (on , on) Allow usbguard to daemon write rules", "semanage boolean -m --on usbguard_daemon_write_rules", "usbguard list-devices 1: allow id 1d6b:0002 serial \"0000:00:06.7\" name \"EHCI Host Controller\" hash \"JDOb0BiktYs2ct3mSQKopnOOV2h9MGYADwhT+oUtF2s=\" parent-hash \"4PHGcaDKWtPjKDwYpIRG722cB9SlGz9l9Iea93+Gt9c=\" via-port \"usb1\" with-interface 09:00:00 6 : block id 1b1c:1ab1 serial \"000024937962\" name \"Voyager\" hash \"CrXgiaWIf2bZAU+5WkzOE7y0rdSO82XMzubn7HDb95Q=\" parent-hash \"JDOb0BiktYs2ct3mSQKopnOOV2h9MGYADwhT+oUtF2s=\" via-port \"1-3\" with-interface 08:06:50", "usbguard allow-device 6 -p", "usbguard reject-device 6 -p", "usbguard block-device 6 -p", "usbguard list-rules", "usbguard generate-policy --no-hashes > ./rules.conf", "vi ./rules.conf", "allow with-interface equals { 08:*:* }", "install -m 0600 -o root -g root rules.conf /etc/usbguard/rules.conf", "systemctl restart usbguard", "usbguard list-rules 4: allow with-interface 08:*:*", "usbguard generate-policy --no-hashes > ./ 
policy.conf", "vi ./ policy.conf allow id 04f2:0833 serial \"\" name \"USB Keyboard\" via-port \"7-2\" with-interface { 03:01:01 03:00:00 } with-connect-type \"unknown\"", "grep \" USB Keyboard \" ./ policy.conf > ./ 10keyboards.conf", "install -m 0600 -o root -g root 10keyboards.conf /etc/usbguard/rules.d/ 10keyboards.conf", "grep -v \" USB Keyboard \" ./policy.conf > ./rules.conf", "install -m 0600 -o root -g root rules.conf /etc/usbguard/rules.conf", "systemctl restart usbguard", "usbguard list-rules 15: allow id 04f2:0833 serial \"\" name \"USB Keyboard\" hash \"kxM/iddRe/WSCocgiuQlVs6Dn0VEza7KiHoDeTz0fyg=\" parent-hash \"2i6ZBJfTl5BakXF7Gba84/Cp1gslnNc1DM6vWQpie3s=\" via-port \"7-2\" with-interface { 03:01:01 03:00:00 } with-connect-type \"unknown\"", "cat /etc/usbguard/rules.conf /etc/usbguard/rules.d/*.conf", "vi /etc/usbguard/usbguard-daemon.conf", "IPCAllowGroups=wheel", "usbguard add-user joesec --devices ALL --policy modify,list --exceptions ALL", "systemctl restart usbguard", "vi /etc/usbguard/usbguard-daemon.conf", "AuditBackend=LinuxAudit", "systemctl restart usbguard", "ausearch -ts recent -m USER_DEVICE", "dnf install rsyslog-doc", "firefox /usr/share/doc/rsyslog/html/index.html &", "semanage port -a -t syslogd_port_t -p tcp 30514", "firewall-cmd --zone= <zone-name> --permanent --add-port=30514/tcp success firewall-cmd --reload", "Define templates before the rules that use them Per-Host templates for remote systems template(name=\"TmplAuthpriv\" type=\"list\") { constant(value=\"/var/log/remote/auth/\") property(name=\"hostname\") constant(value=\"/\") property(name=\"programname\" SecurePath=\"replace\") constant(value=\".log\") } template(name=\"TmplMsg\" type=\"list\") { constant(value=\"/var/log/remote/msg/\") property(name=\"hostname\") constant(value=\"/\") property(name=\"programname\" SecurePath=\"replace\") constant(value=\".log\") } Provides TCP syslog reception module(load=\"imtcp\") Adding this ruleset to process remote messages ruleset(name=\"remote1\"){ authpriv.* action(type=\"omfile\" DynaFile=\"TmplAuthpriv\") *.info;mail.none;authpriv.none;cron.none action(type=\"omfile\" DynaFile=\"TmplMsg\") } input(type=\"imtcp\" port=\"30514\" ruleset=\"remote1\")", "rsyslogd -N 1 rsyslogd: version 8.1911.0-2.el8, config validation run rsyslogd: End of config validation run. 
Bye.", "systemctl status rsyslog", "systemctl restart rsyslog", "systemctl enable rsyslog", "*.* action(type=\"omfwd\" queue.type=\"linkedlist\" queue.filename=\"example_fwd\" action.resumeRetryCount=\"-1\" queue.saveOnShutdown=\"on\" target=\"example.com\" port=\"30514\" protocol=\"tcp\" )", "systemctl restart rsyslog", "logger test", "cat /var/log/remote/msg/ hostname /root.log Feb 25 03:53:17 hostname root[6064]: test", "Set certificate files global( DefaultNetstreamDriverCAFile=\"/etc/pki/ca-trust/source/anchors/ca-cert.pem\" DefaultNetstreamDriverCertFile=\"/etc/pki/ca-trust/source/anchors/server-cert.pem\" DefaultNetstreamDriverKeyFile=\"/etc/pki/ca-trust/source/anchors/server-key.pem\" ) TCP listener module( load=\"imtcp\" PermittedPeer=[\"client1.example.com\", \"client2.example.com\"] StreamDriver.AuthMode=\"x509/name\" StreamDriver.Mode=\"1\" StreamDriver.Name=\"ossl\" ) Start up listener at port 514 input( type=\"imtcp\" port=\"514\" )", "input( type=\"imtcp\" Port=\"50515\" StreamDriver.Name=\" <driver> \" streamdriver.CAFile=\"/etc/rsyslog.d/ <ca1> .pem\" streamdriver.CertFile=\"/etc/rsyslog.d/ <server1-cert> .pem\" streamdriver.KeyFile=\"/etc/rsyslog.d/ <server1-key> .pem\" )", "rsyslogd -N 1 rsyslogd: version 8.1911.0-2.el8, config validation run (level 1) rsyslogd: End of config validation run. Bye.", "systemctl status rsyslog", "systemctl restart rsyslog", "systemctl enable rsyslog", "Set certificate files global( DefaultNetstreamDriverCAFile=\"/etc/pki/ca-trust/source/anchors/ca-cert.pem\" DefaultNetstreamDriverCertFile=\"/etc/pki/ca-trust/source/anchors/client-cert.pem\" DefaultNetstreamDriverKeyFile=\"/etc/pki/ca-trust/source/anchors/client-key.pem\" ) Set up the action for all messages *.* action( type=\"omfwd\" StreamDriver=\"ossl\" StreamDriverMode=\"1\" StreamDriverPermittedPeers=\"server.example.com\" StreamDriverAuthMode=\"x509/name\" target=\"server.example.com\" port=\"514\" protocol=\"tcp\" )", "local1.* action( type=\"omfwd\" StreamDriver=\"<driver>\" StreamDriverMode=\"1\" StreamDriverAuthMode=\"x509/certvalid\" streamDriver.CAFile=\"/etc/rsyslog.d/<ca1>.pem\" streamDriver.CertFile=\"/etc/rsyslog.d/<client1-cert>.pem\" streamDriver.KeyFile=\"/etc/rsyslog.d/<client1-key>.pem\" target=\"server.example.com\" port=\"514\" protocol=\"tcp\" )", "rsyslogd -N 1 rsyslogd: version 8.1911.0-2.el8, config validation run (level 1) rsyslogd: End of config validation run. 
Bye.", "systemctl status rsyslog", "systemctl restart rsyslog", "systemctl enable rsyslog", "logger test", "cat /var/log/remote/msg/ <hostname> /root.log Feb 25 03:53:17 <hostname> root[6064]: test", "semanage port -a -t syslogd_port_t -p udp portno", "firewall-cmd --zone= zone --permanent --add-port= portno /udp success firewall-cmd --reload", "firewall-cmd --reload", "Define templates before the rules that use them Per-Host templates for remote systems template(name=\"TmplAuthpriv\" type=\"list\") { constant(value=\"/var/log/remote/auth/\") property(name=\"hostname\") constant(value=\"/\") property(name=\"programname\" SecurePath=\"replace\") constant(value=\".log\") } template(name=\"TmplMsg\" type=\"list\") { constant(value=\"/var/log/remote/msg/\") property(name=\"hostname\") constant(value=\"/\") property(name=\"programname\" SecurePath=\"replace\") constant(value=\".log\") } Provides UDP syslog reception module(load=\"imudp\") This ruleset processes remote messages ruleset(name=\"remote1\"){ authpriv.* action(type=\"omfile\" DynaFile=\"TmplAuthpriv\") *.info;mail.none;authpriv.none;cron.none action(type=\"omfile\" DynaFile=\"TmplMsg\") } input(type=\"imudp\" port=\"514\" ruleset=\"remote1\")", "rsyslogd -N 1 rsyslogd: version 8.1911.0-2.el8, config validation run", "systemctl restart rsyslog", "systemctl enable rsyslog", "*.* action(type=\"omfwd\" queue.type=\"linkedlist\" queue.filename=\" example_fwd \" action.resumeRetryCount=\"-1\" queue.saveOnShutdown=\"on\" target=\" example.com \" port=\" portno \" protocol=\"udp\" )", "systemctl restart rsyslog", "systemctl enable rsyslog", "logger test", "cat /var/log/remote/msg/ hostname /root.log Feb 25 03:53:17 hostname root[6064]: test", "action(type=\"omfwd\" protocol=\"tcp\" RebindInterval=\"250\" target=\" example.com \" port=\"514\" ...) action(type=\"omfwd\" protocol=\"udp\" RebindInterval=\"250\" target=\" example.com \" port=\"514\" ...) action(type=\"omrelp\" RebindInterval=\"250\" target=\" example.com \" port=\"6514\" ...)", "module(load=\"omrelp\") *.* action(type=\"omrelp\" target=\"_target_IP_\" port=\"_target_port_\")", "systemctl restart rsyslog", "systemctl enable rsyslog", "ruleset(name=\"relp\"){ *.* action(type=\"omfile\" file=\"_log_path_\") } module(load=\"imrelp\") input(type=\"imrelp\" port=\"_target_port_\" ruleset=\"relp\")", "systemctl restart rsyslog", "systemctl enable rsyslog", "logger test", "cat /var/log/remote/msg/hostname/root.log Feb 25 03:53:17 hostname root[6064]: test", "ls /usr/lib64/rsyslog/{i,o}m *", "dnf install netconsole-service", "SYSLOGADDR= 192.0.2.1", "systemctl enable --now netconsole", "--- - name: Deploy the logging solution hosts: managed-node-01.example.com tasks: - name: Filter logs based on a specific value they contain ansible.builtin.include_role: name: rhel-system-roles.logging vars: logging_inputs: - name: files_input type: basics logging_outputs: - name: files_output0 type: files property: msg property_op: contains property_value: error path: /var/log/errors.log - name: files_output1 type: files property: msg property_op: \"!contains\" property_value: error path: /var/log/others.log logging_flows: - name: flow0 inputs: [files_input] outputs: [files_output0, files_output1]", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "rsyslogd -N 1 rsyslogd: version 8.1911.0-6.el8, config validation run rsyslogd: End of config validation run. 
Bye.", "logger error", "cat /var/log/errors.log Aug 5 13:48:31 hostname root[6778]: error", "--- - name: Deploy the logging solution hosts: managed-node-01.example.com tasks: - name: Configure the server to receive remote input ansible.builtin.include_role: name: rhel-system-roles.logging vars: logging_inputs: - name: remote_udp_input type: remote udp_ports: [ 601 ] - name: remote_tcp_input type: remote tcp_ports: [ 601 ] logging_outputs: - name: remote_files_output type: remote_files logging_flows: - name: flow_0 inputs: [remote_udp_input, remote_tcp_input] outputs: [remote_files_output] - name: Deploy the logging solution hosts: managed-node-02.example.com tasks: - name: Configure the server to output the logs to local files in directories named by remote host names ansible.builtin.include_role: name: rhel-system-roles.logging vars: logging_inputs: - name: basic_input type: basics logging_outputs: - name: forward_output0 type: forwards severity: info target: <host1.example.com> udp_port: 601 - name: forward_output1 type: forwards facility: mail target: <host1.example.com> tcp_port: 601 logging_flows: - name: flows0 inputs: [basic_input] outputs: [forward_output0, forward_output1] [basic_input] [forward_output0, forward_output1]", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "rsyslogd -N 1 rsyslogd: version 8.1911.0-6.el8, config validation run (level 1), master config /etc/rsyslog.conf rsyslogd: End of config validation run. Bye.", "logger test", "cat /var/log/ <host2.example.com> /messages Aug 5 13:48:31 <host2.example.com> root[6778]: test", "--- - name: Configure remote logging solution using TLS for secure transfer of logs hosts: managed-node-01.example.com tasks: - name: Deploying files input and forwards output with certs ansible.builtin.include_role: name: rhel-system-roles.logging vars: logging_certificates: - name: logging_cert dns: ['localhost', 'www.example.com'] ca: ipa logging_pki_files: - ca_cert: /local/path/to/ca_cert.pem cert: /local/path/to/logging_cert.pem private_key: /local/path/to/logging_cert.pem logging_inputs: - name: input_name type: files input_log_path: /var/log/containers/*.log logging_outputs: - name: output_name type: forwards target: your_target_host tcp_port: 514 tls: true pki_authmode: x509/name permitted_server: 'server.example.com' logging_flows: - name: flow_name inputs: [input_name] outputs: [output_name]", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "--- - name: Configure remote logging solution using TLS for secure transfer of logs hosts: managed-node-01.example.com tasks: - name: Deploying remote input and remote_files output with certs ansible.builtin.include_role: name: rhel-system-roles.logging vars: logging_certificates: - name: logging_cert dns: ['localhost', 'www.example.com'] ca: ipa logging_pki_files: - ca_cert: /local/path/to/ca_cert.pem cert: /local/path/to/logging_cert.pem private_key: /local/path/to/logging_cert.pem logging_inputs: - name: input_name type: remote tcp_ports: 514 tls: true permitted_clients: ['clients.example.com'] logging_outputs: - name: output_name type: remote_files remote_log_path: /var/log/remote/%FROMHOST%/%PROGRAMNAME:::secpath-replace%.log async_writing: true client_count: 20 io_buffer_size: 8192 logging_flows: - name: flow_name inputs: [input_name] outputs: [output_name]", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "--- - name: Configure client-side of the remote logging solution using RELP 
hosts: managed-node-01.example.com tasks: - name: Deploy basic input and RELP output ansible.builtin.include_role: name: rhel-system-roles.logging vars: logging_inputs: - name: basic_input type: basics logging_outputs: - name: relp_client type: relp target: logging.server.com port: 20514 tls: true ca_cert: /etc/pki/tls/certs/ca.pem cert: /etc/pki/tls/certs/client-cert.pem private_key: /etc/pki/tls/private/client-key.pem pki_authmode: name permitted_servers: - '*.server.example.com' logging_flows: - name: example_flow inputs: [basic_input] outputs: [relp_client]", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml", "--- - name: Configure server-side of the remote logging solution using RELP hosts: managed-node-01.example.com tasks: - name: Deploying remote input and remote_files output ansible.builtin.include_role: name: rhel-system-roles.logging vars: logging_inputs: - name: relp_server type: relp port: 20514 tls: true ca_cert: /etc/pki/tls/certs/ca.pem cert: /etc/pki/tls/certs/server-cert.pem private_key: /etc/pki/tls/private/server-key.pem pki_authmode: name permitted_clients: - '*example.client.com' logging_outputs: - name: remote_files_output type: remote_files logging_flows: - name: example_flow inputs: relp_server outputs: remote_files_output", "ansible-playbook --syntax-check ~/playbook.yml", "ansible-playbook ~/playbook.yml" ]
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/9/html-single/security_hardening/index
Chapter 8. Triggering and modifying builds
Chapter 8. Triggering and modifying builds The following sections outline how to trigger builds and modify builds using build hooks. 8.1. Build triggers When defining a BuildConfig , you can define triggers to control the circumstances in which the BuildConfig should be run. The following build triggers are available: Webhook Image change Configuration change 8.1.1. Webhook triggers Webhook triggers allow you to trigger a new build by sending a request to the OpenShift Container Platform API endpoint. You can define these triggers using GitHub, GitLab, Bitbucket, or Generic webhooks. Currently, OpenShift Container Platform webhooks only support the analogous versions of the push event for each of the Git-based Source Code Management (SCM) systems. All other event types are ignored. When the push events are processed, the OpenShift Container Platform control plane host confirms if the branch reference inside the event matches the branch reference in the corresponding BuildConfig . If so, it then checks out the exact commit reference noted in the webhook event on the OpenShift Container Platform build. If they do not match, no build is triggered. Note oc new-app and oc new-build create GitHub and Generic webhook triggers automatically, but any other needed webhook triggers must be added manually. You can manually add triggers by setting triggers. For all webhooks, you must define a secret with a key named WebHookSecretKey and the value being the value to be supplied when invoking the webhook. The webhook definition must then reference the secret. The secret ensures the uniqueness of the URL, preventing others from triggering the build. The value of the key is compared to the secret provided during the webhook invocation. For example here is a GitHub webhook with a reference to a secret named mysecret : type: "GitHub" github: secretReference: name: "mysecret" The secret is then defined as follows. Note that the value of the secret is base64 encoded as is required for any data field of a Secret object. - kind: Secret apiVersion: v1 metadata: name: mysecret creationTimestamp: data: WebHookSecretKey: c2VjcmV0dmFsdWUx 8.1.1.1. Using GitHub webhooks GitHub webhooks handle the call made by GitHub when a repository is updated. When defining the trigger, you must specify a secret, which is part of the URL you supply to GitHub when configuring the webhook. Example GitHub webhook definition: type: "GitHub" github: secretReference: name: "mysecret" Note The secret used in the webhook trigger configuration is not the same as secret field you encounter when configuring webhook in GitHub UI. The former is to make the webhook URL unique and hard to predict, the latter is an optional string field used to create HMAC hex digest of the body, which is sent as an X-Hub-Signature header. The payload URL is returned as the GitHub Webhook URL by the oc describe command (see Displaying Webhook URLs), and is structured as follows: Example output https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/github Prerequisites Create a BuildConfig from a GitHub repository. 
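If you prefer not to write the Secret manifest by hand, an equivalent object can be created from the command line. The following is a minimal sketch: the secret name mysecret and the key value secretvalue1 (the decoded form of the base64 value shown above) are only illustrative, and the oc client performs the base64 encoding for you:

USD oc create secret generic mysecret --from-literal=WebHookSecretKey=secretvalue1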
Procedure To configure a GitHub Webhook: After creating a BuildConfig from a GitHub repository, run: USD oc describe bc/<name-of-your-BuildConfig> This generates a webhook GitHub URL that looks like: Example output <https://api.starter-us-east-1.openshift.com:443/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/github Cut and paste this URL into GitHub, from the GitHub web console. In your GitHub repository, select Add Webhook from Settings Webhooks . Paste the URL output into the Payload URL field. Change the Content Type from GitHub's default application/x-www-form-urlencoded to application/json . Click Add webhook . You should see a message from GitHub stating that your webhook was successfully configured. Now, when you push a change to your GitHub repository, a new build automatically starts, and upon a successful build a new deployment starts. Note Gogs supports the same webhook payload format as GitHub. Therefore, if you are using a Gogs server, you can define a GitHub webhook trigger on your BuildConfig and trigger it by your Gogs server as well. Given a file containing a valid JSON payload, such as payload.json , you can manually trigger the webhook with curl : USD curl -H "X-GitHub-Event: push" -H "Content-Type: application/json" -k -X POST --data-binary @payload.json https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/github The -k argument is only necessary if your API server does not have a properly signed certificate. Note The build will only be triggered if the ref value from GitHub webhook event matches the ref value specified in the source.git field of the BuildConfig resource. Additional resources Gogs 8.1.1.2. Using GitLab webhooks GitLab webhooks handle the call made by GitLab when a repository is updated. As with the GitHub triggers, you must specify a secret. The following example is a trigger definition YAML within the BuildConfig : type: "GitLab" gitlab: secretReference: name: "mysecret" The payload URL is returned as the GitLab Webhook URL by the oc describe command, and is structured as follows: Example output https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/gitlab Procedure To configure a GitLab Webhook: Describe the BuildConfig to get the webhook URL: USD oc describe bc <name> Copy the webhook URL, replacing <secret> with your secret value. Follow the GitLab setup instructions to paste the webhook URL into your GitLab repository settings. Given a file containing a valid JSON payload, such as payload.json , you can manually trigger the webhook with curl : USD curl -H "X-GitLab-Event: Push Hook" -H "Content-Type: application/json" -k -X POST --data-binary @payload.json https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/gitlab The -k argument is only necessary if your API server does not have a properly signed certificate. 8.1.1.3. Using Bitbucket webhooks Bitbucket webhooks handle the call made by Bitbucket when a repository is updated. Similar to the triggers, you must specify a secret. 
The following example is a trigger definition YAML within the BuildConfig : type: "Bitbucket" bitbucket: secretReference: name: "mysecret" The payload URL is returned as the Bitbucket Webhook URL by the oc describe command, and is structured as follows: Example output https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/bitbucket Procedure To configure a Bitbucket Webhook: Describe the 'BuildConfig' to get the webhook URL: USD oc describe bc <name> Copy the webhook URL, replacing <secret> with your secret value. Follow the Bitbucket setup instructions to paste the webhook URL into your Bitbucket repository settings. Given a file containing a valid JSON payload, such as payload.json , you can manually trigger the webhook with curl : USD curl -H "X-Event-Key: repo:push" -H "Content-Type: application/json" -k -X POST --data-binary @payload.json https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/bitbucket The -k argument is only necessary if your API server does not have a properly signed certificate. 8.1.1.4. Using generic webhooks Generic webhooks are invoked from any system capable of making a web request. As with the other webhooks, you must specify a secret, which is part of the URL that the caller must use to trigger the build. The secret ensures the uniqueness of the URL, preventing others from triggering the build. The following is an example trigger definition YAML within the BuildConfig : type: "Generic" generic: secretReference: name: "mysecret" allowEnv: true 1 1 Set to true to allow a generic webhook to pass in environment variables. Procedure To set up the caller, supply the calling system with the URL of the generic webhook endpoint for your build: Example output https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/generic The caller must invoke the webhook as a POST operation. To invoke the webhook manually you can use curl : USD curl -X POST -k https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/generic The HTTP verb must be set to POST . The insecure -k flag is specified to ignore certificate validation. This second flag is not necessary if your cluster has properly signed certificates. The endpoint can accept an optional payload with the following format: git: uri: "<url to git repository>" ref: "<optional git reference>" commit: "<commit hash identifying a specific git commit>" author: name: "<author name>" email: "<author e-mail>" committer: name: "<committer name>" email: "<committer e-mail>" message: "<commit message>" env: 1 - name: "<variable name>" value: "<variable value>" 1 Similar to the BuildConfig environment variables, the environment variables defined here are made available to your build. If these variables collide with the BuildConfig environment variables, these variables take precedence. By default, environment variables passed by webhook are ignored. Set the allowEnv field to true on the webhook definition to enable this behavior. 
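As a concrete illustration of this format, a minimal payload might look like the following sketch. The repository URL, reference, and environment variable values are placeholders rather than values taken from this documentation, and the env entry is only honored when allowEnv is set to true as described above:

git:
  uri: "https://github.com/example/app.git"
  ref: "main"
env:
  - name: "DEPLOY_ENV"
    value: "staging"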
To pass this payload using curl , define it in a file named payload_file.yaml and run: USD curl -H "Content-Type: application/yaml" --data-binary @payload_file.yaml -X POST -k https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/generic The arguments are the same as the example with the addition of a header and a payload. The -H argument sets the Content-Type header to application/yaml or application/json depending on your payload format. The --data-binary argument is used to send a binary payload with newlines intact with the POST request. Note OpenShift Container Platform permits builds to be triggered by the generic webhook even if an invalid request payload is presented, for example, invalid content type, unparsable or invalid content, and so on. This behavior is maintained for backwards compatibility. If an invalid request payload is presented, OpenShift Container Platform returns a warning in JSON format as part of its HTTP 200 OK response. 8.1.1.5. Displaying webhook URLs You can use the following command to display webhook URLs associated with a build configuration. If the command does not display any webhook URLs, then no webhook trigger is defined for that build configuration. Procedure To display any webhook URLs associated with a BuildConfig , run: USD oc describe bc <name> 8.1.2. Using image change triggers As a developer, you can configure your build to run automatically every time a base image changes. You can use image change triggers to automatically invoke your build when a new version of an upstream image is available. For example, if a build is based on a RHEL image, you can trigger that build to run any time the RHEL image changes. As a result, the application image is always running on the latest RHEL base image. Note Image streams that point to container images in v1 container registries only trigger a build once when the image stream tag becomes available and not on subsequent image updates. This is due to the lack of uniquely identifiable images in v1 container registries. Procedure Define an ImageStream that points to the upstream image you want to use as a trigger: kind: "ImageStream" apiVersion: "v1" metadata: name: "ruby-20-centos7" This defines the image stream that is tied to a container image repository located at <system-registry> / <namespace> /ruby-20-centos7 . The <system-registry> is defined as a service with the name docker-registry running in OpenShift Container Platform. If an image stream is the base image for the build, set the from field in the build strategy to point to the ImageStream : strategy: sourceStrategy: from: kind: "ImageStreamTag" name: "ruby-20-centos7:latest" In this case, the sourceStrategy definition is consuming the latest tag of the image stream named ruby-20-centos7 located within this namespace. Define a build with one or more triggers that point to ImageStreams : type: "ImageChange" 1 imageChange: {} type: "ImageChange" 2 imageChange: from: kind: "ImageStreamTag" name: "custom-image:latest" 1 An image change trigger that monitors the ImageStream and Tag as defined by the build strategy's from field. The imageChange object here must be empty. 2 An image change trigger that monitors an arbitrary image stream. The imageChange part, in this case, must include a from field that references the ImageStreamTag to monitor. 
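Taken together, these pieces might be combined in a single BuildConfig as in the following sketch. The metadata name, Git repository URI, and output image stream tag are illustrative assumptions rather than values from the examples above:

kind: "BuildConfig"
apiVersion: "build.openshift.io/v1"
metadata:
  name: "ruby-sample-build"
spec:
  source:
    git:
      uri: "https://github.com/example/ruby-hello-world.git"
  strategy:
    sourceStrategy:
      from:
        kind: "ImageStreamTag"
        name: "ruby-20-centos7:latest"
  output:
    to:
      kind: "ImageStreamTag"
      name: "ruby-hello-world:latest"
  triggers:
  - type: "ImageChange"
    imageChange: {}
  - type: "ImageChange"
    imageChange:
      from:
        kind: "ImageStreamTag"
        name: "custom-image:latest"

With a configuration of this shape, an update to either image stream tag starts a new build.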
When using an image change trigger for the strategy image stream, the generated build is supplied with an immutable docker tag that points to the latest image corresponding to that tag. This new image reference is used by the strategy when it executes for the build. For other image change triggers that do not reference the strategy image stream, a new build is started, but the build strategy is not updated with a unique image reference. Since this example has an image change trigger for the strategy, the resulting build is: strategy: sourceStrategy: from: kind: "DockerImage" name: "172.30.17.3:5001/mynamespace/ruby-20-centos7:<immutableid>" This ensures that the triggered build uses the new image that was just pushed to the repository, and the build can be re-run any time with the same inputs. You can pause an image change trigger to allow multiple changes on the referenced image stream before a build is started. You can also set the paused attribute to true when initially adding an ImageChangeTrigger to a BuildConfig to prevent a build from being immediately triggered. type: "ImageChange" imageChange: from: kind: "ImageStreamTag" name: "custom-image:latest" paused: true In addition to setting the image field for all Strategy types, for custom builds, the OPENSHIFT_CUSTOM_BUILD_BASE_IMAGE environment variable is checked. If it does not exist, then it is created with the immutable image reference. If it does exist, then it is updated with the immutable image reference. If a build is triggered due to a webhook trigger or manual request, the build that is created uses the <immutableid> resolved from the ImageStream referenced by the Strategy . This ensures that builds are performed using consistent image tags for ease of reproduction. Additional resources v1 container registries 8.1.3. Identifying the image change trigger of a build As a developer, if you have image change triggers, you can identify which image change initiated the last build. This can be useful for debugging or troubleshooting builds. Example BuildConfig apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: bc-ict-example namespace: bc-ict-example-namespace spec: # ... triggers: - imageChange: from: kind: ImageStreamTag name: input:latest namespace: bc-ict-example-namespace - imageChange: from: kind: ImageStreamTag name: input2:latest namespace: bc-ict-example-namespace type: ImageChange status: imageChangeTriggers: - from: name: input:latest namespace: bc-ict-example-namespace lastTriggerTime: "2021-06-30T13:47:53Z" lastTriggeredImageID: image-registry.openshift-image-registry.svc:5000/bc-ict-example-namespace/input@sha256:0f88ffbeb9d25525720bfa3524cb1bf0908b7f791057cf1acfae917b11266a69 - from: name: input2:latest namespace: bc-ict-example-namespace lastTriggeredImageID: image-registry.openshift-image-registry.svc:5000/bc-ict-example-namespace/input2@sha256:0f88ffbeb9d25525720bfa3524cb2ce0908b7f791057cf1acfae917b11266a69 lastVersion: 1 Note This example omits elements that are not related to image change triggers. Prerequisites You have configured multiple image change triggers. These triggers have triggered one or more builds. Procedure Under buildConfig.status.imageChangeTriggers , compare the lastTriggerTime timestamps and identify the ImageChangeTriggerStatus element that has the latest timestamp. This ImageChangeTriggerStatus identifies the image change that started the most recent build; use its name and namespace to find the corresponding image change trigger in buildConfig.spec.triggers . Image change triggers In your build configuration, buildConfig.spec.triggers is an array of build trigger policies, BuildTriggerPolicy .
Each BuildTriggerPolicy has a type field and a set of pointer fields. Each pointer field corresponds to one of the allowed values for the type field. As such, you can set BuildTriggerPolicy to only one pointer field. For image change triggers, the value of type is ImageChange . Then, the imageChange field is the pointer to an ImageChangeTrigger object, which has the following fields: lastTriggeredImageID : This field, which is not shown in the example, is deprecated in OpenShift Container Platform 4.8 and will be ignored in a future release. It contains the resolved image reference for the ImageStreamTag when the last build was triggered from this BuildConfig . paused : You can use this field, which is not shown in the example, to temporarily disable this particular image change trigger. from : You use this field to reference the ImageStreamTag that drives this image change trigger. Its type is the core Kubernetes type, OwnerReference . The from field has the following fields of note: kind : For image change triggers, the only supported value is ImageStreamTag . namespace : You use this field to specify the namespace of the ImageStreamTag . name : You use this field to specify the ImageStreamTag . Image change trigger status In your build configuration, buildConfig.status.imageChangeTriggers is an array of ImageChangeTriggerStatus elements. Each ImageChangeTriggerStatus element includes the from , lastTriggeredImageID , and lastTriggerTime elements shown in the preceding example. The ImageChangeTriggerStatus that has the most recent lastTriggerTime triggered the most recent build. You use its name and namespace to identify the image change trigger in buildConfig.spec.triggers that triggered the build. The lastTriggerTime with the most recent timestamp signifies the ImageChangeTriggerStatus of the last build. This ImageChangeTriggerStatus has the same name and namespace as the image change trigger in buildConfig.spec.triggers that triggered the build. Additional resources v1 container registries 8.1.4. Configuration change triggers A configuration change trigger allows a build to be automatically invoked as soon as a new BuildConfig is created. The following is an example trigger definition YAML within the BuildConfig : type: "ConfigChange" Note Configuration change triggers currently only work when creating a new BuildConfig . In a future release, configuration change triggers will also be able to launch a build whenever a BuildConfig is updated. 8.1.4.1. Setting triggers manually Triggers can be added to and removed from build configurations with oc set triggers . Procedure To set a GitHub webhook trigger on a build configuration, use: USD oc set triggers bc <name> --from-github To set an image change trigger, use: USD oc set triggers bc <name> --from-image='<image>' To remove a trigger, add --remove : USD oc set triggers bc <name> --from-bitbucket --remove Note When a webhook trigger already exists, adding it again regenerates the webhook secret. For more information, consult the help documentation by running: USD oc set triggers --help 8.2. Build hooks Build hooks allow behavior to be injected into the build process. The postCommit field of a BuildConfig object runs commands inside a temporary container that is running the build output image. The hook is run immediately after the last layer of the image has been committed and before the image is pushed to a registry.
The current working directory is set to the image's WORKDIR , which is the default working directory of the container image. For most images, this is where the source code is located. The hook fails if the script or command returns a non-zero exit code or if starting the temporary container fails. When the hook fails it marks the build as failed and the image is not pushed to a registry. The reason for failing can be inspected by looking at the build logs. Build hooks can be used to run unit tests to verify the image before the build is marked complete and the image is made available in a registry. If all tests pass and the test runner returns with exit code 0 , the build is marked successful. In case of any test failure, the build is marked as failed. In all cases, the build log contains the output of the test runner, which can be used to identify failed tests. The postCommit hook is not only limited to running tests, but can be used for other commands as well. Since it runs in a temporary container, changes made by the hook do not persist, meaning that running the hook cannot affect the final image. This behavior allows for, among other uses, the installation and usage of test dependencies that are automatically discarded and are not present in the final image. 8.2.1. Configuring post commit build hooks There are different ways to configure the post build hook. All forms in the following examples are equivalent and run bundle exec rake test --verbose . Procedure Shell script: postCommit: script: "bundle exec rake test --verbose" The script value is a shell script to be run with /bin/sh -ic . Use this when a shell script is appropriate to execute the build hook. For example, for running unit tests as above. To control the image entry point, or if the image does not have /bin/sh , use command and/or args . Note The additional -i flag was introduced to improve the experience working with CentOS and RHEL images, and may be removed in a future release. Command as the image entry point: postCommit: command: ["/bin/bash", "-c", "bundle exec rake test --verbose"] In this form, command is the command to run, which overrides the image entry point in the exec form, as documented in the Dockerfile reference . This is needed if the image does not have /bin/sh , or if you do not want to use a shell. In all other cases, using script might be more convenient. Command with arguments: postCommit: command: ["bundle", "exec", "rake", "test"] args: ["--verbose"] This form is equivalent to appending the arguments to command . Note Providing both script and command simultaneously creates an invalid build hook. 8.2.2. Using the CLI to set post commit build hooks The oc set build-hook command can be used to set the build hook for a build configuration. Procedure To set a command as the post-commit build hook: USD oc set build-hook bc/mybc \ --post-commit \ --command \ -- bundle exec rake test --verbose To set a script as the post-commit build hook: USD oc set build-hook bc/mybc --post-commit --script="bundle exec rake test --verbose"
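For reference, the same hook can be expressed directly in the build configuration. In the following sketch the postCommit stanza is placed under spec , and the name mybc plus the omitted source, strategy, and output sections are assumed to match an existing BuildConfig :

apiVersion: build.openshift.io/v1
kind: BuildConfig
metadata:
  name: mybc
spec:
  # ... existing source, strategy, and output sections ...
  postCommit:
    script: "bundle exec rake test --verbose"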
[ "type: \"GitHub\" github: secretReference: name: \"mysecret\"", "- kind: Secret apiVersion: v1 metadata: name: mysecret creationTimestamp: data: WebHookSecretKey: c2VjcmV0dmFsdWUx", "type: \"GitHub\" github: secretReference: name: \"mysecret\"", "https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/github", "oc describe bc/<name-of-your-BuildConfig>", "<https://api.starter-us-east-1.openshift.com:443/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/github", "curl -H \"X-GitHub-Event: push\" -H \"Content-Type: application/json\" -k -X POST --data-binary @payload.json https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/github", "type: \"GitLab\" gitlab: secretReference: name: \"mysecret\"", "https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/gitlab", "oc describe bc <name>", "curl -H \"X-GitLab-Event: Push Hook\" -H \"Content-Type: application/json\" -k -X POST --data-binary @payload.json https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/gitlab", "type: \"Bitbucket\" bitbucket: secretReference: name: \"mysecret\"", "https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/bitbucket", "oc describe bc <name>", "curl -H \"X-Event-Key: repo:push\" -H \"Content-Type: application/json\" -k -X POST --data-binary @payload.json https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/bitbucket", "type: \"Generic\" generic: secretReference: name: \"mysecret\" allowEnv: true 1", "https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/generic", "curl -X POST -k https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/generic", "git: uri: \"<url to git repository>\" ref: \"<optional git reference>\" commit: \"<commit hash identifying a specific git commit>\" author: name: \"<author name>\" email: \"<author e-mail>\" committer: name: \"<committer name>\" email: \"<committer e-mail>\" message: \"<commit message>\" env: 1 - name: \"<variable name>\" value: \"<variable value>\"", "curl -H \"Content-Type: application/yaml\" --data-binary @payload_file.yaml -X POST -k https://<openshift_api_host:port>/apis/build.openshift.io/v1/namespaces/<namespace>/buildconfigs/<name>/webhooks/<secret>/generic", "oc describe bc <name>", "kind: \"ImageStream\" apiVersion: \"v1\" metadata: name: \"ruby-20-centos7\"", "strategy: sourceStrategy: from: kind: \"ImageStreamTag\" name: \"ruby-20-centos7:latest\"", "type: \"ImageChange\" 1 imageChange: {} type: \"ImageChange\" 2 imageChange: from: kind: \"ImageStreamTag\" name: \"custom-image:latest\"", "strategy: sourceStrategy: from: kind: \"DockerImage\" name: \"172.30.17.3:5001/mynamespace/ruby-20-centos7:<immutableid>\"", "type: \"ImageChange\" imageChange: from: kind: \"ImageStreamTag\" name: \"custom-image:latest\" paused: true", "apiVersion: build.openshift.io/v1 kind: BuildConfig metadata: name: bc-ict-example namespace: bc-ict-example-namespace spec: triggers: - imageChange: from: kind: ImageStreamTag name: input:latest namespace: bc-ict-example-namespace - imageChange: from: kind: ImageStreamTag 
name: input2:latest namespace: bc-ict-example-namespace type: ImageChange status: imageChangeTriggers: - from: name: input:latest namespace: bc-ict-example-namespace lastTriggerTime: \"2021-06-30T13:47:53Z\" lastTriggeredImageID: image-registry.openshift-image-registry.svc:5000/bc-ict-example-namespace/input@sha256:0f88ffbeb9d25525720bfa3524cb1bf0908b7f791057cf1acfae917b11266a69 - from: name: input2:latest namespace: bc-ict-example-namespace lastTriggeredImageID: image-registry.openshift-image-registry.svc:5000/bc-ict-example-namespace/input2@sha256:0f88ffbeb9d25525720bfa3524cb2ce0908b7f791057cf1acfae917b11266a69 lastVersion: 1", "Then you use the `name` and `namespace` from that build to find the corresponding image change trigger in `buildConfig.spec.triggers`.", "type: \"ConfigChange\"", "oc set triggers bc <name> --from-github", "oc set triggers bc <name> --from-image='<image>'", "oc set triggers bc <name> --from-bitbucket --remove", "oc set triggers --help", "postCommit: script: \"bundle exec rake test --verbose\"", "postCommit: command: [\"/bin/bash\", \"-c\", \"bundle exec rake test --verbose\"]", "postCommit: command: [\"bundle\", \"exec\", \"rake\", \"test\"] args: [\"--verbose\"]", "oc set build-hook bc/mybc --post-commit --command -- bundle exec rake test --verbose", "oc set build-hook bc/mybc --post-commit --script=\"bundle exec rake test --verbose\"" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/builds/triggering-builds-build-hooks
Chapter 28. Hit policies for guided decision tables
Chapter 28. Hit policies for guided decision tables Hit policies determine the order in which rules (rows) in a guided decision table are applied, whether top to bottom, per specified priority, or other options. The following hit policies are available: None: (Default hit policy) Multiple rows can be executed and the verification warns about rows that conflict. Any decision tables that have been uploaded (using a non-guided decision table spreadsheet) will adopt this hit policy. Resolved Hit: Only one row at a time can be executed according to specified priority, regardless of list order (you can give row 10 priority over row 5, for example). This means you can keep the order of the rows you want for visual readability, but specify priority exceptions. Unique Hit: Only one row at a time can be executed, and each row must be unique, with no overlap of conditions being met. If more than one row is executed, then the verification produces a warning at development time. First Hit: Only one row at a time can be executed in the order listed in the table, top to bottom. Rule Order: Multiple rows can be executed and verification does not report conflicts between the rows since they are expected to happen. Figure 28.1. Available hit policies 28.1. Hit policy examples: Decision table for discounts on movie tickets The following is part of an example decision table for discounts on movie tickets based on customer age, student status, or military status, or all three. Table 28.1. Example decision table for available discounts on movie tickets Row Number Discount Type Discount 1 Senior citizen (age 60+) 10% 2 Student 10% 3 Military 10% In this example, the total discount to be applied in the end will vary depending on the hit policy specified for the table: None/Rule Order: With both None and Rule Order hit policies, all applicable rules are incorporated, in this case allowing discounts to be stacked for each customer. Example: A senior citizen who is also a student and a military veteran will receive all three discounts, totaling 30%. Key difference: With None , warnings are created for multiple rows applied. With Rule Order , those warnings are not created. First Hit/Resolved Hit: With both First Hit and Resolved Hit policies, only one of the discounts can be applied. For First Hit , the discount that is satisfied first in the list is applied and the others are ignored. Example: A senior citizen who is also a student and a military veteran will receive only the senior citizen discount of 10%, since that is listed first in the table. For Resolved Hit , a modified table is required. The discount that you assign a priority exception to in the table, regardless of listed order, will be applied first. To assign this exception, include a new column that specifies the priority of one discount (row) over others. Example: If military discounts are prioritized higher than age or student discounts, despite the listed order, then a senior citizen who is also a student and a military veteran will receive only the military discount of 10%, regardless of age or student status. Consider the following modified decision table that accommodates a Resolved Hit policy: Table 28.2. 
Modified decision table that accommodates a Resolved Hit policy Row Number Discount Type Has Priority over Row Discount 1 Senior citizen (age 60+) 10% 2 Student 10% 3 Military 1 10% In this modified table, the military discount is essentially the new row 1 and therefore takes priority over both age and student discounts, and any other discounts added later. You do not need to specify priority over rows "1 and 2", only over row "1". This changes the row hit order to 3 1 2 ... and so on as the table grows. Note The row order would be changed in the same way if you actually moved the military discount to row 1 and applied a First Hit policy to the table instead. However, if you want the rules listed in a certain way and applied differently, such as in an alphabetized table, the Resolved Hit policy is useful. Key difference: With First Hit , rules are applied strictly in the listed order. With Resolved Hit , rules are applied in the listed order unless priority exceptions are specified. Unique Hit: A modified table is required. With a Unique Hit policy, rows must be created in a way that it is impossible to satisfy multiple rules at one time. However, you can still specify row-by-row whether to apply one rule or multiple. In this way, with a Unique Hit policy you can make decision tables more granular and prevent overlap warnings. Consider the following modified decision table that accommodates a Unique Hit policy: Table 28.3. Modified decision table that accommodates a Unique Hit policy Row Number Is Senior Citizen (age 65+) Is Student Is Military Discount 1 yes no no 10% 2 no yes no 10% 3 no no yes 10% 4 yes yes no 20% 5 yes no yes 20% 6 no yes yes 20% 7 yes yes yes 30% In this modified table, each row is unique, with no allowance of overlap, and any single discount or any combination of discounts is accommodated. 28.1.1. Types of guided decision tables Two types of decision tables are supported in Red Hat Decision Manager: Extended entry and Limited entry tables. Extended entry: An Extended Entry decision table is one for which the column definitions specify Pattern, Field, and Operator but not value. The values, or states, are themselves held in the body of the decision table. Limited entry: A Limited Entry decision table is one for which the column definitions specify value in addition to Pattern, Field, and Operator. The decision table states, held in the body of the table, are boolean where a positive value (a marked check box) has the effect of meaning the column should apply, or be matched. A negative value (a cleared check box) means the column does not apply.
null
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/developing_decision_services_in_red_hat_decision_manager/hit-policies-con
Chapter 1. Support policy for Cryostat
Chapter 1. Support policy for Cryostat Red Hat supports a major version of Cryostat for a minimum of 6 months, measured from the date the product is released on the Red Hat Customer Portal. You can install and deploy Cryostat on Red Hat OpenShift Container Platform 4.12 or a later version that runs on an x86_64 or ARM64 architecture. Additional resources For more information about the Cryostat life cycle policy, see Red Hat build of Cryostat on the Red Hat OpenShift Container Platform Life Cycle Policy web page.
null
https://docs.redhat.com/en/documentation/red_hat_build_of_cryostat/3/html/release_notes_for_the_red_hat_build_of_cryostat_3.0/cryostat-support-policy_cryostat
Chapter 38. AWS XRay Component
Chapter 38. AWS XRay Component Available as of Camel 2.21 The camel-aws-xray component is used for tracing and timing incoming and outgoing Camel messages using AWS XRay . Events (subsegments) are captured for incoming and outgoing messages being sent to/from Camel. 38.1. Dependency In order to include AWS XRay support into Camel, the archive containing the Camel AWS XRay classes needs to be added to the project. In addition to that, the AWS XRay libraries also need to be available. To include both the AWS XRay and Camel dependencies, use the following Maven imports: <dependencyManagement> <dependencies> <dependency> <groupId>com.amazonaws</groupId> <artifactId>aws-xray-recorder-sdk-bom</artifactId> <version>1.3.1</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> <dependencies> <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-aws-xray</artifactId> </dependency> <dependency> <groupId>com.amazonaws</groupId> <artifactId>aws-xray-recorder-sdk-core</artifactId> </dependency> <dependency> <groupId>com.amazonaws</groupId> <artifactId>aws-xray-recorder-sdk-aws-sdk</artifactId> </dependency> </dependencies> 38.2. Configuration The configuration properties for the AWS XRay tracer are: Option Default Description addExcludePatterns Sets exclude pattern(s) that will disable tracing for Camel messages that match the pattern. The content is a Set<String> where the key is a pattern matching routeId's. The pattern uses the rules from Intercept. setTracingStrategy NoopTracingStrategy Allows a custom Camel InterceptStrategy to be provided in order to track invoked processor definitions like BeanDefinition or ProcessDefinition . TraceAnnotatedTracingStrategy will track any classes invoked via .bean(... ) or .process(... ) that contain a @XRayTrace annotation at class level. There is currently only one way an AWS XRay tracer can be configured to provide distributed tracing for a Camel application: 38.2.1. Explicit Include the camel-aws-xray component in your POM, along with any specific dependencies associated with the AWS XRay Tracer. To explicitly configure AWS XRay support, instantiate the XRayTracer and initialize the camel context. You can optionally specify a Tracer , or alternatively it can be implicitly discovered using the Registry or ServiceLoader . XRayTracer xrayTracer = new XRayTracer(); // By default it uses a NoopTracingStrategy, but you can override it with a specific InterceptStrategy implementation. xrayTracer.setTracingStrategy(...); // And then initialize the context xrayTracer.init(camelContext); To use XRayTracer in XML, all you need to do is to define the AWS XRay tracer bean. Camel will automatically discover and use it. <bean id="tracingStrategy" class="..."/> <bean id="aws-xray-tracer" class="org.apache.camel.component.aws.xray.XRayTracer"> <property name="tracer" ref="tracingStrategy"/> </bean> In case of the default NoopTracingStrategy , only the creation and deletion of exchanges is tracked, but not the invocation of certain beans or EIP patterns. 38.2.2. Tracking of comprehensive route execution In order to track the execution of an exchange among multiple routes, a unique trace ID is generated and stored in the headers on exchange creation if no corresponding value was yet available. This trace ID is copied over to new exchanges in order to keep a consistent view of the processed exchange.
As AWS XRay traces work on a thread-local basis, the current sub/segment should be copied over to the new thread and set as explained in the AWS XRay documentation . The Camel AWS XRay component therefore provides an additional header field that the component will use in order to set the passed AWS XRay Entity on the new thread and thus keep the tracked data with the route rather than exposing a new segment that appears uncorrelated with any of the executed routes. The component will use the following constants found in the headers of the exchange: Header Description Camel-AWS-XRay-Trace-ID Contains a reference to the AWS XRay TraceID object to provide a comprehensive view of the invoked routes Camel-AWS-XRay-Trace-Entity Contains a reference to the actual AWS XRay Segment or Subsegment which is copied over to the new thread. This header should be set in case a new thread is spawned and the performed tasks should be exposed as part of the executed route instead of creating a new unrelated segment. Note that AWS XRay Entity instances (i.e., Segment and Subsegment ) are not serializable and therefore should not get passed to other JVM processes. 38.3. Example You can find an example demonstrating the way to configure AWS XRay tracing within the tests accompanying this project.
[ "<dependencyManagement> <dependencies> <dependency> <groupId>com.amazonaws</groupId> <artifactId>aws-xray-recorder-sdk-bom</artifactId> <version>1.3.1</version> <type>pom</type> <scope>import</scope> </dependency> </dependencies> </dependencyManagement> <dependencies> <dependency> <groupId>org.apache.camel</groupId> <artifactId>camel-aws-xray</artifactId> </dependency> <dependency> <groupId>com.amazonaws</groupId> <artifactId>aws-xray-recorder-sdk-core</artifactId> </dependency> <dependency> <groupId>com.amazonaws</groupId> <artifactId>aws-xray-recorder-sdk-aws-sdk</artifactId> </dependency> <dependencies>", "XRayTracer xrayTracer = new XRayTracer(); // By default it uses a NoopTracingStrategy, but you can override it with a specific InterceptStrategy implementation. xrayTracer.setTracingStrategy(...); // And then initialize the context xrayTracer.init(camelContext);", "<bean id=\"tracingStrategy\" class=\"...\"/> <bean id=\"aws-xray-tracer\" class=\"org.apache.camel.component.aws.xray.XRayTracer\" /> <property name=\"tracer\" ref=\"tracingStrategy\"/> </bean>" ]
https://docs.redhat.com/en/documentation/red_hat_fuse/7.13/html/apache_camel_component_reference/awsxray-awsxraycomponent
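After adding the Maven imports shown above, it can be worth confirming that the AWS XRay artifacts actually end up on the build classpath; a minimal sketch using standard Maven tooling, run from the directory containing the project's pom.xml:
# print the resolved dependency tree and keep only the XRay-related artifacts
mvn dependency:tree | grep -E 'camel-aws-xray|aws-xray-recorder-sdk'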
Chapter 4. Understanding OpenShift Container Platform update duration
Chapter 4. Understanding OpenShift Container Platform update duration OpenShift Container Platform update duration varies based on the deployment topology. This page helps you understand the factors that affect update duration and estimate how long the cluster update takes in your environment. 4.1. Prerequisites You are familiar with OpenShift Container Platform architecture and OpenShift Container Platform updates . 4.2. Factors affecting update duration The following factors can affect your cluster update duration: The reboot of compute nodes to the new machine configuration by Machine Config Operator (MCO) The value of MaxUnavailable in the machine config pool The minimum number or percentages of replicas set in pod disruption budget (PDB) The number of nodes in the cluster The health of the cluster nodes 4.3. Cluster update phases In OpenShift Container Platform, the cluster update happens in two phases: Cluster Version Operator (CVO) target update payload deployment Machine Config Operator (MCO) node updates 4.3.1. Cluster Version Operator target update payload deployment The Cluster Version Operator (CVO) retrieves the target update release image and applies to the cluster. All components which run as pods are updated during this phase, whereas the host components are updated by the Machine Config Operator (MCO). This process might take 60 to 120 minutes. Note The CVO phase of the update does not restart the nodes. Additional resources Introduction to OpenShift Updates 4.3.2. Machine Config Operator node updates The Machine Config Operator (MCO) applies a new machine configuration to each control plane and compute node. During this process, the MCO performs the following sequential actions on each node of the cluster: Cordon and drain all the nodes Update the operating system (OS) Reboot the nodes Uncordon all nodes and schedule workloads on the node Note When a node is cordoned, workloads cannot be scheduled to it. The time to complete this process depends on several factors including the node and infrastructure configuration. This process might take 5 or more minutes to complete per node. In addition to MCO, you should consider the impact of the following parameters: The control plane node update duration is predictable and oftentimes shorter than compute nodes, because the control plane workloads are tuned for graceful updates and quick drains. You can update the compute nodes in parallel by setting the maxUnavailable field to greater than 1 in the Machine Config Pool (MCP). The MCO cordons the number of nodes specified in maxUnavailable and marks them unavailable for update. When you increase maxUnavailable on the MCP, it can help the pool to update more quickly. However, if maxUnavailable is set too high, and several nodes are cordoned simultaneously, the pod disruption budget (PDB) guarded workloads could fail to drain because a schedulable node cannot be found to run the replicas. If you increase maxUnavailable for the MCP, ensure that you still have sufficient schedulable nodes to allow PDB guarded workloads to drain. Before you begin the update, you must ensure that all the nodes are available. Any unavailable nodes can significantly impact the update duration because the node unavailability affects the maxUnavailable and pod disruption budgets. 
To check the status of nodes from the terminal, run the following command: USD oc get node Example Output NAME STATUS ROLES AGE VERSION ip-10-0-137-31.us-east-2.compute.internal Ready,SchedulingDisabled worker 12d v1.23.5+3afdacb ip-10-0-151-208.us-east-2.compute.internal Ready master 12d v1.23.5+3afdacb ip-10-0-176-138.us-east-2.compute.internal Ready master 12d v1.23.5+3afdacb ip-10-0-183-194.us-east-2.compute.internal Ready worker 12d v1.23.5+3afdacb ip-10-0-204-102.us-east-2.compute.internal Ready master 12d v1.23.5+3afdacb ip-10-0-207-224.us-east-2.compute.internal Ready worker 12d v1.23.5+3afdacb If the status of a node is NotReady or SchedulingDisabled , then the node is not available and this impacts the update duration. You can check the status of nodes from the Administrator perspective in the web console by expanding Compute Nodes . Additional resources Machine config overview Pod disruption budget 4.4. Estimating cluster update time The historical update duration of similar clusters provides the best estimate for future cluster updates. However, if historical data is not available, you can use the following convention to estimate your cluster update time: A node update iteration consists of one or more nodes updated in parallel. The control plane nodes are always updated in parallel with the compute nodes. In addition, one or more compute nodes can be updated in parallel based on the maxUnavailable value. For example, to estimate the update time, consider an OpenShift Container Platform cluster with three control plane nodes and six compute nodes, where each host takes about 5 minutes to reboot. Note The time it takes to reboot a particular node varies significantly. In cloud instances, the reboot might take about 1 to 2 minutes, whereas in physical bare metal hosts the reboot might take more than 15 minutes. Scenario-1 When you set maxUnavailable to 1 for both the control plane and compute node machine config pools (MCPs), the six compute nodes update one at a time, one node per iteration: Scenario-2 When you set maxUnavailable to 2 for the compute node MCP, two compute nodes update in parallel in each iteration. Therefore, it takes a total of three iterations to update all the nodes. Important The default setting for maxUnavailable is 1 for all the MCPs in OpenShift Container Platform. It is recommended that you do not change the maxUnavailable value in the control plane MCP. 4.5. Red Hat Enterprise Linux (RHEL) compute nodes Red Hat Enterprise Linux (RHEL) compute nodes additionally require openshift-ansible to update node binary components. The actual time spent updating RHEL compute nodes should not be significantly different from Red Hat Enterprise Linux CoreOS (RHCOS) compute nodes. Additional resources Updating RHEL compute machines
[ "oc get node", "NAME STATUS ROLES AGE VERSION ip-10-0-137-31.us-east-2.compute.internal Ready,SchedulingDisabled worker 12d v1.23.5+3afdacb ip-10-0-151-208.us-east-2.compute.internal Ready master 12d v1.23.5+3afdacb ip-10-0-176-138.us-east-2.compute.internal Ready master 12d v1.23.5+3afdacb ip-10-0-183-194.us-east-2.compute.internal Ready worker 12d v1.23.5+3afdacb ip-10-0-204-102.us-east-2.compute.internal Ready master 12d v1.23.5+3afdacb ip-10-0-207-224.us-east-2.compute.internal Ready worker 12d v1.23.5+3afdacb", "Cluster update time = CVO target update payload deployment time + (# node update iterations x MCO node update time)", "Cluster update time = 60 + (6 x 5) = 90 minutes", "Cluster update time = 60 + (3 x 5) = 75 minutes" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.12/html/updating_clusters/understanding-openshift-update-duration
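The figures from Scenario-1 and Scenario-2 can be reproduced with simple shell arithmetic; a minimal sketch in which every value is one of the illustrative numbers used above (60 minutes for the CVO phase, 5 minutes per node) rather than a measurement from a real cluster:
#!/bin/bash
cvo_minutes=60       # CVO target update payload deployment time
node_minutes=5       # MCO update time per node, including the reboot
compute_nodes=6
max_unavailable=2    # maxUnavailable on the compute node MCP
# number of node update iterations, rounded up
iterations=$(( (compute_nodes + max_unavailable - 1) / max_unavailable ))
echo "Estimated cluster update time: $(( cvo_minutes + iterations * node_minutes )) minutes"
Running the sketch with max_unavailable=1 prints 90 minutes and with max_unavailable=2 prints 75 minutes, matching the two scenarios.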
Preface
Preface This tutorial demonstrates how to use Debezium to capture updates in a MySQL database. As the data in the database changes, you can see the resulting event streams. Providing feedback on Red Hat documentation We appreciate your feedback on our documentation. To propose improvements, open a Jira issue and describe your suggested changes. Provide as much detail as possible to enable us to address your request quickly. Prerequisite You have a Red Hat Customer Portal account. This account enables you to log in to the Red Hat Jira Software instance. If you do not have an account, you will be prompted to create one. Procedure Click the following link: Create issue . In the Summary text box, enter a brief description of the issue. In the Description text box, provide the following information: The URL of the page where you found the issue. A detailed description of the issue. You can leave the information in any other fields at their default values. Click Create to submit the Jira issue to the documentation team. Thank you for taking the time to provide feedback.
null
https://docs.redhat.com/en/documentation/red_hat_build_of_debezium/2.7.3/html/getting_started_with_debezium/pr01
38.3. Systems Registered with Satellite
38.3. Systems Registered with Satellite For a Satellite registration on the server, locate the system in the Systems tab and delete the profile.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/unregister-satellite
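The profile deletion itself happens on the Satellite server, but a host registered with RHN Classic also keeps a local system ID file that becomes stale once the profile is gone. A minimal cleanup sketch; the path is the conventional RHN Classic location on Red Hat Enterprise Linux 6 and is offered here as an assumption rather than a documented step:
# check whether the host still carries an RHN Classic system profile ID
ls -l /etc/sysconfig/rhn/systemid
# after the profile has been deleted on the Satellite server, remove the stale local ID file
rm -f /etc/sysconfig/rhn/systemid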
9.20. Installation Complete
9.20. Installation Complete Congratulations! Your Red Hat Enterprise Linux installation is now complete! The installation program prompts you to prepare your system for reboot. Remember to remove any installation media if it is not ejected automatically upon reboot. After your computer's normal power-up sequence has completed, Red Hat Enterprise Linux loads and starts. By default, the start process is hidden behind a graphical screen that displays a progress bar. Eventually, a login: prompt or a GUI login screen (if you installed the X Window System and chose to start X automatically) appears. The first time you start your Red Hat Enterprise Linux system in run level 5 (the graphical run level), the FirstBoot tool appears, which guides you through the Red Hat Enterprise Linux configuration. Using this tool, you can set your system time and date, install software, register your machine with Red Hat Network, and more. FirstBoot lets you configure your environment at the beginning, so that you can get started using your Red Hat Enterprise Linux system quickly. Chapter 34, Firstboot will guide you through the configuration process.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/installation_guide/s1-complete-x86
Chapter 9. Importing projects from Git repositories
Chapter 9. Importing projects from Git repositories Git is a distributed version control system. It implements revisions as commit objects. When you save your changes to a repository, a new commit object is created in the Git repository. Business Central uses Git to store project data, including assets such as rules and processes. When you create a project in Business Central, it is added to a Git repository that is connected to Business Central. If you have projects in Git repositories, you can import the project's master branch or import the master branch along with other specific branches into the Business Central Git repository through Business Central spaces. Prerequisites Red Hat Decision Manager projects exist in an external Git repository. You have the credentials required for read access to that external Git repository. Procedure In Business Central, go to Menu Design Projects . Select or create the space into which you want to import the projects. The default space is MySpace . In the upper-right corner of the screen, click the arrow next to Add Project and select Import Project . In the Import Project window, enter the URL and credentials for the Git repository that contains the project that you want to import and click Import . The Import Projects page is displayed. Optional: To import master and specific branches, do the following tasks: On the Import Projects page, click the branches icon. In the Branches to be imported window, select branches from the list. Note You must select the master branch as a minimum. Click Ok . On the Import Projects page, ensure the project is highlighted and click Ok .
null
https://docs.redhat.com/en/documentation/red_hat_decision_manager/7.13/html/deploying_and_managing_red_hat_decision_manager_services/git-import-project
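Before starting the import, it can help to confirm which branches the external repository exposes, because the master branch must be among the branches you select; a minimal sketch using plain Git, with a placeholder repository URL:
# list all branches offered by the external repository (read access is sufficient)
git ls-remote --heads https://git.example.com/decision-project.git
# confirm specifically that a master branch exists
git ls-remote --heads https://git.example.com/decision-project.git refs/heads/master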
Appendix C. Locations of cryptographic keys in RHEL 8
Appendix C. Locations of cryptographic keys in RHEL 8 After you upgrade a system that is running in Federal Information Processing Standard (FIPS) mode, you must regenerate and otherwise ensure the FIPS compliance of all cryptographic keys. Some well-known locations for such keys are in the following table. Note that the list is not complete, and you might check also other locations. Table C.1. Locations of cryptographic keys in RHEL 8 Application Locations of keys Notes Apache mod_ssl /etc/pki/tls/private/localhost.key The /usr/lib/systemd/system/httpd-init.service service runs the /usr/libexec/httpd-ssl-gencerts file if the /etc/pki/tls/private/localhost.key does not exist. Bind9 RNDC /etc/rndc.key The named-setup-rndc.service service runs the /usr/libexec/generate-rndc-key.sh script, which generates the /etc/rndc.key file. Cyrus IMAPd /etc/pki/cyrus-imapd/cyrus-imapd-key.pem The cyrus-imapd-init.service service generates the /etc/pki/cyrus-imapd/cyrus-imapd-key.pem file on its startup. DNSSEC-Trigger /etc/dnssec-trigger/dnssec_trigger_control.key The dnssec-triggerd-keygen.service service generates the /etc/dnssec-trigger/dnssec_trigger_control.key file. Dovecot /etc/pki/dovecot/private/dovecot.pem The dovecot-init.service service generates the /etc/pki/dovecot/private/dovecot.pem file on its startup. OpenPegasus /etc/pki/Pegasus/file.pem The tog-pegasus.service service generates the /etc/pki/Pegasus/file.pem private key file. OpenSSH /etc/ssh/ssh_host*_key Ed25519 and DSA keys are not FIPS-compliant. Custom Diffie-Hellman (DH) parameters are not supported in FIPS mode. Comment out the ModuliFile option in the sshd_config file to ensure compatibility with FIPS mode. You can keep the moduli file ( /etc/ssh/moduli by default) in place. Postfix /etc/pki/tls/private/postfix.key The post-installation script contained in the postfix package generates the /etc/pki/tls/private/postfix.key file. RHEL web console /etc/cockpit/ws-certs.d/ The web console runs the /usr/libexec/cockpit-certificate-ensure -for-cockpit-tls file, which creates keys in the /etc/cockpit/ws-certs.d/ directory. Sendmail /etc/pki/tls/private/sendmail.key The post-installation script contained in the sendmail package generates the /etc/pki/tls/private/sendmail.key file. To ensure the FIPS compliance of cryptographic keys of third-party applications, refer to the corresponding documentation of the respective applications. Furthermore: Any service that opens a port might use a TLS certificate. Not all services generate cryptographic keys automatically, but many services that start up automatically by default often do so. Focus also on services that use any cryptographic libraries such as NSS, GnuTLS, OpenSSL, and libgcrypt. Check also backup, disk-encryption, file-encryption, and similar applications. Important Because FIPS mode in RHEL 8 restricts DSA keys, DH parameters, RSA keys shorter than 1024 bits, and some other ciphers, old cryptographic keys stop working after the upgrade from RHEL 7. See the Changes in core cryptographic components section in the Considerations in adopting RHEL 8 document and the Using system-wide cryptographic policies chapter in the RHEL 8 Security hardening document for more information. Additional resources Switching the system to FIPS mode in the RHEL 8 Security hardening document update-crypto-policies(8) and fips-mode-setup(8) man pages on your system
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/8/html/upgrading_from_rhel_7_to_rhel_8/locations-of-cryptographic-keys-in-rhel-8_upgrading-from-rhel-7-to-rhel-8
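As one illustration of regenerating the keys listed above, the OpenSSH host keys can be recreated with a FIPS-approved algorithm; a minimal sketch that assumes an RSA host key is acceptable under your policy and that existing keys are backed up first:
# confirm that the system is running in FIPS mode
fips-mode-setup --check
# move the old key pair aside, then generate a new 3072-bit RSA host key without a passphrase
mv /etc/ssh/ssh_host_rsa_key /etc/ssh/ssh_host_rsa_key.bak
mv /etc/ssh/ssh_host_rsa_key.pub /etc/ssh/ssh_host_rsa_key.pub.bak
ssh-keygen -t rsa -b 3072 -f /etc/ssh/ssh_host_rsa_key -N ''
# restart sshd so that the new host key takes effect
systemctl restart sshd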
Chapter 22. Deprecated Functionality in Red Hat Enterprise Linux 7
Chapter 22. Deprecated Functionality in Red Hat Enterprise Linux 7 Symbols from libraries linked as dependencies no longer resolved by ld Previously, the ld linker resolved any symbols present in any linked library, even if some libraries were linked only implicitly as dependencies of other libraries. This allowed developers to use symbols from the implicitly linked libraries in application code and omit explicitly specifying these libraries for linking. For security reasons, ld has been changed so that it no longer resolves references to symbols in libraries linked implicitly as dependencies. As a result, linking with ld fails when application code attempts to use symbols from libraries that are not declared for linking and are linked only implicitly as dependencies. To use symbols from libraries linked as dependencies, developers must explicitly link against these libraries as well. To restore the previous behavior of ld, use the --copy-dt-needed-entries command-line option. (BZ# 1292230 ) Windows guest virtual machine support limited As of Red Hat Enterprise Linux 7, Windows guest virtual machines are supported only under specific subscription programs, such as Advanced Mission Critical (AMC).
null
https://docs.redhat.com/en/documentation/Red_Hat_Enterprise_Linux/7/html/7.0_release_notes/ch22
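The linker change described above is easy to reproduce with two small shared libraries; a minimal sketch in which liba.so defines a symbol, libb.so is linked against liba.so, and main.c calls the symbol from liba.so directly (all file names are illustrative):
# fails on Red Hat Enterprise Linux 7: liba is only an implicit dependency of libb
gcc main.c -o app -L. -lb
# fix: explicitly link against every library whose symbols the application uses
gcc main.c -o app -L. -lb -la
# workaround: ask ld to restore the old resolution behavior for this link
gcc main.c -o app -L. -lb -Wl,--copy-dt-needed-entries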
Appendix C. Configuring a Host for PCI Passthrough
Appendix C. Configuring a Host for PCI Passthrough Note This is one in a series of topics that show how to set up and configure SR-IOV on Red Hat Virtualization. For more information, see Setting Up and Configuring SR-IOV . Enabling PCI passthrough allows a virtual machine to use a host device as if the device were directly attached to the virtual machine. To enable the PCI passthrough function, you must enable virtualization extensions and the IOMMU function. The following procedure requires you to reboot the host. If the host is attached to the Manager already, ensure you place the host into maintenance mode first. Prerequisites Ensure that the host hardware meets the requirements for PCI device passthrough and assignment. See PCI Device Requirements for more information. Configuring a Host for PCI Passthrough Enable the virtualization extension and IOMMU extension in the BIOS. See Enabling Intel VT-x and AMD-V virtualization hardware extensions in BIOS in the Red Hat Enterprise Linux Virtualization Deployment and Administration Guide for more information. Enable the IOMMU flag in the kernel by selecting the Hostdev Passthrough & SR-IOV check box when adding the host to the Manager or by editing the grub configuration file manually. To enable the IOMMU flag from the Administration Portal, see Adding Standard Hosts to the Red Hat Virtualization Manager and Kernel Settings Explained . To edit the grub configuration file manually, see Enabling IOMMU Manually . For GPU passthrough, you need to run additional configuration steps on both the host and the guest system. See GPU device passthrough: Assigning a host GPU to a single virtual machine in Setting up an NVIDIA GPU for a virtual machine in Red Hat Virtualization for more information. Enabling IOMMU Manually Enable IOMMU by editing the grub configuration file. Note If you are using IBM POWER8 hardware, skip this step as IOMMU is enabled by default. For Intel, boot the machine, and append intel_iommu=on to the end of the GRUB_CMDLINE_LINUX line in the grub configuration file. For AMD, boot the machine, and append amd_iommu=on to the end of the GRUB_CMDLINE_LINUX line in the grub configuration file. Note If intel_iommu=on or amd_iommu=on works, you can try adding iommu=pt or amd_iommu=pt . The pt option only enables IOMMU for devices used in passthrough and provides better host performance. However, the option might not be supported on all hardware. Revert to the previous option if the pt option does not work for your host. If the passthrough fails because the hardware does not support interrupt remapping, you can consider enabling the allow_unsafe_interrupts option if the virtual machines are trusted. The allow_unsafe_interrupts option is not enabled by default because enabling it potentially exposes the host to MSI attacks from virtual machines. To enable the option: Refresh the grub.cfg file and reboot the host for these changes to take effect: To enable SR-IOV and assign dedicated virtual NICs to virtual machines, see https://access.redhat.com/articles/2335291 .
[ "vi /etc/default/grub GRUB_CMDLINE_LINUX=\"nofb splash=quiet console=tty0 ... intel_iommu=on", "vi /etc/default/grub GRUB_CMDLINE_LINUX=\"nofb splash=quiet console=tty0 ... amd_iommu=on", "vi /etc/modprobe.d options vfio_iommu_type1 allow_unsafe_interrupts=1", "grub2-mkconfig -o /boot/grub2/grub.cfg", "reboot" ]
https://docs.redhat.com/en/documentation/red_hat_virtualization/4.3/html/installing_red_hat_virtualization_as_a_standalone_manager_with_local_databases/Configuring_a_Host_for_PCI_Passthrough_SM_localDB_deploy
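After rebooting with the kernel options above, you can confirm that the IOMMU is active before assigning any devices; a minimal verification sketch, keeping in mind that the exact kernel messages vary by hardware:
# verify that the IOMMU option made it onto the kernel command line
grep -E 'intel_iommu=on|amd_iommu=on' /proc/cmdline
# look for IOMMU or DMAR initialization messages in the kernel log
dmesg | grep -i -e iommu -e dmar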
5.2.29. /proc/sysrq-trigger
5.2.29. /proc/sysrq-trigger By using the echo command to write to this file, a remote root user can execute most System Request Key commands as if sitting at the local terminal. To echo values to this file, the /proc/sys/kernel/sysrq file must be set to a value other than 0 . For more information about the System Request Key, refer to Section 5.3.9.3, " /proc/sys/kernel/ " . Although it is possible to write to this file, it cannot be read, even by the root user.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/4/html/reference_guide/s2-proc-sysrq-trigger
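A harmless way to observe the mechanism described above is to request a memory report; a minimal sketch to be run as root on the local system:
# allow all System Request Key functions (0 disables them, 1 enables them all)
echo 1 > /proc/sys/kernel/sysrq
# trigger the 'm' command, which dumps current memory information to the kernel log
echo m > /proc/sysrq-trigger
# read the result from the kernel log; the trigger file itself cannot be read back
dmesg | tail -n 40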
Chapter 2. Configuring single-supplier replication using the web console
Chapter 2. Configuring single-supplier replication using the web console In a single-supplier replication environment, one writable supplier replicates data to one or multiple read-only consumers. For example, set up single-supplier replication if a suffix receives a large number of search requests but only a small number of write requests. To distribute the load, clients can then search for the suffix on read-only consumers and send write requests to the supplier. This section assumes that you have an existing Directory Server instance running on a host named supplier.example.com that will act as a supplier in the replication topology to be set up. The procedures describe how to add a read-only consumer named consumer.example.com to the topology, and how to configure single-supplier replication for the dc=example,dc=com suffix. 2.1. Preparing the new consumer using the web console To prepare the consumer.example.com host, enable replication. This process: Configures the role of this server in the replication topology Defines the suffix that is replicated Creates the replication manager account the supplier uses to connect to this host Perform this procedure on the consumer that you want to add to the replication topology. Prerequisites You installed the Directory Server instance. For details, see Setting up a new instance using the web console . The database for the dc=example,dc=com suffix exists. You are logged in to the instance in the web console. Procedure Open the Replication menu. Select the dc=example,dc=com suffix. Click Enable Replication . Select Consumer in the Replication Role field, and enter the replication manager account and the password to create: These settings configure the host as a consumer for the dc=example,dc=com suffix. Additionally, the server creates the cn=replication manager,cn=config user with the specified password and allows this account to replicate changes for the suffix to this host. Click Enable Replication . Verification Open the Replication menu. Select the dc=example,dc=com suffix. If the Replica Role field contains the value Consumer , replication is enabled, and the host is configured as a consumer. Additional resources Installing Red Hat Directory Server Storing suffixes in separate databases 2.2. Configuring the existing server as a supplier to the consumer using the web console To prepare the supplier.example.com host, you need to: Enable replication for the suffix. Create a replication agreement to the consumer. Initialize the consumer. Perform this procedure on the existing supplier in the replication topology. Prerequisites You enabled replication for the dc=example,dc=com suffix on the consumer. You are logged in to the instance in the web console. Procedure Open the Replication menu. Select the dc=example,dc=com suffix. Enable replication: Click Enable Replication . Select Supplier in the Replication Role field, enter a replica ID, replication manager credentials, and leave the Bind Group DN field empty: These settings configure the host as a supplier for the dc=example,dc=com suffix and set the replica ID of this entry to 1 . Important The replica ID must be a unique integer between 1 and 65534 for a suffix across all suppliers in the topology. Click Enable Replication . Add a replication agreement and initialize the consumer: On the Agreements tab, click Create Agreement , and fill the fields: These settings create a replication agreement named example-agreement . 
The replication agreement defines settings, such as the consumer's host name, protocol, and authentication information that the supplier uses when connecting and replicating data to this consumer. Select Do Online Initialization in the Consumer Initialization field to automatically initialize the consumer after saving the agreement. Click Save Agreement . After the agreement was created, Directory Server initializes consumer.example.com . Depending on the amount of data to replicate, initialization can be time-consuming. Verification Open the Replication menu. Select the dc=example,dc=com suffix. On the Agreements tab, verify the status of the agreement in the State column of the table.
null
https://docs.redhat.com/en/documentation/red_hat_directory_server/12/html/configuring_and_managing_replication/assembly_configuring-single-supplier-replication-using-the-web-console_configuring-and-managing-replication
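Once the agreement reports a successful initialization, a quick check from any host with the OpenLDAP client tools can confirm that the consumer serves the replicated suffix and refuses direct writes; a minimal sketch in which the bind DN and the test attribute are illustrative:
# the replicated suffix should now be searchable on the consumer
ldapsearch -H ldap://consumer.example.com -x -D "cn=Directory Manager" -W -b "dc=example,dc=com" -s base
# a write against the read-only consumer is expected to be refused or referred back to the supplier
ldapmodify -H ldap://consumer.example.com -x -D "cn=Directory Manager" -W <<EOF
dn: dc=example,dc=com
changetype: modify
replace: description
description: replication smoke test
EOF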
Chapter 3. Optimize workload performance domains
Chapter 3. Optimize workload performance domains One of the key benefits of Ceph storage is the ability to support different types of workloads within the same cluster using Ceph performance domains. Dramatically different hardware configurations can be associated with each performance domain. Ceph system administrators can deploy storage pools on the appropriate performance domain, providing applications with storage tailored to specific performance and cost profiles. Selecting appropriately sized and optimized servers for these performance domains is an essential aspect of designing a Red Hat Ceph Storage cluster. The following lists provide the criteria Red Hat uses to identify optimal Red Hat Ceph Storage cluster configurations on storage servers. These categories are provided as general guidelines for hardware purchases and configuration decisions, and can be adjusted to satisfy unique workload blends. Actual hardware configurations chosen will vary depending on specific workload mix and vendor capabilities. IOPS optimized An IOPS-optimized storage cluster typically has the following properties: Lowest cost per IOPS. Highest IOPS per GB. 99th percentile latency consistency. Typical uses for an IOPS-optimized storage cluster are: Typically block storage. 3x replication for hard disk drives (HDDs) or 2x replication for solid state drives (SSDs). MySQL on OpenStack clouds. Throughput optimized A throughput-optimized storage cluster typically has the following properties: Lowest cost per MBps (throughput). Highest MBps per TB. Highest MBps per BTU. Highest MBps per Watt. 97th percentile latency consistency. Typical uses for a throughput-optimized storage cluster are: Block or object storage. 3x replication. Active performance storage for video, audio, and images. Streaming media. Cost and capacity optimized A cost- and capacity-optimized storage cluster typically has the following properties: Lowest cost per TB. Lowest BTU per TB. Lowest Watts required per TB. Typical uses for a cost- and capacity-optimized storage cluster are: Typically object storage. Erasure coding, commonly used to maximize usable capacity. Object archive. Video, audio, and image object repositories. How performance domains work To the Ceph client interface that reads and writes data, a Ceph storage cluster appears as a simple pool where the client stores data. However, the storage cluster performs many complex operations in a manner that is completely transparent to the client interface. Ceph clients and Ceph object storage daemons (Ceph OSDs, or simply OSDs) both use the controlled replication under scalable hashing (CRUSH) algorithm for storage and retrieval of objects. OSDs run on OSD hosts, the storage servers within the cluster. A CRUSH map describes a topography of cluster resources, and the map exists both on client nodes as well as Ceph Monitor (MON) nodes within the cluster. Ceph clients and Ceph OSDs both use the CRUSH map and the CRUSH algorithm. Ceph clients communicate directly with OSDs, eliminating a centralized object lookup and a potential performance bottleneck. With awareness of the CRUSH map and communication with their peers, OSDs can handle replication, backfilling, and recovery, allowing for dynamic failure recovery. Ceph uses the CRUSH map to implement failure domains. Ceph also uses the CRUSH map to implement performance domains, which simply take the performance profile of the underlying hardware into consideration.
The CRUSH map describes how Ceph stores data, and it is implemented as a simple hierarchy (acyclic graph) and a ruleset. The CRUSH map can support multiple hierarchies to separate one type of hardware performance profile from another. The following examples describe performance domains. Hard disk drives (HDDs) are typically appropriate for cost- and capacity-focused workloads. Throughput-sensitive workloads typically use HDDs with Ceph write journals on solid state drives (SSDs). IOPS-intensive workloads such as MySQL and MariaDB often use SSDs. All of these performance domains can coexist in a Ceph storage cluster.
null
https://docs.redhat.com/en/documentation/red_hat_ceph_storage/8/html/hardware_guide/optimize-workload-performance-domains_hw
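On current Ceph releases, performance domains are most often expressed through CRUSH device classes rather than hand-built hierarchies; a minimal sketch in which the rule and pool names are illustrative and a mixed HDD/SSD cluster is assumed:
# one replicated CRUSH rule per performance domain, split by device class
ceph osd crush rule create-replicated fast-rule default host ssd
ceph osd crush rule create-replicated capacity-rule default host hdd
# create a pool for the IOPS-optimized workload and bind it to the SSD-backed rule
ceph osd pool create iops-pool 128 128
ceph osd pool set iops-pool crush_rule fast-rule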
Chapter 9. Deploying your JBoss EAP application on the OpenShift Container Platform
Chapter 9. Deploying your Jboss EAP application on the OpenShift Container Platform 9.1. JBoss EAP operator for automating application deployment on OpenShift EAP operator is a JBoss EAP-specific controller that extends the OpenShift API. You can use the EAP operator to create, configure, manage, and seamlessly upgrade instances of complex stateful applications. The EAP operator manages multiple JBoss EAP Java application instances across the cluster. It also ensures safe transaction recovery in your application cluster by verifying all transactions are completed before scaling down the replicas and marking a pod as clean for termination. The EAP operator uses StatefulSet for the appropriate handling of Jakarta Enterprise Beans remoting and transaction recovery processing. The StatefulSet ensures persistent storage and network hostname stability even after pods are restarted. You must install the EAP operator using OperatorHub, which can be used by OpenShift cluster administrators to discover, install, and upgrade operators. In OpenShift Container Platform 4, you can use the Operator Lifecycle Manager (OLM) to install, update, and manage the lifecycle of all operators and their associated services running across multiple clusters. The OLM runs by default in OpenShift Container Platform 4. It aids cluster administrators in installing, upgrading, and granting access to operators running on their cluster. The OpenShift Container Platform web console provides management screens for cluster administrators to install operators, as well as grant specific projects access to use the catalog of operators available on the cluster. For more information about operators and the OLM, see the OpenShift documentation . 9.1.1. Installing EAP operator using the web console As a JBoss EAP cluster administrator, you can install an EAP operator from Red Hat OperatorHub using the OpenShift Container Platform web console. You can then subscribe the EAP operator to one or more namespaces to make it available for developers on your cluster. Here are a few points you must be aware of before installing the EAP operator using the web console: Installation Mode: Choose All namespaces on the cluster (default) to have the operator installed on all namespaces or choose individual namespaces, if available, to install the operator only on selected namespaces. Update Channel: If the EAP operator is available through multiple channels, you can choose which channel you want to subscribe to. For example, to deploy from the stable channel, if available, select it from the list. Approval Strategy: You can choose automatic or manual updates. If you choose automatic updates for the EAP operator, when a new version of the operator is available, the Operator Lifecycle Manager (OLM) automatically upgrades the running instance of EAP operator. If you choose manual updates, when a newer version of the operator is available, the OLM creates an update request. You must then manually approve the update request to have the operator updated to the new version. Note The following procedure might change in accordance with the modifications in the OpenShift Container Platform web console. For the latest and most accurate procedure, see the Installing from the OperatorHub using the web console section in the latest version of the Working with Operators in OpenShift Container Platform guide. Prerequisites Access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. 
Procedure In the OpenShift Container Platform web console, navigate to Operators -> OperatorHub . Scroll down or type EAP into the Filter by keyword box to find the EAP operator. Select JBoss EAP operator and click Install . On the Create Operator Subscription page: Select one of the following: All namespaces on the cluster (default) installs the operator in the default openshift-operators namespace to watch and be made available to all namespaces in the cluster. This option is not always available. A specific namespace on the cluster installs the operator in a specific, single namespace that you choose. The operator is made available for use only in this single namespace. Select an Update Channel . Select Automatic or Manual approval strategy, as described earlier. Click Subscribe to make the EAP operator available to the selected namespaces on this OpenShift Container Platform cluster. If you selected a manual approval strategy, the subscription's upgrade status remains Upgrading until you review and approve its install plan. After you approve the install plan on the Install Plan page, the subscription upgrade status moves to Up to date . If you selected an automatic approval strategy, the upgrade status moves to Up to date without intervention. After the subscription's upgrade status is Up to date , select Operators Installed Operators to verify that the EAP ClusterServiceVersion (CSV) shows up and its Status changes to InstallSucceeded in the relevant namespace. Note For the All namespaces... installation mode, the status displayed is InstallSucceeded in the openshift-operators namespace. In other namespaces the status displayed is Copied . . If the Status field does not change to InstallSucceeded , check the logs in any pod in the openshift-operators project (or other relevant namespace if A specific namespace... installation mode was selected) on the Workloads Pods page that are reporting issues to troubleshoot further. 9.1.2. Installing EAP operator using the CLI As a JBoss EAP cluster administrator, you can install an EAP operator from Red Hat OperatorHub using the OpenShift Container Platform CLI. You can then subscribe the EAP operator to one or more namespaces to make it available for developers on your cluster. When installing the EAP operator from the OperatorHub using the CLI, use the oc command to create a Subscription object. Prerequisites You have access to an OpenShift Container Platform cluster using an account with cluster-admin permissions. You have installed the oc tool in your local system. Procedure View the list of operators available to the cluster from the OperatorHub: Create a Subscription object YAML file (for example, eap-operator-sub.yaml ) to subscribe a namespace to your EAP operator. The following is an example Subscription object YAML file: 1 Name of the operator to subscribe to. 2 The EAP operator is provided by the redhat-operators CatalogSource. For information about channels and approval strategy, see the web console version of this procedure. Create the Subscription object from the YAML file: The EAP operator is successfully installed. At this point, the OLM is aware of the EAP operator. A ClusterServiceVersion (CSV) for the operator appears in the target namespace, and APIs provided by the EAP operator is available for creation. 9.1.3. Deploying a Java application on OpenShift using the EAP operator The EAP operator helps automate Java application deployment on OpenShift. 
For information about the EAP operator APIs, see EAP Operator: API Information . Prerequisites You have installed EAP operator. For more information about installing the EAP operator, see Installing EAP operator using the web console and Installing EAP operator using the CLI . You have built a Docker image of the user application using JBoss EAP for OpenShift Source-to-Image (S2I) build image. You have created a Secret object, if your application's CustomResourceDefinition (CRD) file references one. For more information about creating a new Secret object, see Creating a Secret . You have created a ConfigMap , if your application's CRD file references one. For information about creating a ConfigMap , see Creating a ConfigMap . You have created a ConfigMap from the standalone.xml file, if you choose to do so. For information about creating a ConfigMap from the standalone.xml file, see Creating a ConfigMap from a standalone.xml File . Note Providing a standalone.xml file from the ConfigMap is not supported in JBoss EAP 8.0. Procedure Open your web browser and log on to OperatorHub. Select the Project or namespace you want to use for your Java application. Navigate to Installed Operator and select JBoss EAP operator . On the Overview tab, click the Create Instance link. Specify the application image details. The application image specifies the Docker image that contains the Java application. The image must be built using the JBoss EAP for OpenShift Source-to-Image (S2I) build image. If the applicationImage field corresponds to an imagestreamtag, any change to the image triggers an automatic upgrade of the application. You can provide any of the following references of the JBoss EAP for OpenShift application image: The name of the image: mycomp/myapp A tag: mycomp/myapp:1.0 A digest: mycomp/myapp:@sha256:0af38bc38be93116b6a1d86a9c78bd14cd527121970899d719baf78e5dc7bfd2 An imagestreamtag: my-app:latest Specify the size of the application. For example: Configure the application environment using the env spec . The Environment variables can come directly from values, such as POSTGRESQL_SERVICE_HOST or from Secret objects, such as POSTGRESQL_USER. For example: Complete the following optional configurations that are relevant to your application deployment: Specify the storage requirements for the server data directory. For more information, see Configuring Persistent Storage for Applications . Specify the name of the Secret you created in WildFlyServerSpec to mount it as a volume in the pods running the application. For example: The Secret is mounted at /etc/secrets/<secret name> and each key/value is stored as a file. The name of the file is the key and the content is the value. The Secret is mounted as a volume inside the pod. The following example demonstrates commands that you can use to find key values: Note Modifying a Secret object might lead to project inconsistencies. Instead of modifying an existing Secret object, Red Hat recommends creating a new object with the same content as that of the old one. You can then update the content as required and change the reference in operator custom resource (CR) from old to new. This is considered a new CR update and the pods are reloaded. Specify the name of the ConfigMap you created in WildFlyServerSpec to mount it as a volume in the pods running the application. For example: The ConfigMap is mounted at /etc/configmaps/<configmap name> and each key/value is stored as a file. The name of the file is the key and the content is the value. 
The ConfigMap is mounted as a volume inside the pod. To find the key values: Note Modifying a ConfigMap might lead to project inconsistencies. Instead of modifying an existing ConfigMap , Red Hat recommends creating a new ConfigMap with the same content as that of the old one. You can then update the content as required and change the reference in operator custom resource (CR) from old to new. This is considered a new CR update and the pods are reloaded. If you choose to have your own standalone ConfigMap , provide the name of the ConfigMap as well as the key for the standalone.xml file: Note Creating a ConfigMap from the standalone.xml file is not supported in JBoss EAP 8.0. If you want to disable the default HTTP route creation in OpenShift, set disableHTTPRoute to true : 9.1.3.1. Creating a secret If your application's CustomResourceDefinition (CRD) file references a Secret , you must create the Secret before deploying your application on OpenShift using the EAP operator. Procedure To create a Secret : 9.1.3.2. Creating a configMap If your application's CustomResourceDefinition (CRD) file references a ConfigMap in the spec.ConfigMaps field, you must create the ConfigMap before deploying your application on OpenShift using the EAP operator. Procedure To create a configmap: 9.1.3.3. Creating a configMap from a standalone.xml File You can create your own JBoss EAP standalone configuration instead of using the one in the application image that comes from JBoss EAP for OpenShift Source-to-Image (S2I). The standalone.xml file must be put in a ConfigMap that is accessible by the operator. Note Providing a standalone.xml file from the ConfigMap is not supported in JBoss EAP 8.0. Procedure To create a ConfigMap from the standalone.xml file: 9.1.3.4. Configuring persistent storage for applications If your application requires persistent storage for some data, such as, transaction or messaging logs that must persist across pod restarts, configure the storage spec. If the storage spec is empty, an EmptyDir volume is used by each pod of the application. However, this volume does not persist after its corresponding pod is stopped. Procedure Specify volumeClaimTemplate to configure resources requirements to store the JBoss EAP standalone data directory. The name of the template is derived from the name of JBoss EAP. The corresponding volume is mounted in ReadWriteOnce access mode. The persistent volume that meets this storage requirement is mounted on the /eap/standalone/data directory. 9.1.4. Viewing metrics of an application using the EAP operator You can view the metrics of an application deployed on OpenShift using the EAP operator. When your cluster administrator enables metrics monitoring in your project, the EAP operator automatically displays the metrics on the OpenShift console. Prerequisites Your cluster administrator has enabled monitoring for your project. For more information, see Enabling monitoring for user-defined projects . Procedure In the OpenShift Container Platform web console, navigate to Monitoring -> Metrics . On the Metrics screen, type the name of your application in the text box to select your application. The metrics for your application appear on the screen. 9.1.5. Uninstalling EAP operator using web console You can delete, or uninstall, EAP operator from your cluster, you can delete the subscription to remove it from the subscribed namespace. You can also remove the EAP operator's ClusterServiceVersion (CSV) and deployment. 
Note To ensure data consistency and safety, scale down the number of pods in your cluster to 0 before uninstalling the EAP operator. You can uninstall the EAP operator using the web console. Warning If you decide to delete the entire wildflyserver definition ( oc delete wildflyserver <deployment_name> ), then no transaction recovery process is started and the pod is terminated regardless of unfinished transactions. The unfinished work that results from this operation might block the data changes that you later initiate. The data changes for other JBoss EAP instances involved in transactional enterprise bean remote calls with this wildflyserver might also be blocked. Procedure From the Operators -> Installed Operators page, select JBoss EAP . On the right-hand side of the Operator Details page, select Uninstall Operator from the Actions drop-down menu. When prompted by the Remove Operator Subscription window, optionally select the Also completely remove the Operator from the selected namespace check box if you want all components related to the installation to be removed. This removes the CSV, which in turn removes the pods, deployments, custom resource definitions (CRDs), and custom resources (CRs) associated with the operator. Click Remove . The EAP operator stops running and no longer receives updates. 9.1.6. Uninstalling JBoss EAP operator using the CLI You can delete, or uninstall, the EAP operator from your cluster, you can delete the subscription to remove it from the subscribed namespace. You can also remove the EAP operator's ClusterServiceVersion (CSV) and deployment. Note To ensure data consistency and safety, scale down the number of pods in your cluster to 0 before uninstalling the EAP operator. You can uninstall the EAP operator using the command line. When using the command line, you uninstall the operator by deleting the subscription and CSV from the target namespace. Warning If you decide to delete the entire wildflyserver definition ( oc delete wildflyserver <deployment_name> ), then no transaction recovery process is started and the pod is terminated regardless of unfinished transactions. The unfinished work that results from this operation might block the data changes that you later initiate. The data changes for other JBoss EAP instances involved in transactional enterprise bean remote calls with this wildflyserver might also be blocked. Procedure Check the current version of the EAP operator subscription in the currentCSV field: Delete the EAP operator's subscription: Delete the CSV for the EAP operator in the target namespace using the currentCSV value from the step: 9.1.7. JBoss EAP operator for safe transaction recovery JBoss EAP operator ensures data consistency before terminating your application cluster. To do this, the operator verifies that all transactions are completed before scaling down the replicas and marking a pod as clean for termination. This means that if you want to remove the deployment safely without data inconsistencies, you must first scale down the number of pods to 0, wait until all pods are terminated, and only then delete the wildflyserver instance. Warning If you decide to delete the entire wildflyserver definition ( oc delete wildflyserver <deployment_name> ), then no transaction recovery process is started and the pod is terminated regardless of unfinished transactions. The unfinished work that results from this operation might block the data changes that you later initiate. 
The data changes for other JBoss EAP instances involved in transactional enterprise bean remote calls with this wildflyserver might also be blocked. When the scaledown process begins the pod state ( oc get pod <pod_name> ) is still marked as Running , because the pod must complete all the unfinished transactions, including the remote enterprise beans calls that target it. If you want to monitor the state of the scaledown process, observe the status of the wildflyserver instance. For more information, see Monitoring the Scaledown Process . For information about pod statuses during scaledown, see Pod Status During Scaledown . 9.1.7.1. StatefulSets for stable network host names The EAP operator that manages the wildflyserver creates a StatefulSet as an underlying object managing the JBoss EAP pods. A StatefulSet is the workload API object that manages stateful applications. It manages the deployment and scaling of a set of pods, and provides guarantees about the ordering and uniqueness of these pods. The StatefulSet ensures that the pods in a cluster are named in a predefined order. It also ensures that pod termination follows the same order. For example, let us say, pod-1 has a transaction with heuristic outcome, and so is in the state of SCALING_DOWN_RECOVERY_DIRTY . Even if pod-0 is in the state of SCALING_DOWN_CLEAN , it is not terminated before pod-1. Until pod-1 is clean and is terminated, pod-0 remains in the SCALING_DOWN_CLEAN state. However, even if pod-0 is in the SCALING_DOWN_CLEAN state, it does not receive any new request and is practically idle. Note Decreasing the replica size of the StatefulSet or deleting the pod itself has no effect and such changes are reverted. 9.1.7.2. Monitoring the scaledown process If you want to monitor the state of the scaledown process, you must observe the status of the wildflyserver instance. For more information about the different pod statuses during scaledown, see Pod Status During Scaledown . Procedure To observe the state of the scaledown process: The WildFlyServer.Status.Scalingdown Pods and WildFlyServer.Status.Replicas fields shows the overall state of the active and non-active pods. The Scalingdown Pods field shows the number of pods which are about to be terminated when all the unfinished transactions are complete. The WildFlyServer.Status.Replicas field shows the current number of running pods. The WildFlyServer.Spec.Replicas field shows the number of pods in ACTIVE state. If there are no pods in scaledown process the numbers of pods in the WildFlyServer.Status.Replicas and WildFlyServer.Spec.Replicas fields are equal. 9.1.7.2.1. Pod status during Scaledown The following table describes the different pod statuses during scaledown: Table 9.1. Pod Status Description Pod Status Description ACTIVE The pod is active and processing requests. SCALING_DOWN_RECOVERY_INVESTIGATION The pod is about to be scaled down. The scale-down process is under investigation about the state of transactions in JBoss EAP. SCALING_DOWN_RECOVERY_DIRTY JBoss EAP contains some incomplete transactions. The pod cannot be terminated until they are cleaned. The transaction recovery process is periodically run at JBoss EAP and it waits until the transactions are completed. SCALING_DOWN_CLEAN The pod is processed by transaction scaled down processing and is marked as clean to be removed from the cluster. 9.1.7.3. Scaling down during transactions with heuristic outcomes When the outcome of a transaction is unknown, automatic transaction recovery is impossible. 
You must then manually recover your transactions. Prerequisites The status of your pod is stuck at SCALING_DOWN_RECOVERY_DIRTY . Procedure Access your JBoss EAP instance using the CLI. Resolve all the heuristic transaction records in the transaction object store. For more information, see Recovering Heuristic Outcomes in the Managing Transactions on JBoss EAP guide. Remove all records from the enterprise bean client recovery folder. Remove all files from the pod enterprise bean client recovery directory: The status of your pod changes to SCALING_DOWN_CLEAN and the pod is terminated. 9.1.7.4. Configuring the transactions subsystem to use the JDBC storage for the transaction log In cases where the system does not provide a file system to store transaction logs, use the JBoss EAP S2I image to configure the JDBC object store. Important S2I environment variables are not usable when JBoss EAP is deployed as a bootable JAR. In this case, you must create a Galleon layer or configure a CLI script to make the necessary configuration changes. The JDBC object store can be set up with the environment variable TX_DATABASE_PREFIX_MAPPING . This variable has the same structure as DB_SERVICE_PREFIX_MAPPING . Prerequisites You have created a datasource based on the value of the environment variables. You have ensured that consistent read and write permissions exist between the database and the transaction manager communicating over the JDBC object store. For more information, see Configuring JDBC data sources. Procedure Set up and configure the JDBC object store through the S2I environment variable. Example Verification You can verify both the datasource configuration and the transaction subsystem configuration by checking the standalone.xml configuration file with oc rsh <podname> cat /opt/server/standalone/configuration/standalone.xml . Expected output: 9.1.7.5. Transaction recovery during scaledown When you deploy applications using transactions in a JBoss EAP application server, it is crucial to understand what happens during a cluster scaledown. Decreasing the number of active JBoss EAP replicas can leave in-doubt (or heuristic) transactions that need to be completed, or resolved in the case of heuristic outcomes. This situation is a consequence of the XA standard, where transactions declared as prepared promise to complete successfully. Also, XA transactions can complete with a heuristic outcome, which then needs to be manually resolved. Shutting down pods that are managing such transactions, that is, in-doubt or heuristic transactions, can lead to data inconsistencies, data loss, or data locks. The JBoss EAP operator provides a scaledown functionality to ensure all transactions finish before reducing the number of replicas. This functionality verifies that all transactions in a pod are completed or resolved, and only then does the operator mark the pod as clean for termination. For more information, see WildFly Operator User Guide . Procedure To decrease the replica size in your JBoss EAP application server, do one of the following: Patch the replica size: Manually edit the replica size: Note Directly decreasing the replica size at the StatefulSet , or deleting the pod, has no effect. Such changes revert automatically. Important Deleting the entire JBoss EAP server definition ( oc delete wildflyserver <deployment_name> ) does not initiate a transaction recovery process. The pod terminates regardless of unfinished transactions.
To remove the deployment safely without data inconsistencies, first scale down the number of pods to zero, wait until all pods terminate, and then delete the JBoss EAP instance. Important Ensure that you enable the Narayana recovery listener in the JBoss EAP transaction subsystem. Without it, the scaledown transaction recovery processing is skipped for that particular JBoss EAP pod. 9.1.7.6. Scaledown process When the scaledown process begins, the pod state ( oc get pod <pod_name> ) still shows as Running . In this state, the operator allows the pod to complete all unfinished transactions, including remote EJB calls targeting it. To observe the scaledown process, you can monitor the status of the JBoss EAP instance. Use oc describe wildflyserver <name> to see the pod statuses. Name Description ACTIVE The pod actively processes requests. SCALING_DOWN_RECOVERY_INVESTIGATION The pod is under investigation to find out if there are transactions that did not complete their lifecycle successfully. SCALING_DOWN_RECOVERY_PROCESSING There are in-doubt transactions in the log store. The pod cannot be terminated until these transactions are either completed or cleaned. SCALING_DOWN_RECOVERY_HEURISTICS There are heuristic transactions in the log store. The pod cannot be terminated until these transactions are either manually resolved or cleaned. SCALING_DOWN_CLEAN The pod has completed the transaction scaledown process and is clean for removal from the cluster. 9.1.7.7. Disabling transaction recovery during scaledown If you want to disable transaction recovery during scaledown, you can set the property WildFlyServerSpec.DeactivateTransactionRecovery to true (by default, it is set to false). When you enable DeactivateTransactionRecovery , in-doubt and heuristic transactions are not finalized or reported, potentially leading to data inconsistency or data loss when you employ distributed transactions. Heuristic Transactions The outcome of XA transactions can be commit , roll-back , or heuristic . The latter outcome represents the acknowledgment that some participants of the distributed transaction did not complete according to the outcome of the first phase of the two-phase protocol, which is used to complete XA transactions. As a consequence, heuristic transactions require manual intervention to enforce the correct outcome, which the transaction coordinator enforced to all participants during the first phase. If a JBoss EAP pod is handling a heuristic transaction, that pod is labeled as SCALING_DOWN_RECOVERY_HEURISTICS . The administrator must connect to the specific JBoss EAP pod (using jboss-cli ) and manually resolve the heuristic transaction. After all these records are resolved or removed from the transaction object store, the operator labels the pod as SCALING_DOWN_CLEAN , and the pod is terminated. StatefulSet Behavior The StatefulSet ensures stable network hostnames, which depend on the ordering of pods. Pods are named in a defined order, requiring the termination of pod-1 before pod-0. If pod-1 is in SCALING_DOWN_RECOVERY_HEURISTICS and pod-0 is in SCALING_DOWN_CLEAN , pod-0 remains in its state until pod-1 is terminated. Even if the pod is in SCALING_DOWN_CLEAN , it does not receive new requests and remains idle. 9.1.8.
Automatically scaling pods with the horizontal pod autoscaler (HPA) With the EAP operator, you can use a horizontal pod autoscaler (HPA) to automatically increase or decrease the scale of an EAP application based on metrics collected from the pods that belong to that EAP application. Note Using HPA ensures that transaction recovery is still handled when a pod is scaled down. Procedure Configure the resources: Important You must specify the resource limits and requests for containers in a pod for autoscaling to work as expected. Create the horizontal pod autoscaler: Verification You can verify the HPA behavior by checking the replicas. The number of replicas increases or decreases as the workload increases or decreases. Additional resources Automatically scaling pods with the horizontal pod autoscaler 9.1.9. Jakarta Enterprise Beans remoting on OpenShift 9.1.9.1. Jakarta Enterprise Beans remoting on OpenShift For JBoss EAP to work correctly with enterprise bean remoting calls between different JBoss EAP clusters on OpenShift, you must understand the enterprise bean remoting configuration options on OpenShift. Note When deploying on OpenShift, consider the use of the EAP operator. The EAP operator uses StatefulSet for the appropriate handling of enterprise bean remoting and transaction recovery processing. The StatefulSet ensures persistent storage and network hostname stability even after pods are restarted. Network hostname stability is required when the JBoss EAP instance is contacted using an enterprise bean remote call with transaction propagation. The JBoss EAP instance must be reachable under the same hostname even if the pod restarts. The transaction manager, which is a stateful component, binds the persisted transaction data to a particular JBoss EAP instance. Because the transaction log is bound to a specific JBoss EAP instance, it must be completed in the same instance. To prevent data loss when the JDBC transaction log store is used, make sure your database provides data-consistent reads and writes. Consistent data reads and writes are important when the database is scaled horizontally with multiple instances. An enterprise bean remote caller has two options to configure the remote calls: Define a remote outbound connection. Use a programmatic JNDI lookup for the bean at the remote server. For more information, see Using Remote Jakarta Enterprise Beans Clients . You must reconfigure the value representing the address of the target node depending on the enterprise bean remote call configuration method. Note The name of the target enterprise bean for the remote call must be the DNS address of the first pod. The StatefulSet behavior depends on the ordering of the pods. The pods are named in a predefined order. For example, if you scale your application to three replicas, your pods have names such as eap-server-0 , eap-server-1 , and eap-server-2 . The EAP operator also uses a headless service that ensures a specific DNS hostname is assigned to the pod. If the application uses the EAP operator, a headless service is created with a name such as eap-server-headless . In this case, the DNS name of the first pod is eap-server-0.eap-server-headless . The use of the hostname eap-server-0.eap-server-headless ensures that the enterprise bean call reaches any EAP instance connected to the cluster. A bootstrap connection is used to initialize the Jakarta Enterprise Beans client, which gathers the structure of the EAP cluster as the next step. 9.1.9.1.1.
Configuring Jakarta Enterprise Beans on OpenShift You must configure the JBoss EAP servers that act as callers for enterprise bean remoting. The target server must configure a user with permission to receive the enterprise bean remote calls. Prerequisites You have used the EAP operator and the supported JBoss EAP for OpenShift S2I image for deploying and managing the JBoss EAP application instances on OpenShift. The clustering is set correctly. For more information about JBoss EAP clustering, see the Clustering section. Procedure Create a user in the target server with permission to receive the enterprise bean remote calls: Configure the caller JBoss EAP application server. Create the eap-config.xml file in USDJBOSS_HOME/standalone/configuration using the custom configuration functionality. For more information, see Custom Configuration . Configure the caller JBoss EAP application server with the wildfly.config.url property: Note If you use the following example for your configuration, replace the >>PASTE_... _HERE<< with username and password you configured. Example Configuration <configuration> <authentication-client xmlns="urn:elytron:1.0"> <authentication-rules> <rule use-configuration="jta"> <match-abstract-type name="jta" authority="jboss" /> </rule> </authentication-rules> <authentication-configurations> <configuration name="jta"> <sasl-mechanism-selector selector="DIGEST-MD5" /> <providers> <use-service-loader /> </providers> <set-user-name name="PASTE_USER_NAME_HERE" /> <credentials> <clear-password password="PASTE_PASSWORD_HERE" /> </credentials> <set-mechanism-realm name="ApplicationRealm" /> </configuration> </authentication-configurations> </authentication-client> </configuration>
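As an illustration of how this configuration might be wired together when the caller runs under the EAP operator, the following sketch mounts the eap-config.xml file from a config map and points the wildfly.config.url property at the mounted file through the JAVA_OPTS_APPEND environment variable. The config map name eap-remoting-config and the application image name are assumptions made for this example only; adjust them to match your deployment. Example WildFlyServer resource for the caller (illustrative sketch)
# Assumes a config map created beforehand, for example:
#   oc create configmap eap-remoting-config --from-file=eap-config.xml
apiVersion: wildfly.org/v1alpha1
kind: WildFlyServer
metadata:
  name: eap-caller
spec:
  applicationImage: 'eap-caller:latest'   # assumed image name
  replicas: 1
  configMaps:
    - eap-remoting-config                 # mounted under /etc/configmaps/eap-remoting-config/
  env:
    - name: JAVA_OPTS_APPEND
      value: '-Dwildfly.config.url=/etc/configmaps/eap-remoting-config/eap-config.xml'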
[ "oc get packagemanifests -n openshift-marketplace | grep eap NAME CATALOG AGE eap Red Hat Operators 43d", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: eap namespace: openshift-operators spec: channel: stable installPlanApproval: Automatic name: eap 1 source: redhat-operators 2 sourceNamespace: openshift-marketplace", "oc apply -f eap-operator-sub.yaml oc get csv -n openshift-operators NAME DISPLAY VERSION REPLACES PHASE eap-operator.v1.0.0 JBoss EAP 1.0.0 Succeeded", "spec: replicas:2", "spec: env: - name: POSTGRESQL_SERVICE_HOST value: postgresql - name: POSTGRESQL_SERVICE_PORT value: '5432' - name: POSTGRESQL_DATABASE valueFrom: secretKeyRef: key: database-name name: postgresql - name: POSTGRESQL_USER valueFrom: secretKeyRef: key: database-user name: postgresql - name: POSTGRESQL_PASSWORD valueFrom: secretKeyRef: key: database-password name: postgresql", "spec: secrets: - my-secret", "ls /etc/secrets/my-secret/ my-key my-password cat /etc/secrets/my-secret/my-key devuser cat /etc/secrets/my-secret/my-password my-very-secure-pasword", "spec: configMaps: - my-config", "ls /etc/configmaps/my-config/ key1 key2 cat /etc/configmaps/my-config/key1 value1 cat /etc/configmaps/my-config/key2 value2", "standaloneConfigMap: name: clusterbench-config-map key: standalone.xml", "spec: disableHTTPRoute: true", "oc create secret generic my-secret --from-literal=my-key=devuser --from-literal=my-password='my-very-secure-pasword'", "oc create configmap my-config --from-literal=key1=value1 --from-literal=key2=value2 configmap/my-config created", "oc create configmap clusterbench-config-map --from-file examples/clustering/config/standalone.xml configmap/clusterbench-config-map created", "spec: storage: volumeClaimTemplate: spec: resources: requests: storage: 3Gi", "oc get subscription eap-operator -n openshift-operators -o yaml | grep currentCSV currentCSV: eap-operator.v1.0.0", "oc delete subscription eap-operator -n openshift-operators subscription.operators.coreos.com \"eap-operator\" deleted", "oc delete clusterserviceversion eap-operator.v1.0.0 -n openshift-operators clusterserviceversion.operators.coreos.com \"eap-operator.v1.0.0\" deleted", "describe wildflyserver <name>", "USDJBOSS_HOME/standalone/data/ejb-xa-recovery exec <podname> rm -rf USDJBOSS_HOME/standalone/data/ejb-xa-recovery", "Narayana JDBC objectstore configuration via s2i env variables - name: TX_DATABASE_PREFIX_MAPPING value: 'PostgresJdbcObjectStore-postgresql=PG_OBJECTSTORE' - name: POSTGRESJDBCOBJECTSTORE_POSTGRESQL_SERVICE_HOST value: 'postgresql' - name: POSTGRESJDBCOBJECTSTORE_POSTGRESQL_SERVICE_PORT value: '5432' - name: PG_OBJECTSTORE_JNDI value: 'java:jboss/datasources/PostgresJdbc' - name: PG_OBJECTSTORE_DRIVER value: 'postgresql' - name: PG_OBJECTSTORE_DATABASE value: 'sampledb' - name: PG_OBJECTSTORE_USERNAME value: 'admin' - name: PG_OBJECTSTORE_PASSWORD value: 'admin'", "<datasource jta=\"false\" jndi-name=\"java:jboss/datasources/PostgresJdbcObjectStore\" pool-name=\"postgresjdbcobjectstore_postgresqlObjectStorePool\" enabled=\"true\" use-java-context=\"true\" statistics-enabled=\"USD{wildfly.datasources.statistics-enabled:USD{wildfly.statistics-enabled:false}}\"> <connection-url>jdbc:postgresql://postgresql:5432/sampledb</connection-url> <driver>postgresql</driver> <security> <user-name>admin</user-name> <password>admin</password> </security> </datasource> <!-- under subsystem urn:jboss:domain:transactions --> <jdbc-store 
datasource-jndi-name=\"java:jboss/datasources/PostgresJdbcObjectStore\"> <!-- the pod name was named transactions-xa-0 --> <action table-prefix=\"ostransactionsxa0\"/> <communication table-prefix=\"ostransactionsxa0\"/> <state table-prefix=\"ostransactionsxa0\"/> </jdbc-store>", "patch wildflyserver <name> -p '[{\"op\":\"replace\", \"path\":\"/spec/replicas\", \"value\":0}]' --type json", "edit wildflyserver <name>", "apiVersion: wildfly.org/v1alpha1 kind: WildFlyServer metadata: name: eap-helloworld spec: applicationImage: 'eap-helloworld:latest' replicas: 1 resources: limits: cpu: 500m memory: 2Gi requests: cpu: 100m memory: 1Gi", "autoscale wildflyserver/eap-helloworld --cpu-percent=50 --min=1 --max=10", "get hpa -w NAME REFERENCE TARGETS MINPODS MAXPODS REPLICAS AGE eap-helloworld WildFlyServer/eap-helloworld 217%/50% 1 10 1 4s eap-helloworld WildFlyServer/eap-helloworld 217%/50% 1 10 4 17s eap-helloworld WildFlyServer/eap-helloworld 133%/50% 1 10 8 32s eap-helloworld WildFlyServer/eap-helloworld 133%/50% 1 10 10 47s eap-helloworld WildFlyServer/eap-helloworld 139%/50% 1 10 10 62s eap-helloworld WildFlyServer/eap-helloworld 180%/50% 1 10 10 92s eap-helloworld WildFlyServer/eap-helloworld 133%/50% 1 10 10 2m2s", "USDJBOSS_HOME/bin/add-user.sh", "JAVA_OPTS_APPEND=\"-Dwildfly.config.url=USDJBOSS_HOME/standalone/configuration/eap-config.xml\"", "<configuration> <authentication-client xmlns=\"urn:elytron:1.0\"> <authentication-rules> <rule use-configuration=\"jta\"> <match-abstract-type name=\"jta\" authority=\"jboss\" /> </rule> </authentication-rules> <authentication-configurations> <configuration name=\"jta\"> <sasl-mechanism-selector selector=\"DIGEST-MD5\" /> <providers> <use-service-loader /> </providers> <set-user-name name=\"PASTE_USER_NAME_HERE\" /> <credentials> <clear-password password=\"PASTE_PASSWORD_HERE\" /> </credentials> <set-mechanism-realm name=\"ApplicationRealm\" /> </configuration> </authentication-configurations> </authentication-client> </configuration>" ]
https://docs.redhat.com/en/documentation/red_hat_jboss_enterprise_application_platform/8.0/html/using_jboss_eap_on_openshift_container_platform/assembly_deploying-your-jboss-eap-application-on-the-openshift-container-platform_default
Chapter 16. Understanding and managing pod security admission
Chapter 16. Understanding and managing pod security admission Pod security admission is an implementation of the Kubernetes pod security standards . Use pod security admission to restrict the behavior of pods. 16.1. About pod security admission OpenShift Container Platform includes Kubernetes pod security admission . Pods that do not comply with the pod security admission defined globally or at the namespace level are not admitted to the cluster and cannot run. Globally, the privileged profile is enforced, and the restricted profile is used for warnings and audits. You can also configure the pod security admission settings at the namespace level. Important Do not run workloads in or share access to default projects. Default projects are reserved for running core cluster components. The following default projects are considered highly privileged: default , kube-public , kube-system , openshift , openshift-infra , openshift-node , and other system-created projects that have the openshift.io/run-level label set to 0 or 1 . Functionality that relies on admission plugins, such as pod security admission, security context constraints, cluster resource quotas, and image reference resolution, does not work in highly privileged projects. 16.1.1. Pod security admission modes You can configure the following pod security admission modes for a namespace: Table 16.1. Pod security admission modes Mode Label Description enforce pod-security.kubernetes.io/enforce Rejects a pod from admission if it does not comply with the set profile audit pod-security.kubernetes.io/audit Logs audit events if a pod does not comply with the set profile warn pod-security.kubernetes.io/warn Displays warnings if a pod does not comply with the set profile 16.1.2. Pod security admission profiles You can set each of the pod security admission modes to one of the following profiles: Table 16.2. Pod security admission profiles Profile Description privileged Least restrictive policy; allows for known privilege escalation baseline Minimally restrictive policy; prevents known privilege escalations restricted Most restrictive policy; follows current pod hardening best practices 16.1.3. Privileged namespaces The following system namespaces are always set to the privileged pod security admission profile: default kube-public kube-system You cannot change the pod security profile for these privileged namespaces. 16.1.4. Pod security admission and security context constraints Pod security admission standards and security context constraints are reconciled and enforced by two independent controllers. The two controllers work independently using the following processes to enforce security policies: The security context constraint controller may mutate some security context fields per the pod's assigned SCC. For example, if the seccomp profile is empty or not set and if the pod's assigned SCC enforces seccompProfiles field to be runtime/default , the controller sets the default type to RuntimeDefault . The security context constraint controller validates the pod's security context against the matching SCC. The pod security admission controller validates the pod's security context against the pod security standard assigned to the namespace. 16.2. About pod security admission synchronization In addition to the global pod security admission control configuration, a controller applies pod security admission control warn and audit labels to namespaces according to the SCC permissions of the service accounts that are in a given namespace. 
The controller examines ServiceAccount object permissions to use security context constraints in each namespace. Security context constraints (SCCs) are mapped to pod security profiles based on their field values; the controller uses these translated profiles. Pod security admission warn and audit labels are set to the most privileged pod security profile in the namespace to prevent displaying warnings and logging audit events when pods are created. Namespace labeling is based on consideration of namespace-local service account privileges. Applying pods directly might use the SCC privileges of the user who runs the pod. However, user privileges are not considered during automatic labeling. 16.2.1. Pod security admission synchronization namespace exclusions Pod security admission synchronization is permanently disabled on most system-created namespaces. Synchronization is also initially disabled on user-created openshift-* prefixed namespaces, but you can enable synchronization on them later. Important If a pod security admission label ( pod-security.kubernetes.io/<mode> ) is manually modified from the automatically labeled value on a label-synchronized namespace, synchronization is disabled for that label. If necessary, you can enable synchronization again by using one of the following methods: By removing the modified pod security admission label from the namespace By setting the security.openshift.io/scc.podSecurityLabelSync label to true If you force synchronization by adding this label, then any modified pod security admission labels will be overwritten. Permanently disabled namespaces Namespaces that are defined as part of the cluster payload have pod security admission synchronization disabled permanently. The following namespaces are permanently disabled: default kube-node-lease kube-system kube-public openshift All system-created namespaces that are prefixed with openshift- , except for openshift-operators Initially disabled namespaces By default, all namespaces that have an openshift- prefix have pod security admission synchronization disabled initially. You can enable synchronization for user-created openshift-* namespaces and for the openshift-operators namespace. Note You cannot enable synchronization for any system-created openshift-* namespaces, except for openshift-operators . If an Operator is installed in a user-created openshift-* namespace, synchronization is enabled automatically after a cluster service version (CSV) is created in the namespace. The synchronized label is derived from the permissions of the service accounts in the namespace. 16.3. Controlling pod security admission synchronization You can enable or disable automatic pod security admission synchronization for most namespaces. Important You cannot enable pod security admission synchronization on some system-created namespaces. For more information, see Pod security admission synchronization namespace exclusions . Procedure For each namespace that you want to configure, set a value for the security.openshift.io/scc.podSecurityLabelSync label: To disable pod security admission label synchronization in a namespace, set the value of the security.openshift.io/scc.podSecurityLabelSync label to false . Run the following command: USD oc label namespace <namespace> security.openshift.io/scc.podSecurityLabelSync=false To enable pod security admission label synchronization in a namespace, set the value of the security.openshift.io/scc.podSecurityLabelSync label to true . 
Run the following command: USD oc label namespace <namespace> security.openshift.io/scc.podSecurityLabelSync=true Note Use the --overwrite flag to overwrite the value if this label is already set on the namespace. Additional resources Pod security admission synchronization namespace exclusions 16.4. Configuring pod security admission for a namespace You can configure the pod security admission settings at the namespace level. For each of the pod security admission modes on the namespace, you can set which pod security admission profile to use. Procedure For each pod security admission mode that you want to set on a namespace, run the following command: USD oc label namespace <namespace> \ 1 pod-security.kubernetes.io/<mode>=<profile> \ 2 --overwrite 1 Set <namespace> to the namespace to configure. 2 Set <mode> to enforce , warn , or audit . Set <profile> to restricted , baseline , or privileged . 16.5. About pod security admission alerts A PodSecurityViolation alert is triggered when the Kubernetes API server reports that there is a pod denial on the audit level of the pod security admission controller. This alert persists for one day. View the Kubernetes API server audit logs to investigate alerts that were triggered. As an example, a workload is likely to fail admission if global enforcement is set to the restricted pod security level. For assistance in identifying pod security admission violation audit events, see Audit annotations in the Kubernetes documentation. 16.5.1. Identifying pod security violations The PodSecurityViolation alert does not provide details on which workloads are causing pod security violations. You can identify the affected workloads by reviewing the Kubernetes API server audit logs. This procedure uses the must-gather tool to gather the audit logs and then searches for the pod-security.kubernetes.io/audit-violations annotation. Prerequisites You have installed jq . You have access to the cluster as a user with the cluster-admin role. Procedure To gather the audit logs, enter the following command: USD oc adm must-gather -- /usr/bin/gather_audit_logs To output the affected workload details, enter the following command: USD zgrep -h pod-security.kubernetes.io/audit-violations must-gather.local.<archive_id>/<image_digest_id>/audit_logs/kube-apiserver/*log.gz \ | jq -r 'select((.annotations["pod-security.kubernetes.io/audit-violations"] != null) and (.objectRef.resource=="pods")) | .objectRef.namespace + " " + .objectRef.name' \ | sort | uniq -c Replace <archive_id> and <image_digest_id> with the actual path names. Example output 1 test-namespace my-pod 16.6. Additional resources Viewing audit logs Managing security context constraints
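As a declarative alternative to the oc label commands shown in this chapter, you can also set the synchronization and pod security admission labels in a namespace manifest. The following sketch is illustrative only: the namespace name my-app is an assumption made for this example, and the restricted profile is just one possible choice. Example namespace manifest (illustrative sketch)
apiVersion: v1
kind: Namespace
metadata:
  name: my-app                  # assumed namespace name for this example
  labels:
    # Opt out of automatic SCC-based label synchronization
    security.openshift.io/scc.podSecurityLabelSync: "false"
    # Pin all three pod security admission modes to the restricted profile
    pod-security.kubernetes.io/enforce: restricted
    pod-security.kubernetes.io/audit: restricted
    pod-security.kubernetes.io/warn: restricted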
[ "oc label namespace <namespace> security.openshift.io/scc.podSecurityLabelSync=false", "oc label namespace <namespace> security.openshift.io/scc.podSecurityLabelSync=true", "oc label namespace <namespace> \\ 1 pod-security.kubernetes.io/<mode>=<profile> \\ 2 --overwrite", "oc adm must-gather -- /usr/bin/gather_audit_logs", "zgrep -h pod-security.kubernetes.io/audit-violations must-gather.local.<archive_id>/<image_digest_id>/audit_logs/kube-apiserver/*log.gz | jq -r 'select((.annotations[\"pod-security.kubernetes.io/audit-violations\"] != null) and (.objectRef.resource==\"pods\")) | .objectRef.namespace + \" \" + .objectRef.name' | sort | uniq -c", "1 test-namespace my-pod" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.17/html/authentication_and_authorization/understanding-and-managing-pod-security-admission
Chapter 10. Backing up Satellite Server and Capsule Server
Chapter 10. Backing up Satellite Server and Capsule Server You can back up your Satellite deployment to ensure the continuity of your Red Hat Satellite deployment and associated data in the event of a disaster. If your deployment uses custom configurations, you must consider how to handle these custom configurations when you plan your backup and disaster recovery policy. Note If you create a new instance of the Satellite Server, decommission the old instances after restoring the backup. Cloned instances are not supposed to run in parallel in a production environment. To create a backup of your Satellite Server or Capsule Server and all associated data, use the satellite-maintain backup command. Backing up to a separate storage device on a separate system is highly recommended. Satellite services are unavailable during the backup. Therefore, you must ensure that no other tasks are scheduled by other administrators. You can schedule a backup by using cron . For more information, see the Section 10.5, "Example of a weekly full backup followed by daily incremental backups" . During offline backups, the services are inactive and Satellite is in a maintenance mode. All the traffic from outside on port 443 is rejected by a firewall to ensure there are no modifications triggered. A backup contains sensitive information from the /root/ssl-build directory. For example, it can contain hostnames, ssh keys, request files and SSL certificates. You must encrypt or move the backup to a secure location to minimize the risk of damage or unauthorized access to the hosts. Conventional backup methods You can also use conventional backup methods. For more information, see Recovering and restoring a system in Red Hat Enterprise Linux 8 Configuring basic system settings . Note If you plan to use the satellite-maintain backup command to create a backup, do not stop Satellite services. When creating a snapshot or conventional backup, you must stop all services as follows: Start the services after creating a snapshot or conventional backup: 10.1. Estimating the size of a backup The full backup creates uncompressed archives of PostgreSQL and Pulp database files, and Satellite configuration files. Compression occurs after the archives are created to decrease the time when Satellite services are unavailable. A full backup requires space to store the following data: Uncompressed Satellite database and configuration files Compressed Satellite database and configuration files An extra 20% of the total estimated space to ensure a reliable backup Procedure Enter the du command to estimate the size of uncompressed directories containing Satellite database and configuration files: Calculate how much space is required to store the compressed data. The following table describes the compression ratio of all data items included in the backup: Table 10.1. Backup data compression ratio Data type Directory Ratio Example results PostgreSQL database files /var/lib/pgsql/data 80 - 85% 100 GB 20 GB Pulp RPM files /var/lib/pulp (not compressed) 100 GB Configuration files /var/lib/tftpboot /etc /root/ssl-build /var/www/html/pub /opt/puppetlabs 85% 942 MB 141 MB In this example, the compressed backup data occupies 120 GB in total. To calculate the amount of available space you require to store a backup, calculate the sum of the estimated values of compressed and uncompressed backup data, and add an extra 20% to ensure a reliable backup. 
This example requires 201 GB for the uncompressed backup data plus 120 GB for the compressed backup data, 321 GB in total. With 64 GB of extra space, 385 GB must be allocated for the backup location. 10.2. Performing a full backup of Satellite Server or Capsule Server Red Hat Satellite uses the satellite-maintain backup command to make backups. There are two main methods of backing up Satellite Server: Offline backup All Satellite services are shut down during an offline backup. Online backup Only Satellite services that affect the consistency of the backup are shut down while the backup process is running. Online backups check for consistency and require more time than offline backups. For more information about each of these methods, you can view the usage statements for each backup method. Offline backups Online backups Directory creation The satellite-maintain backup command creates a time-stamped subdirectory in the backup directory that you specify. The satellite-maintain backup command does not overwrite backups; therefore, you must select the correct directory or subdirectory when restoring from a backup or an incremental backup. The satellite-maintain backup command stops and restarts services as required. When you run the satellite-maintain backup offline command, the following default backup directories are created: satellite-backup on Satellite foreman-proxy-backup on Capsule If you want to set a custom directory name, add the --preserve-directory option and add a directory name. The backup is then stored in the directory you provide in the command line. If you use the --preserve-directory option, no data is removed if the backup fails. Note that if you use a local PostgreSQL database, the postgres user requires write access to the backup directory. Remote databases You can use the satellite-maintain backup command to back up remote databases. You can use both online and offline methods to back up remote databases, but if you use the offline method, the satellite-maintain backup command performs a database dump. Backing up to a remote NFS share To enable Satellite to save the backup to an NFS share, ensure that the root user of your Satellite Server or Capsule Server can write to the NFS share. NFS export options such as root_squash and all_squash are known to prevent this. For more information, see Red Hat Enterprise Linux Configuring and using network file services and Red Hat Enterprise Linux Securing network services . Prerequisites Ensure that your backup location has sufficient available disk space to store the backup. For more information, see Section 10.1, "Estimating the size of a backup" . Warning Request other users of Satellite Server or Capsule Server to save any changes and warn them that Satellite services are unavailable for the duration of the backup. Ensure no other tasks are scheduled for the same time as the backup. Procedure On Satellite Server, enter the following command: On Capsule Server, enter the following command: 10.3. Performing a backup without Pulp content You can perform an offline backup that excludes the contents of the Pulp directory. The backup without Pulp content is useful for debugging purposes and is only intended to provide access to configuration files without backing up the Pulp database. For production use cases, do not restore from a directory that does not contain Pulp content. Warning Request other users of Satellite Server or Capsule Server to save any changes and warn them that Satellite services are unavailable for the duration of the backup.
Ensure no other tasks are scheduled for the same time as the backup. Prerequisites Ensure that your backup location has sufficient available disk space to store the backup. For more information, see Section 10.1, "Estimating the size of a backup" . Procedure To perform an offline backup without Pulp content, enter the following command: 10.4. Performing an incremental backup Use this procedure to perform an offline backup of any changes since a backup. To perform incremental backups, you must perform a full backup as a reference to create the first incremental backup of a sequence. Keep the most recent full backup and a complete sequence of incremental backups to restore from. Warning Request other users of Satellite Server or Capsule Server to save any changes and warn them that Satellite services are unavailable for the duration of the backup. Ensure no other tasks are scheduled for the same time as the backup. Prerequisites Ensure that your backup location has sufficient available disk space to store the backup. For more information, see Section 10.1, "Estimating the size of a backup" . Procedure To perform a full offline backup, enter the following command: To create a directory within your backup directory to store the first incremental back up, enter the satellite-maintain backup command with the --incremental option: To create the second incremental backup, enter the satellite-maintain backup command with the --incremental option and include the path to the first incremental backup to indicate the starting point for the increment. This creates a directory for the second incremental backup in your backup directory: Optional: If you want to point to a different version of the backup, and make a series of increments with that version of the backup as the starting point, you can do this at any time. For example, if you want to make a new incremental backup from the full backup rather than the first or second incremental backup, point to the full backup directory: 10.5. Example of a weekly full backup followed by daily incremental backups The following script performs a full backup on a Sunday followed by incremental backups for each of the following days. A new subdirectory is created for each day that an incremental backup is performed. The script requires a daily cron job. #!/bin/bash -e PATH=/sbin:/bin:/usr/sbin:/usr/bin DESTINATION=/var/backup_directory if [[ USD(date +%w) == 0 ]]; then satellite-maintain backup offline --assumeyes USDDESTINATION else LAST=USD(ls -td -- USDDESTINATION/*/ | head -n 1) satellite-maintain backup offline --assumeyes --incremental "USDLAST" USDDESTINATION fi exit 0 Note that the satellite-maintain backup command requires /sbin and /usr/sbin directories to be in PATH and the --assumeyes option is used to skip the confirmation prompt. 10.6. Performing an online backup Perform an online backup only for debugging purposes. Risks associated with online backups When performing an online backup, if there are procedures affecting the Pulp database, the Pulp part of the backup procedure repeats until it is no longer being altered. Because the backup of the Pulp database is the most time consuming part of backing up Satellite, if you make a change that alters the Pulp database during this time, the backup procedure keeps restarting. For production environments, use the offline method. For more information, see Section 10.2, "Performing a full backup of Satellite Server or Capsule Server" . 
If you want to use the online backup method in production, proceed with caution and ensure that no modifications occur during the backup. Warning Request other users of Satellite Server or Capsule Server to save any changes and warn them that Satellite services are unavailable for the duration of the backup. Ensure no other tasks are scheduled for the same time as the backup. Prerequisites Ensure that your backup location has sufficient available disk space to store the backup. For more information, see Section 10.1, "Estimating the size of a backup" . Procedure To perform an online backup, enter the following command: 10.7. Skipping steps when performing backups A backup using the satellite-maintain backup command proceeds in a sequence of steps. To skip part of the backup, add the --whitelist option to the command and the step label that you want to omit. Procedure To display a list of available step labels, enter the following command: To skip a step of the backup, enter the satellite-maintain backup command with the --whitelist option. For example:
[ "satellite-maintain service stop", "satellite-maintain service start", "du -sh /var/lib/pgsql/data /var/lib/pulp 100G /var/lib/pgsql/data 100G /var/lib/pulp du -csh /var/lib/tftpboot /etc /root/ssl-build /var/www/html/pub /opt/puppetlabs 16M /var/lib/tftpboot 37M /etc 900K /root/ssl-build 100K /var/www/html/pub 2M /opt/puppetlabs 942M total", "satellite-maintain backup offline --help", "satellite-maintain backup online --help", "satellite-maintain backup offline /var/satellite-backup", "satellite-maintain backup offline /var/foreman-proxy-backup", "satellite-maintain backup offline --skip-pulp-content /var/backup_directory", "satellite-maintain backup offline /var/backup_directory", "satellite-maintain backup offline --incremental /var/backup_directory/full_backup /var/backup_directory", "satellite-maintain backup offline --incremental /var/backup_directory/first_incremental_backup /var/backup_directory", "satellite-maintain backup offline --incremental /var/backup_directory/full_backup /var/backup_directory", "#!/bin/bash -e PATH=/sbin:/bin:/usr/sbin:/usr/bin DESTINATION=/var/backup_directory if [[ USD(date +%w) == 0 ]]; then satellite-maintain backup offline --assumeyes USDDESTINATION else LAST=USD(ls -td -- USDDESTINATION/*/ | head -n 1) satellite-maintain backup offline --assumeyes --incremental \"USDLAST\" USDDESTINATION fi exit 0", "satellite-maintain backup online /var/backup_directory", "satellite-maintain advanced procedure run -h", "satellite-maintain backup online --whitelist backup-metadata /var/backup_directory" ]
https://docs.redhat.com/en/documentation/red_hat_satellite/6.16/html/administering_red_hat_satellite/backing-up-satellite-server-and-capsule_admin
Chapter 3. Node Feature Discovery Operator
Chapter 3. Node Feature Discovery Operator Learn about the Node Feature Discovery (NFD) Operator and how you can use it to expose node-level information by orchestrating Node Feature Discovery, a Kubernetes add-on for detecting hardware features and system configuration. The Node Feature Discovery Operator (NFD) manages the detection of hardware features and configuration in an OpenShift Container Platform cluster by labeling the nodes with hardware-specific information. NFD labels the host with node-specific attributes, such as PCI cards, kernel, operating system version, and so on. The NFD Operator can be found on the Operator Hub by searching for "Node Feature Discovery". 3.1. Installing the Node Feature Discovery Operator The Node Feature Discovery (NFD) Operator orchestrates all resources needed to run the NFD daemon set. As a cluster administrator, you can install the NFD Operator by using the OpenShift Container Platform CLI or the web console. 3.1.1. Installing the NFD Operator using the CLI As a cluster administrator, you can install the NFD Operator using the CLI. Prerequisites An OpenShift Container Platform cluster Install the OpenShift CLI ( oc ). Log in as a user with cluster-admin privileges. Procedure Create a namespace for the NFD Operator. Create the following Namespace custom resource (CR) that defines the openshift-nfd namespace, and then save the YAML in the nfd-namespace.yaml file. Set cluster-monitoring to "true" . apiVersion: v1 kind: Namespace metadata: name: openshift-nfd labels: name: openshift-nfd openshift.io/cluster-monitoring: "true" Create the namespace by running the following command: USD oc create -f nfd-namespace.yaml Install the NFD Operator in the namespace you created in the step by creating the following objects: Create the following OperatorGroup CR and save the YAML in the nfd-operatorgroup.yaml file: apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: generateName: openshift-nfd- name: openshift-nfd namespace: openshift-nfd spec: targetNamespaces: - openshift-nfd Create the OperatorGroup CR by running the following command: USD oc create -f nfd-operatorgroup.yaml Create the following Subscription CR and save the YAML in the nfd-sub.yaml file: Example Subscription apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: nfd namespace: openshift-nfd spec: channel: "stable" installPlanApproval: Automatic name: nfd source: redhat-operators sourceNamespace: openshift-marketplace Create the subscription object by running the following command: USD oc create -f nfd-sub.yaml Change to the openshift-nfd project: USD oc project openshift-nfd Verification To verify that the Operator deployment is successful, run: USD oc get pods Example output NAME READY STATUS RESTARTS AGE nfd-controller-manager-7f86ccfb58-vgr4x 2/2 Running 0 10m A successful deployment shows a Running status. 3.1.2. Installing the NFD Operator using the web console As a cluster administrator, you can install the NFD Operator using the web console. Procedure In the OpenShift Container Platform web console, click Operators OperatorHub . Choose Node Feature Discovery from the list of available Operators, and then click Install . On the Install Operator page, select A specific namespace on the cluster , and then click Install . You do not need to create a namespace because it is created for you. Verification To verify that the NFD Operator installed successfully: Navigate to the Operators Installed Operators page. 
Ensure that Node Feature Discovery is listed in the openshift-nfd project with a Status of InstallSucceeded . Note During installation an Operator might display a Failed status. If the installation later succeeds with an InstallSucceeded message, you can ignore the Failed message. Troubleshooting If the Operator does not appear as installed, troubleshoot further: Navigate to the Operators Installed Operators page and inspect the Operator Subscriptions and Install Plans tabs for any failure or errors under Status . Navigate to the Workloads Pods page and check the logs for pods in the openshift-nfd project. 3.2. Using the Node Feature Discovery Operator The Node Feature Discovery (NFD) Operator orchestrates all resources needed to run the Node-Feature-Discovery daemon set by watching for a NodeFeatureDiscovery custom resource (CR). Based on the NodeFeatureDiscovery CR, the Operator creates the operand (NFD) components in the selected namespace. You can edit the CR to use another namespace, image, image pull policy, and nfd-worker-conf config map, among other options. As a cluster administrator, you can create a NodeFeatureDiscovery CR by using the OpenShift CLI ( oc ) or the web console. Note Starting with version 4.12, the operand.image field in the NodeFeatureDiscovery CR is mandatory. If the NFD Operator is deployed by using Operator Lifecycle Manager (OLM), OLM automatically sets the operand.image field. If you create the NodeFeatureDiscovery CR by using the OpenShift Container Platform CLI or the OpenShift Container Platform web console, you must set the operand.image field explicitly. 3.2.1. Creating a NodeFeatureDiscovery CR by using the CLI As a cluster administrator, you can create a NodeFeatureDiscovery CR instance by using the OpenShift CLI ( oc ). Note The spec.operand.image setting requires a -rhel9 image to be defined for use with OpenShift Container Platform releases 4.13 and later. The following example shows the use of -rhel9 to acquire the correct image. Prerequisites You have access to an OpenShift Container Platform cluster You installed the OpenShift CLI ( oc ). You logged in as a user with cluster-admin privileges. You installed the NFD Operator. 
Procedure Create a NodeFeatureDiscovery CR: Example NodeFeatureDiscovery CR apiVersion: nfd.openshift.io/v1 kind: NodeFeatureDiscovery metadata: name: nfd-instance namespace: openshift-nfd spec: instance: "" # instance is empty by default topologyupdater: false # False by default operand: image: registry.redhat.io/openshift4/ose-node-feature-discovery-rhel9:v4.16 1 imagePullPolicy: Always workerConfig: configData: | core: # labelWhiteList: # noPublish: false sleepInterval: 60s # sources: [all] # klog: # addDirHeader: false # alsologtostderr: false # logBacktraceAt: # logtostderr: true # skipHeaders: false # stderrthreshold: 2 # v: 0 # vmodule: ## NOTE: the following options are not dynamically run-time configurable ## and require a nfd-worker restart to take effect after being changed # logDir: # logFile: # logFileMaxSize: 1800 # skipLogHeaders: false sources: cpu: cpuid: # NOTE: whitelist has priority over blacklist attributeBlacklist: - "BMI1" - "BMI2" - "CLMUL" - "CMOV" - "CX16" - "ERMS" - "F16C" - "HTT" - "LZCNT" - "MMX" - "MMXEXT" - "NX" - "POPCNT" - "RDRAND" - "RDSEED" - "RDTSCP" - "SGX" - "SSE" - "SSE2" - "SSE3" - "SSE4.1" - "SSE4.2" - "SSSE3" attributeWhitelist: kernel: kconfigFile: "/path/to/kconfig" configOpts: - "NO_HZ" - "X86" - "DMI" pci: deviceClassWhitelist: - "0200" - "03" - "12" deviceLabelFields: - "class" customConfig: configData: | - name: "more.kernel.features" matchOn: - loadedKMod: ["example_kmod3"] 1 The operand.image field is mandatory. Create the NodeFeatureDiscovery CR by running the following command: USD oc apply -f <filename> Verification Check that the NodeFeatureDiscovery CR was created by running the following command: USD oc get pods Example output NAME READY STATUS RESTARTS AGE nfd-controller-manager-7f86ccfb58-vgr4x 2/2 Running 0 11m nfd-master-hcn64 1/1 Running 0 60s nfd-master-lnnxx 1/1 Running 0 60s nfd-master-mp6hr 1/1 Running 0 60s nfd-worker-vgcz9 1/1 Running 0 60s nfd-worker-xqbws 1/1 Running 0 60s A successful deployment shows a Running status. 3.2.2. Creating a NodeFeatureDiscovery CR by using the CLI in a disconnected environment As a cluster administrator, you can create a NodeFeatureDiscovery CR instance by using the OpenShift CLI ( oc ). Prerequisites You have access to an OpenShift Container Platform cluster You installed the OpenShift CLI ( oc ). You logged in as a user with cluster-admin privileges. You installed the NFD Operator. You have access to a mirror registry with the required images. You installed the skopeo CLI tool. Procedure Determine the digest of the registry image: Run the following command: USD skopeo inspect docker://registry.redhat.io/openshift4/ose-node-feature-discovery:<openshift_version> Example command USD skopeo inspect docker://registry.redhat.io/openshift4/ose-node-feature-discovery:v4.12 Inspect the output to identify the image digest: Example output { ... "Digest": "sha256:1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef", ... 
} Use the skopeo CLI tool to copy the image from registry.redhat.io to your mirror registry, by running the following command: skopeo copy docker://registry.redhat.io/openshift4/ose-node-feature-discovery@<image_digest> docker://<mirror_registry>/openshift4/ose-node-feature-discovery@<image_digest> Example command skopeo copy docker://registry.redhat.io/openshift4/ose-node-feature-discovery@sha256:1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef docker://<your-mirror-registry>/openshift4/ose-node-feature-discovery@sha256:1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef Create a NodeFeatureDiscovery CR: Example NodeFeatureDiscovery CR apiVersion: nfd.openshift.io/v1 kind: NodeFeatureDiscovery metadata: name: nfd-instance spec: operand: image: <mirror_registry>/openshift4/ose-node-feature-discovery@<image_digest> 1 imagePullPolicy: Always workerConfig: configData: | core: # labelWhiteList: # noPublish: false sleepInterval: 60s # sources: [all] # klog: # addDirHeader: false # alsologtostderr: false # logBacktraceAt: # logtostderr: true # skipHeaders: false # stderrthreshold: 2 # v: 0 # vmodule: ## NOTE: the following options are not dynamically run-time configurable ## and require a nfd-worker restart to take effect after being changed # logDir: # logFile: # logFileMaxSize: 1800 # skipLogHeaders: false sources: cpu: cpuid: # NOTE: whitelist has priority over blacklist attributeBlacklist: - "BMI1" - "BMI2" - "CLMUL" - "CMOV" - "CX16" - "ERMS" - "F16C" - "HTT" - "LZCNT" - "MMX" - "MMXEXT" - "NX" - "POPCNT" - "RDRAND" - "RDSEED" - "RDTSCP" - "SGX" - "SSE" - "SSE2" - "SSE3" - "SSE4.1" - "SSE4.2" - "SSSE3" attributeWhitelist: kernel: kconfigFile: "/path/to/kconfig" configOpts: - "NO_HZ" - "X86" - "DMI" pci: deviceClassWhitelist: - "0200" - "03" - "12" deviceLabelFields: - "class" customConfig: configData: | - name: "more.kernel.features" matchOn: - loadedKMod: ["example_kmod3"] 1 The operand.image field is mandatory. Create the NodeFeatureDiscovery CR by running the following command: USD oc apply -f <filename> Verification Check the status of the NodeFeatureDiscovery CR by running the following command: USD oc get nodefeaturediscovery nfd-instance -o yaml Check that the pods are running without ImagePullBackOff errors by running the following command: USD oc get pods -n <nfd_namespace> 3.2.3. Creating a NodeFeatureDiscovery CR by using the web console As a cluster administrator, you can create a NodeFeatureDiscovery CR by using the OpenShift Container Platform web console. Prerequisites You have access to an OpenShift Container Platform cluster You logged in as a user with cluster-admin privileges. You installed the NFD Operator. Procedure Navigate to the Operators Installed Operators page. In the Node Feature Discovery section, under Provided APIs , click Create instance . Edit the values of the NodeFeatureDiscovery CR. Click Create . Note Starting with version 4.12, the operand.image field in the NodeFeatureDiscovery CR is mandatory. If the NFD Operator is deployed by using Operator Lifecycle Manager (OLM), OLM automatically sets the operand.image field. If you create the NodeFeatureDiscovery CR by using the OpenShift Container Platform CLI or the OpenShift Container Platform web console, you must set the operand.image field explicitly. 3.3. Configuring the Node Feature Discovery Operator 3.3.1. core The core section contains common configuration settings that are not specific to any particular feature source. 
core.sleepInterval core.sleepInterval specifies the interval between consecutive passes of feature detection or re-detection, and thus also the interval between node re-labeling. A non-positive value implies infinite sleep interval; no re-detection or re-labeling is done. This value is overridden by the deprecated --sleep-interval command line flag, if specified. Example usage core: sleepInterval: 60s 1 The default value is 60s . core.sources core.sources specifies the list of enabled feature sources. A special value all enables all feature sources. This value is overridden by the deprecated --sources command line flag, if specified. Default: [all] Example usage core: sources: - system - custom core.labelWhiteList core.labelWhiteList specifies a regular expression for filtering feature labels based on the label name. Non-matching labels are not published. The regular expression is only matched against the basename part of the label, the part of the name after '/'. The label prefix, or namespace, is omitted. This value is overridden by the deprecated --label-whitelist command line flag, if specified. Default: null Example usage core: labelWhiteList: '^cpu-cpuid' core.noPublish Setting core.noPublish to true disables all communication with the nfd-master . It is effectively a dry run flag; nfd-worker runs feature detection normally, but no labeling requests are sent to nfd-master . This value is overridden by the --no-publish command line flag, if specified. Example: Example usage core: noPublish: true 1 The default value is false . core.klog The following options specify the logger configuration, most of which can be dynamically adjusted at run-time. The logger options can also be specified using command line flags, which take precedence over any corresponding config file options. core.klog.addDirHeader If set to true , core.klog.addDirHeader adds the file directory to the header of the log messages. Default: false Run-time configurable: yes core.klog.alsologtostderr Log to standard error as well as files. Default: false Run-time configurable: yes core.klog.logBacktraceAt When logging hits line file:N, emit a stack trace. Default: empty Run-time configurable: yes core.klog.logDir If non-empty, write log files in this directory. Default: empty Run-time configurable: no core.klog.logFile If not empty, use this log file. Default: empty Run-time configurable: no core.klog.logFileMaxSize core.klog.logFileMaxSize defines the maximum size a log file can grow to. Unit is megabytes. If the value is 0 , the maximum file size is unlimited. Default: 1800 Run-time configurable: no core.klog.logtostderr Log to standard error instead of files Default: true Run-time configurable: yes core.klog.skipHeaders If core.klog.skipHeaders is set to true , avoid header prefixes in the log messages. Default: false Run-time configurable: yes core.klog.skipLogHeaders If core.klog.skipLogHeaders is set to true , avoid headers when opening log files. Default: false Run-time configurable: no core.klog.stderrthreshold Logs at or above this threshold go to stderr. Default: 2 Run-time configurable: yes core.klog.v core.klog.v is the number for the log level verbosity. Default: 0 Run-time configurable: yes core.klog.vmodule core.klog.vmodule is a comma-separated list of pattern=N settings for file-filtered logging. Default: empty Run-time configurable: yes 3.3.2. sources The sources section contains feature source specific configuration parameters. 
sources.cpu.cpuid.attributeBlacklist Prevent publishing cpuid features listed in this option. This value is overridden by sources.cpu.cpuid.attributeWhitelist, if specified. Default: [BMI1, BMI2, CLMUL, CMOV, CX16, ERMS, F16C, HTT, LZCNT, MMX, MMXEXT, NX, POPCNT, RDRAND, RDSEED, RDTSCP, SGX, SGXLC, SSE, SSE2, SSE3, SSE4.1, SSE4.2, SSSE3] Example usage sources: cpu: cpuid: attributeBlacklist: [MMX, MMXEXT] sources.cpu.cpuid.attributeWhitelist Only publish the cpuid features listed in this option. sources.cpu.cpuid.attributeWhitelist takes precedence over sources.cpu.cpuid.attributeBlacklist. Default: empty Example usage sources: cpu: cpuid: attributeWhitelist: [AVX512BW, AVX512CD, AVX512DQ, AVX512F, AVX512VL] sources.kernel.kconfigFile sources.kernel.kconfigFile is the path of the kernel config file. If empty, NFD runs a search in the well-known standard locations. Default: empty Example usage sources: kernel: kconfigFile: "/path/to/kconfig" sources.kernel.configOpts sources.kernel.configOpts represents kernel configuration options to publish as feature labels. Default: [NO_HZ, NO_HZ_IDLE, NO_HZ_FULL, PREEMPT] Example usage sources: kernel: configOpts: [NO_HZ, X86, DMI] sources.pci.deviceClassWhitelist sources.pci.deviceClassWhitelist is a list of PCI device class IDs for which to publish a label. It can be specified as a main class only (for example, 03) or as a full class-subclass combination (for example, 0300). The former implies that all subclasses are accepted. The format of the labels can be further configured with deviceLabelFields. Default: ["03", "0b40", "12"] Example usage sources: pci: deviceClassWhitelist: ["0200", "03"] sources.pci.deviceLabelFields sources.pci.deviceLabelFields is the set of PCI ID fields to use when constructing the name of the feature label. Valid fields are class, vendor, device, subsystem_vendor, and subsystem_device. Default: [class, vendor] Example usage sources: pci: deviceLabelFields: [class, vendor, device] With the example config above, NFD would publish labels such as feature.node.kubernetes.io/pci-<class-id>_<vendor-id>_<device-id>.present=true sources.usb.deviceClassWhitelist sources.usb.deviceClassWhitelist is a list of USB device class IDs for which to publish a feature label. The format of the labels can be further configured with deviceLabelFields. Default: ["0e", "ef", "fe", "ff"] Example usage sources: usb: deviceClassWhitelist: ["ef", "ff"] sources.usb.deviceLabelFields sources.usb.deviceLabelFields is the set of USB ID fields from which to compose the name of the feature label. Valid fields are class, vendor, and device. Default: [class, vendor, device] Example usage sources: usb: deviceLabelFields: [class, vendor] With the example config above, NFD would publish labels like feature.node.kubernetes.io/usb-<class-id>_<vendor-id>.present=true. sources.custom sources.custom is the list of rules to process in the custom feature source to create user-specific labels. Default: empty Example usage sources: custom: - name: "my.custom.feature" matchOn: - loadedKMod: ["e1000e"] - pciId: class: ["0200"] vendor: ["8086"] 3.4. About the NodeFeatureRule custom resource NodeFeatureRule objects are NodeFeatureDiscovery custom resources designed for rule-based custom labeling of nodes. Some use cases include application-specific labeling or distribution by hardware vendors to create specific labels for their devices. NodeFeatureRule objects provide a method to create vendor- or application-specific labels and taints.
It uses a flexible rule-based mechanism for creating labels and, optionally, taints based on node features. 3.5. Using the NodeFeatureRule custom resource Create a NodeFeatureRule object to label nodes if a set of rules match the conditions. Procedure Create a custom resource file named nodefeaturerule.yaml that contains the following text: apiVersion: nfd.openshift.io/v1 kind: NodeFeatureRule metadata: name: example-rule spec: rules: - name: "example rule" labels: "example-custom-feature": "true" # Label is created if all of the rules below match matchFeatures: # Match if "veth" kernel module is loaded - feature: kernel.loadedmodule matchExpressions: veth: {op: Exists} # Match if any PCI device with vendor 8086 exists in the system - feature: pci.device matchExpressions: vendor: {op: In, value: ["8086"]} This custom resource specifies that labeling occurs when the veth module is loaded and any PCI device with vendor code 8086 is present on the node. Apply the nodefeaturerule.yaml file to your cluster by running the following command: USD oc apply -f https://raw.githubusercontent.com/kubernetes-sigs/node-feature-discovery/v0.13.6/examples/nodefeaturerule.yaml The example applies the feature label to nodes that have the veth module loaded and at least one PCI device with vendor code 8086. Note A relabeling delay of up to 1 minute might occur. 3.6. Using the NFD Topology Updater The Node Feature Discovery (NFD) Topology Updater is a daemon responsible for examining allocated resources on a worker node. It accounts for resources that are available to be allocated to new pods on a per-zone basis, where a zone can be a Non-Uniform Memory Access (NUMA) node. The NFD Topology Updater communicates the information to nfd-master, which creates a NodeResourceTopology custom resource (CR) corresponding to all of the worker nodes in the cluster. One instance of the NFD Topology Updater runs on each node of the cluster. To enable the Topology Updater workers in NFD, set the topologyupdater variable to true in the NodeFeatureDiscovery CR, as described in the section Using the Node Feature Discovery Operator. 3.6.1. NodeResourceTopology CR When run with the NFD Topology Updater, NFD creates custom resource instances corresponding to the node resource hardware topology, such as: apiVersion: topology.node.k8s.io/v1alpha1 kind: NodeResourceTopology metadata: name: node1 topologyPolicies: ["SingleNUMANodeContainerLevel"] zones: - name: node-0 type: Node resources: - name: cpu capacity: 20 allocatable: 16 available: 10 - name: vendor/nic1 capacity: 3 allocatable: 3 available: 3 - name: node-1 type: Node resources: - name: cpu capacity: 30 allocatable: 30 available: 15 - name: vendor/nic2 capacity: 6 allocatable: 6 available: 6 - name: node-2 type: Node resources: - name: cpu capacity: 30 allocatable: 30 available: 15 - name: vendor/nic1 capacity: 3 allocatable: 3 available: 3 3.6.2. NFD Topology Updater command line flags To view the available command line flags, run the nfd-topology-updater -help command. For example, in a podman container, run the following command: USD podman run gcr.io/k8s-staging-nfd/node-feature-discovery:master nfd-topology-updater -help -ca-file The -ca-file flag is one of the three flags, together with the -cert-file and -key-file flags, that control mutual TLS authentication on the NFD Topology Updater. This flag specifies the TLS root certificate that is used for verifying the authenticity of nfd-master.
Default: empty Important The -ca-file flag must be specified together with the -cert-file and -key-file flags. Example USD nfd-topology-updater -ca-file=/opt/nfd/ca.crt -cert-file=/opt/nfd/updater.crt -key-file=/opt/nfd/updater.key -cert-file The -cert-file flag is one of the three flags, together with the -ca-file and -key-file flags, that control mutual TLS authentication on the NFD Topology Updater. This flag specifies the TLS certificate presented for authenticating outgoing requests. Default: empty Important The -cert-file flag must be specified together with the -ca-file and -key-file flags. Example USD nfd-topology-updater -cert-file=/opt/nfd/updater.crt -key-file=/opt/nfd/updater.key -ca-file=/opt/nfd/ca.crt -h, -help Print usage and exit. -key-file The -key-file flag is one of the three flags, together with the -ca-file and -cert-file flags, that control mutual TLS authentication on the NFD Topology Updater. This flag specifies the private key corresponding to the given certificate file, or -cert-file, that is used for authenticating outgoing requests. Default: empty Important The -key-file flag must be specified together with the -ca-file and -cert-file flags. Example USD nfd-topology-updater -key-file=/opt/nfd/updater.key -cert-file=/opt/nfd/updater.crt -ca-file=/opt/nfd/ca.crt -kubelet-config-file The -kubelet-config-file flag specifies the path to the Kubelet's configuration file. Default: /host-var/lib/kubelet/config.yaml Example USD nfd-topology-updater -kubelet-config-file=/var/lib/kubelet/config.yaml -no-publish The -no-publish flag disables all communication with the nfd-master, making it a dry run flag for nfd-topology-updater. NFD Topology Updater runs resource hardware topology detection normally, but no CR requests are sent to nfd-master. Default: false Example USD nfd-topology-updater -no-publish 3.6.2.1. -oneshot The -oneshot flag causes the NFD Topology Updater to exit after one pass of resource hardware topology detection. Default: false Example USD nfd-topology-updater -oneshot -no-publish -podresources-socket The -podresources-socket flag specifies the path to the Unix socket where kubelet exports a gRPC service to enable discovery of in-use CPUs and devices, and to provide metadata for them. Default: /host-var/lib/kubelet/pod-resources/kubelet.sock Example USD nfd-topology-updater -podresources-socket=/var/lib/kubelet/pod-resources/kubelet.sock -server The -server flag specifies the address of the nfd-master endpoint to connect to. Default: localhost:8080 Example USD nfd-topology-updater -server=nfd-master.nfd.svc.cluster.local:443 -server-name-override The -server-name-override flag specifies the common name (CN) to expect from the nfd-master TLS certificate. This flag is mostly intended for development and debugging purposes. Default: empty Example USD nfd-topology-updater -server-name-override=localhost -sleep-interval The -sleep-interval flag specifies the interval between resource hardware topology re-examination and custom resource updates. A non-positive value implies an infinite sleep interval, and no re-detection is done. Default: 60s Example USD nfd-topology-updater -sleep-interval=1h -version Print version and exit. -watch-namespace The -watch-namespace flag specifies the namespace to ensure that resource hardware topology examination only happens for the pods running in the specified namespace. Pods that are not running in the specified namespace are not considered during resource accounting.
This is particularly useful for testing and debugging purposes. A * value means that all of the pods across all namespaces are considered during the accounting process. Default: * Example USD nfd-topology-updater -watch-namespace=rte
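The flags described above can be combined in a single invocation. As an illustrative sketch only (the interval and namespace values are arbitrary), a topology updater that re-examines the hardware topology every 10 minutes and accounts only for pods in the rte namespace could be started as follows:

nfd-topology-updater -sleep-interval=10m -watch-namespace=rte

After the Topology Updater is enabled by setting topologyupdater: true in the NodeFeatureDiscovery CR, the generated NodeResourceTopology objects can be listed with a command such as oc get noderesourcetopologies; if the resource name differs on your cluster, confirm it with oc api-resources.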
[ "apiVersion: v1 kind: Namespace metadata: name: openshift-nfd labels: name: openshift-nfd openshift.io/cluster-monitoring: \"true\"", "oc create -f nfd-namespace.yaml", "apiVersion: operators.coreos.com/v1 kind: OperatorGroup metadata: generateName: openshift-nfd- name: openshift-nfd namespace: openshift-nfd spec: targetNamespaces: - openshift-nfd", "oc create -f nfd-operatorgroup.yaml", "apiVersion: operators.coreos.com/v1alpha1 kind: Subscription metadata: name: nfd namespace: openshift-nfd spec: channel: \"stable\" installPlanApproval: Automatic name: nfd source: redhat-operators sourceNamespace: openshift-marketplace", "oc create -f nfd-sub.yaml", "oc project openshift-nfd", "oc get pods", "NAME READY STATUS RESTARTS AGE nfd-controller-manager-7f86ccfb58-vgr4x 2/2 Running 0 10m", "apiVersion: nfd.openshift.io/v1 kind: NodeFeatureDiscovery metadata: name: nfd-instance namespace: openshift-nfd spec: instance: \"\" # instance is empty by default topologyupdater: false # False by default operand: image: registry.redhat.io/openshift4/ose-node-feature-discovery-rhel9:v4.16 1 imagePullPolicy: Always workerConfig: configData: | core: # labelWhiteList: # noPublish: false sleepInterval: 60s # sources: [all] # klog: # addDirHeader: false # alsologtostderr: false # logBacktraceAt: # logtostderr: true # skipHeaders: false # stderrthreshold: 2 # v: 0 # vmodule: ## NOTE: the following options are not dynamically run-time configurable ## and require a nfd-worker restart to take effect after being changed # logDir: # logFile: # logFileMaxSize: 1800 # skipLogHeaders: false sources: cpu: cpuid: # NOTE: whitelist has priority over blacklist attributeBlacklist: - \"BMI1\" - \"BMI2\" - \"CLMUL\" - \"CMOV\" - \"CX16\" - \"ERMS\" - \"F16C\" - \"HTT\" - \"LZCNT\" - \"MMX\" - \"MMXEXT\" - \"NX\" - \"POPCNT\" - \"RDRAND\" - \"RDSEED\" - \"RDTSCP\" - \"SGX\" - \"SSE\" - \"SSE2\" - \"SSE3\" - \"SSE4.1\" - \"SSE4.2\" - \"SSSE3\" attributeWhitelist: kernel: kconfigFile: \"/path/to/kconfig\" configOpts: - \"NO_HZ\" - \"X86\" - \"DMI\" pci: deviceClassWhitelist: - \"0200\" - \"03\" - \"12\" deviceLabelFields: - \"class\" customConfig: configData: | - name: \"more.kernel.features\" matchOn: - loadedKMod: [\"example_kmod3\"]", "oc apply -f <filename>", "oc get pods", "NAME READY STATUS RESTARTS AGE nfd-controller-manager-7f86ccfb58-vgr4x 2/2 Running 0 11m nfd-master-hcn64 1/1 Running 0 60s nfd-master-lnnxx 1/1 Running 0 60s nfd-master-mp6hr 1/1 Running 0 60s nfd-worker-vgcz9 1/1 Running 0 60s nfd-worker-xqbws 1/1 Running 0 60s", "skopeo inspect docker://registry.redhat.io/openshift4/ose-node-feature-discovery:<openshift_version>", "skopeo inspect docker://registry.redhat.io/openshift4/ose-node-feature-discovery:v4.12", "{ \"Digest\": \"sha256:1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef\", }", "skopeo copy docker://registry.redhat.io/openshift4/ose-node-feature-discovery@<image_digest> docker://<mirror_registry>/openshift4/ose-node-feature-discovery@<image_digest>", "skopeo copy docker://registry.redhat.io/openshift4/ose-node-feature-discovery@sha256:1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef docker://<your-mirror-registry>/openshift4/ose-node-feature-discovery@sha256:1234567890abcdef1234567890abcdef1234567890abcdef1234567890abcdef", "apiVersion: nfd.openshift.io/v1 kind: NodeFeatureDiscovery metadata: name: nfd-instance spec: operand: image: <mirror_registry>/openshift4/ose-node-feature-discovery@<image_digest> 1 imagePullPolicy: Always workerConfig: configData: | core: # 
labelWhiteList: # noPublish: false sleepInterval: 60s # sources: [all] # klog: # addDirHeader: false # alsologtostderr: false # logBacktraceAt: # logtostderr: true # skipHeaders: false # stderrthreshold: 2 # v: 0 # vmodule: ## NOTE: the following options are not dynamically run-time configurable ## and require a nfd-worker restart to take effect after being changed # logDir: # logFile: # logFileMaxSize: 1800 # skipLogHeaders: false sources: cpu: cpuid: # NOTE: whitelist has priority over blacklist attributeBlacklist: - \"BMI1\" - \"BMI2\" - \"CLMUL\" - \"CMOV\" - \"CX16\" - \"ERMS\" - \"F16C\" - \"HTT\" - \"LZCNT\" - \"MMX\" - \"MMXEXT\" - \"NX\" - \"POPCNT\" - \"RDRAND\" - \"RDSEED\" - \"RDTSCP\" - \"SGX\" - \"SSE\" - \"SSE2\" - \"SSE3\" - \"SSE4.1\" - \"SSE4.2\" - \"SSSE3\" attributeWhitelist: kernel: kconfigFile: \"/path/to/kconfig\" configOpts: - \"NO_HZ\" - \"X86\" - \"DMI\" pci: deviceClassWhitelist: - \"0200\" - \"03\" - \"12\" deviceLabelFields: - \"class\" customConfig: configData: | - name: \"more.kernel.features\" matchOn: - loadedKMod: [\"example_kmod3\"]", "oc apply -f <filename>", "oc get nodefeaturediscovery nfd-instance -o yaml", "oc get pods -n <nfd_namespace>", "core: sleepInterval: 60s 1", "core: sources: - system - custom", "core: labelWhiteList: '^cpu-cpuid'", "core: noPublish: true 1", "sources: cpu: cpuid: attributeBlacklist: [MMX, MMXEXT]", "sources: cpu: cpuid: attributeWhitelist: [AVX512BW, AVX512CD, AVX512DQ, AVX512F, AVX512VL]", "sources: kernel: kconfigFile: \"/path/to/kconfig\"", "sources: kernel: configOpts: [NO_HZ, X86, DMI]", "sources: pci: deviceClassWhitelist: [\"0200\", \"03\"]", "sources: pci: deviceLabelFields: [class, vendor, device]", "sources: usb: deviceClassWhitelist: [\"ef\", \"ff\"]", "sources: pci: deviceLabelFields: [class, vendor]", "source: custom: - name: \"my.custom.feature\" matchOn: - loadedKMod: [\"e1000e\"] - pciId: class: [\"0200\"] vendor: [\"8086\"]", "apiVersion: nfd.openshift.io/v1 kind: NodeFeatureRule metadata: name: example-rule spec: rules: - name: \"example rule\" labels: \"example-custom-feature\": \"true\" # Label is created if all of the rules below match matchFeatures: # Match if \"veth\" kernel module is loaded - feature: kernel.loadedmodule matchExpressions: veth: {op: Exists} # Match if any PCI device with vendor 8086 exists in the system - feature: pci.device matchExpressions: vendor: {op: In, value: [\"8086\"]}", "oc apply -f https://raw.githubusercontent.com/kubernetes-sigs/node-feature-discovery/v0.13.6/examples/nodefeaturerule.yaml", "apiVersion: topology.node.k8s.io/v1alpha1 kind: NodeResourceTopology metadata: name: node1 topologyPolicies: [\"SingleNUMANodeContainerLevel\"] zones: - name: node-0 type: Node resources: - name: cpu capacity: 20 allocatable: 16 available: 10 - name: vendor/nic1 capacity: 3 allocatable: 3 available: 3 - name: node-1 type: Node resources: - name: cpu capacity: 30 allocatable: 30 available: 15 - name: vendor/nic2 capacity: 6 allocatable: 6 available: 6 - name: node-2 type: Node resources: - name: cpu capacity: 30 allocatable: 30 available: 15 - name: vendor/nic1 capacity: 3 allocatable: 3 available: 3", "podman run gcr.io/k8s-staging-nfd/node-feature-discovery:master nfd-topology-updater -help", "nfd-topology-updater -ca-file=/opt/nfd/ca.crt -cert-file=/opt/nfd/updater.crt -key-file=/opt/nfd/updater.key", "nfd-topology-updater -cert-file=/opt/nfd/updater.crt -key-file=/opt/nfd/updater.key -ca-file=/opt/nfd/ca.crt", "nfd-topology-updater -key-file=/opt/nfd/updater.key 
-cert-file=/opt/nfd/updater.crt -ca-file=/opt/nfd/ca.crt", "nfd-topology-updater -kubelet-config-file=/var/lib/kubelet/config.yaml", "nfd-topology-updater -no-publish", "nfd-topology-updater -oneshot -no-publish", "nfd-topology-updater -podresources-socket=/var/lib/kubelet/pod-resources/kubelet.sock", "nfd-topology-updater -server=nfd-master.nfd.svc.cluster.local:443", "nfd-topology-updater -server-name-override=localhost", "nfd-topology-updater -sleep-interval=1h", "nfd-topology-updater -watch-namespace=rte" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/specialized_hardware_and_driver_enablement/psap-node-feature-discovery-operator
Chapter 3. Creating an IBM Power Virtual Server workspace
Chapter 3. Creating an IBM Power Virtual Server workspace 3.1. Creating an IBM Power Virtual Server workspace Use the following procedure to create an IBM Power(R) Virtual Server workspace. Procedure To create an IBM Power(R) Virtual Server workspace, complete step 1 to step 5 from the IBM Cloud(R) documentation for Creating an IBM Power(R) Virtual Server. After the workspace has finished provisioning, retrieve the 32-character alphanumeric Globally Unique Identifier (GUID) of your new workspace by entering the following command (an illustrative JSON-based variant is shown in the note below): USD ibmcloud resource service-instance <workspace name> 3.2. Next steps Installing a cluster on IBM Power(R) Virtual Server with customizations
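Note The following is an illustrative sketch only: the workspace name is a placeholder, and it assumes that your IBM Cloud(R) CLI version supports the --output JSON flag and that the jq utility is installed. Under those assumptions, the GUID can be extracted directly:

ibmcloud resource service-instance <workspace name> --output JSON | jq -r '.[0].guid'

If the flag or the field name differs in your CLI version, read the GUID from the output of the plain ibmcloud resource service-instance command instead.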
[ "ibmcloud resource service-instance <workspace name>" ]
https://docs.redhat.com/en/documentation/openshift_container_platform/4.16/html/installing_on_ibm_power_virtual_server/creating-ibm-power-vs-workspace
Chapter 4. Advisories related to this release
Chapter 4. Advisories related to this release The following advisories are issued to document bug fixes and CVE fixes included in this release: RHSA-2024:1819 RHSA-2024:1820 RHSA-2024:1821 RHSA-2024:1822
null
https://docs.redhat.com/en/documentation/red_hat_build_of_openjdk/11/html/release_notes_for_red_hat_build_of_openjdk_11.0.23/rn-openjdk11023-advisory_openjdk
3.2.2. Modifying a Fence Device
3.2.2. Modifying a Fence Device To modify a fence device, follow these steps: From the Fence Devices configuration page, click on the name of the fence device to modify. This displays the dialog box for that fence device, with the values that have been configured for the device. To modify the fence device, enter changes to the parameters displayed. Click Apply and wait for the configuration to be updated.
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/fence_configuration_guide/s2-modify-fence-devices-conga-ca
2.8. Installing Ansible to Support Gdeploy
2.8. Installing Ansible to Support Gdeploy Note Consult with your IT department to confirm your organization's supported download instructions for Ansible. gDeploy depends on Ansible to execute the playbooks and modules. You must install Ansible 2.5 or higher to use gDeploy. Execute the following command to enable the repository required to install Ansible: For Red Hat Enterprise Linux 8 For Red Hat Enterprise Linux 7 Install ansible by executing the following command:
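After you run the repository and installation commands, you can confirm that the installed version meets the 2.5 minimum required by gDeploy. This is a minimal check, assuming the ansible binary is on your PATH:

ansible --version

The first line of the output reports the version number; verify that it is 2.5 or higher.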
[ "subscription-manager repos --enable=ansible-2-for-rhel-8-x86_64-rpms", "subscription-manager repos --enable=rhel-7-server-ansible-2-rpms", "yum install ansible" ]
https://docs.redhat.com/en/documentation/red_hat_gluster_storage/3.5/html/installation_guide/gdeploy_support_install_ansible
probe::stap.pass0
probe::stap.pass0 Name probe::stap.pass0 - Starting stap pass0 (parsing command line arguments) Synopsis stap.pass0 Values session the systemtap_session variable s Description pass0 fires after command line arguments have been parsed.
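For illustration only, the following minimal sketch shows one way to observe this probe; it assumes that your stap binary was built with its static probe markers enabled and that you have permission to run SystemTap scripts. The script prints a message each time a monitored stap invocation finishes parsing its command line arguments:

# pass0_watch.stp -- report when a stap process enters pass0
probe stap.pass0 { printf("stap pass0: command line arguments parsed\n") }

Because the stap.* probes instrument the stap translator itself, the script is typically run against a second stap process, for example: stap pass0_watch.stp -c "stap -e 'probe begin { exit() }'"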
null
https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/7/html/systemtap_tapset_reference/api-stap-pass0